Catalogue Search | MBRL
25 result(s) for "proprietary features"
Science and Technology Policy in the United States
2006
During the latter half of the twentieth century, federal funding in the United States for scientific research and development increased dramatically. Yet despite the infusion of public funds into research centers, the relationship between public policy and research and development remains poorly understood. How does the federal government attempt to harness scientific knowledge and resources for the nation's economic welfare and competitiveness in the global marketplace? Who makes decisions about controversial scientific experiments, such as genetic engineering and space exploration? Who is held accountable when things go wrong? In this lucidly written introduction to the topic, Sylvia Kraemer draws upon her extensive experience in government to develop a useful and powerful framework for thinking about the American approach to shaping and managing scientific innovation. Kraemer suggests that the history of science, technology, and politics is best understood as a negotiation of ongoing tensions between open and closed systems. Open systems depend on universal access to information that is complete, verifiable, and appropriately used. Closed systems, in contrast, are composed of unique and often proprietary features, which are designed to control usage. From the Constitution's patent clause to current debates over intellectual property, stem cells, and internet regulation, Kraemer shows the promise, as well as the limits, of open systems in advancing both scientific progress and the nation's economic vitality.
An Improved Forest Smoke Detection Model Based on YOLOv8
This study leverages smoke detection for early warning of forest fires. Owing to the inherent ambiguity and uncertainty of smoke characteristics, existing smoke detection algorithms suffer from reduced detection accuracy, elevated false alarm rates, and missed detections. To resolve these issues, this paper builds on an efficient YOLOv8 network and integrates three novel detection modules: an edge feature enhancement module, designed to identify ambiguous smoke features, alongside a multi-feature extraction module and a global feature enhancement module, targeting the detection of uncertain smoke features. These modifications improve the accuracy of smoke area identification while notably lowering the rates of false alarms and missed detections. In addition, a large forest smoke dataset is created in this paper, which includes not only smoke images with normal forest backgrounds but also a considerable number of smoke images with complex backgrounds to enhance the algorithm's robustness. The proposed algorithm achieves an AP of 79.1%, 79.2%, and 93.8% on the self-built dataset, XJTU-RS, and USTC-RF, respectively, surpassing current state-of-the-art improved smoke detection algorithms based on object detection and neural networks.
Journal Article
A Machine Learning Prediction Model to Identify Individuals at Risk of 5-Year Incident Stroke Based on Retinal Imaging
2025
Stroke is a leading cause of death and disability in developed countries. We validated an AI-based prediction model for incident stroke using sensors such as fundus cameras and ophthalmoscopes for retinal images, along with socio-demographic data and traditional risk factors. The model was trained on a proprietary dataset of over 6500 participants, including 171 with 5-year incident strokes and 242 with 10-year incident strokes. The model provides separate 5-year and 10-year risk scores. The model was externally validated on the UK Biobank dataset (3000 subjects with 5-year incident strokes). Using retinal imaging, our models identified individuals with 5-year incident strokes with 80% sensitivity, 82% specificity, and an AUC of 0.83, and predicted 10-year incidents with 72% sensitivity, 78% specificity, and an AUC of 0.79. In comparison, for the 10-year model, the AUC for the Framingham score was 0.73, and the CHADS2 score was 0.74. On the Biobank external dataset, our 5-year model (without retinal features) demonstrated moderate but lower sensitivity (69.3%) and specificity (66.4%) compared to its performance on the proprietary dataset (with retinal features). Using a multi-ethnic dataset, we developed and validated a prediction model that improves stroke risk identification for 5-year and 10-year incidences by incorporating retinal features.
Journal Article
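The sensitivity, specificity, and AUC figures reported in the stroke-prediction abstract above are standard binary-classification metrics. As a hedged illustration only (toy labels and scores, not the study's data; scikit-learn assumed available), they can be computed as follows:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Toy ground truth and model risk scores (hypothetical values)
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.3, 0.2, 0.6, 0.4, 0.8, 0.9, 0.7])
y_pred = (y_score >= 0.5).astype(int)  # threshold the risk score

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
auc = roc_auc_score(y_true, y_score)  # threshold-free ranking quality
```

Note that the AUC uses the continuous scores, while sensitivity and specificity depend on the chosen decision threshold, which is why a model can trade one against the other.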
Accurate and efficient data-driven psychiatric assessment using machine learning
2026
Background
Accurate assessment of mental disorders and learning disabilities is essential for timely intervention. Machine learning and feature selection techniques have the potential to improve the accuracy and efficiency of mental health assessments. However, limited research has explored the use of large transdiagnostic datasets, or the application of these techniques to developing brief, question-based assessments. This study applies machine learning and feature selection techniques to a large transdiagnostic dataset with a high number of assessment items to create a tool for constructing streamlined, efficient, and effective assessments from existing data.
Methods
Using the Healthy Brain Network dataset (n = 4,136 at the time this study was conducted) containing over 1,000 questionnaire items, a two-stage feature selection approach, with Elastic Net models, was used to identify optimal, parsimonious item subsets for assessing various disorders and symptoms, as well as custom test-based outcome measures for learning disabilities. The study then compared model performance to existing assessments through rigorous cross-validation.
Results
Machine learning models using parsimonious item subsets significantly outperformed traditional assessments (p = 0.004). Models for specific learning disorders achieved AUC values up to 0.855. Importantly, restricting analysis to non-proprietary assessment items did not significantly reduce performance.
Discussion
This study demonstrates the feasibility of using existing datasets to create efficient, effective assessment tools for mental disorders and learning disabilities. Our open-source, modular software architecture facilitates adaptation to diverse datasets, though external validation remains necessary before clinical implementation. The ability to achieve strong performance using only non-proprietary items supports the development of accessible assessment tools.
Key points and relevance
Data-driven methods, such as machine learning and feature selection, have shown promise in improving the accuracy and efficiency of mental health assessment.
Few studies have investigated these methods for the creation of learning disability assessments, or their application to transdiagnostic datasets comprising a large number of assessments.
Using the Healthy Brain Network dataset, we have built a tool for the creation of accurate and efficient machine-learning-based assessments for common mental disorders and learning disabilities.
The modular design of the tool ensures its easy application to other datasets, addressing a variety of clinical and research opportunities.
Journal Article
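The two-stage Elastic Net item selection described in the Methods above could be sketched roughly as follows. This is a hypothetical illustration on synthetic data; the variable names, penalty settings, and toy outcome are all assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_subjects, n_items = 200, 50
X = rng.normal(size=(n_subjects, n_items))       # questionnaire item responses
true_w = np.array([1.5, -2.0, 1.0, 0.8, -1.2])   # only items 0-4 drive the outcome
y = X[:, :5] @ true_w + rng.normal(scale=0.5, size=n_subjects)

# Stage 1: the L1 part of the Elastic Net penalty shrinks uninformative
# item weights exactly to zero, leaving a parsimonious subset.
stage1 = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
selected = np.flatnonzero(stage1.coef_)

# Stage 2: refit on the selected items only, mimicking a shortened
# assessment built from the surviving questions.
stage2 = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X[:, selected], y)
print(f"{len(selected)} of {n_items} items retained")
```

The design intuition is that stage 1 does aggressive pruning while stage 2 re-estimates weights without competition from the discarded items.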
Research on Named Entity Recognition Methods in Chinese Forest Disease Texts
2022
Named entity recognition of forest diseases plays a key role in knowledge extraction in the field of forestry. This paper proposes a named entity recognition method based on multi-feature embedding, a transformer encoder, a bi-gated recurrent unit (BiGRU), and conditional random fields (CRF). Several features suited to the characteristics of the forest disease corpus are introduced to improve the method's accuracy. We analyze the characteristics of forest disease texts; carry out pre-processing, labeling, and extraction of multiple features; and construct a forest disease text corpus. In the input representation layer, the method integrates multiple features, such as characters, radicals, word boundaries, and parts of speech. Implicit features (e.g., sentence context features) are then captured by the transformer encoding layer, and the resulting representations are passed to the BiGRU layer for further deep feature extraction. Finally, the CRF layer learns labeling constraints and outputs the optimal annotation of disease names, damage sites, and drug entities in the forest disease texts. Experimental results on the self-built forest disease text dataset show that the precision of the proposed method for entity recognition exceeds 93%, indicating that it can effectively solve the task of named entity recognition in forest disease texts.
Journal Article
Security Service Function Chain Based on Graph Neural Network
2022
With the rapid development and wide application of cloud computing, security protection in cloud environments has become an urgent problem. Traditional security service equipment, however, is closely coupled with the network topology, making security services difficult to upgrade and scale and unable to adapt as the security requirements of network applications change. Building a security service function chain (SSFC) makes the deployment of security service functions more dynamic and scalable. Based on a software-defined network (SDN) and network function virtualization (NFV) environment, this paper proposes an optimization algorithm that uses a graph neural network to extract network topology features for constructing the chain. Experimental results show that, compared with the shortest path, greedy, and hybrid bee colony algorithms, the graph neural network algorithm achieves an average success rate of more than 90% in constructing the security service function chain, far exceeding the other algorithms while requiring far less construction time. It effectively reduces the end-to-end delay and increases network throughput.
Journal Article
Keep on rating – on the systematic rating and comparison of authentication schemes
by
Mayer, Peter
,
Zimmermann, Verena
,
von Preuschen, Alexandra
in
Authentication
,
Authenticity
,
Biometrics
2019
Purpose: Six years ago, Bonneau et al. (2012) proposed a framework to compare authentication schemes to the ubiquitous text password. Even though their work did not reveal an alternative outperforming the text password on every criterion, the framework can support decision makers in finding suitable solutions for specific authentication contexts. The purpose of this paper is to extend and update the database, thereby discussing benefits, limitations and suggestions for continuing the development of the framework.
Design/methodology/approach: This paper revisits the rating process and describes the application of an extended version of the original framework to an additional 40 authentication schemes identified in a literature review. All schemes were rated in terms of 25 objective features assigned to the three main criteria: usability, deployability and security.
Findings: The rating process and results are presented along with a discussion of the benefits and pitfalls of the rating process.
Research limitations/implications: While the extended framework, in general, proves suitable for rating and comparing authentication schemes, ambiguities in the rating could be solved by providing clearer definitions and cut-off values. Further, the extension of the framework with subjective user perceptions that sometimes differ from objective ratings could be beneficial.
Originality/value: The results of the rating are made publicly available in an authentication choice support system named ACCESS to support decision makers and researchers and to foster the further extension of the knowledge base and future development of the extended rating framework.
Journal Article
Private equity ownership and nursing home financial performance
by
Laberge, Alex
,
Harman, Jeffrey S.
,
Weech-Maldonado, Robert
in
Business ownership
,
Censuses
,
Costs
2013
Private equity has acquired multiple large nursing home chains within the last few years; by 2009, it owned nearly 1,900 nursing homes. Private equity is said to improve the financial performance of acquired facilities. However, no study has yet examined the financial performance of private equity nursing homes; this study addresses that gap.
The primary purpose of this study is to understand the financial performance of private equity nursing homes and how it compares with that of other investor-owned facilities. It also seeks to understand the approach favored by private equity to improve financial performance, for instance whether these firms prefer to cut costs, maximize revenues, or follow a mixed approach.
Secondary data from Medicare cost reports, the Online Survey, Certification and Reporting, the Area Resource File, and Brown University's Long-term Care Focus data set are combined to construct a longitudinal data set for the study period 2000-2007. The final sample comprises 2,822 observations after eliminating all not-for-profit, independent, and hospital-based facilities. Dependent financial variables consist of operating revenues and costs, operating and total margins, payer mix (census Medicare, census Medicaid, census other), and acuity index. Independent variables primarily reflect private equity ownership. The data were analyzed using ordinary least squares, a gamma distribution with log link, a logit with binomial family link, and logistic regression.
Private equity nursing homes have higher operating margin as well as total margin; they also report higher operating revenues and costs. No significant differences in payer mix are noted.
Results suggest that private equity delivers superior financial performance compared with other investor-owned nursing homes. However, causes for concern remain particularly with the long-term financial sustainability of these facilities.
Journal Article
Document Classification: A Technical Review
Document classification is used to identify the proprietorship of complex documents. Identifying the proprietorship of a document is a difficult task in the area of image processing. There are various ways to identify a document's proprietorship, for example based on a signature, logo, or seal. We surveyed the literature on document classification based on logos and seals. Most authors used texture feature extraction, applying the Discrete Wavelet Transform (DWT) and the Fast Fourier Transform (FFT) to extract features, with K-Nearest Neighbor (KNN), Neural Network (NN), and Support Vector Machine (SVM) classifiers.
Journal Article
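As a rough illustration of the pipeline the abstract above surveys (FFT-based texture features fed to a nearest-neighbour classifier), here is a self-contained sketch on synthetic "logo" patches. The patch generator and all names are hypothetical, not taken from any surveyed paper:

```python
import numpy as np

def fft_features(patch):
    """Flattened FFT magnitude spectrum as a crude texture descriptor."""
    return np.abs(np.fft.fft2(patch)).ravel()

rng = np.random.default_rng(1)

def make_patch(cls):
    """Synthetic 'logo': class 0 = horizontal stripes, class 1 = vertical."""
    base = np.zeros((16, 16))
    if cls == 0:
        base[::2, :] = 1.0
    else:
        base[:, ::2] = 1.0
    return base + rng.normal(scale=0.1, size=(16, 16))

# Tiny training set of labelled feature vectors
train = [(fft_features(make_patch(c)), c) for c in (0, 1) for _ in range(10)]

def knn_predict(patch, k=3):
    """Majority vote among the k nearest training feature vectors."""
    feats = fft_features(patch)
    nearest = sorted(train, key=lambda t: np.linalg.norm(t[0] - feats))[:k]
    labels = [c for _, c in nearest]
    return max(set(labels), key=labels.count)
```

The magnitude spectrum discards spatial phase, so the descriptor is tolerant of small shifts of the logo within the patch, which is one reason frequency-domain features appear so often in this literature.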
Financial performance, employee well-being, and client well-being in for-profit and not-for-profit nursing homes
by
Boselie, Paul
,
Bos, Aline
,
Trappenburg, Margo
in
Features
,
Financial Management - economics
,
Financial performance
2017
Expanding the opportunities for for-profit nursing home care is a central theme in the debate on the sustainable organization of the growing nursing home sector in Western countries.
We conducted a systematic review of the literature over the last 10 years in order to determine the broad impact of nursing home ownership in the United States. Our review has two main goals: (a) to find out which topics have been studied with regard to financial performance, employee well-being, and client well-being in relation to nursing home ownership and (b) to assess the conclusions related to these topics. The review results in two propositions on the interactions between financial performance, employee well-being, and client well-being as they relate to nursing home ownership.
Five search strategies plus inclusion and quality assessment criteria were applied to identify and select eligible studies. As a result, 50 studies were included in the review. Relevant findings were categorized as related to financial performance (profit margins, efficiency), employee well-being (staffing levels, turnover rates, job satisfaction, job benefits), or client well-being (care quality, hospitalization rates, lawsuits/complaints) and then analyzed based on common characteristics.
For-profit nursing homes tend to have better financial performance, but worse results with regard to employee well-being and client well-being, compared to not-for-profit sector homes. We argue that the better financial performance of for-profit nursing homes seems to be associated with worse employee and client well-being.
For policy makers considering the expansion of the for-profit sector in the nursing home industry, our findings suggest the need for a broad perspective, simultaneously weighing the potential benefits and drawbacks for the organization, its employees, and its clients.
Journal Article