Catalogue Search | MBRL
361 result(s) for "Service recommendation model"
Exploring a Student-Centered One-Stop Community Service Model
2024
The student community in colleges and universities is based on students' common living areas, and the community service model should be student-oriented and centered on students' development. This paper proposes a one-stop community service model from a student-oriented perspective, with the service community model and the service recommendation model as its main component modules. In the service community model, a context-based association relationship mining algorithm is proposed, adding time and location contexts to the collaborative filtering algorithm in order to obtain collections of similar users and services. After constructing the one-stop service community, a service recommendation algorithm based on a trusted coalition is proposed, introducing student credibility and service usage frequency to achieve personalized recommendation of services. The one-stop community service model was implemented in University H's student community. After the trial, the mean value of each dimension of the community's service mode and content evaluation was greater than 3, and the community's overall satisfaction evaluation value was 39.49, which was highly significant compared with the evaluation value of University C (P<0.01). The mean value of students' mental health evaluation reached 3.33.
Journal Article
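The core technical idea in this abstract is folding time and location context into user-based collaborative filtering before selecting similar users. The sketch below shows one way such a context weight might scale a cosine user-similarity score; the weighting scheme, names, and toy data are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch of context-weighted user-based collaborative filtering.
# The weighting scheme and all names are illustrative, not the paper's method.
import numpy as np

def context_weight(ctx_a, ctx_b):
    """Crude context agreement score: fraction of matching context fields
    (e.g. time slot, location) shared by two users."""
    keys = set(ctx_a) & set(ctx_b)
    if not keys:
        return 1.0  # no shared context information -> no adjustment
    return sum(ctx_a[k] == ctx_b[k] for k in keys) / len(keys)

def similar_users(target, others, ratings, contexts, top_k=3):
    """Rank other users by cosine similarity of their rating vectors,
    scaled by how similar their usage contexts are to the target's."""
    scores = []
    for u in others:
        a, b = ratings[target], ratings[u]
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        scores.append((u, cos * context_weight(contexts[target], contexts[u])))
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# Toy data: rows are users, columns are campus services.
ratings = {
    "alice": np.array([5, 0, 3, 1]),
    "bob":   np.array([4, 0, 3, 0]),
    "carol": np.array([0, 5, 0, 4]),
}
contexts = {
    "alice": {"time": "evening", "location": "dorm-A"},
    "bob":   {"time": "evening", "location": "dorm-A"},
    "carol": {"time": "morning", "location": "dorm-B"},
}
print(similar_users("alice", ["bob", "carol"], ratings, contexts))
```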
Large Language Models for Therapy Recommendations Across 3 Clinical Specialties: Comparative Study
by
Kaczmarczyk, Robert
,
Roos, Jonas
,
Wilhelm, Theresa Isabelle
in
Agnosticism
,
Ambiguity
,
Antibiotics
2023
As advancements in artificial intelligence (AI) continue, large language models (LLMs) have emerged as promising tools for generating medical information. Their rapid adaptation and potential benefits in health care require rigorous assessment in terms of the quality, accuracy, and safety of the generated information across diverse medical specialties. This study aimed to evaluate the performance of 4 prominent LLMs, namely, Claude-instant-v1.0, GPT-3.5-Turbo, Command-xlarge-nightly, and Bloomz, in generating medical content spanning the clinical specialties of ophthalmology, orthopedics, and dermatology. Three domain-specific physicians evaluated the AI-generated therapeutic recommendations for a diverse set of 60 diseases. The evaluation criteria involved the mDISCERN score, correctness, and potential harmfulness of the recommendations. ANOVA and pairwise t tests were used to explore discrepancies in content quality and safety across models and specialties. Additionally, using the capabilities of OpenAI’s most advanced model, GPT-4, an automated evaluation of each model’s responses to the diseases was performed using the same criteria and compared to the physicians’ assessments through Pearson correlation analysis. Claude-instant-v1.0 emerged with the highest mean mDISCERN score (3.35, 95% CI 3.23-3.46). In contrast, Bloomz lagged with the lowest score (1.07, 95% CI 1.03-1.10). Our analysis revealed significant differences among the models in terms of quality (P<.001). Evaluating their reliability, the models displayed strong contrasts in their falseness ratings, with variations both across models (P<.001) and specialties (P<.001). Distinct error patterns emerged, such as confusing diagnoses; providing vague, ambiguous advice; or omitting critical treatments, such as antibiotics for infectious diseases. Regarding potential harm, GPT-3.5-Turbo was found to be the safest, with the lowest harmfulness rating. All models lagged in detailing the risks associated with treatment procedures, explaining the effects of therapies on quality of life, and offering additional sources of information. Pearson correlation analysis underscored a substantial alignment between physician assessments and GPT-4’s evaluations across all established criteria (P<.01). This study, while comprehensive, was limited by the involvement of a select number of specialties and physician evaluators. The straightforward prompting strategy (“How to treat…”) and the assessment benchmarks, initially conceptualized for human-authored content, might have potential gaps in capturing the nuances of AI-driven information. The LLMs evaluated showed a notable capability in generating valuable medical content; however, evident lapses in content quality and potential harm signal the need for further refinements. Given the dynamic landscape of LLMs, this study’s findings emphasize the need for regular and methodical assessments, oversight, and fine-tuning of these AI tools to ensure they produce consistently trustworthy and clinically safe medical advice. Notably, the introduction of an auto-evaluation mechanism using GPT-4, as detailed in this study, provides a scalable, transferable method for domain-agnostic evaluations, extending beyond therapy recommendation assessments.
Journal Article
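The study's auto-evaluation step reduces to correlating GPT-4's ratings with physician ratings on the same items. A minimal sketch of that comparison using scipy's Pearson correlation is below; the score vectors are invented placeholders, not study data.

```python
# Hedged sketch: comparing an automated rater's scores against human raters
# with a Pearson correlation, as the study does for GPT-4 vs. physicians.
# The score vectors below are invented placeholders, not study data.
from scipy.stats import pearsonr

physician_scores = [3.2, 1.1, 2.8, 3.5, 1.4, 2.9]  # e.g. mean mDISCERN per item
gpt4_scores      = [3.0, 1.3, 2.6, 3.7, 1.2, 3.1]

r, p = pearsonr(physician_scores, gpt4_scores)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```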
What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT
by
Pozzo, Nicole S
,
Palitsky, Roman
,
Arconada Alvarez, Santiago J
in
Artificial Intelligence
,
Female
,
Humans
2024
Artificial intelligence chatbots such as ChatGPT (OpenAI) have garnered excitement about their potential for delegating writing tasks ordinarily performed by humans. Many of these tasks (eg, writing recommendation letters) have social and professional ramifications, making the potential social biases in ChatGPT's underlying language model a serious concern.
Three preregistered studies used the text analysis program Linguistic Inquiry and Word Count to investigate gender bias in recommendation letters written by ChatGPT in human-use sessions (N=1400 total letters).
We conducted analyses using 22 existing Linguistic Inquiry and Word Count dictionaries, as well as 6 newly created dictionaries based on systematic reviews of gender bias in recommendation letters, to compare recommendation letters generated for the 200 most historically popular "male" and "female" names in the United States. Study 1 used 3 different letter-writing prompts intended to accentuate professional accomplishments associated with male stereotypes, female stereotypes, or neither. Study 2 examined whether lengthening each of the 3 prompts while holding the between-prompt word count constant modified the extent of bias. Study 3 examined the variability within letters generated for the same name and prompts. We hypothesized that when prompted with gender-stereotyped professional accomplishments, ChatGPT would evidence gender-based language differences replicating those found in systematic reviews of human-written recommendation letters (eg, more affiliative, social, and communal language for female names; more agentic and skill-based language for male names).
Significant differences in language between letters generated for female versus male names were observed across all prompts, including the prompt hypothesized to be neutral, and across nearly all language categories tested. Historically female names received significantly more social referents (5/6, 83% of prompts), communal or doubt-raising language (4/6, 67% of prompts), personal pronouns (4/6, 67% of prompts), and clout language (5/6, 83% of prompts). Contradicting the study hypotheses, some gender differences (eg, achievement language and agentic language) were significant in both the hypothesized and nonhypothesized directions, depending on the prompt. Heteroscedasticity between male and female names was observed in multiple linguistic categories, with greater variance for historically female names than for historically male names.
ChatGPT reproduces many gender-based language biases that have been reliably identified in investigations of human-written reference letters, although these differences vary across prompts and language categories. Caution should be taken when using ChatGPT for tasks that have social consequences, such as reference letter writing. The methods developed in this study may be useful for ongoing bias testing among progressive generations of chatbots across a range of real-world scenarios.
OSF Registries osf.io/ztv96; https://osf.io/ztv96.
Journal Article
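The analytic core here is dictionary-based word counting: tally how often words from a category dictionary (communal, agentic, and so on) appear in letters generated for female versus male names. LIWC itself is proprietary, so the sketch below uses a hand-rolled counter with tiny made-up word lists purely to illustrate the comparison; nothing in it comes from the study's materials.

```python
# Illustrative dictionary-based category counting in the spirit of the
# study's LIWC analysis. The word lists and example letters are invented.
import re

DICTIONARIES = {
    "communal": {"supportive", "warm", "helpful", "team", "caring"},
    "agentic":  {"leader", "driven", "independent", "ambitious", "decisive"},
}

def category_rates(text):
    """Return each category's hit count per 100 words."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {cat: 100 * sum(w in vocab for w in words) / total
            for cat, vocab in DICTIONARIES.items()}

letter_a = "She is a warm, supportive and caring member of the team."
letter_b = "He is a driven, independent and decisive leader."
print(category_rates(letter_a))
print(category_rates(letter_b))
```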
QoS Prediction for Service Recommendation with Deep Feature Learning in Edge Computing Environment
2020
Along with the popularity of intelligent services and mobile services, service recommendation has become a key task, especially recommendation based on quality of service (QoS) in the edge computing environment. Most existing service recommendation methods have serious defects and cannot be directly adopted in the edge computing environment. For example, most existing methods cannot learn deep features of users or services, but the edge computing environment contains a variety of devices with different configurations and functions, and it is necessary to learn the deep features behind those complex devices. In order to fully utilize hidden features, this paper proposes a new matrix factorization (MF) model with deep feature learning, which integrates a convolutional neural network (CNN). The proposed model is named Joint CNN-MF (JCM). JCM is capable of using the learned deep latent features of neighbors to infer the features of a user or a service. Meanwhile, to improve the accuracy of neighbor selection, the proposed model contains a novel similarity computation method. The CNN learns the neighbors' features, forms a feature matrix, and infers the features of the target user or target service. We conducted experiments on a real-world service dataset under a range of data densities to reflect the complex invocation cases in the edge computing environment. The experimental results verify that, compared to counterpart methods, our method consistently achieves higher QoS prediction accuracy.
Journal Article
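At its base, a model like this combines matrix factorization over a sparse user-service QoS matrix with learned neighbor features. The sketch below shows only the plain MF part (gradient descent on observed entries); the CNN feature learning and the neighbor-similarity step described in the abstract are omitted, and all hyperparameters and data are arbitrary.

```python
# Minimal matrix-factorization sketch for QoS prediction (no CNN part);
# hyperparameters and data are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
# Sparse user x service QoS matrix; 0 marks an unobserved invocation.
Q = np.array([[0.9, 0.0, 0.4],
              [0.8, 0.3, 0.0],
              [0.0, 0.2, 0.5]])
mask = Q > 0

k, lr, reg = 2, 0.05, 0.01
U = rng.normal(scale=0.1, size=(Q.shape[0], k))   # latent user features
S = rng.normal(scale=0.1, size=(Q.shape[1], k))   # latent service features

for _ in range(2000):
    err = (Q - U @ S.T) * mask                    # error on observed entries only
    U += lr * (err @ S - reg * U)
    S += lr * (err.T @ U - reg * S)

print(np.round(U @ S.T, 2))   # predicted QoS, including the missing entries
```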
Utility of ChatGPT in Clinical Practice
2023
ChatGPT is receiving increasing attention and has a variety of application scenarios in clinical practice. In clinical decision support, ChatGPT has been used to generate accurate differential diagnosis lists, support clinical decision-making, optimize clinical decision support, and provide insights for cancer screening decisions. In addition, ChatGPT has been used for intelligent question-answering to provide reliable information about diseases and medical queries. In terms of medical documentation, ChatGPT has proven effective in generating patient clinical letters, radiology reports, medical notes, and discharge summaries, improving efficiency and accuracy for health care providers. Future research directions include real-time monitoring and predictive analytics, precision medicine and personalized treatment, the role of ChatGPT in telemedicine and remote health care, and integration with existing health care systems. Overall, ChatGPT is a valuable tool that complements the expertise of health care providers and improves clinical decision-making and patient care. However, ChatGPT is a double-edged sword. We need to carefully consider and study the benefits and potential dangers of ChatGPT. In this viewpoint, we discuss recent advances in ChatGPT research in clinical practice and suggest possible risks and challenges of using ChatGPT in clinical practice. It will help guide and support future artificial intelligence research similar to ChatGPT in health.
Journal Article
Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions
by
Powell, Byron J.
,
Abadie, Brenton
,
Waltz, Thomas J.
in
Analysis
,
Cervical cancer
,
Consolidated Framework for Implementation Research
2019
Background
A fundamental challenge of implementation is identifying contextual determinants (i.e., barriers and facilitators) and determining which implementation strategies will address them. Numerous conceptual frameworks (e.g., the Consolidated Framework for Implementation Research; CFIR) have been developed to guide the identification of contextual determinants, and compilations of implementation strategies (e.g., the Expert Recommendations for Implementing Change compilation; ERIC) have been developed which can support selection and reporting of implementation strategies. The aim of this study was to identify which ERIC implementation strategies would best address specific CFIR-based contextual barriers.
Methods
Implementation researchers and practitioners were recruited to participate in an online series of tasks involving matching specific ERIC implementation strategies to specific implementation barriers. Participants were presented with brief descriptions of barriers based on CFIR construct definitions. They were asked to rank up to seven implementation strategies that would best address each barrier. Barriers were presented in a random order, and participants had the option to respond to the barrier or skip to another barrier. Participants were also asked about considerations that most influenced their choices.
Results
Four hundred thirty-five invitations were emailed and 169 (39%) individuals participated. Respondents had considerable heterogeneity in opinions regarding which ERIC strategies best addressed each CFIR barrier. Across the 39 CFIR barriers, an average of 47 different ERIC strategies (SD = 4.8, range 35 to 55) was endorsed at least once per barrier as being one of the seven strategies that would best address it. A tool was developed that allows users to specify high-priority CFIR-based barriers and receive a prioritized list of strategies based on endorsements provided by participants.
Conclusions
The wide heterogeneity of endorsements obtained in this study’s task suggests that there are relatively few consistent relationships between CFIR-based barriers and ERIC implementation strategies. Despite this heterogeneity, a tool aggregating endorsements across multiple barriers can support taking a structured approach to consider a broad range of strategies given those barriers. This study’s results point to the need for a more detailed evaluation of the underlying determinants of barriers and how these determinants are addressed by strategies as part of the implementation planning process.
Journal Article
DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network
by
Cloninger, Alexander
,
Jiang, Tingting
,
Shaham, Uri
in
Algorithms
,
Artificial neural networks
,
Data analysis
2018
Background
Medical practitioners use survival models to explore and understand the relationships between patients’ covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems.
Methods
We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient’s covariates and treatment effectiveness in order to provide personalized treatment recommendations.
Results
We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient's covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient's features and the effectiveness of different treatment options, and how it can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how its personalized treatment recommendations would increase the survival time of a set of patients.
Conclusions
The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient’s characteristics on their risk of failure.
Journal Article
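DeepSurv's training objective is the negative Cox partial log-likelihood, with the linear risk function of the classic Cox model replaced by a neural network. Below is a compact sketch of that loss under the simplifying assumption of no tied event times; the architecture, hyperparameters, and data are illustrative placeholders, not the authors' configuration.

```python
# Sketch of a DeepSurv-style negative Cox partial log-likelihood in PyTorch,
# assuming no tied event times. Network size and data are illustrative only.
import torch
import torch.nn as nn

def cox_ph_loss(risk, time, event):
    """risk: (n,) network outputs; time: (n,) follow-up times;
    event: (n,) 1 if the event was observed, 0 if censored."""
    order = torch.argsort(time, descending=True)      # sort so each risk set is a prefix
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)      # log sum of exp(risk) over the risk set
    return -((risk - log_cumsum) * event).sum() / event.sum()

net = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1))

x = torch.randn(100, 5)                  # toy covariates
time = torch.rand(100)                   # toy follow-up times
event = (torch.rand(100) < 0.7).float()  # ~70% observed events

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = cox_ph_loss(net(x).squeeze(-1), time, event)
    loss.backward()
    opt.step()
print(float(loss))
```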
Research-paper recommender systems: a literature survey
2016
In the last 16 years, more than 200 research articles were published about research-paper recommender systems. We reviewed these articles and present some descriptive statistics in this paper, as well as a discussion about the major advancements and shortcomings and an overview of the most common recommendation concepts and approaches. We found that more than half of the recommendation approaches applied content-based filtering (55 %). Collaborative filtering was applied by only 18 % of the reviewed approaches, and graph-based recommendations by 16 %. Other recommendation concepts included stereotyping, item-centric recommendations, and hybrid recommendations. The content-based filtering approaches mainly utilized papers that the users had authored, tagged, browsed, or downloaded. TF-IDF was the most frequently applied weighting scheme. In addition to simple terms, n-grams, topics, and citations were utilized to model users' information needs. Our review revealed some shortcomings of the current research. First, it remains unclear which recommendation concepts and approaches are the most promising. For instance, researchers reported different results on the performance of content-based and collaborative filtering. Sometimes content-based filtering performed better than collaborative filtering and sometimes it performed worse. We identified three potential reasons for the ambiguity of the results. (A) Several evaluations had limitations. They were based on strongly pruned datasets, few participants in user studies, or did not use appropriate baselines. (B) Some authors provided little information about their algorithms, which makes it difficult to re-implement the approaches. Consequently, researchers use different implementations of the same recommendation approaches, which might lead to variations in the results. (C) We speculated that minor variations in datasets, algorithms, or user populations inevitably lead to strong variations in the performance of the approaches. Hence, finding the most promising approaches is a challenge. As a second limitation, we noted that many authors neglected to take into account factors other than accuracy, for example, overall user satisfaction. In addition, most approaches (81 %) neglected the user-modeling process and did not infer information automatically but let users provide keywords, text snippets, or a single paper as input. Information on runtime was provided for 10 % of the approaches. Finally, few research papers had an impact on research-paper recommender systems in practice. We also identified a lack of authority and long-term research interest in the field: 73 % of the authors published no more than one paper on research-paper recommender systems, and there was little cooperation among different co-author groups. We concluded that several actions could improve the research landscape: developing a common evaluation framework, agreement on the information to include in research papers, a stronger focus on non-accuracy aspects and user modeling, a platform for researchers to exchange information, and an open-source framework that bundles the available recommendation approaches.
Journal Article
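Since content-based filtering with TF-IDF weighting is the most common approach the survey found, a small sketch of that baseline is shown below: rank candidate papers by the cosine similarity of their TF-IDF vectors to a paper in the user's profile. The corpus, query, and ranking choice are invented for illustration, not taken from any surveyed system.

```python
# Minimal TF-IDF content-based recommendation baseline, in the spirit of the
# approaches the survey describes. Corpus and query are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "collaborative filtering for service recommendation",
    "deep matrix factorization for QoS prediction",
    "survey of research paper recommender systems",
    "graph based citation recommendation",
]
user_profile = ["content based research paper recommendation with tf-idf"]

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(corpus)
query = vec.transform(user_profile)

scores = cosine_similarity(query, doc_matrix).ravel()
ranked = sorted(zip(scores, corpus), reverse=True)
for score, title in ranked[:3]:
    print(f"{score:.2f}  {title}")
```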
Understanding determinants of cloud computing adoption using an integrated TAM-TOE model
by
Ramaswamy, R
,
Gangwar, Hemlata
,
Date, Hema
in
Academic achievement
,
Access control
,
Adoption of innovations
2015
Purpose
The purpose of this paper is to integrate the TAM model and the TOE framework for cloud computing adoption at the organizational level.
Design/methodology/approach
A conceptual framework was developed using technological and organizational variables of the TOE framework as external variables of the TAM model, while environmental variables were proposed to have a direct impact on cloud computing adoption. A questionnaire was used to collect data from 280 companies in the IT, manufacturing, and finance sectors in India. The data were analyzed using exploratory and confirmatory factor analyses. Further, structural equation modeling was used to test the proposed model.
Findings
The study identified relative advantage, compatibility, complexity, organizational readiness, top management commitment, and training and education as important variables affecting cloud computing adoption, with perceived ease of use (PEOU) and perceived usefulness (PU) as mediating variables. Also, competitive pressure and trading partner support were found to directly affect cloud computing adoption intentions. The model explained 62 percent of cloud computing adoption.
Practical implications
The model can be used as a guideline to ensure a positive outcome of cloud computing adoption in organizations. It also provides relevant recommendations for achieving a conducive implementation environment for cloud computing adoption.
Originality/value
This study integrates two information technology adoption models to improve the predictive power of the resulting model.
Journal Article
Optimization of Personalized Service System for Student Work in Smart Campus Colleges and Universities with the Aid of Artificial Intelligence
2025
In this paper, the LSTM model and the DTW algorithm are combined to construct a recommendation model based on a combinatorial algorithm. To realize the model, a student employment recommendation service system is designed using the B/S architecture, and the system is applied in colleges and universities to analyze its effect on optimizing personalized services for student work. The recommendation model in this paper converges to a loss rate of about 0.12 when trained for 40 rounds. The algorithm achieves a precision rate, recall rate, and F1 score of 97.15%, 92.26%, and 91.77%, respectively, and maintains a precision rate of approximately 51% when the number of recommended employment units reaches 50. The system can provide accurate employment unit recommendations for students with different characteristics. In addition, students' overall average satisfaction with the system is high, with an average score of 4.314. In conclusion, the employment recommendation service system constructed in this paper provides a scientific and effective solution for optimizing personalized student services.
Journal Article
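The paper pairs an LSTM with dynamic time warping (DTW). DTW itself is compact enough to show directly: the sketch below computes a DTW distance between two one-dimensional sequences (for example, a student's activity profile and an employment unit's requirement profile), which could then feed a nearest-neighbour style ranking. Everything here is an illustrative assumption, not the paper's pipeline.

```python
# Classic dynamic-time-warping distance between two 1-D sequences, as one
# ingredient of an LSTM+DTW recommendation pipeline. Sequences are invented.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

student_profile = [0.2, 0.5, 0.9, 0.7]   # e.g. skill/activity intensities over time
job_profile     = [0.1, 0.6, 0.8, 0.8, 0.6]
print(dtw_distance(student_profile, job_profile))
```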