Catalogue Search | MBRL
Explore the vast range of titles available.
1,440 result(s) for "large language models (LLMs)"
Beyond black-box AI: Interpretable hybrid systems for dementia care
by Malpas, Charles B.; Roberts, Monica R.; Yang, Wenli
in Accuracy; Alzheimer's disease; Artificial intelligence
2026
Foundation models (FMs) are entering Alzheimer's disease and related dementias (ADRD), yet bedside impact remains limited. We analyze the interpretability and reliability gaps that impede adoption, including opaque inference, hallucinations in generative models, and weak causal grounding. We argue for hybrid artificial intelligence (AI) that pairs statistical learning with computable clinical knowledge and clinician oversight. We propose a three-level framework for hybrid AI integration: (1) knowledge retrieval with linked citations; (2) contextualized decision support that combines predictive models with actionable plans derived from expert rules; and (3) adaptive optimization through continuous feedback. We demonstrate the potential of hybrid AI using clinical examples, including individual-specific interpretation of novel biomarkers, integration of multimodal data such as speech and text, and patient-centered digital therapeutics. Finally, we outline pragmatic evaluation aligned with reporting standards, prioritizing adoption, safety, equity, workload, and patient outcomes. This roadmap aims to convert benchmark gains into accountable, interpretable tools for ADRD care.
Highlights:
- Map artificial intelligence (AI) applications across clinical fields and reveal key findings in the current literature.
- Identify how AI supports and advances next-generation technologies in dementia care.
- Present real-world cases where AI assists clinicians in context-specific scenarios.
- Examine major challenges and future opportunities in clinical AI adoption.
Journal Article
RS-LLaVA: A Large Vision-Language Model for Joint Captioning and Question Answering in Remote Sensing Imagery
by Ricci, Riccardo; Bazi, Yakoub; Al Rahhal, Mohamad Mahmoud
in captioning; Data analysis; data collection
2024
In this paper, we delve into the innovative application of large language models (LLMs) and their extension, large vision-language models (LVLMs), in the field of remote sensing (RS) image analysis. We particularly emphasize their multi-tasking potential, with a focus on image captioning and visual question answering (VQA). Specifically, we introduce an improved version of the Large Language and Vision Assistant Model (LLaVA), adapted for RS imagery through a low-rank adaptation approach. To evaluate model performance, we create the RS-instructions dataset, a comprehensive benchmark that integrates four diverse single-task datasets related to captioning and VQA. The experimental results confirm the model's effectiveness, marking a step forward toward the development of efficient multi-task models for RS image analysis.
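The low-rank adaptation (LoRA) approach this abstract mentions can be sketched in a few lines: rather than fine-tuning a full weight matrix W, one trains a low-rank delta B·A added on top of the frozen weight. The shapes, rank, and scaling below are illustrative placeholders, not RS-LLaVA's actual configuration.

```python
import numpy as np

# Minimal sketch of the LoRA idea: keep the pretrained weight W frozen and
# learn a low-rank update (alpha / r) * B @ A with rank r << min(d_out, d_in).
# All dimensions here are hypothetical, chosen only for the demonstration.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 128, 8, 16

W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero init

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass with the LoRA delta folded in: x @ (W + (alpha/r) B A)^T."""
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(4, d_in))              # a toy batch of 4 input vectors
y = lora_forward(x, W, A, B, alpha, r)

# With B initialized to zero, the adapted layer starts exactly at the
# pretrained layer's output, which is the standard LoRA initialization.
assert np.allclose(y, x @ W.T)
print(y.shape)  # (4, 64)
```

Because only A and B are trained, the number of trainable parameters drops from d_out × d_in to r × (d_out + d_in), which is what makes adapting a large vision-language model to a new domain like RS imagery affordable.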
Journal Article
Embodied intelligence in manufacturing: leveraging large language models for autonomous industrial robotics
2025
This paper delves into the potential of Large Language Model (LLM) agents for industrial robotics, with an emphasis on autonomous design, decision-making, and task execution within manufacturing contexts. We propose a comprehensive framework with three core components: (1) matching manufacturing tasks with process parameters, emphasizing the challenges LLM agents face in understanding human-imposed constraints; (2) autonomously designing tool paths, highlighting the agents' proficiency in planar tasks and their difficulties with 3D spatial tasks; and (3) integrating embodied intelligence within industrial robotics simulations, showcasing the adaptability of LLM agents such as GPT-4. Our experimental results underscore the distinctive performance of the GPT-4 agent, especially in the third component, where it excelled at task planning and achieved an 81.88% task-completion success rate across 10 samples. In conclusion, our study accentuates the transformative potential of LLM agents in industrial robotics and suggests specific avenues for their enhancement, such as visual semantic control and real-time feedback loops.
Journal Article
Integrating Retrieval-Augmented Generation with Large Language Models in Nephrology: Advancing Practical Applications
by Suppadungsuk, Supawadee; Cheungpasitporn, Wisit; Garcia Valencia, Oscar A.
in Accuracy; Artificial intelligence; Care and treatment
2024
The integration of large language models (LLMs) into healthcare, particularly in nephrology, represents a significant advancement in applying advanced technology to patient care, medical research, and education. These models have progressed from simple text processors to tools capable of deep language understanding, offering innovative ways to handle health-related data and thus improving the efficiency and effectiveness of medical practice. A significant challenge in medical applications of LLMs is their imperfect accuracy and their tendency to produce hallucinations: outputs that are factually incorrect or irrelevant. This issue is particularly critical in healthcare, where precision is essential, as inaccuracies can undermine the reliability of these models in crucial decision-making processes. To overcome these challenges, various strategies have been developed. One is prompt engineering, such as the chain-of-thought approach, which directs LLMs toward more accurate responses by breaking a problem into intermediate steps or reasoning sequences. Another is retrieval-augmented generation (RAG), which helps address hallucinations by integrating external data, enhancing output accuracy and relevance. RAG is therefore favored for tasks requiring up-to-date, comprehensive information, such as clinical decision-making or educational applications. In this article, we showcase the creation of a specialized ChatGPT model integrated with a RAG system, tailored to align with the KDIGO 2023 guidelines for chronic kidney disease. This example demonstrates its potential in providing specialized, accurate medical advice, marking a step toward more reliable and efficient nephrology practices.
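The RAG pattern this abstract describes follows a simple recipe: retrieve the guideline passages most relevant to a query, then prepend them to the prompt so the LLM answers from grounded text rather than from memory alone. The sketch below uses a toy word-overlap retriever and made-up snippets; it is not the KDIGO 2023 text nor a production retriever, which would use dense embeddings.

```python
import re

# Minimal sketch of retrieval-augmented generation: score a small corpus of
# (hypothetical) guideline snippets against a query, keep the top matches,
# and assemble them into a grounded prompt for the LLM.

def tokenize(text):
    """Lowercase and split into word tokens, dropping punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank passages by Jaccard word overlap with the query; return top k."""
    q = tokenize(query)
    scored = sorted(
        corpus,
        key=lambda p: len(q & tokenize(p)) / len(q | tokenize(p)),
        reverse=True,
    )
    return scored[:k]

corpus = [
    "CKD staging uses eGFR and albuminuria categories.",
    "Hypertension management targets vary by age and comorbidity.",
    "Dialysis initiation depends on symptoms and laboratory trends.",
]

query = "How is CKD staged using eGFR?"
context = retrieve(query, corpus)
prompt = ("Answer using only the context below.\n\nContext:\n"
          + "\n".join(f"- {p}" for p in context)
          + f"\n\nQuestion: {query}")

print(context[0])  # → CKD staging uses eGFR and albuminuria categories.
```

The prompt would then be sent to the LLM; because the answer is constrained to retrieved text, hallucinated claims are easier to detect and the source of each statement can be cited.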
Journal Article
A teologia e os LLMs. Verdade, viés e mediação [Theology and LLMs: Truth, Bias, and Mediation]
2026
This article analyzes the theological and pastoral implications of using large language models in the context of the life and mission of the Catholic Church. It arises from a growing concern: how artificial intelligence systems may influence the mediation of revealed Truth and the integrity of ecclesial Tradition. The main objective is to identify the risks and potentialities of these technologies when applied to theology, particularly regarding doctrinal fidelity, Church authority, and the transmission of Tradition. The research follows an empirical and interdisciplinary methodology, combining a comparative analysis of the responses generated by six language models to twelve questions on doctrinally sensitive topics with theological and ethical reflection. The topics addressed include the appointment of bishops in China, the Church's teaching on abortion, and its position regarding same-sex marriage. The data reveal consistent factual knowledge in many areas, but also instances of thematic censorship, doctrinal instability, and ideological bias, influenced by the cultural or political context of the models. The conceptual framework is grounded in Catholic theology, digital ethics, and recent proposals for AI regulation. The study highlights the importance of theological discernment and proposes the need for responsible oversight, diversity of sources, and the development of models that promote critical use and serve the Church's mission without compromising truth, community, or the common good.
Journal Article
Revolutionizing personalized medicine with generative AI: a systematic review
by Zaki, Nazar; Damseh, Rafat; Ghebrehiwet, Isaias
in Accuracy; Artificial Intelligence; Bioinformatics
2024
Background
Precision medicine, targeting treatments to individual genetic and clinical profiles, faces challenges in data collection, costs, and privacy. Generative AI offers a promising solution by creating realistic, privacy-preserving patient data, potentially revolutionizing patient-centric healthcare.
Objective
This review examines the role of deep generative models (DGMs) in clinical informatics, medical imaging, bioinformatics, and early diagnostics, showcasing their impact on precision medicine.
Methods
Adhering to PRISMA guidelines, the review analyzes studies from databases such as Scopus and PubMed, focusing on AI's impact in precision medicine and DGMs' applications in synthetic data generation.
Results
DGMs, particularly Generative Adversarial Networks (GANs), have improved synthetic data generation, enhancing accuracy and privacy. However, limitations exist, especially in the accuracy of foundation models like Large Language Models (LLMs) in digital diagnostics.
Conclusion
Overcoming data scarcity and ensuring realistic, privacy-safe synthetic data generation are crucial for advancing personalized medicine. Further development of LLMs is essential for improving diagnostic precision. The application of generative AI in personalized medicine is emerging, highlighting the need for more interdisciplinary research to advance this field.
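The core idea the review surveys, generating privacy-preserving synthetic patient data, can be illustrated without a full GAN: fit a simple generative model to real records and sample artificial ones with the same statistics. The multivariate Gaussian below is a stand-in for the deep generative models (GANs, VAEs) the review covers, and the patient features are invented for the example.

```python
import numpy as np

# Toy illustration of synthetic data generation: fit a generative model to
# "real" records, then sample new rows that preserve population statistics
# without reproducing any individual's record. Real systems use DGMs such
# as GANs; a Gaussian fit keeps this sketch self-contained.

rng = np.random.default_rng(42)

# Hypothetical patient features: age (years), systolic BP (mmHg),
# total cholesterol (mg/dL). Entirely simulated.
real = rng.normal(loc=[55, 130, 200], scale=[10, 15, 30], size=(500, 3))

mu = real.mean(axis=0)               # fitted mean vector
cov = np.cov(real, rowvar=False)     # fitted covariance matrix

# Draw synthetic records from the fitted distribution.
synthetic = rng.multivariate_normal(mu, cov, size=500)

print(synthetic.shape)  # (500, 3)
```

Downstream models can then be trained or validated on `synthetic` instead of `real`, which is the privacy argument the review makes; richer DGMs are needed when the data has non-Gaussian structure such as categorical codes or longitudinal trajectories.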
Journal Article
A survey on augmenting knowledge graphs (KGs) with large language models (LLMs): models, evaluation metrics, benchmarks, and challenges
by Ibrahim, Ahmed; Ibrahim, Nourhan; Kashef, Rasha
in Artificial Intelligence; Computer Science; Datasets
2024
Integrating Large Language Models (LLMs) with Knowledge Graphs (KGs) enhances the interpretability and performance of AI systems. This research comprehensively analyzes this integration, classifying approaches into three fundamental paradigms: KG-augmented LLMs, LLM-augmented KGs, and synergized frameworks. The evaluation examines each paradigm's methodology, strengths, drawbacks, and practical applications in real-life scenarios. We also describe essential evaluation metrics and benchmarks for assessing the performance of these integrations, address challenges such as scalability and computational overhead, and outline potential solutions. This analysis underscores the profound impact of these integrations on improving real-time data analysis, enhancing decision-making efficiency, and fostering innovation across various domains.
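The first paradigm the survey names, KG-augmented LLMs, typically works by looking up triples about the entities in a query and serializing them into the prompt as grounding facts. A minimal sketch, with a hypothetical four-triple store and naive entity matching standing in for a real knowledge graph and entity linker:

```python
# Sketch of the KG-augmented LLM paradigm: retrieve triples that mention
# the query's entity and turn them into natural-language facts that are
# prepended to the question. The triple store below is illustrative only.

triples = [
    ("insulin", "treats", "diabetes"),
    ("metformin", "treats", "diabetes"),
    ("diabetes", "is_a", "metabolic disorder"),
    ("aspirin", "treats", "headache"),
]

def kg_lookup(entity, triples):
    """Return every triple in which the entity appears as subject or object."""
    return [t for t in triples if entity in (t[0], t[2])]

def augment_prompt(question, entity, triples):
    """Serialize matching triples into a fact list, then append the question."""
    facts = kg_lookup(entity, triples)
    lines = [f"{s} {p.replace('_', ' ')} {o}." for s, p, o in facts]
    return "Facts:\n" + "\n".join(lines) + "\nQuestion: " + question

prompt = augment_prompt("What drugs treat diabetes?", "diabetes", triples)
print(len(kg_lookup("diabetes", triples)))  # 3
```

Grounding the prompt in explicit triples is what gives this paradigm its interpretability advantage: each statement in the answer can be traced back to a named edge in the graph.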
Journal Article
From Large Language Models to Large Multimodal Models: A Literature Review
2024
With the deepening of research on Large Language Models (LLMs), significant progress has been made in recent years on Large Multimodal Models (LMMs), which are gradually moving toward Artificial General Intelligence. This paper summarizes the recent progress from LLMs to LMMs in a comprehensive and unified way. First, we start with LLMs and outline various conceptual frameworks and key techniques. Then, we focus on the architectural components, training strategies, fine-tuning guidance, and prompt engineering of LMMs, and present a taxonomy of the latest vision-language LMMs. Finally, we summarize both LLMs and LMMs from a unified perspective, analyze the development status of large-scale models from a global viewpoint, and offer potential research directions for large-scale models.
Journal Article
Chain of Thought Utilization in Large Language Models and Application in Nephrology
by Suppadungsuk, Supawadee; Krisanapan, Pajaree; Cheungpasitporn, Wisit
in Artificial Intelligence; Awareness; chain-of-thought prompting
2024
Chain-of-thought prompting significantly enhances the abilities of large language models (LLMs). It not only makes these models more specific and context-aware but also impacts the wider field of artificial intelligence (AI). This approach broadens the usability of AI, increases its efficiency, and aligns it more closely with human thinking and decision-making processes. As the method improves, it is set to become a key element in the future of AI, adding purpose, precision, and ethical consideration to these technologies. In medicine, chain-of-thought prompting is especially beneficial. Its capacity to handle complex information, its logical and sequential reasoning, and its suitability for ethically and context-sensitive situations make it an invaluable tool for healthcare professionals. Its role in enhancing medical care and research is expected to grow as the technique is further developed and applied. Chain-of-thought prompting bridges the gap between AI's traditionally obscure decision-making process and the clear, accountable standards required in healthcare. It does this by emulating a reasoning style familiar to medical professionals, fitting well into their existing practices and ethical codes. While AI transparency remains a complex challenge, the chain-of-thought approach is a significant step toward making AI more comprehensible and trustworthy in medicine. This review focuses on understanding the workings of LLMs, particularly how chain-of-thought prompting can be adapted to nephrology's unique requirements. It also thoroughly examines the ethical aspects, clarity, and future possibilities, offering an in-depth view of the exciting convergence of these areas.
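In practice, chain-of-thought prompting as this review describes it amounts to showing the model a worked exemplar whose answer is spelled out step by step, so that it reasons the same way on the new question. A minimal sketch of the prompt construction; the clinical exemplar and the "doubling criterion" numbers are invented for illustration, not actual nephrology guidance.

```python
# Sketch of few-shot chain-of-thought prompting: prepend an exemplar that
# demonstrates explicit intermediate reasoning steps, then end the prompt
# at "Step 1:" so the model continues in the same step-by-step style.
# The medical content below is a made-up illustration.

cot_exemplar = (
    "Q: A patient's creatinine rose from 1.0 to 2.0 mg/dL in 48 hours. "
    "Does this meet a doubling criterion?\n"
    "A: Step 1: Baseline creatinine is 1.0 mg/dL.\n"
    "Step 2: Current creatinine is 2.0 mg/dL, a 2.0x increase.\n"
    "Step 3: A rise of at least 2x meets the doubling criterion.\n"
    "Answer: Yes.\n"
)

def build_cot_prompt(question, exemplar=cot_exemplar):
    """Prepend the worked exemplar and cue the model to start reasoning."""
    return exemplar + "\nQ: " + question + "\nA: Step 1:"

prompt = build_cot_prompt(
    "Creatinine rose from 0.8 to 1.0 mg/dL in a week. "
    "Does this meet the doubling criterion?"
)
print(prompt.endswith("A: Step 1:"))  # True
```

The visible intermediate steps are also what make the output auditable: a clinician can check each step rather than trusting an unexplained final answer, which is the transparency argument the review develops.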
Journal Article
Artificial Intelligence-Enabled Intelligent Assistant for Personalized and Adaptive Learning in Higher Education
by Sermet, Yusuf; Sajja, Ramteja; Cwiertny, David
in Adaptive learning; Adaptive systems; Artificial intelligence
2024
This paper presents a novel framework, artificial intelligence-enabled intelligent assistant (AIIA), for personalized and adaptive learning in higher education. The AIIA system leverages advanced AI and natural language processing (NLP) techniques to create an interactive and engaging learning platform. This platform is engineered to reduce cognitive load on learners by providing easy access to information, facilitating knowledge assessment, and delivering personalized learning support tailored to individual needs and learning styles. The AIIA’s capabilities include understanding and responding to student inquiries, generating quizzes and flashcards, and offering personalized learning pathways. The research findings have the potential to significantly impact the design, implementation, and evaluation of AI-enabled virtual teaching assistants (VTAs) in higher education, informing the development of innovative educational tools that can enhance student learning outcomes, engagement, and satisfaction. The paper presents the methodology, system architecture, intelligent services, and integration with learning management systems (LMSs) while discussing the challenges, limitations, and future directions for the development of AI-enabled intelligent assistants in education.
Journal Article