102 result(s) for "Lovis, Christian"
Considerations for ethics review of big data health research: A scoping review
Big data trends in biomedical and health research enable large-scale and multi-dimensional aggregation and analysis of heterogeneous data sources, which could ultimately result in preventive, diagnostic and therapeutic benefit. The methodological novelty and computational complexity of big data health research raise novel challenges for ethics review. In this study, we conducted a scoping review of the literature using five databases to identify and map the major challenges of health-related big data for Ethics Review Committees (ERCs) or analogous institutional review boards. A total of 1093 publications were initially identified, 263 of which were included in the final synthesis after abstract and full-text screening performed independently by two researchers. Both a descriptive numerical summary and a thematic analysis were performed on the full texts of all articles included in the synthesis. Our findings suggest that while big data trends in biomedicine hold the potential for advancing clinical research, improving prevention and optimizing healthcare delivery, several epistemic, scientific and normative challenges need careful consideration. These challenges have relevance for both the composition of ERCs and the evaluation criteria that ERC members should employ when assessing the methodological and ethical viability of health-related big data studies. Based on this analysis, we provide some preliminary recommendations on how ERCs could adaptively respond to those challenges. This exploration is designed to synthesize useful information for researchers, ERCs and relevant institutional bodies involved in the conduct and/or assessment of health-related big data research.
Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review
Objective Although artificial intelligence (AI) has demonstrated promise in enhancing breast cancer diagnosis, the implementation of AI algorithms in clinical practice encounters various barriers. This scoping review aims to identify these barriers and facilitators to highlight key considerations for developing and implementing AI solutions in breast cancer imaging. Method A literature search was conducted from 2012 to 2022 in six databases (PubMed, Web of Science, CINAHL, Embase, IEEE, and ArXiv). Articles were included if they described barriers and/or facilitators in the conception or implementation of AI in breast clinical imaging. We excluded research focusing solely on performance, or with data not acquired in a clinical radiology setup and not involving real patients. Results A total of 107 articles were included. We identified six major barriers related to data (B1), black box and trust (B2), algorithms and conception (B3), evaluation and validation (B4), legal, ethical, and economic issues (B5), and education (B6), and five major facilitators covering data (F1), clinical impact (F2), algorithms and conception (F3), evaluation and validation (F4), and education (F5). Conclusion This scoping review highlighted the need to carefully design, deploy, and evaluate AI solutions in clinical practice, involving all stakeholders to yield improvements in healthcare. Clinical relevance statement The identification of barriers and facilitators, with suggested solutions, can guide and inform future research and stakeholders to improve the design and implementation of AI for breast cancer detection in clinical practice. Key Points
• Six major identified barriers were related to data; black box and trust; algorithms and conception; evaluation and validation; legal, ethical, and economic issues; and education.
• Five major identified facilitators were related to data, clinical impact, algorithms and conception, evaluation and validation, and education.
• Coordinated involvement of all stakeholders is required to improve breast cancer diagnosis with AI.
Scientific Evidence for Clinical Text Summarization Using Large Language Models: Scoping Review
Information overload in electronic health records requires effective solutions to alleviate clinicians' administrative tasks. Automatically summarizing clinical text has gained significant attention with the rise of large language models. While individual studies show optimism, a structured overview of the research landscape is lacking. This study aims to present the current state of the art on clinical text summarization using large language models, evaluate the level of evidence in existing research and assess the applicability of performance findings in clinical settings. This scoping review complied with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. Literature published between January 1, 2019, and June 18, 2024, was identified from 5 databases: PubMed, Embase, Web of Science, IEEE Xplore, and ACM Digital Library. Studies were excluded if they did not describe transformer-based models, did not focus on clinical text summarization, did not engage with free-text data, were not original research, were nonretrievable, were not peer-reviewed, or were not in English, French, Spanish, or German. Data related to study context and characteristics, scope of research, and evaluation methodologies were systematically collected and analyzed by 3 authors independently. A total of 30 original studies were included in the analysis. All used observational retrospective designs, mainly using real patient data (n=28, 93%). The research landscape demonstrated a narrow research focus, often centered on summarizing radiology reports (n=17, 57%), primarily involving data from the intensive care unit (n=15, 50%) of US-based institutions (n=19, 73%), in English (n=26, 87%). This focus aligned with the frequent reliance on the open-source Medical Information Mart for Intensive Care dataset (n=15, 50%). 
Summarization methodologies predominantly involved abstractive approaches (n=17, 57%) on single-document inputs (n=4, 13%) with unstructured data (n=13, 43%), yet reporting on methodological details remained inconsistent across studies. Model selection involved both open-source models (n=26, 87%) and proprietary models (n=7, 23%). Evaluation frameworks were highly heterogeneous. All studies conducted internal validation, but external validation (n=2, 7%), failure analysis (n=6, 20%), and patient safety risk analysis (n=1, 3%) were infrequent, and none reported bias assessment. Most studies used both automated metrics and human evaluation (n=16, 53%), while 10 (33%) used only automated metrics, and 4 (13%) used only human evaluation. Key barriers hinder the translation of current research into trustworthy, clinically valid applications. Current research remains exploratory and limited in scope, with many applications yet to be explored. Performance assessments often lack reliability, and clinical impact evaluations are insufficient, raising concerns about model utility, safety, fairness, and data privacy. Advancing the field requires more robust evaluation frameworks, a broader research scope, and a stronger focus on real-world applicability.
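For intuition on the automated metrics these studies rely on, ROUGE-1 is a common choice: unigram overlap between a candidate summary and a reference. A minimal sketch, not taken from the review (the function name and scoring details are illustrative):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between a candidate summary and a
    reference summary, one of the automated metrics commonly used to
    score text summarization systems."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Scores like this are cheap to compute, which helps explain their prevalence; the review's point is that, on their own, they are weak evidence of clinical utility or safety.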
iCHECK-DH: Guidelines and Checklist for the Reporting on Digital Health Implementations
Implementation of digital health technologies has grown rapidly, but many remain limited to pilot studies due to challenges, such as a lack of evidence or barriers to implementation. Overcoming these challenges requires learning from previous implementations and systematically documenting implementation processes to better understand the real-world impact of a technology and identify effective strategies for future implementation. A group of global experts, facilitated by the Geneva Digital Health Hub, developed the Guidelines and Checklist for the Reporting on Digital Health Implementations (iCHECK-DH, pronounced "I checked") to improve the completeness of reporting on digital health implementations. A guideline development group was convened to define key considerations and criteria for reporting on digital health implementations. To ensure the practicality and effectiveness of the checklist, it was pilot-tested by applying it to several real-world digital health implementations, and adjustments were made based on the feedback received. The guiding principle for the development of iCHECK-DH was to identify the minimum set of information needed to comprehensively define a digital health implementation, to support the identification of key factors for success and failure, and to enable others to replicate it in different settings. The result was a 20-item checklist with detailed explanations and examples in this paper. The authors anticipate that widespread adoption will standardize the quality of reporting and, indirectly, improve implementation standards and best practices. Guidelines for reporting on digital health implementations are important to ensure the accuracy, completeness, and consistency of reported information. This allows for meaningful comparison and evaluation of results, transparency, and accountability and informs stakeholder decision-making.
iCHECK-DH facilitates standardization of the way information is collected and reported, improving systematic documentation and knowledge transfer that can lead to the development of more effective digital health interventions and better health outcomes.
Unlocking the Power of Artificial Intelligence and Big Data in Medicine
Data-driven science and its corollaries in machine learning and the wider field of artificial intelligence have the potential to drive important changes in medicine. However, medicine is not a science like any other: It is deeply and tightly bound with a large and wide network of legal, ethical, regulatory, economic, and societal dependencies. As a consequence, scientific and technological progress in handling information and its further processing and cross-linking for decision support and predictive systems must be accompanied by parallel changes in the global environment, with numerous stakeholders, including citizens and society. What can be seen at first glance as a barrier and a mechanism slowing down the progression of data science must, however, be considered an important asset. Only global adoption can transform the potential of big data and artificial intelligence into effective breakthroughs in handling health and medicine. This requires science and society, scientists and citizens, to progress together.
Human-machine interactions with clinical phrase prediction system, aligning with Zipf’s least effort principle?
The essence of language and its evolutionary determinants have long been research subjects with multifaceted explorations. This work reports on a large-scale observational study focused on the language use of clinicians interacting with a phrase prediction system in a clinical setting. By adopting principles of adaptation to evolutionary selection pressure, we attempt to identify the major determinants of language emergence specific to this context. The observed adaptation of clinicians' language behaviour with technology has been examined against properties shaping language use, and more specifically against two driving forces: conciseness and distinctiveness. Our results suggest that users tailor their interactions to meet these forces, minimising the effort required to achieve their objective. At the same time, the study shows that the optimisation is mainly driven by the distinctive nature of interactions, favouring communication accuracy over ease. These results, derived from what is, to our knowledge, the first large-scale observational study of its kind, offer novel fundamental qualitative and quantitative insights into the mechanisms underlying linguistic behaviour among clinicians and its potential implications for language adaptation in human-machine interactions.
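Zipf's least effort principle, which frames the study above, is usually illustrated by the frequency-rank law: in natural text, a word's frequency is roughly proportional to the inverse of its rank, so rank times frequency stays near-constant. A minimal sketch of that profile (the corpus and function name are illustrative, not from the study):

```python
from collections import Counter

def zipf_profile(text: str):
    """Rank words by frequency. Under Zipf's law, frequency is roughly
    proportional to 1/rank, so the rank * frequency product in the last
    column stays near-constant across ranks."""
    counts = Counter(text.lower().split())
    return [(rank, word, freq, rank * freq)
            for rank, (word, freq) in enumerate(counts.most_common(), start=1)]
```

On a real corpus the last column flattens out; deviations from that flat profile could be one way to quantify the conciseness/distinctiveness trade-off the study describes.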
Influence of Pedometer Position on Pedometer Accuracy at Various Walking Speeds: A Comparative Study
Demographic growth in conjunction with the rise of chronic diseases is increasing the pressure on health care systems in most OECD countries. Physical activity is known to be an essential factor in improving or maintaining good health. Walking is especially recommended, as it is an activity that can easily be performed by most people without constraints. Pedometers have been extensively used as an incentive to motivate people to become more active. However, a recognized problem with these devices is their diminishing accuracy associated with decreased walking speed. The arrival on the consumer market of new devices, worn indifferently either at the waist, wrist, or as a necklace, gives rise to new questions regarding their accuracy at these different positions. Our objective was to assess the performance of 4 pedometers (iHealth activity monitor, Withings Pulse O2, Misfit Shine, and Garmin vívofit) and compare their accuracy according to the position worn and at various walking speeds. We conducted this study in a controlled environment with 21 healthy adults required to walk 100 m at 3 different paces (0.4 m/s, 0.6 m/s, and 0.8 m/s) regulated by means of a string attached between their legs at the level of their ankles and a metronome ticking the cadence. To obtain baseline values, we asked the participants to walk 200 m at their own pace. A decrease of accuracy was positively correlated with reduced speed for all pedometers (12% mean error at self-selected pace, 27% mean error at 0.8 m/s, 52% mean error at 0.6 m/s, and 76% mean error at 0.4 m/s).
Although the position of the pedometer on the person did not significantly influence its accuracy, some interesting tendencies can be highlighted in 2 settings: (1) positioning the pedometer at the waist at a speed greater than 0.8 m/s or as a necklace at preferred speed tended to produce lower mean errors than at the wrist position; and (2) at a slow speed (0.4 m/s), pedometers worn at the wrist tended to produce a lower mean error than in the other positions. At all positions, all tested pedometers generated significant errors at slow speeds and therefore cannot be used reliably to evaluate the amount of physical activity for people walking slower than 0.6 m/s (2.16 km/h, or 1.24 mph). At slow speeds, the better accuracy observed with pedometers worn at the wrist could constitute a valuable line of inquiry for the future development of devices adapted to elderly people.
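The accuracy figures above are mean error percentages against the observed step count. A plausible way to compute that kind of figure is a mean absolute percentage error; this sketch is illustrative (the function name and sample data are not the study's code):

```python
def mean_error_pct(measured: list[float], actual: list[float]) -> float:
    """Mean absolute percentage error of pedometer step counts against
    the observed (true) step counts, one pair per participant/trial."""
    errors = [abs(m - a) / a * 100 for m, a in zip(measured, actual)]
    return sum(errors) / len(errors)

# e.g. two trials whose devices under- and over-count by 10%:
# mean_error_pct([90, 110], [100, 100]) -> 10.0
```

Note that taking the absolute value per trial means under- and over-counting do not cancel out, which matters when devices err in both directions across participants.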
Introducing the “AI Language Models in Health Care” Section: Actionable Strategies for Targeted and Wide-Scale Deployment
The realm of health care is on the cusp of a significant technological leap, courtesy of the advancements in artificial intelligence (AI) language models, but ensuring the ethical design, deployment, and use of these technologies is imperative to truly realize their potential in improving health care delivery and promoting human well-being and safety. Indeed, these models have demonstrated remarkable prowess in generating humanlike text, evidenced by a growing body of research and real-world applications. This capability paves the way for enhanced patient engagement, clinical decision support, and a plethora of other applications that were once considered beyond reach. However, the journey from potential to real-world application is laden with challenges ranging from ensuring reliability and transparency to navigating a complex regulatory landscape. There is still a need for comprehensive evaluation and rigorous validation to ensure that these models are reliable, transparent, and ethically sound. This editorial introduces the new section, titled “AI Language Models in Health Care.” This section seeks to create a platform for academics, practitioners, and innovators to share their insights, research findings, and real-world applications of AI language models in health care. The aim is to foster a community that is not only excited about the possibilities but also critically engaged with the ethical, practical, and regulatory challenges that lie ahead.
Extended Grammar of Systematized Nomenclature of Medicine – Clinical Terms for Semantic Representation of Clinical Data: Methodological Study
Interoperability has been a challenge for half a century. Led by an informatics view of the world, the quest for interoperability has evolved from typing and categorizing data to building increasingly complex models. In parallel with the development of these models, the field of terminologies and ontologies emerged to refine granularity and introduce notions of hierarchy. Clinical data models and terminology systems vary in purpose, and their fixed categories shape and constrain representation, which inevitably leads to information loss. Despite these efforts, semantic interoperability remains imperfect. Achieving it is essential for effective data reuse but requires more than rich terminologies and standardized models. This methodological study explores the extent to which the SNOMED CT (Systematized Nomenclature of Medicine - Clinical Terms) compositional grammar can be leveraged and extended to approximate a formal descriptive grammar, allowing clinical reality to be expressed in coherent, meaningful sentences rather than preconstrained categories. Building on a decade of semantic representation efforts at the Geneva University Hospitals, we developed a framework to identify recurring semantic gaps in clinical data. We addressed these gaps by systematically modifying the SNOMED CT Machine Readable Concept Model and extending its Augmented Backus-Naur Form syntax to support necessary grammatical structures and external vocabularies. This approach enabled the semantic representation of over 119,000 distinct data elements covering 13 billion instances. By extending the grammar, we successfully addressed critical limitations such as negation, scalar values, uncertainty, temporality, and the integration of external terminologies like Pango. The extensions proved essential for capturing complex clinical nuances that standard precoordinated concepts could not represent.
Rather than creating a new standard from scratch, extending the grammatical capabilities of SNOMED CT offers a viable pathway toward high-fidelity semantic representation. This work serves as a proof-of-concept that separating the rules of composition from vocabulary allows for a more flexible and robust description of clinical reality, provided that challenges regarding governance and machine readability are addressed.
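For readers unfamiliar with the notation being extended: a standard SNOMED CT compositional grammar expression pairs a focus concept with attribute-value refinements, each written as a concept identifier followed by its term, e.g. `404684003 |clinical finding| : 363698007 |finding site| = 39057004 |pulmonary valve structure|`. A minimal shape check for that standard form, as a sketch of the base grammar only, not of the paper's extensions (the regex and names are illustrative):

```python
import re

# Focus concept, optionally refined by one "attribute = value" pair;
# each element is written as: conceptId |term|
EXPRESSION = re.compile(
    r"^\s*\d+\s*\|[^|]+\|"              # focus concept
    r"(?:\s*:\s*\d+\s*\|[^|]+\|"        # attribute
    r"\s*=\s*\d+\s*\|[^|]+\|)?\s*$"     # value
)

def looks_like_expression(expr: str) -> bool:
    """True if expr matches this simplified one-refinement subset of
    the SNOMED CT compositional grammar."""
    return bool(EXPRESSION.match(expr))
```

The paper's contribution concerns precisely what this base subset cannot express, such as negation, scalar values, uncertainty, temporality, and external vocabularies, which is why the authors extend the ABNF grammar itself rather than the vocabulary.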
Patient Information Summarization in Clinical Settings: Scoping Review
Information overflow, a common problem in the present clinical environment, can be mitigated by summarizing clinical data. Although there are several solutions for clinical summarization, there is a lack of a complete overview of the research relevant to this field. This study aims to identify state-of-the-art solutions for clinical summarization, to analyze their capabilities, and to identify their properties. A scoping review of articles published between 2005 and 2022 was conducted. With a clinical focus, PubMed and Web of Science were queried to find an initial set of reports, later extended by articles found through a chain of citations. The included reports were analyzed to answer the questions of where, what, and how medical information is summarized; whether summarization conserves temporality, uncertainty, and medical pertinence; and how the propositions are evaluated and deployed. To answer how information is summarized, methods were compared through a new framework, "collect-synthesize-communicate," referring to information gathering from data, its synthesis, and communication to the end user. Overall, 128 articles were included, representing various medical fields. Exclusively structured data were used as input in 46.1% (59/128) of papers, text in 41.4% (53/128) of articles, and both in 10.2% (13/128) of papers. Using the proposed framework, 42.2% (54/128) of the records contributed to information collection, 27.3% (35/128) contributed to information synthesis, and 46.1% (59/128) presented solutions for summary communication. Numerous summarization approaches have been presented, including extractive (n=13) and abstractive summarization (n=19); topic modeling (n=5); summary specification (n=11); concept and relation extraction (n=30); visual design considerations (n=59); and complete pipelines (n=7) using information extraction, synthesis, and communication.
Graphical displays (n=53), short texts (n=41), static reports (n=7), and problem-oriented views (n=7) were the most common types of summary communication. Although temporality and uncertainty information were not conserved in most studies (74/128, 57.8% and 113/128, 88.3%, respectively), some studies presented solutions to treat this information. Overall, 115 (89.8%) articles showed results of an evaluation, and methods included evaluations with human participants (median 15, IQR 24 participants): measurements in experiments with human participants (n=31), real situations (n=8), and usability studies (n=28). Methods without human involvement included intrinsic evaluation (n=24), performance on a proxy (n=10), or domain-specific tasks (n=11). Overall, 11 (8.6%) reports described a system deployed in clinical settings. The scientific literature contains many propositions for summarizing patient information but reports very few comparisons of these proposals. This work proposes to compare these algorithms through how they conserve essential aspects of clinical information and through the "collect-synthesize-communicate" framework. We found that current propositions usually address these 3 steps only partially. Moreover, they conserve and use temporality, uncertainty, and pertinent medical aspects to varying extents, and solutions are often preliminary.