Catalogue Search | MBRL
69,955 result(s) for "decision support"
Analytics the right way : a business leader's guide to putting data to productive use
by Wilson, Tim (Analytics practitioner), author; Sutherland, Joe, author
in Decision making; Decision support systems; Business and Management
2025
Organizations have more data at their fingertips than ever, and their ability to put that data to productive use should be a key source of sustainable competitive advantage. Yet, business leaders looking to tap into a steady and manageable stream of 'actionable insights' often, instead, get blasted with a deluge of dashboards, chart-filled slide decks, and opaque machine learning jargon that leaves them asking, 'So what?' 'Analytics the Right Way' provides a clear and practical approach to putting analytics to productive use with a three-part framework that brings together the realities of the modern business environment.
Explainability for artificial intelligence in healthcare: a multidisciplinary perspective
by Vayena, Effy; Blasimme, Alessandro; Frey, Dietmar
in Algorithms; Analysis; Artificial Intelligence
2020
Background
Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.
Methods
Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.
Results
Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.
Conclusions
To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
Journal Article
Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system
by Mauer, Elizabeth; Kaushal, Rainu; Nosal, Sarah
in Adult; Alert fatigue; Alert Fatigue, Health Personnel
2017
Background
Although alert fatigue is blamed for high override rates in contemporary clinical decision support systems, the concept of alert fatigue is poorly defined. We tested hypotheses arising from two possible alert fatigue mechanisms: (A) cognitive overload associated with amount of work, complexity of work, and effort distinguishing informative from uninformative alerts, and (B) desensitization from repeated exposure to the same alert over time.
Methods
Retrospective cohort study using electronic health record data (both drug alerts and clinical practice reminders) from January 2010 through June 2013 from 112 ambulatory primary care clinicians. The cognitive overload hypotheses were that alert acceptance would be lower with higher workload (number of encounters, number of patients), higher work complexity (patient comorbidity, alerts per encounter), and more alerts low in informational value (repeated alerts for the same patient in the same year). The desensitization hypothesis was that, for newly deployed alerts, acceptance rates would decline after an initial peak.
Results
On average, one-quarter of drug alerts received by a primary care clinician, and one-third of clinical reminders, were repeats for the same patient within the same year. Alert acceptance was associated with work complexity and repeated alerts, but not with the amount of work. Likelihood of reminder acceptance dropped by 30% for each additional reminder received per encounter, and by 10% for each five percentage point increase in proportion of repeated reminders. The newly deployed reminders did not show a pattern of declining response rates over time, which would have been consistent with desensitization. Interestingly, nurse practitioners were 4 times as likely to accept drug alerts as physicians.
Conclusions
Clinicians became less likely to accept alerts as they received more of them, particularly more repeated alerts. There was no evidence of an effect of workload per se, or of desensitization over time for a newly deployed alert. Reducing within-patient repeats may be a promising target for reducing alert overrides and alert fatigue.
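The dose-response reported in the Results above (acceptance dropping by 30% for each additional reminder per encounter) can be sketched as a simple multiplicative odds model. Both the baseline acceptance probability and the reading of "dropped by 30%" as a per-reminder odds ratio of 0.7 are illustrative assumptions, not values taken from the study:

```python
def acceptance_probability(baseline_p, extra_reminders, or_per_reminder=0.7):
    """Acceptance probability after applying a per-reminder odds ratio.

    baseline_p: assumed acceptance probability at the reference alert load.
    or_per_reminder: hypothetical odds ratio per additional reminder
    (0.7 corresponds to the 30% drop described in the Results).
    """
    odds = baseline_p / (1 - baseline_p) * or_per_reminder ** extra_reminders
    return odds / (1 + odds)

# Illustration with an assumed 50% baseline acceptance rate:
for k in range(4):
    print(k, round(acceptance_probability(0.5, k), 3))
```

Under these assumptions, acceptance falls from 0.5 at baseline to roughly 0.41, 0.33, and 0.26 with one, two, and three extra reminders per encounter, which is the kind of steady decline the study associates with repeated alerts.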
Journal Article
Insulin dose optimization using an automated artificial intelligence-based decision support system in youths with type 1 diabetes
by Nimri, Revital; Danne, Thomas; Schatz, Desmond
in Adolescent
2020
Despite the increasing adoption of insulin pumps and continuous glucose monitoring devices, most people with type 1 diabetes do not achieve their glycemic goals. This could be related to a lack of expertise or inadequate time for clinicians to analyze complex sensor-augmented pump data. We tested whether frequent insulin dose adjustments guided by an automated artificial intelligence-based decision support system (AI-DSS) are as effective and safe as those guided by physicians in controlling glucose levels. ADVICE4U was a six-month, multicenter, multinational, parallel, randomized controlled, non-inferiority trial in 108 participants with type 1 diabetes, aged 10–21 years and using insulin pump therapy (ClinicalTrials.gov no. NCT03003806). Participants were randomized 1:1 to receive remote insulin dose adjustment every three weeks guided either by an AI-DSS (AI-DSS arm, n = 54) or by physicians (physician arm, n = 54). The results for the primary efficacy measure, the percentage of time spent within the target glucose range of 70–180 mg/dl (3.9–10.0 mmol/l), in the AI-DSS arm were statistically non-inferior to those in the physician arm (50.2 ± 11.1% versus 51.6 ± 11.3%, respectively, P < 1 × 10^−7). The percentage of readings below 54 mg/dl (<3.0 mmol/l) in the AI-DSS arm was statistically non-inferior to that in the physician arm (1.3 ± 1.4% versus 1.0 ± 0.9%, respectively, P < 0.0001). Three severe adverse events related to diabetes (two severe hypoglycemia, one diabetic ketoacidosis) were reported in the physician arm and none in the AI-DSS arm. In conclusion, use of an automated decision support tool for optimizing insulin pump settings was non-inferior to intensive insulin titration provided by physicians from specialized academic diabetes centers.
The randomized-controlled trial ADVICE4U demonstrates non-inferiority of an automated AI-based decision support system compared with advice from expert physicians for optimal insulin dosing in youths with type 1 diabetes.
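The two glucose units quoted in the abstract are related by the molar mass of glucose (about 180.16 g/mol), so 1 mmol/l ≈ 18.02 mg/dl. A minimal sketch checking the thresholds quoted above against that conversion:

```python
MGDL_PER_MMOLL = 18.016  # approximate; glucose molar mass ~180.16 g/mol


def mgdl_to_mmoll(mgdl):
    """Convert a glucose concentration from mg/dl to mmol/l."""
    return mgdl / MGDL_PER_MMOLL


# Thresholds from the abstract: 70-180 mg/dl target range, 54 mg/dl cutoff
print(round(mgdl_to_mmoll(70), 1))   # 3.9
print(round(mgdl_to_mmoll(180), 1))  # 10.0
print(round(mgdl_to_mmoll(54), 1))   # 3.0
```

The rounded results reproduce the paired values given in the abstract (3.9–10.0 mmol/l and <3.0 mmol/l).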
Journal Article
Applications of Clinical Decision Support Systems in Diabetes Care: Scoping Review
2023
Providing comprehensive and individualized diabetes care remains a significant challenge in the face of the increasing complexity of diabetes management and a lack of specialized endocrinologists to support diabetes care. Clinical decision support systems (CDSSs) are progressively being used to improve diabetes care, while many health care providers lack awareness and knowledge about CDSSs in diabetes care. A comprehensive analysis of the applications of CDSSs in diabetes care is still lacking. This review aimed to summarize the research landscape, clinical applications, and impact on both patients and physicians of CDSSs in diabetes care. We conducted a scoping review following the Arksey and O’Malley framework. A search was conducted in 7 electronic databases to identify the clinical applications of CDSSs in diabetes care up to June 30, 2022. Additional searches were conducted for conference abstracts from the period of 2021-2022. Two researchers independently performed the screening and data charting processes. Of 11,569 retrieved studies, 85 (0.7%) were included for analysis. Research interest is growing in this field, with 45 (53%) of the 85 studies published in the past 5 years. Among the 58 (68%) out of 85 studies disclosing the underlying decision-making mechanism, most CDSSs (44/58, 76%) were knowledge based, while the number of non-knowledge-based systems has been increasing in recent years. Among the 81 (95%) out of 85 studies disclosing application scenarios, the majority of CDSSs were used for treatment recommendation (63/81, 78%). Among the 39 (46%) out of 85 studies disclosing physician user types, primary care physicians (20/39, 51%) were the most common, followed by endocrinologists (15/39, 39%) and nonendocrinology specialists (8/39, 21%). CDSSs significantly improved patients’ blood glucose, blood pressure, and lipid profiles in 71% (45/63), 67% (12/18), and 38% (8/21) of the studies, respectively, with no increase in the risk of hypoglycemia. 
CDSSs are both effective and safe in improving diabetes care, implying that they could be a potentially reliable assistant in diabetes care, especially for physicians with limited experience and patients with limited access to medical resources.
Journal Article
Artificial intelligence–enabled electrocardiograms for identification of patients with low ejection fraction: a pragmatic, randomized clinical trial
by Molling, Paul E.; Friedman, Paul A.; Thacher, Thomas D.
in Adolescent
2021
We conducted a pragmatic clinical trial to assess whether an electrocardiogram (ECG)-based, artificial intelligence (AI)-powered clinical decision support tool enables early diagnosis of low ejection fraction (EF), a condition that is underdiagnosed but treatable. In this trial (NCT04000087), 120 primary care teams from 45 clinics or hospitals were cluster-randomized to either the intervention arm (access to AI results; 181 clinicians) or the control arm (usual care; 177 clinicians). ECGs were obtained as part of routine care from a total of 22,641 adults (N = 11,573 intervention; N = 11,068 control) without prior heart failure. The primary outcome was a new diagnosis of low EF (≤50%) within 90 days of the ECG. The trial met the prespecified primary endpoint, demonstrating that the intervention increased the diagnosis of low EF in the overall cohort (1.6% in the control arm versus 2.1% in the intervention arm, odds ratio (OR) 1.32 (1.01–1.61), P = 0.007) and among those identified as having a high likelihood of low EF (that is, positive AI-ECG, 6% of the overall cohort) (14.5% in the control arm versus 19.5% in the intervention arm, OR 1.43 (1.08–1.91), P = 0.01). In the overall cohort, echocardiogram utilization was similar between the two arms (18.2% control versus 19.2% intervention, P = 0.17); for patients with positive AI-ECGs, more echocardiograms were obtained in the intervention arm than in the control arm (38.1% control versus 49.6% intervention, P < 0.001). These results indicate that use of an AI algorithm based on ECGs can enable the early diagnosis of low EF in the setting of routine primary care.
In a pragmatic, cluster-randomized clinical trial, use of an AI algorithm for interpretation of electrocardiograms in primary care practices increased the frequency at which impaired heart function was diagnosed.
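As a rough arithmetic check on the effect sizes quoted in the abstract, the odds ratios can be recomputed from the paired event proportions (1.6% versus 2.1% overall; 14.5% versus 19.5% in the positive AI-ECG subgroup). This is only a sanity check on the rounded figures, not a re-analysis of the trial data:

```python
def odds_ratio(p_treatment, p_control):
    """Odds ratio between two event proportions."""
    return (p_treatment / (1 - p_treatment)) / (p_control / (1 - p_control))


# Overall cohort: 2.1% low-EF diagnoses with AI access vs 1.6% usual care
print(round(odds_ratio(0.021, 0.016), 2))  # 1.32, matching the reported OR

# Positive AI-ECG subgroup: 19.5% vs 14.5%
print(round(odds_ratio(0.195, 0.145), 2))  # 1.43, matching the reported OR
```

Both recomputed values agree with the odds ratios stated in the abstract.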
Journal Article