Catalogue Search | MBRL
111,690 result(s) for "support system"
Beyond PTSD
2018,2019
Impulsivity, poor judgment, moodiness, risky behavior. "You don't understand." "I don't care." "Whatever, bro." Engaging and working with teenagers is tough. Typically, we attribute this to the storms of adolescence. But what if some of the particularly problematic behaviors we see in teens -- self-destructive behaviors, academic issues, substance abuse, reluctance to engage in therapy or treatment -- point to unspoken trauma?
Teens nationwide struggle with traumatic stress related to poverty, abuse, neglect, bullying, traumatic loss, and interpersonal or community violence. But youth are also generally reluctant to disclose or discuss experiences of traumatic stress, and adults working with these youth may not immediately perceive the connection between prior trauma and the teen's current risky or concerning behavior. Beyond PTSD: Helping and Healing Teens Exposed to Trauma helps adults recognize and understand traumatized youth, and provides concrete strategies for talking to and engaging the teen, overcoming resistance, and finding the most appropriate evidence-based treatment approach for them.
Nearly twenty contributors draw on their extensive and varied experience working everywhere from schools and hospitals to child welfare programs, juvenile justice facilities, and pediatric offices, and with families, to provide concrete tips for managing the challenges and opportunities of working with trauma-exposed adolescents. Chapters present trauma-informed approaches to youth with aggression, suicide and self-injury, psychosis, and school refusal; youth with physical or developmental disabilities or medical comorbidities; those in juvenile justice or child welfare; teen parents; and LGBTQ youth, among others.
Throughout the text, tables compare different types of trauma therapies and provide information about how treatments might be adapted to fit a specific teen or setting. Readers will also find "real life" case vignettes and concrete, specific clinical pearls -- even examples of language to use -- to demonstrate how to work effectively with difficult-to-engage teens with complex symptoms and behaviors.
Written to be practical and accessible, this book equips clinicians, social workers, pediatricians, school counselors, and even parents with the information, context, and strategies they need to help the teen in front of them.
Explainability for artificial intelligence in healthcare: a multidisciplinary perspective
by Vayena, Effy; Blasimme, Alessandro; Frey, Dietmar
in Algorithms, Analysis, Artificial Intelligence
2020
Background
Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.
Methods
Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.
Results
Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. When looking at the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.
Conclusions
To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
Journal Article
Artificial intelligence–enabled electrocardiograms for identification of patients with low ejection fraction: a pragmatic, randomized clinical trial
by Molling, Paul E.; Friedman, Paul A.; Thacher, Thomas D.
in 692/699/75/230, 692/700/228, Adolescent
2021
We conducted a pragmatic clinical trial to assess whether an electrocardiogram (ECG)-based, artificial intelligence (AI)-powered clinical decision support tool enables early diagnosis of low ejection fraction (EF), a condition that is underdiagnosed but treatable. In this trial (NCT04000087), 120 primary care teams from 45 clinics or hospitals were cluster-randomized to either the intervention arm (access to AI results; 181 clinicians) or the control arm (usual care; 177 clinicians). ECGs were obtained as part of routine care from a total of 22,641 adults (N = 11,573 intervention; N = 11,068 control) without prior heart failure. The primary outcome was a new diagnosis of low EF (≤50%) within 90 days of the ECG. The trial met the prespecified primary endpoint, demonstrating that the intervention increased the diagnosis of low EF in the overall cohort (1.6% in the control arm versus 2.1% in the intervention arm, odds ratio (OR) 1.32 (1.01–1.61), P = 0.007) and among those who were identified as having a high likelihood of low EF (that is, positive AI-ECG, 6% of the overall cohort) (14.5% in the control arm versus 19.5% in the intervention arm, OR 1.43 (1.08–1.91), P = 0.01). In the overall cohort, echocardiogram utilization was similar between the two arms (18.2% control versus 19.2% intervention, P = 0.17); for patients with positive AI-ECGs, more echocardiograms were obtained in the intervention arm than in the control arm (38.1% control versus 49.6% intervention, P < 0.001). These results indicate that use of an AI algorithm based on ECGs can enable the early diagnosis of low EF in patients in the setting of routine primary care.
In a pragmatic, cluster-randomized clinical trial, use of an AI algorithm for interpretation of electrocardiograms in primary care practices increased the frequency at which impaired heart function was diagnosed.
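The headline effect sizes above can be sanity-checked from the quoted event rates. A minimal Python sketch computing the crude odds ratio from arm-level proportions (the trial's cluster-adjusted estimate need not match a crude calculation exactly, though here it reproduces the reported values):

```python
def odds_ratio(p_intervention: float, p_control: float) -> float:
    """Crude odds ratio from two event proportions."""
    odds_i = p_intervention / (1 - p_intervention)
    odds_c = p_control / (1 - p_control)
    return odds_i / odds_c

# Overall cohort: 2.1% vs 1.6% new low-EF diagnoses
print(round(odds_ratio(0.021, 0.016), 2))  # matches the reported OR 1.32

# Positive AI-ECG subgroup: 19.5% vs 14.5%
print(round(odds_ratio(0.195, 0.145), 2))  # matches the reported OR 1.43
```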
Journal Article
A Conceptual Framework to Study the Implementation of Clinical Decision Support Systems (BEAR): Literature Review and Concept Mapping
by Zanoletti-Mannello, Manuela; Kane-Gill, Sandra L; Boyce, Richard D
in Acceptance, Algorithms, Attitudes
2020
The implementation of clinical decision support systems (CDSSs) as an intervention to foster clinical practice change is affected by many factors. Key factors include those associated with behavioral change and those associated with technology acceptance. However, the literature regarding these subjects is fragmented and originates from two traditionally separate disciplines: implementation science and technology acceptance.
Our objective is to propose an integrated framework that bridges the gap between the behavioral change and technology acceptance aspects of the implementation of CDSSs.
We employed an iterative process to map constructs from four contributing frameworks: the Theoretical Domains Framework (TDF); the Consolidated Framework for Implementation Research (CFIR); the Human, Organization, and Technology-fit framework (HOT-fit); and the Unified Theory of Acceptance and Use of Technology (UTAUT), together with the findings of 10 literature reviews identified through a systematic review-of-reviews approach.
The resulting framework comprises 22 domains: agreement with the decision algorithm; attitudes; behavioral regulation; beliefs about capabilities; beliefs about consequences; contingencies; demographic characteristics; effort expectancy; emotions; environmental context and resources; goals; intentions; intervention characteristics; knowledge; memory, attention, and decision processes; patient-health professional relationship; patient's preferences; performance expectancy; role and identity; skills, ability, and competence; social influences; and system quality. We demonstrate the use of the framework by providing examples from two research projects.
We proposed BEAR (BEhavior and Acceptance fRamework), an integrated framework that bridges the gap between behavioral change and technology acceptance, thereby widening the view established by current models.
Journal Article
Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system
by Mauer, Elizabeth; Kaushal, Rainu; Nosal, Sarah
in Adult, Alert fatigue, Alert Fatigue, Health Personnel
2017
Background
Although alert fatigue is blamed for high override rates in contemporary clinical decision support systems, the concept of alert fatigue is poorly defined. We tested hypotheses arising from two possible alert fatigue mechanisms: (A) cognitive overload associated with amount of work, complexity of work, and effort distinguishing informative from uninformative alerts, and (B) desensitization from repeated exposure to the same alert over time.
Methods
Retrospective cohort study using electronic health record data (both drug alerts and clinical practice reminders) from January 2010 through June 2013 from 112 ambulatory primary care clinicians. The cognitive overload hypotheses were that alert acceptance would be lower with higher workload (number of encounters, number of patients), higher work complexity (patient comorbidity, alerts per encounter), and more alerts low in informational value (repeated alerts for the same patient in the same year). The desensitization hypothesis was that, for newly deployed alerts, acceptance rates would decline after an initial peak.
Results
On average, one-quarter of drug alerts received by a primary care clinician, and one-third of clinical reminders, were repeats for the same patient within the same year. Alert acceptance was associated with work complexity and repeated alerts, but not with the amount of work. Likelihood of reminder acceptance dropped by 30% for each additional reminder received per encounter, and by 10% for each five percentage point increase in proportion of repeated reminders. The newly deployed reminders did not show a pattern of declining response rates over time, which would have been consistent with desensitization. Interestingly, nurse practitioners were 4 times as likely to accept drug alerts as physicians.
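The per-reminder decline reported above can be read as a multiplicative effect. A hypothetical sketch, assuming the 30% drop applies to the odds of acceptance (a logistic-regression-style effect; the study's exact model specification is not given in the abstract, and the baseline odds here are invented):

```python
def acceptance_after(base_odds: float, extra_reminders: int,
                     per_reminder_factor: float = 0.70) -> float:
    """Acceptance probability after applying a multiplicative
    per-reminder decline in the odds of acceptance."""
    odds = base_odds * per_reminder_factor ** extra_reminders
    return odds / (1 + odds)

# Hypothetical baseline: 1:1 odds (50% acceptance), three extra reminders
print(round(acceptance_after(1.0, 3), 3))  # acceptance falls to about 0.255
```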
Conclusions
Clinicians became less likely to accept alerts as they received more of them, particularly more repeated alerts. There was no evidence of an effect of workload per se, or of desensitization over time for a newly deployed alert. Reducing within-patient repeats may be a promising target for reducing alert overrides and alert fatigue.
Journal Article
Insulin dose optimization using an automated artificial intelligence-based decision support system in youths with type 1 diabetes
by Nimri, Revital; Danne, Thomas; Schatz, Desmond
in 692/699/2743/137/1418, 692/700/565, Adolescent
2020
Despite the increasing adoption of insulin pumps and continuous glucose monitoring devices, most people with type 1 diabetes do not achieve their glycemic goals [1]. This could be related to a lack of expertise or inadequate time for clinicians to analyze complex sensor-augmented pump data. We tested whether frequent insulin dose adjustments guided by an automated artificial intelligence-based decision support system (AI-DSS) are as effective and safe as those guided by physicians in controlling glucose levels. ADVICE4U was a six-month, multicenter, multinational, parallel, randomized controlled, non-inferiority trial in 108 participants with type 1 diabetes, aged 10–21 years and using insulin pump therapy (ClinicalTrials.gov no. NCT03003806). Participants were randomized 1:1 to receive remote insulin dose adjustment every three weeks, guided either by an AI-DSS (AI-DSS arm, n = 54) or by physicians (physician arm, n = 54). The results for the primary efficacy measure, the percentage of time spent within the target glucose range of 70–180 mg/dl (3.9–10.0 mmol/l), in the AI-DSS arm were statistically non-inferior to those in the physician arm (50.2 ± 11.1% versus 51.6 ± 11.3%, respectively, P < 1 × 10^−7). The percentage of readings below 54 mg/dl (<3.0 mmol/l) within the AI-DSS arm was statistically non-inferior to that in the physician arm (1.3 ± 1.4% versus 1.0 ± 0.9%, respectively, P < 0.0001). Three severe adverse events related to diabetes (two severe hypoglycemia, one diabetic ketoacidosis) were reported in the physician arm and none in the AI-DSS arm. In conclusion, use of an automated decision support tool for optimizing insulin pump settings was non-inferior to intensive insulin titration provided by physicians from specialized academic diabetes centers.
The randomized-controlled trial ADVICE4U demonstrates non-inferiority of an automated AI-based decision support system compared with advice from expert physicians for optimal insulin dosing in youths with type 1 diabetes.
Journal Article
Reinforcement Learning for Clinical Decision Support in Critical Care: Comprehensive Review
by Feng, Mengling; Ngiam, Kee Yuan; Sun, Xingzhi
in Algorithms, Application, Artificial intelligence
2020
Decision support systems based on reinforcement learning (RL) have been implemented to facilitate the delivery of personalized care. This paper aimed to provide a comprehensive review of RL applications in the critical care setting.
This review aimed to survey the literature on RL applications for clinical decision support in critical care and to provide insight into the challenges of applying various RL models.
We performed an extensive search of the following databases: PubMed, Google Scholar, Institute of Electrical and Electronics Engineers (IEEE), ScienceDirect, Web of Science, Medical Literature Analysis and Retrieval System Online (MEDLINE), and Excerpta Medica Database (EMBASE). Studies published over the past 10 years (2010-2019) that have applied RL for critical care were included.
We included 21 papers and found that RL has been used to optimize the choice of medications, drug dosing, and timing of interventions and to target personalized laboratory values. We further compared and contrasted the design of the RL models and the evaluation metrics for each application.
RL has great potential for enhancing decision making in critical care. Challenges regarding RL system design, evaluation metrics, and model choice exist. More importantly, further work is required to validate RL in authentic clinical environments.
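As an illustration of the kind of model the reviewed studies apply, here is a toy tabular Q-learning sketch for a dosing-style decision problem. All states, actions, and rewards are invented for illustration and come from none of the reviewed papers:

```python
import random

# Toy tabular Q-learning for a hypothetical dosing problem.
STATES = range(5)          # e.g. discretized severity levels
ACTIONS = range(3)         # e.g. low / medium / high dose
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Hypothetical environment: reward is higher when the dose
    matches the severity band; the patient state drifts randomly."""
    reward = 1.0 if action == min(state // 2, 2) else -0.5
    next_state = max(0, min(4, state + random.choice([-1, 0, 1])))
    return next_state, reward

random.seed(0)
state = 2
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.choice(list(ACTIONS))
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, r = step(state, action)
    # standard Q-learning update toward the bootstrapped target
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = nxt

# greedy policy learned per state
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

Real critical-care applications learn from retrospective EHR trajectories rather than a simulator, which is exactly why the review stresses off-policy evaluation and validation in authentic clinical environments.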
Journal Article
Impact of climate change on agricultural production; Issues, challenges, and opportunities in Asia
by Mansour, Fatma; Ahmad, Saeed; Ali, Shafaqat
in Adaptation, Agricultural practices, Agricultural production
2022
Agricultural production is under threat from climate change in food-insecure regions, especially in Asian countries. Various climate-driven extremes, i.e., drought, heat waves, erratic and intense rainfall patterns, storms, floods, and emerging insect pests, have adversely affected the livelihoods of farmers. Future climatic predictions show a significant increase in temperature and more erratic, intense rainfall, although variability exists among climatic patterns in the prediction of climate extremes. For the mid-century period (2040–2069), a rise of 2.8°C in maximum temperature and 2.2°C in minimum temperature is projected for Pakistan. To respond to the adverse effects of climate change, there is a need to optimize climate-smart, resilient agricultural practices and technology for sustainable productivity. Therefore, a case study was carried out to quantify climate change effects on rice and wheat crops and to develop adaptation strategies for the rice-wheat cropping system during the mid-century (2040–2069), as these two crops contribute significantly to food production. To quantify the adverse impacts of climate change in farmers' fields, a multidisciplinary approach consisting of five climate models (GCMs), two crop models (DSSAT and APSIM), and an economic model [Trade-off Analysis, Minimum Data Model Approach (TOAMD)] was used in this case study. DSSAT predicted a yield reduction of 15.2% in rice and 14.1% in wheat, while APSIM showed a yield reduction of 17.2% in rice and 12% in wheat. Adaptation technology, through modification of crop management practices such as sowing time and density, nitrogen, and irrigation application, has the potential to enhance the overall productivity and profitability of the rice-wheat cropping system under climate change scenarios.
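The quoted model projections translate into absolute yields once a baseline is assumed. A small sketch using illustrative baseline yields (the 4.0 and 3.0 t/ha baselines are hypothetical, not from the study; only the percentage reductions are quoted from the abstract):

```python
# Projected mid-century (2040-2069) yield reductions quoted in the abstract.
reductions = {
    "DSSAT": {"rice": 0.152, "wheat": 0.141},
    "APSIM": {"rice": 0.172, "wheat": 0.120},
}
# Hypothetical baseline yields in t/ha, for illustration only.
baseline = {"rice": 4.0, "wheat": 3.0}

for model, cuts in reductions.items():
    for crop, cut in cuts.items():
        projected = baseline[crop] * (1 - cut)
        print(f"{model} {crop}: {projected:.2f} t/ha")
```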
Moreover, this paper reviews current literature on the adverse impacts of climate change on agricultural productivity and the associated issues, challenges, and opportunities for sustainable agricultural productivity to ensure food security in Asia. Opportunities such as altering sowing time and planting density of crops, crop rotation with legumes, agroforestry, mixed livestock systems, climate-resilient plant, livestock, and fish breeds, farming of monogastric livestock, early warning systems and decision support systems, carbon sequestration, climate-, water-, energy-, and soil-smart technologies, and promotion of biodiversity have the potential to reduce the negative effects of climate change.
Journal Article
A systematic review and taxonomy of explanations in decision support and recommender systems
2017
With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today’s increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.
Journal Article
Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI
by Wheatstone, Peter; Kader, Rawen; Higham, Janet
in Accuracy, Artificial Intelligence, Check lists
2022
A growing number of artificial intelligence (AI)-based clinical decision support systems are showing promising performance in preclinical, in silico evaluation, but few have yet demonstrated real benefit to patient care. Early stage clinical evaluation is important to assess an AI system's actual clinical performance at small scale, ensure its safety, evaluate the human factors surrounding its use, and pave the way to further large scale trials. However, the reporting of these early studies remains inadequate. The present statement provides a multistakeholder, consensus-based reporting guideline for the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI). We conducted a two-round, modified Delphi process to collect and analyse expert opinion on the reporting of early clinical evaluation of AI systems. Experts were recruited from 20 predefined stakeholder categories. The final composition and wording of the guideline were determined at a virtual consensus meeting. The checklist and the Explanation & Elaboration (E&E) sections were refined based on feedback from a qualitative evaluation process. In total, 123 experts participated in the first round of the Delphi process, 138 in the second, 16 in the consensus meeting, and 16 in the qualitative evaluation. The DECIDE-AI reporting guideline comprises 17 AI-specific reporting items (made up of 28 subitems) and 10 generic reporting items, with an E&E paragraph provided for each. Through consultation and consensus with a range of stakeholders, we have developed a guideline comprising key items that should be reported in early stage clinical studies of AI-based decision support systems in healthcare. By providing an actionable checklist of minimal reporting items, the DECIDE-AI guideline will facilitate the appraisal of these studies and the replicability of their findings.
Journal Article