Search Results
42,223 result(s) for "Decision Making - ethics"
Ethics in health administration : a practical approach for decision makers
"Given the many new advances in technology as well as the uncertainty of the future of health care in a time of change, today's healthcare administrators require a strong foundation in practice-based ethics to confront the challenges of the current healthcare landscape. Ethics in Health Administration, Fourth Edition, translates the principles and practice of ethics into usable information for application to the real world of healthcare administration and the critical issues faced by today's healthcare administrators"--Provided by publisher.
Coaching doctors to improve ethical decision-making in adult hospitalized patients potentially receiving excessive treatment. The CODE stepped-wedge cluster randomized controlled trial
Purpose: The aim of this study was to assess whether coaching doctors to enhance ethical decision-making in teams improves (1) goal-oriented care, operationalized via written do-not-intubate and do-not-attempt-cardiopulmonary-resuscitation (DNI-DNACPR) orders, in adult patients potentially receiving excessive treatment (PET) during their first hospital stay, and (2) the quality of the ethical climate.
Methods: We carried out a stepped-wedge cluster randomized controlled trial in the medical intensive care unit (ICU) and 9 referring internal medicine departments of Ghent University Hospital between February 2022 and February 2023. Doctors and nurses in charge of hospitalized patients filled out the ethical decision-making climate questionnaire (EDMCQ) before and after the study, and anonymously identified PET via an electronic alert during the entire study period. All departments were randomly assigned to a 4-month coaching intervention. At least one month of coaching was compared with less than one month of coaching and usual care. The first primary endpoint was the incidence of written DNI-DNACPR decisions. The second primary endpoint was the EDMCQ before and after the study period. Because clinicians identified less PET than required to detect a difference in written DNI-DNACPR decisions, a post-hoc analysis of the overall population was performed. To reduce type I errors, we further restricted the analysis to one of our predefined secondary endpoints (mortality up to 1 year).
Results: Of the 442 and 423 clinicians working before and after the study period, respectively 270 (61%) and 261 (61.7%) filled out the EDMCQ. Fifty of the 93 (53.7%) doctors participated in the coaching for a mean (standard deviation [SD]) of 4.36 (2.55) sessions. Of the 7254 patients, 125 (1.7%) were identified as PET, with 16 missing outcome data. Twenty-six of the PET patients and 624 of the overall population already had a written DNI-DNACPR decision at study entry, leaving 83 and 6614 patients in the main and post-hoc analyses, respectively. The estimated incidence of written DNI-DNACPR decisions in the intervention vs. control arm was 29.7% vs. 19.6% (odds ratio 4.24, 95% confidence interval 4.21–4.27; P < 0.001) in PET and 3.4% vs. 1.9% (1.65, 1.12–2.43; P = 0.011) in the overall study population. The estimated mortality at one year was 85% vs. 83.7% (hazard ratio 2.76, 1.26–6.04; P = 0.011) and 14.5% vs. 15.1% (0.89, 0.72–1.09; P = 0.251), respectively. The mean difference in EDMCQ before and after the study period was 0.02 points (−0.18 to 0.23; P = 0.815).
Conclusion: This study suggests that coaching doctors in team-based ethical decision-making safely improves goal-oriented care, operationalized via written DNI-DNACPR decisions, in hospitalized patients, though without concomitantly improving the quality of the ethical climate.
AI and XAI second opinion: the danger of false confirmation in human–AI collaboration
Can AI substitute for a human physician’s second opinion? Recently, the Journal of Medical Ethics published two contrasting views: Kempt and Nagel advocate for using artificial intelligence (AI) for a second opinion except when its conclusions significantly diverge from the initial physician’s, while Jongsma and Sand argue for a second human opinion irrespective of AI’s concurrence or dissent. The crux of this debate hinges on the prevalence and impact of ‘false confirmation’—a scenario where AI erroneously validates an incorrect human decision. These errors seem exceedingly difficult to detect, reminiscent of heuristics akin to confirmation bias. However, this debate has yet to engage with the emergence of explainable AI (XAI), which elaborates on why the AI tool reaches its diagnosis. To progress this debate, we outline a framework for conceptualising decision-making errors in physician–AI collaborations. We then review emerging evidence on the magnitude of false confirmation errors. Our simulations show that they are likely to be pervasive in clinical practice, decreasing diagnostic accuracy to between 5% and 30%. We conclude with a pragmatic approach to employing AI as a second opinion, emphasising the need for physicians to make clinical decisions before consulting AI; employing nudges to increase awareness of false confirmations; and critically engaging with XAI explanations. This approach underscores the necessity for a cautious, evidence-based methodology when integrating AI into clinical decision-making.
Patients, doctors and risk attitudes
A lively topic of debate in decision theory over recent years concerns our understanding of the different risk attitudes exhibited by decision makers. There is ample evidence that risk-averse and risk-seeking behaviours are widespread, and a growing consensus that such behaviour is rationally permissible. In the context of clinical medicine, this matter is complicated by the fact that healthcare professionals must often make choices for the benefit of their patients, but the norms of rational choice are conventionally grounded in a decision maker’s own desires, beliefs and actions. The presence of both doctor and patient raises the question of whose risk attitude matters for the choice at hand and what to do when these diverge. Must doctors make risky choices when treating risk-seeking patients? Ought they to be risk averse in general when choosing on behalf of others? In this paper, I will argue that healthcare professionals ought to adopt a deferential approach, whereby it is the risk attitude of the patient that matters in medical decision making. I will show how familiar arguments for widely held anti-paternalistic views about medicine can be straightforwardly extended to include not only patients’ evaluations of possible health states, but also their attitudes to risk. However, I will also show that this deferential view needs further refinement: patients’ higher-order attitudes towards their risk attitudes must be considered in order to avoid some counterexamples and to accommodate different views about what sort of attitudes risk attitudes actually are.
Imperfect duties of management : the ethical norm of managerial decisions
This book uses Kant's idea of imperfect duty to extend the theory of the firm. Unlike perfect duty which is contractual or otherwise legally binding, imperfect duty consists of those commitments of choice that pursue some moral value, but that have practical limits to their pursuit. The author presents a broad view of the imperfect duties of management, defined as a nexus of all commitments to do good involving relations internal and external to the firm. This book has major implications for research in business ethics and allows critical insights into managerial decision making.
It is not about autonomy: realigning the ethical debate on substitute judgement and AI preference predictors in healthcare
This article challenges two dominant assumptions in the current ethical debate over the use of algorithmic Personalised Patient Preference Predictors (P4) in substitute judgement for incapacitated patients. First, I question the belief that the autonomy of a patient who no longer has decision-making capacity can be meaningfully respected through a P4-empowered substitute judgement. Second, I critique the assumption that respect for autonomy can be reduced to merely satisfying a patient’s individual treatment preferences. Both assumptions, I argue, are problematic: respect for autonomy cannot be equated with simply delivering the ‘right’ treatments, and expanding the normative scope of agency beyond first-person decisions creates issues for standard clinical decision-making. I suggest, instead, that the development of these algorithmic tools can be justified by achieving other moral goods, such as honouring a patient’s unique identity or reducing surrogate decision-makers’ burdens. This conclusion, I argue, should reshape the ethical debate around not just the future development and use of P4-like systems, but also on how substitute judgement is currently understood and justified in clinical medicine.
Blind spots : why we fail to do what's right and what to do about it
When confronted with an ethical dilemma, most of us like to think we would stand up for our principles. But we are not as ethical as we think we are. In 'Blind Spots', the authors examine the ways we overestimate our ability to do what is right and how we act unethically without meaning to.
Using the Recommended Summary Plan for Emergency Care and Treatment (ReSPECT) in a community setting: does it facilitate best interests decision-making?
In the UK, the Recommended Summary Plan for Emergency Care and Treatment (ReSPECT) is a widely used process, designed to facilitate shared decision-making between a clinician and a patient or, if the patient lacks capacity to participate in the conversation, a person close to the patient. A key outcome of the ReSPECT process is a set of recommendations, recorded on the patient-held ReSPECT form, that reflect the conversation. In an emergency, these recommendations are intended to inform clinical decision-making, and thereby enable the attending clinician—usually a general practitioner (GP) or paramedic—to act in the patient’s best interests. This study is the first to explore the extent to which ReSPECT recommendations realise their goal of informing best interests decision-making in community contexts. Using a modified framework analysis approach, we triangulate interviews with patients and their relatives, GPs and nurses and care home staff. Our findings show that inconsistent practices around recording patient wishes, diverging interpretations of the meaning and authority of recommendations and different situational contexts may affect the interpretation and enactment of ReSPECT recommendations. Enacting ReSPECT recommendations in an emergency can be fraught with complexity, particularly when attending clinicians need to interpret recommendations that did not anticipate the current emergency. This may lead to decision-making that compromises the patient’s best interests. We suggest that recording patients’ values and preferences in greater detail on ReSPECT forms may help overcome this challenge, in providing attending clinicians with richer contextual information through which to interpret treatment recommendations.