Search Results
4,775 result(s) for "Machine Learning - ethics"
Balancing ethics and statistics: machine learning facilitates highly accurate classification of mice according to their trait anxiety with reduced sample sizes
Understanding how individual differences influence vulnerability to disease and responses to pharmacological treatments represents one of the main challenges in behavioral neuroscience. Nevertheless, inter-individual variability and sex-specific patterns have long been disregarded in preclinical studies of anxiety and stress disorders. Recently, we established a model of trait anxiety that leverages the heterogeneity of freezing responses following auditory aversive conditioning to cluster female and male mice into sustained and phasic endophenotypes. However, unsupervised clustering required larger sample sizes to produce robust results, which conflicts with animal welfare principles. Here, we pooled data from 470 animals to train and validate supervised machine learning (ML) models for classifying mice into sustained and phasic responders in a sex-specific manner. We observed high accuracy and generalizability of our predictive models on independent animal batches. In contrast to data-driven clustering, the performance of the ML classifiers remained unaffected by sample size and by modifications to the conditioning protocol. Therefore, ML-assisted techniques not only enhance the robustness and replicability of behavioral phenotyping results but also promote the principle of reducing animal numbers in future studies.
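The sex-specific supervised classification described in this abstract might be sketched as follows. This is a minimal nearest-centroid illustration on toy data: the feature values, labels, and model choice are assumptions for illustration, not the authors' actual data or pipeline.

```python
# Hypothetical sketch: per-sex supervised classification of "sustained"
# vs "phasic" freezing profiles, replacing unsupervised clustering.
# All numbers below are invented toy data, not the study's measurements.

def train_centroids(samples):
    """Compute the mean feature vector (centroid) for each class label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean)."""
    def sqdist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda lbl: sqdist(centroids[lbl]))

# Toy freezing-response curves (early, mid, late freezing fractions):
# sustained responders keep freezing; phasic responders extinguish.
train = {
    "female": [([0.8, 0.7, 0.6], "sustained"), ([0.8, 0.4, 0.1], "phasic")],
    "male":   [([0.9, 0.8, 0.7], "sustained"), ([0.9, 0.5, 0.2], "phasic")],
}

# One model per sex, mirroring the sex-specific analysis in the abstract.
models = {sex: train_centroids(samples) for sex, samples in train.items()}
print(classify(models["female"], [0.75, 0.65, 0.55]))  # prints "sustained"
```

Because the fitted model is just a fixed set of centroids, classifying a new animal does not depend on batch size, which loosely mirrors the abstract's point that supervised classification, unlike clustering, stays stable at small sample sizes.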
Do no harm: a roadmap for responsible machine learning for health care
Interest in machine-learning applications within medicine has been growing, but few studies have progressed to deployment in patient care. We present a framework, context and ultimately guidelines for accelerating the translation of machine-learning-based interventions in health care. To be successful, translation will require a team of engaged stakeholders and a systematic process from beginning (problem formulation) to end (widespread deployment).
Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness
Machine learning, artificial intelligence, and other modern statistical methods are providing new opportunities to operationalise previously untapped and rapidly growing sources of data for patient benefit. Despite much promising research currently being undertaken, particularly in imaging, the literature as a whole lacks transparency, clear reporting to facilitate replicability, exploration for potential ethical concerns, and clear demonstrations of effectiveness. Among the many reasons why these problems exist, one of the most important (for which we provide a preliminary solution here) is the current lack of best practice guidance specific to machine learning and artificial intelligence. However, we believe that interdisciplinary groups pursuing research and impact projects involving machine learning and artificial intelligence for health would benefit from explicitly addressing a series of questions concerning transparency, reproducibility, ethics, and effectiveness (TREE). The 20 critical questions proposed here provide a framework for research groups to inform the design, conduct, and reporting; for editors and peer reviewers to evaluate contributions to the literature; and for patients, clinicians and policy makers to critically appraise where new findings may deliver patient benefit.
Clinical applications of machine learning algorithms: beyond the black box
To maximise the clinical benefits of machine learning algorithms, we need to rethink our approach to explanation, argue David Watson and colleagues
Implementing Machine Learning in Health Care — Addressing Ethical Challenges
We need to consider the ethical challenges inherent in implementing machine learning in health care if its benefits are to be realized. Some of these challenges are straightforward, whereas others have less obvious risks but raise broader ethical concerns.
Computing schizophrenia: ethical challenges for machine learning in psychiatry
Recent advances in machine learning (ML) promise far-reaching improvements across medical care, not least within psychiatry. While to date no psychiatric application of ML constitutes standard clinical practice, it seems crucial to get ahead of these developments and address their ethical challenges early on. Following a short general introduction concerning ML in psychiatry, we do so by focusing on schizophrenia as a paradigmatic case. Based on recent research employing ML to further the diagnosis, treatment, and prediction of schizophrenia, we discuss three hypothetical case studies of ML applications with a view to their ethical dimensions. Throughout this discussion, we follow the principlist framework of Tom Beauchamp and James Childress to analyse potential problems in detail. In particular, we structure our analysis around their principles of beneficence, non-maleficence, respect for autonomy, and justice. We conclude with a call for cautious optimism concerning the implementation of ML in psychiatry, provided that close attention is paid to the particular intricacies of psychiatric disorders and that its success is evaluated on the basis of tangible clinical benefit for patients.
Ethical dilemmas posed by mobile health and machine learning in psychiatry research
The application of digital technology to psychiatry research is rapidly leading to new discoveries and capabilities in the field of mobile health. However, the increase in opportunities to passively collect vast amounts of detailed information on study participants coupled with advances in statistical techniques that enable machine learning models to process such information has raised novel ethical dilemmas regarding researchers' duties to: (i) monitor adverse events and intervene accordingly; (ii) obtain fully informed, voluntary consent; (iii) protect the privacy of participants; and (iv) increase the transparency of powerful, machine learning models to ensure they can be applied ethically and fairly in psychiatric care. This review highlights emerging ethical challenges and unresolved ethical questions in mobile health research and provides recommendations on how mobile health researchers can address these issues in practice. Ultimately, the hope is that this review will facilitate continued discussion on how to achieve best practice in mobile health research within psychiatry.
Applications and ethics of computer-designed organisms
Computer-designed organisms — biobots, such as xenobots — are at the intersection of synthetic developmental biology and machine learning. This technology, which enables the evolution of real, living forms to take place in a virtual world, is part of an emerging research field with applications in biomedicine and engineering, and raises profound philosophical questions. Michael Levin and colleagues discuss how computer-designed organisms are driving this new field and examine the associated ethical and philosophical questions.
Materiality and practicality: a response to "are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?"
In his recent paper, Hatherley discusses four reasons given to support mandatory disclosure of the use of machine learning technologies in healthcare, and provides counters to each of them. While I agree with Hatherley's conclusion that such disclosures should not be mandatory (at least not in an upfront fashion), I raise some problems with his counters to the materiality argument. Finally, I raise another potential problem that exists in a democratic society: even if the arguments of Hatherley (and of other authors who share his conclusions) are sound, the simple fact that most people might wish for such disclosures to be made could be a compelling enough reason to make them mandatory.