36 result(s) for "Jongsma, Karin R."
The ethics of genome editing in non-human animals
In recent years, new genome editing technologies have emerged that can edit the genome of non-human animals with progressively increasing efficiency. Despite ongoing academic debate about the ethical implications of these technologies, no comprehensive overview of this debate exists. To address this gap in the literature, we conducted a systematic review of the reasons reported in the academic literature for and against the development and use of genome editing technologies in animals. Most included articles were written by academics from the biomedical or animal sciences. The reported reasons related to seven themes: human health, efficiency, risks and uncertainty, animal welfare, animal dignity, environmental considerations and public acceptability. Our findings illuminate several key considerations about the academic debate, including a low disciplinary diversity in the contributing academics, a scarcity of systematic comparisons of potential consequences of using these technologies, an underrepresentation of animal interests, and a disjunction between the public and academic debate on this topic. As such, this article can be considered a call for a broad range of academics to get increasingly involved in the discussion about genome editing, to incorporate animal interests and systematic comparisons, and to further discuss the aims and methods of public involvement. This article is part of a discussion meeting issue 'The ecology and evolution of prokaryotic CRISPR-Cas adaptive immune systems'.
Ethics parallel research: an approach for (early) ethical guidance of biomedical innovation
Background Our human societies, and certainly also (bio)medicine, are more and more permeated with technology. There seems to be an increasing awareness among bioethicists that an effective and comprehensive approach is needed to ethically guide these emerging biomedical innovations into society. Such an approach has not yet been spelled out for bioethics, while there are frequent calls for ethical guidance of biomedical innovation, also by biomedical researchers themselves. New and emerging biotechnologies require anticipation of possible effects and implications, meaning the scope is not evaluation after a technology has been fully developed, nor reflection on hypothetical technologies, but real-time engagement with a real biotechnology. Main text In this paper we aim to substantiate and discuss six ingredients that we increasingly see adopted by ethicists and that together constitute "ethics parallel research". This approach serves two aims: guiding the development process of technologies in biomedicine and providing input for the normative evaluation of such technologies. The six ingredients of ethics parallel research are: (1) disentangling wicked problems, (2) upstream or midstream ethical analysis, (3) ethics from within, (4) inclusion of empirical research, (5) public participation and (6) mapping societal impacts, including hard and soft impacts. We will draw on gene editing, organoid technology and artificial intelligence as examples to illustrate these six ingredients. Conclusion Ethics parallel research brings together these ingredients to ethically analyse technological development and guide it proactively, or in parallel. It widens the ethicist's role and judgements towards a more anticipatory and constructively guiding role.
Ethics parallel research is characterised by a constructive rather than a purely critical perspective; it focusses on developing best practices rather than outlining worst practices, and draws on insights from the social sciences and the philosophy of technology.
Developer perspectives on the ethics of AI-driven neural implants: a qualitative study
Convergence of neural implants with artificial intelligence (AI) presents opportunities for the development of novel neural implants and improvement of existing neurotechnologies. While such technological innovations carry great promise for the restoration of neurological functions, they also raise ethical challenges. Developers of AI-driven neural implants possess valuable knowledge on the possibilities, limitations and challenges raised by these innovations; yet their perspectives are underrepresented in the academic literature. This study aims to explore the perspectives of developers of neurotechnology to outline ethical implications of three AI-driven neural implants: a cochlear implant, a visual neural implant, and a motor intention decoding speech brain-computer interface. We conducted semi-structured focus groups with developers (n = 19) of AI-driven neural implants. Respondents shared ethically relevant considerations about AI-driven neural implants that we clustered into three themes: (1) design aspects; (2) challenges in clinical trials; (3) impact on users and society. Developers considered accuracy and reliability of AI-driven neural implants conditional for users' safety, authenticity, and mental privacy. These needs were magnified by the convergence with AI. Yet, the need for accuracy and reliability may also conflict with potential benefits of AI in terms of efficiency and complex data interpretation. We discuss strategies to mitigate these challenges.
The Effect of Artificial Intelligence on Patient-Physician Trust: Cross-Sectional Vignette Study
Clinical decision support systems (CDSSs) based on routine care data, using artificial intelligence (AI), are increasingly being developed. Previous studies focused largely on the technical aspects of using AI, but the acceptability of these technologies by patients remains unclear. We aimed to investigate whether patient-physician trust is affected when medical decision-making is supported by a CDSS. We conducted a vignette study among the patient panel (N=860) of the University Medical Center Utrecht, the Netherlands. Patients were randomly assigned into 4 groups: either the intervention or control groups of the high-risk or low-risk cases. In both the high-risk and low-risk case groups, a physician made a treatment decision with (intervention groups) or without (control groups) the support of a CDSS. Using a questionnaire with a 7-point Likert scale, with 1 indicating "strongly disagree" and 7 indicating "strongly agree," we collected data on patient-physician trust in 3 dimensions: competence, integrity, and benevolence. We assessed differences in patient-physician trust between the control and intervention groups per case using Mann-Whitney U tests and potential effect modification by the participant's sex, age, education level, general trust in health care, and general trust in technology using multivariate analyses of (co)variance. In total, 398 patients participated. In the high-risk case, median perceived competence and integrity were lower in the intervention group compared to the control group but not statistically significant (5.8 vs 5.6; P=.16 and 6.3 vs 6.0; P=.06, respectively). However, the effect of a CDSS application on the perceived competence of the physician depended on the participant's sex (P=.03). Although no between-group differences were found in men, in women, the perception of the physician's competence and integrity was significantly lower in the intervention compared to the control group (P=.009 and P=.01, respectively).
In the low-risk case, no differences in trust between the groups were found. However, increased trust in technology positively influenced the perceived benevolence and integrity in the low-risk case (P=.009 and P=.04, respectively). We found that, in general, patient-physician trust was high. However, our findings indicate a potentially negative effect of AI applications on the patient-physician relationship, especially among women and in high-risk situations. Trust in technology, in general, might increase the likelihood of embracing the use of CDSSs by treating professionals.
Experts’ moral views on gene drive technologies: a qualitative interview study
Background Gene drive technologies (GDTs) promote the rapid spread of a particular genetic element within a population of non-human organisms. Potential applications of GDTs include the control of insect vectors, invasive species and agricultural pests. Whether, and if so, under what conditions, GDTs should be deployed is hotly debated. Although broad stances in this debate have been described, the convictions that inform the moral views of the experts shaping these technologies and related policies have not been examined in depth in the academic literature. Methods In this qualitative study, we interviewed GDT experts (n = 33) from different disciplines to identify and better understand their moral views regarding these technologies. The pseudonymized transcripts were analyzed thematically. Results The respondents’ moral views were principally influenced by their attitudes towards (1) the uncertainty related to GDTs; (2) the alternatives to which they should be compared; and (3) the role humans should have in nature. Respondents agreed there is epistemic uncertainty related to GDTs, identified similar knowledge gaps, and stressed the importance of realistic expectations in discussions on GDTs. They disagreed about whether uncertainty provides a rationale to refrain from field trials (‘risks of intervention’ stance) or to proceed with phased testing to obtain more knowledge given the harms of the status quo (‘risks of non-intervention’ stance). With regards to alternatives to tackle vector-borne diseases, invasive species and agricultural pests, respondents disagreed about which alternatives should be considered (un)feasible and (in)sufficiently explored: conventional strategies (‘downstream solutions’ stance) or systematic changes to health care, political and agricultural systems (‘upstream solutions’ stance). 
Finally, respondents held different views on nature and whether the use of GDTs is compatible with humans’ role in nature (‘interference’ stance) or not (‘non-interference stance’). Conclusions This interview study helps to disentangle the debate on GDTs by providing a better understanding of the moral views of GDT experts. The obtained insights provide valuable stepping-stones for a constructive debate about underlying value conflicts and call attention to topics that deserve further (normative) reflection. Further evaluation of these issues can facilitate the debate on and responsible development of GDTs.
Ethical, legal, and sociocultural considerations in neural device explantation: a systematic review
Implantable neural devices, including brain-computer interfaces and spinal cord stimulators, hold significant therapeutic promise for conditions such as paralysis and chronic pain. However, the novelty of these technologies introduces unique ethical challenges. While much of the existing literature emphasizes development-related concerns such as device safety, the ethical issues surrounding explantation remain relatively underexplored. We conducted a systematic review to identify ethical, legal, and sociocultural considerations relevant to the explantation of neural devices. The review applied the IEEE BRAIN Neuroethics framework as a guiding structure for the categorization of the themes. A subsequent thematic analysis was performed to categorize and synthesize findings across studies. Thematic analysis revealed that medical motives were the predominant factor in discussions of explantation, with 83% of studies citing medical complications as a central concern. Additional themes identified included changes in cognition and behavior, emotional well-being, lack of therapeutic benefit, identity, financial issues, autonomy, post-trial considerations, and neurorights. Our findings underscore the multifaceted nature of neural device explantation, extending beyond purely medical considerations to include psychological, financial, legal, and sociocultural dimensions. These results highlight the necessity of interdisciplinary approaches to adequately address the broad spectrum of challenges associated with explantation.
The ethics and economics of organoid commercialization: potential donors’ perspectives
Advancing organoid technology requires human tissue donations and collaboration between researchers and commercial parties. However, many potential donors have reservations about commercial involvement in organoid research. To better understand these reservations, we conducted four focus groups with potential donors. Two focus groups were held with individuals with cystic fibrosis (n = 10). One focus group included individuals with neurodegenerative diseases (Parkinson's or Huntington's disease) (n = 4) and the other consisted of individuals with neurological disease (epilepsy) (n = 5). Four themes were identified: (1) benefits and concerns regarding commercial involvement, (2) trust in involved parties in research, (3) control over commercial parties and (4) appreciation of donors. To address these themes, we recommend that researchers and commercial parties communicate transparently and effectively, actively engage and appreciate donors, implement robust oversight mechanisms and (re)establish trust and trustworthiness through responsible practices. These considerations can help researchers and commercial parties work toward responsible and sustainable organoid research.
Better governance starts with better words: why responsible human tissue research demands a change of language
The rise of precision medicine has led to an unprecedented focus on human biological material in biomedical research. In addition, rapid advances in stem cell technology, regenerative medicine and synthetic biology are leading to more complex human tissue structures and new applications with tremendous potential for medicine. While promising, these developments also raise several ethical and practical challenges which have been the subject of extensive academic debate. These debates have led to increasing calls for longitudinal governance arrangements between tissue providers and biobanks that go beyond the initial moment of obtaining consent, such as closer involvement of tissue providers in what happens to their tissue, and more active participatory approaches to the governance of biobanks. However, in spite of these calls, such measures are being adopted slowly in practice, and there remains a strong tendency to focus on the consent procedure as the tool for addressing the ethical challenges of contemporary biobanking. In this paper, we argue that one of the barriers to this transition is the dominant language pervading the field of human tissue research, in which the provision of tissue is phrased as a ‘donation’ or ‘gift’, and tissue providers are referred to as ‘donors’. Because of the performative qualities of language, the effect of using ‘donation’ and ‘donor’ shapes a professional culture in which biobank participants are perceived as passive providers of tissue free from further considerations or entitlements. This hampers the kind of participatory approaches to governance that are deemed necessary to adequately address the ethical challenges currently faced in human tissue research. Rather than reinforcing this idea through language, we need to pave the way for the kind of participatory approaches to governance that are being extensively argued for by starting with the appropriate terminology.
SERIES: eHealth in primary care. Part 2: Exploring the ethical implications of its application in primary care practice
eHealth promises to increase self-management and personalised medicine and improve cost-effectiveness in primary care. Paired with these promises are ethical implications, as eHealth will affect patients' and primary care professionals' (PCPs) experiences, values, norms, and relationships. We argue what ethical implications related to the impact of eHealth on four vital aspects of primary care could (and should) be anticipated. (1) eHealth influences dealing with predictive and diagnostic uncertainty. Machine-learning based clinical decision support systems offer (seemingly) objective, quantified, and personalised outcomes. However, they also introduce new loci of uncertainty and subjectivity. The decision-making process becomes opaque, and algorithms can be invalid, biased, or even discriminatory. This has implications for professional responsibilities and judgments, justice, autonomy, and trust. (2) eHealth affects the roles and responsibilities of patients because it can stimulate self-management and autonomy. However, autonomy can also be compromised, e.g. in cases of persuasive technologies, and eHealth can increase existing health disparities. (3) The delegation of tasks to a network of technologies and stakeholders requires attention to responsibility gaps and new responsibilities. (4) The triangular patient-eHealth-PCP relationship requires a reconsideration of the role of human interaction and 'humanness' in primary care, as well as of shaping shared decision-making. Our analysis is an essential first step towards setting up a dedicated ethics research agenda that should be examined in parallel to the development and implementation of eHealth. The ultimate goal is to inspire the development of practice-specific ethical recommendations.
A mobile revolution for healthcare? Setting the agenda for bioethics
Mobile health (mHealth) is rapidly being implemented and changing our ways of doing, understanding and organising healthcare. mHealth includes wearable devices as well as apps that track fitness, offer wellness programmes or provide tools to manage chronic conditions. According to industry and policy makers, these systems offer efficient and cost-effective solutions for disease prevention and self-management. While this development raises many ethically relevant questions, so far mHealth has received only little attention in medical ethics. This paper provides an overview of bioethical issues raised by mHealth and aims to draw scholarly attention to the ethical significance of its promises and challenges. We show that the overly positive promises of mHealth need to be nuanced and their desirability critically assessed. Finally, we offer suggestions to bioethicists to engage with this emerging trend in healthcare to develop mHealth to its best potential in a morally sound way.