Catalogue Search | MBRL
Explore the vast range of titles available.
2,747 result(s) for "Computer Security - ethics"
Beginning ethical hacking with Python
\"Learn the basics of ethical hacking and gain insights into the logic, algorithms, and syntax of Python. This book will set you up with a foundation that will help you understand the advanced concepts of hacking in the future\"--Back cover.
Ethical and social reflections on the proposed European Health Data Space
by Slokenberga, Santa; Mascalzoni, Deborah; Mežinska, Signe
in COVID-19, Ethics, Health care policy
2024
The COVID-19 pandemic demonstrated the benefits of international data sharing. Data sharing enabled health care policy makers to make decisions based on real-time data, enabled the tracking of the virus, and, importantly, enabled the development of vaccines that were crucial to mitigating the impact of the virus. Such sharing is not the norm, however, because it must navigate complex ethical and legal rules, in particular the fragmented application of the General Data Protection Regulation (GDPR). The introduction of the draft regulation for a European Health Data Space (EHDS) in May 2022 seeks to address some of these legal issues. If passed, it will create an obligation to share electronic health data for certain secondary purposes. While there is a clear need to address the legal complexities involved in data sharing, it is critical that any proposed reforms are in line with ethical principles and the expectations of data subjects. In this paper we offer a critique of the EHDS and some recommendations for this evolving regulatory space.
Journal Article
Revolutionizing Medical Data Sharing Using Advanced Privacy-Enhancing Technologies: Technical, Legal, and Ethical Synthesis
by Scheibner, James; Troncoso-Pastoriza, Juan Ramón; Vayena, Effy
in Aggregates, Biomedical research, Cancer
2021
Multisite medical data sharing is critical in modern clinical practice and medical research. The challenge is to conduct data sharing that preserves individual privacy and data utility. The shortcomings of traditional privacy-enhancing technologies mean that institutions rely upon bespoke data sharing contracts. The lengthy process and administration induced by these contracts increases the inefficiency of data sharing and may disincentivize important clinical treatment and medical research. This paper provides a synthesis between 2 novel advanced privacy-enhancing technologies—homomorphic encryption and secure multiparty computation (defined together as multiparty homomorphic encryption). These privacy-enhancing technologies provide a mathematical guarantee of privacy, with multiparty homomorphic encryption providing a performance advantage over separately using homomorphic encryption or secure multiparty computation. We argue multiparty homomorphic encryption fulfills legal requirements for medical data sharing under the European Union’s General Data Protection Regulation which has set a global benchmark for data protection. Specifically, the data processed and shared using multiparty homomorphic encryption can be considered anonymized data. We explain how multiparty homomorphic encryption can reduce the reliance upon customized contractual measures between institutions. The proposed approach can accelerate the pace of medical research while offering additional incentives for health care and research institutes to employ common data interoperability standards.
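The core idea behind the secure multiparty computation half of this approach can be illustrated with additive secret sharing. The sketch below is hypothetical (site names, counts, and the modulus are invented) and does not reproduce the multiparty homomorphic encryption scheme the article proposes; it only shows how an aggregate can be computed without any single party seeing another site's raw value.

```python
# Illustrative sketch of additive secret sharing, one building block of
# secure multiparty computation. All values below are invented.
import random

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value: int, n_parties: int) -> list[int]:
    """Split a value into n random shares that sum to it modulo MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MODULUS

# Three hospitals each hold a private patient count they do not want to reveal.
site_counts = {"site_a": 120, "site_b": 87, "site_c": 204}

# Each site splits its count into one share per computing party.
all_shares = [share(v, 3) for v in site_counts.values()]

# Each party sums only the shares it received; no party sees a raw count.
partial_sums = [sum(col) % MODULUS for col in zip(*all_shares)]

# Combining the partial sums reveals only the aggregate, here 411.
print(reconstruct(partial_sums))
```

Multiparty homomorphic encryption, as described in the article, combines this kind of distributed trust with homomorphic operations on encrypted data, so the aggregate itself can also be computed under encryption.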
Journal Article
The ethical challenges in the integration of artificial intelligence and large language models in medical education: A scoping review
by Yan, Xiaodan; Li, Xinrui; Lai, Han
in Artificial intelligence, Artificial Intelligence - ethics, Autonomy
2025
With the rapid development of artificial intelligence (AI), large language models (LLMs) such as ChatGPT have shown potential in medical education, offering personalized learning experiences. However, this integration raises ethical concerns, including privacy, autonomy, and transparency. This study employed a scoping review methodology, systematically searching literature published between January 2010 and August 31, 2024, across three major databases: PubMed, Embase, and Web of Science. Through rigorous screening, 50 articles that met the inclusion criteria were ultimately selected from an initial pool of 1,192 records. During data processing, the Kimi AI tool was used to facilitate preliminary literature screening, extraction of key information, and construction of content frameworks. Data reliability was ensured through a stringent cross-verification process in which two independent researchers validated all AI-generated content against the original source materials. The study delineates the ethical challenges and opportunities arising from the integration of AI and LLMs into medical education, identifying seven core ethical dimensions: privacy and data security, algorithmic bias, accountability attribution, fairness assurance, technological reliability, application dependency, and patient autonomy. Corresponding mitigation strategies were formulated for each challenge. Future research should prioritize establishing dedicated ethical frameworks and application guidelines for AI in medical education while maintaining sustained attention to the long-term ethical implications of these technologies in healthcare domains.
Journal Article
The social implications of using drones for biodiversity conservation
2015
Unmanned aerial vehicles, or 'drones', appear to offer a flexible, accurate and affordable solution to some of the technical challenges of nature conservation monitoring and law enforcement. However, little attention has been given to their possible social impacts. In this paper, I review the possible social impacts of using drones for conservation, including on safety, privacy, psychological wellbeing, data security and the wider understanding of conservation problems. I argue that negative social impacts are probable under some circumstances and should be of concern for conservation for two reasons: (1) because conservation should follow good ethical practice; and (2) because negative social impacts could undermine conservation effectiveness in the long term. The paper concludes with a call for empirical research to establish whether the identified social risks of drones occur in reality and how they could be mitigated, and for self-regulation of drone use by the conservation sector to ensure good ethical practice and minimise the risk of unintended consequences.
Journal Article
Decentralising the Self – Ethical Considerations in Utilizing Decentralised Web Technology for Direct Brain Interfaces
2024
The rapidly advancing field of brain-computer interfaces (BCI) and brain-to-brain interfaces (BBI) is stimulating interest across various sectors including medicine, entertainment, research, and military. The developers of large-scale brain-computer networks, sometimes dubbed 'Mindplexes' or 'Cloudminds', aim to enhance cognitive functions by distributing them across expansive networks. A key technical challenge is the efficient transmission and storage of information. One proposed solution is employing blockchain technology over Web 3.0 to create decentralised cognitive entities. This paper explores the potential of a decentralised web for coordinating large brain-computer constellations and its associated benefits, focusing in particular on the conceptual and ethical challenges this innovation may pose pertaining to (1) Identity, (2) Sovereignty (encompassing Autonomy, Authenticity, and Ownership), (3) Responsibility and Accountability, and (4) Privacy, Safety, and Security. We suggest that while a decentralised web can address some concerns and mitigate certain risks, underlying ethical issues persist. Fundamental questions about entity definition within these networks, the distinctions between individuals and collectives, and responsibility distribution within and between networks demand further exploration.
Journal Article
Privacy protection of sexually transmitted infections information from Chinese electronic medical records
2025
The comprehensive adoption of Electronic Medical Records (EMRs) offers numerous benefits but also introduces risks of privacy leakage, particularly for patients with Sexually Transmitted Infections (STI), who need protection from secondary social harm. Despite advancements in privacy protection research, the effectiveness of these strategies on real-world data remains debatable. The objective is to develop effective information extraction and privacy protection strategies that safeguard STI patients in the Chinese healthcare environment and prevent unnecessary privacy leakage during the data-sharing process of EMRs. The research was conducted at a national healthcare data center, where a committee of experts designed rule-based protocols using natural language processing techniques to extract STI information. The Extraction Protocol of Sexually Transmitted Infections Information (EPSTII), designed specifically for Chinese EMR systems, enables accurate and complete identification and extraction of STI-related information, ensuring high protection performance. The protocol was refined multiple times based on the calculated precision and recall. The final protocol was applied to 5,000 randomly selected EMRs to calculate the success rate of privacy protection. A total of 3,233,174 patients were selected based on the inclusion criteria and a 50% entry ratio. Of these, 148,856 patients with sensitive STI information were identified from disease history. The identification frequency varied, with the diagnosis sub-dataset being the highest at 4.8%. Both the precision and recall rates reached over 95%, demonstrating the effectiveness of our method. The success rate of privacy protection was 98.25%, ensuring the utmost privacy protection for patients with STI. Finding an effective method to protect private information in EMRs is important. We demonstrated the feasibility of applying the EPSTII method to EMRs. Our protocol offers more comprehensive results than traditional methods of handling STI information.
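A minimal, hypothetical sketch of what one rule in such a protocol and its precision/recall check might look like is shown below. The term list, records, and gold labels are invented for illustration; the actual EPSTII rules are not reproduced in the abstract.

```python
# Hypothetical sketch of a rule-based extraction step and its evaluation,
# in the spirit of the EPSTII protocol described above.
import re

STI_TERMS = ["梅毒", "淋病", "尖锐湿疣", "HIV"]  # example sensitive terms
PATTERN = re.compile("|".join(map(re.escape, STI_TERMS)))

def flag_record(text: str) -> bool:
    """Return True if the record contains any listed STI term."""
    return bool(PATTERN.search(text))

# Toy evaluation against hand-labelled gold annotations (invented records).
records = [
    ("既往史：梅毒，已治疗", True),
    ("高血压病史十年", False),
    ("HIV 抗体阴性", True),
    ("否认传染病史", False),
]
tp = sum(1 for text, gold in records if flag_record(text) and gold)
fp = sum(1 for text, gold in records if flag_record(text) and not gold)
fn = sum(1 for text, gold in records if not flag_record(text) and gold)

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")
```

In practice the protocol described in the article was iterated against expert review until both metrics exceeded 95% on the sampled records.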
Journal Article
Data science ethics in government
Data science can offer huge opportunities for government. With the ability to process larger and more complex datasets than ever before, it can provide better insights for policymakers and make services more tailored and efficient. As with all new technologies, there is a risk that we do not take up these opportunities and miss out on data science's enormous potential. We want people to feel confident to innovate with data. So, over the past 18 months, the Government Data Science Partnership has taken an open, evidence-based and user-centred approach to creating an ethical framework. It is a practical document that brings all the legal guidance together in one place, and is written in the context of new data science capabilities. As part of its development, we ran a public dialogue on data science ethics, including deliberative workshops, an experimental conjoint survey and an online engagement tool. The research supported the principles set out in the framework and provided useful insight into how we need to communicate about data science. It found that people had low awareness of the term 'data science', but that showing data science examples can increase broad support for government exploring innovative uses of data. People's support, however, is highly context driven. People consider acceptability on a case-by-case basis, first thinking about the overall policy goals and likely intended outcome, and then weighing up privacy and unintended consequences. The ethical framework is a crucial start, but it does not solve all the challenges it highlights, particularly as technology creates new challenges and opportunities every day. Continued research is needed into data minimization and anonymization, robust data models, algorithmic accountability, and transparency and data security. The work has also revealed the need to set out a renewed deal between citizen and state on data, to maintain and solidify trust in how we use people's data for social good.
This article is part of the themed issue ‘The ethical impact of data science’.
Journal Article
Ethical implications related to processing of personal data and artificial intelligence in humanitarian crises: a scoping review
by Boone, Ella; Kreutzer, Tino; Orbinski, James
in Altruism, Artificial intelligence, Artificial intelligence (AI)
2025
Background
Humanitarian organizations are rapidly expanding their use of data in the pursuit of operational gains in effectiveness and efficiency. Ethical risks, particularly from artificial intelligence (AI) data processing, are increasingly recognized yet inadequately addressed by current humanitarian data protection guidelines. This study reports on a scoping review that maps the range of ethical issues raised in the academic literature regarding the processing of data about people affected by humanitarian crises.
Methods
We systematically searched databases to identify peer-reviewed studies published since 2010. Data and findings were standardized, grouping ethical issues into the value categories of autonomy, beneficence, non-maleficence, and justice. The study protocol followed Arksey and O’Malley’s approach and PRISMA reporting guidelines.
Results
We identified 16,200 unique records and retained 218 relevant studies. Nearly one in three (n = 66) discussed technologies related to AI. Seventeen studies included an author from a lower-middle income country while four included an author from a low-income country. We identified 22 ethical issues which were then grouped along the four ethical value categories of autonomy, beneficence, non-maleficence, and justice. Slightly over half of included studies (n = 113) identified ethical issues based on real-world examples. The most-cited ethical issue (n = 134) was a concern for privacy in cases where personal or sensitive data might be inadvertently shared with third parties. Aside from AI, the technologies most frequently discussed in these studies included social media, crowdsourcing, and mapping tools.
Conclusions
Studies highlight significant concerns that data processing in humanitarian contexts can cause additional harm, may not provide direct benefits, may limit affected populations' autonomy, and can lead to the unfair distribution of scarce resources. The increase in AI tool deployment for humanitarian assistance amplifies these concerns. Urgent development of specific, comprehensive guidelines, training, and auditing methods is required to address these ethical challenges. Moreover, empirical research from low- and middle-income countries, which are disproportionately affected by humanitarian crises, is vital to ensure inclusive and diverse perspectives. This research should focus on the ethical implications of both emerging AI systems and established humanitarian data management practices.
Trial registration
Not applicable.
Journal Article