4,431 results for "Human intelligence (intelligence gathering)"
Artificial intelligence in cyber security: research advances, challenges, and opportunities
In recent times, there have been attempts to leverage artificial intelligence (AI) techniques in a broad range of cyber security applications. Therefore, this paper surveys the existing literature (comprising 54 papers mainly published between 2016 and 2020) on the applications of AI in user access authentication, network situation awareness, dangerous behavior monitoring, and abnormal traffic identification. This paper also identifies a number of limitations and challenges, and based on the findings, a conceptual human-in-the-loop intelligence cyber security model is presented.
Designing Transparency for Effective Human-AI Collaboration
The field of artificial intelligence (AI) is advancing quickly, and systems can increasingly perform a multitude of tasks that previously required human intelligence. Information systems can facilitate collaboration between humans and AI systems such that their individual capabilities complement each other. However, there is a lack of consolidated design guidelines for information systems facilitating the collaboration between humans and AI systems. This work examines how agent transparency affects trust and task outcomes in the context of human-AI collaboration. Drawing on the 3-Gap framework, we study agent transparency as a means to reduce the information asymmetry between humans and the AI. Following the Design Science Research paradigm, we formulate testable propositions, derive design requirements, and synthesize design principles. We instantiate two design principles as design features of an information system utilized in the hospitality industry. Further, we conduct two case studies to evaluate the effects of agent transparency: We find that trust increases when the AI system provides information on its reasoning, while trust decreases when the AI system provides information on sources of uncertainty. Additionally, we observe that agent transparency improves task outcomes as it enhances the accuracy of judgemental forecast adjustments.
Ethical Considerations and Fundamental Principles of Large Language Models in Medical Education: Viewpoint
This viewpoint article first explores the ethical challenges associated with the future application of large language models (LLMs) in the context of medical education. These challenges include not only ethical concerns related to the development of LLMs, such as artificial intelligence (AI) hallucinations, information bias, privacy and data risks, and deficiencies in terms of transparency and interpretability but also issues concerning the application of LLMs, including deficiencies in emotional intelligence, educational inequities, problems with academic integrity, and questions of responsibility and copyright ownership. This paper then analyzes existing AI-related legal and ethical frameworks and highlights their limitations with regard to the application of LLMs in the context of medical education. To ensure that LLMs are integrated in a responsible and safe manner, the authors recommend the development of a unified ethical framework that is specifically tailored for LLMs in this field. This framework should be based on 8 fundamental principles: quality control and supervision mechanisms; privacy and data protection; transparency and interpretability; fairness and equal treatment; academic integrity and moral norms; accountability and traceability; protection and respect for intellectual property; and the promotion of educational research and innovation. The authors further discuss specific measures that can be taken to implement these principles, thereby laying a solid foundation for the development of a comprehensive and actionable ethical framework. Such a unified ethical framework based on these 8 fundamental principles can provide clear guidance and support for the application of LLMs in the context of medical education. 
This approach can help establish a balance between technological advancement and ethical safeguards, thereby ensuring that medical education can progress without compromising the principles of fairness, justice, or patient safety and establishing a more equitable, safer, and more efficient environment for medical education.
Immune escape and immunotherapy of acute myeloid leukemia
In spite of the recent approval of new promising targeted therapies, the clinical outcome of patients with acute myeloid leukemia (AML) remains suboptimal, prompting the search for additional and synergistic therapeutic rationales. It is increasingly evident that the bone marrow immune environment of AML patients is profoundly altered, contributing to the severity of the disease but also providing several windows of opportunity to prompt or rewire a proficient antitumor immune surveillance. In this Review, we present current evidence on immune defects in AML, discuss the challenges with selective targeting of AML cells, and summarize the clinical results and immunologic insights from studies that are testing the latest immunotherapy approaches to specifically target AML cells (antibodies, cellular therapies) or more broadly reactivate antileukemia immunity (vaccines, checkpoint blockade). Given the complex interactions between AML cells and the many components of their environment, it is reasonable to surmise that the future of immunotherapy in AML lies in the rational combination of complementary immunotherapeutic strategies with chemotherapeutics or other oncogenic pathway inhibitors. Identifying reliable biomarkers of response to improve patient selection and avoid toxicities will be critical in this process.
Ethical Issues of Digital Twins for Personalized Health Care Service: Preliminary Mapping Study
The concept of digital twins has great potential for transforming the existing health care system by making it more personalized. As a convergence of health care, artificial intelligence, and information and communication technologies, personalized health care services developed under the concept of digital twins raise a myriad of ethical issues. Although some of these issues are known to researchers working on digital health and personalized medicine, there is currently no comprehensive review that maps the major ethical risks of digital twins for personalized health care services. This study aims to fill that gap by identifying those major ethical risks. We first propose a working definition of digital twins for personalized health care services to facilitate future discussions of the ethical issues related to these emerging digital health services. We then develop a process-oriented ethical map to identify the major ethical risks in each of the data processing phases, drawing on the literature on eHealth, personalized medicine, precision medicine, and information engineering to identify potential issues and to structure the inquiry systematically. The ethical map shows how each of the major ethical concerns emerges during the process of transforming raw data into valuable information. Developers of a digital twin for a personalized health care service may use this map to identify ethical risks systematically during the development stage and address them proactively. This paper provides a working definition of digital twins for personalized health care services by identifying 3 features that distinguish the new application from other eHealth services. On the basis of the working definition, this paper further lays out 10 major operational problems and the corresponding ethical risks. 
It is challenging to proactively address all the major ethical risks that a digital twin for a personalized health care service might encounter without a conceptual map at hand. The process-oriented ethical map we propose here can assist developers of digital twins for personalized health care services in analyzing ethical risks in a more systematic manner.
How intermittent breaks in interaction improve collective intelligence
People influence each other when they interact to solve problems. Such social influence introduces both benefits (higher average solution quality due to exploitation of existing answers through social learning) and costs (lower maximum solution quality due to a reduction in individual exploration for novel answers) relative to independent problem solving. In contrast to prior work, which has focused on how the presence and network structure of social influence affect performance, here we investigate the effects of time. We show that when social influence is intermittent it provides the benefits of constant social influence without the costs. Human subjects solved the canonical traveling salesperson problem in groups of three, randomized into treatments with constant social influence, intermittent social influence, or no social influence. Groups in the intermittent social-influence treatment found the optimum solution frequently (like groups without influence) but had a high mean performance (like groups with constant influence); they learned from each other, while maintaining a high level of exploration. Solutions improved most on rounds with social influence after a period of separation. We also show that storing subjects’ best solutions so that they could be reloaded and possibly modified in subsequent rounds—a ubiquitous feature of personal productivity software—is similar to constant social influence: It increases mean performance but decreases exploration.
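The exploration/exploitation trade-off described above can be illustrated with a toy simulation: agents improve their own traveling-salesperson tours locally (exploration) and, on "social" rounds, copy the group's current best tour (exploitation). All parameters, the local-search move, and the copy-the-best rule are illustrative assumptions, not the study's actual protocol.

```python
import random

def tour_length(tour, dist):
    # total length of the closed tour under a distance matrix
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def mutate(tour):
    # individual exploration: reverse a random segment (a 2-opt-style move)
    t = tour[:]
    i, j = sorted(random.sample(range(len(t)), 2))
    t[i:j + 1] = reversed(t[i:j + 1])
    return t

def simulate(dist, n_agents=3, rounds=40, social_every=None):
    """social_every=None -> no influence; 1 -> constant; k > 1 -> intermittent."""
    cities = list(range(len(dist)))
    tours = [random.sample(cities, len(cities)) for _ in range(n_agents)]
    for r in range(1, rounds + 1):
        # social learning: on social rounds, everyone adopts the current best tour
        if social_every and r % social_every == 0:
            best = min(tours, key=lambda t: tour_length(t, dist))
            tours = [best[:] for _ in tours]
        # individual exploration: keep a mutation only if it improves the tour
        for a in range(n_agents):
            cand = mutate(tours[a])
            if tour_length(cand, dist) < tour_length(tours[a], dist):
                tours[a] = cand
    return min(tour_length(t, dist) for t in tours)
```

Varying `social_every` reproduces the three treatments in spirit: `None` maximizes exploration, `1` maximizes convergence, and intermediate values mix the two.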
Current and future influenza vaccines
Although antiviral drugs and vaccines have reduced the economic and healthcare burdens of influenza, influenza epidemics continue to take a toll. Over the past decade, research on influenza viruses has revealed a potential path to improvement. The clues have come from accumulated discoveries from basic and clinical studies. Now, virus surveillance allows researchers to monitor influenza virus epidemic trends and to accumulate virus sequences in public databases, which leads to better selection of candidate viruses for vaccines and early detection of drug-resistant viruses. Here we provide an overview of current vaccine options and describe efforts directed toward the development of next-generation vaccines. Finally, we propose a plan for the development of an optimal influenza vaccine. The universal flu vaccine remains elusive, but there are several strategies that scientists can take to develop one, including closer monitoring of viral evolution.
Analysis of error profiles in deep next-generation sequencing data
Background Sequencing errors are key confounding factors for detecting low-frequency genetic variants that are important for cancer molecular diagnosis, treatment, and surveillance using deep next-generation sequencing (NGS). However, there is a lack of comprehensive understanding of the errors introduced at the various steps of a conventional NGS workflow, such as sample handling, library preparation, PCR enrichment, and sequencing. In this study, we use current NGS technology to systematically investigate these questions. Results By evaluating read-specific error distributions, we discover that the substitution error rate can be computationally suppressed to 10⁻⁵ to 10⁻⁴, which is 10- to 100-fold lower than generally considered achievable (10⁻³) in the current literature. We then quantify substitution errors attributable to sample handling, library preparation, enrichment PCR, and sequencing by using multiple deep sequencing datasets. We find that error rates differ by nucleotide substitution type, ranging from 10⁻⁵ for A>C/T>G, C>A/G>T, and C>G/G>C changes to 10⁻⁴ for A>G/T>C changes. Furthermore, C>T/G>A errors exhibit strong sequence context dependency, sample-specific effects dominate elevated C>A/G>T errors, and target-enrichment PCR leads to a ~6-fold increase in the overall error rate. We also find that more than 70% of hotspot variants can be detected at 0.1-0.01% frequency with current NGS technology by applying in silico error suppression. Conclusions We present the first comprehensive analysis of sequencing error sources in conventional NGS workflows. The error profiles revealed by our study highlight new directions for further improving NGS analysis accuracy both experimentally and computationally, ultimately enhancing the precision of deep sequencing.
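A per-substitution-type error rate of the kind reported above reduces to observed mismatches divided by the number of sequenced bases of each reference base. A minimal sketch, with entirely hypothetical tallies (real pipelines would derive these counts from aligned reads, e.g. a pileup):

```python
from collections import Counter

# Hypothetical substitution tallies keyed by (reference base, observed base).
substitutions = Counter({("A", "G"): 950, ("T", "C"): 910,
                         ("C", "T"): 120, ("G", "A"): 140,
                         ("A", "C"): 85, ("T", "G"): 90})

# Hypothetical total sequenced bases per reference base.
bases_sequenced = {"A": 9.5e6, "C": 9.0e6, "G": 9.0e6, "T": 9.5e6}

def error_rates(subs, totals):
    """Rate for each substitution type = observed errors / sequenced bases of that reference base."""
    return {f"{ref}>{alt}": n / totals[ref] for (ref, alt), n in subs.items()}
```

With these made-up counts, A>G comes out near 10⁻⁴ and A>C near 10⁻⁵, mirroring the order-of-magnitude spread the study reports.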
Verbal probabilities: Very likely to be somewhat more confusing than numbers
People interpret verbal expressions of probability (e.g. 'very likely') in different ways, yet words are commonly preferred to numbers when communicating uncertainty. Simply providing numerical translations alongside reports or text containing verbal probabilities should encourage consistency, but these guidelines are often ignored. In an online experiment with 924 participants, we compared four formats for presenting verbal probabilities with the numerical guidelines used in US Intelligence Community Directive (ICD) 203 to see whether any could improve the correspondence between the intended meaning and participants' interpretations ('in-context'). This extends previous work in the domain of climate science. The four experimental conditions were: (1) numerical guidelines bracketed in text, e.g. 'X is very unlikely (05-20%)'; (2) click to see the full guidelines table in a new window; (3) numerical guidelines shown in a mouse-over tooltip; and (4) no guidelines provided (control). Results indicate that correspondence with the ICD 203 standard is substantially improved only when numerical guidelines are bracketed in text: for this condition, average correspondence was 66%, compared with 32% in the control. We also elicited 'context-free' numerical judgements from participants for each of the seven verbal probability expressions in ICD 203 (i.e., we asked participants what range of numbers they, personally, would assign to those expressions) and constructed 'evidence-based lexicons' based on two methods from similar research, 'membership functions' and 'peak values', that reflect our large sample's intuitive translations of the terms. Better aligning the intended and assumed meanings of fuzzy words like 'unlikely' can reduce communication problems between the reporter and receiver of probabilistic information. In turn, this can improve decision making under uncertainty.
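The correspondence measure described above amounts to checking whether a reader's numeric interpretation falls inside the range the directive intends for a term. A minimal sketch using the ICD 203 ranges (the abstract itself confirms 'very unlikely (05-20%)'; the sample judgements below are hypothetical):

```python
# ICD 203 verbal probability terms and their numerical ranges (percent).
ICD_203 = {
    "almost no chance": (1, 5),
    "very unlikely": (5, 20),
    "unlikely": (20, 45),
    "roughly even chance": (45, 55),
    "likely": (55, 80),
    "very likely": (80, 95),
    "almost certain": (95, 99),
}

def corresponds(term, judgement_pct):
    """Does a numeric interpretation fall inside the term's intended range?"""
    lo, hi = ICD_203[term]
    return lo <= judgement_pct <= hi

def correspondence_rate(term, judgements):
    """Share of participants whose interpretation matches the standard."""
    return sum(corresponds(term, j) for j in judgements) / len(judgements)
```

For example, a participant who reads 'likely' as 40% would not correspond with the standard, while one who reads it as 70% would.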
The clinicopathological features and survival outcomes of patients with different metastatic sites in stage IV breast cancer
Background The features and survival of stage IV breast cancer patients with different metastatic sites are poorly understood. This study aims to examine the clinicopathological features and survival of stage IV breast cancer patients according to metastatic site. Methods Using the Surveillance, Epidemiology, and End Results database, we restricted our study population to stage IV breast cancer patients diagnosed between 2010 and 2015. Clinicopathological features were examined by chi-square tests. Breast cancer-specific survival (BCSS) and overall survival (OS) were compared among patients with different metastatic sites by the Kaplan-Meier method with the log-rank test. Univariable and multivariable analyses were also performed using the Cox proportional hazards model to identify statistically significant prognostic factors. Results A total of 18,322 patients were identified for survival analysis. Bone-only metastasis accounted for 39.80% of patients, followed by multiple metastases (33.07%), lung metastasis (10.94%), liver metastasis (7.34%), other metastasis (7.34%), and brain metastasis (1.51%). Kaplan-Meier plots showed that patients with bone metastasis had the best survival, while patients with brain metastasis had the worst, in both BCSS and OS (p < 0.001 for both). Multivariable analyses showed that age, race, marital status, grade, tumor subtype, tumor size, surgery of the primary cancer, and a history of radiotherapy or chemotherapy were independent prognostic factors. Conclusion Stage IV breast cancer patients have different clinicopathological characteristics and survival outcomes according to metastatic site. Patients with bone metastasis have the best prognosis, and brain metastasis marks the most aggressive subgroup.
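The Kaplan-Meier method referenced above is the product-limit estimator: at each event time t_i, survival is multiplied by (1 - d_i/n_i), where d_i is the number of events at t_i and n_i is the number still at risk. A minimal pure-Python sketch on toy data (illustrative only; the study used the SEER cohort and standard statistical software):

```python
def kaplan_meier(times, events):
    """Product-limit estimator.

    times  -- follow-up times
    events -- 1 if the event (death) occurred at that time, 0 if censored
    Returns [(event_time, survival_probability), ...].
    """
    data = sorted(zip(times, events))   # ties become contiguous after sorting
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e)  # events at time t
        c = sum(1 for tt, e in data if tt == t)        # everyone leaving at t
        if d:
            surv *= 1 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= c
        i += c
    return curve
```

Censored subjects (event = 0) lower the risk set without changing the survival estimate, which is exactly how the estimator handles incomplete follow-up.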