60 result(s) for "Becich, Michael J"
Computational Pathology: A Path Ahead
We define the scope and needs of computational pathology, a new discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. A meeting was convened in Boston, Massachusetts, in July 2014, prior to the annual Association of Pathology Chairs meeting; it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. The meeting produced recommendations to promote computational pathology: clearly define the field and articulate its value propositions; ensure that the value propositions for health care systems include robust computational approaches that implement data-driven methods to guide individual and population health care; leverage computational pathology as a center for data interpretation in modern health care systems; work with institutional administrations, other departments, and pathology colleagues to realize the value proposition; foster a robust pipeline that trains and develops future computational pathologists, drawn from both pathology and nonpathology backgrounds; and establish computational pathology as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology.
Open-source Software Sustainability Models: Initial White Paper From the Informatics Technology for Cancer Research Sustainability and Industry Partnership Working Group
The National Cancer Institute Informatics Technology for Cancer Research (ITCR) program provides a series of funding mechanisms to create an ecosystem of open-source software (OSS) that serves the needs of cancer research. As the ITCR ecosystem substantially grows, it faces the challenge of the long-term sustainability of the software being developed by ITCR grantees. To address this challenge, the ITCR sustainability and industry partnership working group (SIP-WG) was convened in 2019. The charter of the SIP-WG is to investigate options to enhance the long-term sustainability of the OSS being developed by ITCR, in part by developing a collection of business model archetypes that can serve as sustainability plans for ITCR OSS development initiatives. The working group assembled models from the ITCR program, from other studies, and from the engagement of its extensive network of relationships with other organizations (eg, Chan Zuckerberg Initiative, Open Source Initiative, and Software Sustainability Institute) in support of this objective. This paper reviews the existing sustainability models and describes 10 OSS use cases disseminated by the SIP-WG and others, including 3D Slicer, Bioconductor, Cytoscape, Globus, i2b2 (Informatics for Integrating Biology and the Bedside) and tranSMART, Insight Toolkit, Linux, Observational Health Data Sciences and Informatics tools, R, and REDCap (Research Electronic Data Capture), in 10 sustainability aspects: governance, documentation, code quality, support, ecosystem collaboration, security, legal, finance, marketing, and dependency hygiene. Information available to the public reveals that all 10 OSS have effective governance, comprehensive documentation, high code quality, reliable dependency hygiene, strong user and developer support, and active marketing. 
These OSS include a variety of licensing models (eg, general public license version 2, general public license version 3, Berkeley Software Distribution, and Apache) and financial models (eg, federal research funding, industry and membership support, and commercial support). However, detailed information on ecosystem collaboration and security is not publicly provided by most OSS. We recommend 6 essential attributes for research software: alignment with unmet scientific needs, a dedicated development team, a vibrant user community, a feasible licensing model, a sustainable financial model, and effective product management. We also highlight important actions for future ITCR activities: discussing sustainability and licensing models for ITCR OSS, establishing a central library, and allocating consulting resources to code quality control, ecosystem collaboration, security, and dependency hygiene.
Artificial intelligence in clinical and translational science: Successes, challenges and opportunities
Artificial intelligence (AI) is transforming many domains, including finance, agriculture, defense, and biomedicine. In this paper, we focus on the role of AI in clinical and translational research (CTR), including preclinical research (T1), clinical research (T2), clinical implementation (T3), and public (or population) health (T4). Given the rapid evolution of AI in CTR, we present three complementary perspectives: (1) a scoping literature review, (2) a survey, and (3) an analysis of federally funded projects. For each CTR phase, we addressed challenges, successes, failures, and opportunities for AI. We surveyed Clinical and Translational Science Award (CTSA) hubs regarding AI projects at their institutions. Nineteen of 63 CTSA hubs (30%) responded to the survey. The most common funding source (48.5%) was the federal government. The most common translational phase was T2 (clinical research, 40.2%). Clinicians were the intended users in 44.6% of projects and researchers in 32.3% of projects. The most common computational approaches were supervised machine learning (38.6%) and deep learning (34.2%). The number of projects steadily increased from 2012 to 2020. Finally, we analyzed 2604 AI projects at CTSA hubs using the National Institutes of Health Research Portfolio Online Reporting Tools (RePORTER) database for 2011–2019. We mapped available abstracts to medical subject headings and found that nervous system (16.3%) and mental disorders (16.2%) were the most common topics addressed. From a computational perspective, big data (32.3%) and deep learning (30.0%) were most common. This work represents a snapshot in time of the role of AI in the CTSA program.
Towards a Data Sharing Culture: Recommendations for Leadership from Academic Health Centers
Abbreviations: AHC, academic health center; caBIG, Cancer Biomedical Informatics Grid; IRB, institutional review board; NIH, National Institutes of Health. Provenance: not commissioned; externally peer reviewed.
Sharing biomedical research and health care data is important but difficult. Sharing data generates opportunities for additional publications through collaboration and may increase the citation rate of primary publications [7]. Since publication history and citation impact are often considered in future funding decisions, these benefits are likely to accelerate research programs and thus enhance the reputation of the academic institutions. [...] The widespread adoption of a data sharing culture needs leaders [10], and thus provides an opportunity for AHCs to demonstrate excellence.
The Ethics of Artificial Intelligence in Pathology and Laboratory Medicine: Principles and Practice
Growing numbers of artificial intelligence applications are being developed and applied to pathology and laboratory medicine. These technologies introduce risks and benefits that must be assessed and managed through the lens of ethics. This article describes how long-standing principles of medical and scientific ethics can be applied to artificial intelligence using examples from pathology and laboratory medicine.
Realizing the potential of social determinants data in EHR systems: A scoping review of approaches for screening, linkage, extraction, analysis, and interventions
Social determinants of health (SDoH), such as socioeconomics and neighborhoods, strongly influence health outcomes. However, the current state of standardized SDoH data in electronic health records (EHRs) is lacking, a significant barrier to research and care quality. We conducted a PubMed search using "SDOH" and "EHR" Medical Subject Headings terms, analyzing included articles across five domains: 1) SDoH screening and assessment approaches, 2) SDoH data collection and documentation, 3) use of natural language processing (NLP) for extracting SDoH, 4) SDoH data and health outcomes, and 5) SDoH-driven interventions. Of 685 articles identified, 324 underwent full review. Key findings include implementation of tailored screening instruments, census and claims data linkage for contextual SDoH profiles, NLP systems extracting SDoH from notes, associations between SDoH and healthcare utilization and chronic disease control, and integrated care management programs. However, variability across data sources, tools, and outcomes underscores the need for standardization. Despite progress in identifying patient social needs, further development of standards, predictive models, and coordinated interventions is critical for SDoH-EHR integration. Additional database searches could strengthen this scoping review. Ultimately, widespread capture, analysis, and translation of multidimensional SDoH data into clinical care is essential for promoting health equity.
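As a rough illustration of the simplest end of the NLP spectrum this review covers, a keyword-based SDoH extractor might look like the sketch below. The lexicon, domain names, and function are hypothetical and are not taken from any system in the review; real extraction systems use far richer lexicons and machine learning.

```python
# Toy rule-based extractor for SDoH mentions in clinical-note text.
# The phrases below are illustrative only (hypothetical lexicon).

SDOH_LEXICON = {
    "housing": ["homeless", "housing instability", "eviction"],
    "food": ["food insecurity", "food stamps", "skips meals"],
    "transportation": ["no transportation", "missed appointment"],
}

def extract_sdoh(note):
    """Return a dict mapping SDoH domains to the phrases found in the note."""
    text = note.lower()
    found = {}
    for domain, phrases in SDOH_LEXICON.items():
        hits = [p for p in phrases if p in text]
        if hits:
            found[domain] = hits
    return found

note = "Patient reports food insecurity and is currently homeless."
print(extract_sdoh(note))
```

A production system would also need negation handling ("denies food insecurity") and normalization to a coding standard, which is part of why the review calls for standardization.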
A maturity model for Clinical Trials Management Ecosystem
Managing clinical trials is a complex process requiring careful integration of human, technology, compliance, and operations for success. We collaborated with experts to develop a multi-axial Clinical Trials Management Ecosystem (CTME) maturity model (MM) to help institutions identify best practices for CTME capabilities. A working group of research informaticists was established. An online session on maturity models was hosted, followed by a review of the candidate domain axes and finalization of the axes. Next, maturity level attributes were defined for min/max levels (level 1 and level 5) for each axis of the CTME MM, followed by the intermediate levels. A REDCap survey comprising the model's statements was then created, and a subset of working group members tested the model by completing it at their respective institutions. The finalized survey was distributed to all working group members. We developed a CTME MM comprising five maturity levels across 11 axes: study management, regulatory and audit management, financial management, investigational product management, subject identification and recruitment, subject management, data, reporting analytics & dashboard, system integration and interfaces, staff training & personnel management, and organizational maturity and culture. Informaticists at 22 Clinical and Translational Science Award hubs and one other organization self-assessed their institutional CTME maturity. Respondents reported relatively high maturity for study management and investigational product management. The reporting analytics & dashboard axis was the least mature. The CTME MM provides a framework for research organizations to evaluate their current clinical trials management maturity across 11 axes and identify areas for future growth.
Factors influencing malignant mesothelioma survival: a retrospective review of the National Mesothelioma Virtual Bank cohort version 3
Background: Malignant mesothelioma (MM) is a rare but deadly malignancy, with about 3,000 new cases diagnosed each year in the US. Very few studies have analyzed factors associated with mesothelioma survival, especially for peritoneal presentation. The overarching aim of this study is to examine survival in the cohort of patients with malignant mesothelioma enrolled in the National Mesothelioma Virtual Bank (NMVB). Methods: 888 pleural and peritoneal mesothelioma cases were selected from the NMVB database, which houses data and associated biospecimens for over 1400 cases diagnosed from 1990 to 2017. The Kaplan-Meier method was used for survival analysis. The association between prognostic factors and survival was estimated using Cox proportional hazards regression, with analyses performed in R. Results: The median overall survival (OS) of all MM patients, including pleural and peritoneal cases, was 15 months (14 months for pleural and 31 months for peritoneal). Significant prognostic factors associated with improved survival in this NMVB cohort were age younger than 45 years, female gender, epithelioid histological subtype, stage I, peritoneal occurrence, and combined treatment with surgery and chemotherapy. Combined surgical and chemotherapy treatment was associated with an improved survival of 23 months compared with single-line therapies. Conclusions: Overall survival for patients with malignant mesothelioma has not improved over many years with currently available treatment options. Our findings show that combined surgical and chemotherapy treatment in peritoneal mesothelioma is associated with improved survival compared to local therapy alone.
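The Kaplan-Meier method mentioned above can be sketched in a few lines. This is a minimal pure-Python illustration of the estimator on made-up survival times, not the NMVB data or the authors' actual analysis (which was performed in R).

```python
# Minimal Kaplan-Meier estimator (hypothetical data, not NMVB records).
# Each observation is (time_in_months, event), where event=1 is death
# and event=0 is censoring (subject left the study alive).

def kaplan_meier(observations):
    """Return [(time, survival_probability)] at each time with an event."""
    observations = sorted(observations)
    n_at_risk = len(observations)
    survival = 1.0
    curve = []
    i = 0
    while i < len(observations):
        t = observations[i][0]
        deaths = 0
        removed = 0
        # Group all subjects (deaths and censorings) sharing time t.
        while i < len(observations) and observations[i][0] == t:
            deaths += observations[i][1]
            removed += 1
            i += 1
        if deaths:
            # Multiply in the conditional survival factor (1 - d/n).
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

data = [(3, 1), (6, 1), (6, 0), (10, 1), (15, 0), (27, 1)]
for t, s in kaplan_meier(data):
    print(t, round(s, 3))
```

The median OS reported in the abstract is simply the first time at which this curve drops to 0.5 or below; the Cox regression step would then relate covariates (age, histology, treatment) to the hazard.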
A Bayesian Method for Evaluating and Discovering Disease Loci Associations
A genome-wide association study (GWAS) typically involves examining representative SNPs in individuals from some population. A GWAS data set can concern a million SNPs and may soon concern billions. Researchers investigate the association of each SNP individually with a disease, and it is becoming increasingly commonplace to also analyze multi-SNP associations. Techniques for handling so many hypotheses include the Bonferroni correction and recently developed Bayesian methods. These methods can encounter problems. Most importantly, they are not applicable to a complex multi-locus hypothesis, which has several competing hypotheses rather than only a null hypothesis. A method that computes the posterior probability of complex hypotheses is a pressing need. We introduce the Bayesian network posterior probability (BNPP) method, which addresses these difficulties. The method represents the relationship between a disease and SNPs using a directed acyclic graph (DAG) model and computes the likelihood of such models using a Bayesian network scoring criterion. The posterior probability of a hypothesis is computed from the likelihoods of all competing hypotheses. The BNPP can be used not only to evaluate a hypothesis that has previously been discovered or suspected, but also to discover new disease loci associations. We present the results of experiments using simulated and real data sets. Our results on simulated data sets indicate that the BNPP exhibits both better evaluation and better discovery performance than a p-value based method. For the real data sets, previous findings in the literature are confirmed and additional findings are reported. We conclude that the BNPP resolves a pressing problem by providing a way to compute the posterior probability of complex multi-locus hypotheses. A researcher can use the BNPP to determine the expected utility of investigating a hypothesis further. Furthermore, we conclude that the BNPP is a promising method for discovering disease loci associations.
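The core idea of normalizing over competing hypotheses, rather than testing against a lone null, can be illustrated with a toy Bayes-rule calculation. The model names, log-likelihoods, and priors below are invented for illustration; this is not the paper's DAG scoring criterion.

```python
import math

# Toy posterior over competing hypotheses, in the spirit of the BNPP.
# All numbers are made up for illustration.

def posteriors(log_likelihoods, priors):
    """P(M_i | D) proportional to P(D | M_i) * P(M_i), via log-likelihoods."""
    assert abs(sum(priors) - 1.0) < 1e-9
    # Subtract the max log-likelihood before exponentiating, for stability.
    m = max(log_likelihoods)
    weights = [math.exp(ll - m) * p for ll, p in zip(log_likelihoods, priors)]
    total = sum(weights)
    return [w / total for w in weights]

# Three competing models of a SNP-disease relationship (hypothetical):
# M0: no association, M1: single-locus effect, M2: two-locus interaction.
log_liks = [-1050.0, -1042.0, -1041.0]
priors = [0.90, 0.05, 0.05]
post = posteriors(log_liks, priors)
print([round(p, 3) for p in post])
```

Because every competing model contributes to the denominator, a multi-locus hypothesis like M2 can dominate even under a skeptical prior, which a single null-hypothesis p-value cannot express.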