109 results for "Rusyn, Ivan"
ToxPi Graphical User Interface 2.0: Dynamic exploration, visualization, and sharing of integrated data models
Background: Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework that was developed to enable integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide functionality while preserving flexibility for future updates. Results: We detail the implementation of a new graphical user interface for ToxPi that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. Conclusions: We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual from http://toxpi.org.
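The integration scheme the abstract describes, turning each data source into a scaled "slice" and combining slices by weight into one profile score per chemical, can be sketched in a few lines. This is a minimal illustration of the general idea only, not the tool's actual code; the data-source names, values, and weights below are invented.

```python
# Sketch of ToxPi-style data integration (illustrative only): each data source
# becomes a "slice" min-max scaled to [0, 1] across chemicals, and a chemical's
# overall score is the weight-normalized sum of its slice values.

def scale_slice(values):
    """Min-max scale one data source across all chemicals to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def toxpi_scores(data, weights):
    """data: {source: [one value per chemical]}; weights: {source: weight}."""
    n_chems = len(next(iter(data.values())))
    scaled = {src: scale_slice(vals) for src, vals in data.items()}
    total_weight = sum(weights.values())
    return [
        sum(weights[src] * scaled[src][i] for src in data) / total_weight
        for i in range(n_chems)
    ]

# Hypothetical inputs: three chemicals, two evidence sources.
data = {"assay_potency": [0.0, 5.0, 10.0], "exposure_index": [2.0, 2.0, 8.0]}
weights = {"assay_potency": 2.0, "exposure_index": 1.0}
scores = toxpi_scores(data, weights)  # one score in [0, 1] per chemical
```

With these made-up inputs, the chemical ranking highest on both sources receives the maximum score of 1.0, which is the prioritization behavior the framework is built around.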
Key Characteristics of Carcinogens as a Basis for Organizing Data on Mechanisms of Carcinogenesis
A recent review by the International Agency for Research on Cancer (IARC) updated the assessments of the > 100 agents classified as Group 1, carcinogenic to humans (IARC Monographs Volume 100, parts A-F). This exercise was complicated by the absence of a broadly accepted, systematic method for evaluating mechanistic data to support conclusions regarding human hazard from exposure to carcinogens. IARC therefore convened two workshops in which an international Working Group of experts identified 10 key characteristics, one or more of which are commonly exhibited by established human carcinogens. These characteristics provide the basis for an objective approach to identifying and organizing results from pertinent mechanistic studies. The 10 characteristics are the abilities of an agent to 1) act as an electrophile either directly or after metabolic activation; 2) be genotoxic; 3) alter DNA repair or cause genomic instability; 4) induce epigenetic alterations; 5) induce oxidative stress; 6) induce chronic inflammation; 7) be immunosuppressive; 8) modulate receptor-mediated effects; 9) cause immortalization; and 10) alter cell proliferation, cell death, or nutrient supply. We describe the use of the 10 key characteristics to conduct a systematic literature search focused on relevant end points and construct a graphical representation of the identified mechanistic information. Next, we use benzene and polychlorinated biphenyls as examples to illustrate how this approach may work in practice. The approach described is similar in many respects to those currently being implemented by the U.S. EPA's Integrated Risk Information System Program and the U.S. National Toxicology Program. Smith MT, Guyton KZ, Gibbons CF, Fritz JM, Portier CJ, Rusyn I, DeMarini DM, Caldwell JC, Kavlock RJ, Lambert P, Hecht SS, Bucher JR, Stewart BW, Baan R, Cogliano VJ, Straif K. 2016. Key characteristics of carcinogens as a basis for organizing data on mechanisms of carcinogenesis. 
Environ Health Perspect 124:713-721; http://dx.doi.org/10.1289/ehp.1509912.
Defective HNF4alpha-dependent gene expression as a driver of hepatocellular failure in alcoholic hepatitis
Alcoholic hepatitis (AH) is a life-threatening condition characterized by profound hepatocellular dysfunction for which targeted treatments are urgently needed. Identification of molecular drivers is hampered by the lack of suitable animal models. By performing RNA sequencing in livers from patients with different phenotypes of alcohol-related liver disease (ALD), we show that development of AH is characterized by defective activity of liver-enriched transcription factors (LETFs). TGFβ1 is a key upstream transcriptome regulator in AH and induces the use of the HNF4α P2 promoter in hepatocytes, which results in defective metabolic and synthetic functions. Gene polymorphisms in LETFs, including HNF4α, are not associated with the development of AH. In contrast, epigenetic studies show that AH livers have profound changes in DNA methylation state and chromatin remodeling, affecting HNF4α-dependent gene expression. We conclude that targeting TGFβ1 and epigenetic drivers that modulate HNF4α-dependent gene expression could be beneficial to improve hepatocellular function in patients with AH. Alcoholic hepatitis, a common cause of liver failure, lacks effective treatment. Here, the authors show altered hepatic HNF4α isoform expression and hypermethylation of its target genes in patients. HNF4α dysregulation is improved in vitro by TGFβ or PPARγ modulation, suggesting potential therapeutic avenues.
Novel adult cortical neuron processing and screening method illustrates sex- and age-dependent effects of pharmaceutical compounds
Neurodegenerative diseases and neurotraumatic injuries are typically age-associated disorders that can reduce neuron survival, neurite outgrowth, and synaptic plasticity, leading to loss of cognitive capacity, executive function, and motor control. In pursuit of reducing the loss of these neurological functions, novel compounds are sought that promote neuron viability, neuritogenesis, and/or synaptic plasticity. Current high-content in vitro screens typically use cells that are iPSC-derived, embryonic, or originate from post-natal tissues; however, most patients suffering from neurodegenerative diseases and neurotrauma are of middle age and older. The chasm in maturity between the neurons used in drug screens and those in a target population is a barrier to translational success of in vitro results. It has historically been challenging to culture adult neurons, let alone conduct screens; therefore, age-appropriate drug screening has previously not been feasible. We have modified Miltenyi’s protocol to increase neuronal yield, neuron purity, and neural viability at a reduced cost to expand our capacity to screen compounds directly in primary adult neurons. To our knowledge, we developed the first morphology-based screening system using adult cortical neurons and the first to incorporate age and sex as biological variables in a screen using adult cortical neurons. By using primary adult cortical neurons from mice that were 4 to 48 weeks old for screening pharmaceutical agents, we have demonstrated age- and sex-dependent effects on neuritogenesis and neuron survival in vitro. Utilizing age- and sex-appropriate in vitro models to find novel compounds that increase neuron survival and neurite outgrowth, made possible by our modified adult neuron processing method, will greatly increase the relevance of in vitro screening for finding neuroprotective compounds.
Computational Toxicology: Realizing the Promise of the Toxicity Testing in the 21st Century
Background: The National Academies' Standing Committee on Use of Emerging Science for Environmental Health Decisions held a meeting (21–22 September 2009 in Washington, DC) titled "Computational Toxicology: From Data to Analyses to Applications." This commentary reflects on the presentations and roundtable discussions from the meeting, which were designed to review the state of the art in the field and the practical applications of the new science and to provide focus to the field. Objectives: The meeting considered two topics: the emerging data streams amenable to computational modeling and data mining, and the emerging data analysis and modeling tools. Discussion: Computational toxicology is a subdiscipline of toxicology that aims to use mathematical, statistical, modeling, and computer science tools to better understand the mechanisms through which a given chemical induces harm and, ultimately, to be able to predict adverse effects of toxicants on human health and/or the environment. The participants stressed the importance of computational toxicology to the future of environmental health sciences and regulatory decisions in public health; however, many challenges remain to be addressed before the findings from high-throughput screens and in silico models may be considered sufficiently robust and informative. Conclusions: Many scientists, regulators, and the general public believe that new and better ways to assess human toxicity are now needed, and technological breakthroughs are empowering the field of toxicity assessment. Even though the application of computational toxicology to environmental health decisions requires additional efforts, the merger of the power of computers with biological information is poised to deliver new tools and knowledge.
Hepatic lipocalin 2 promotes liver fibrosis and portal hypertension
Advanced fibrosis and portal hypertension influence short-term mortality. Lipocalin 2 (LCN2) regulates the infection response and increases in liver injury. We explored the role of intrahepatic LCN2 in human alcoholic hepatitis (AH) with advanced fibrosis and portal hypertension and in experimental mouse fibrosis. We found that hepatic LCN2 expression and serum LCN2 levels were markedly increased and correlated with disease severity and portal hypertension in patients with AH. In control human livers, LCN2 was expressed exclusively in mononuclear cells, while its expression was markedly induced in AH livers, not only in mononuclear cells but also notably in hepatocytes. Lcn2−/− mice were protected from liver fibrosis caused by either ethanol or CCl4 exposure. Microarray analysis revealed downregulation of matrisome, cell-cycle, and immune-related gene sets in Lcn2−/− mice exposed to CCl4, along with decreases in Timp1 and Edn1 expression. Hepatic expression of COL1A1, TIMP1, and key EDN1 system components was elevated in AH patients and correlated with hepatic LCN2 expression. In vitro, recombinant LCN2 induced COL1A1 expression. Overexpression of LCN2 increased HIF1A, which in turn mediated EDN1 upregulation. LCN2 contributes to liver fibrosis and portal hypertension in AH and could represent a new therapeutic target.
Standardizing Benchmark Dose Calculations to Improve Science-Based Decisions in Human Health Assessments
Benchmark dose (BMD) modeling computes the dose associated with a prespecified response level. While offering advantages over traditional points of departure (PODs), such as no-observed-adverse-effect levels (NOAELs), BMD methods have lacked consistency and transparency in application, interpretation, and reporting in human health assessments of chemicals. We aimed to apply a standardized process for conducting BMD modeling to reduce inconsistencies in model fitting and selection. We evaluated 880 dose-response data sets for 352 environmental chemicals with existing human health assessments. We calculated benchmark doses and their lower limits [10% extra risk, or change in the mean equal to 1 SD (BMD/L10/1SD)] for each chemical in a standardized way with prespecified criteria for model-fit acceptance. We identified study design features associated with acceptable model fits. We derived values for 255 (72%) of the chemicals. Batch-calculated BMD/L10/1SD values were significantly and highly correlated (R2 of 0.95 and 0.83, respectively; n = 42) with PODs previously used in human health assessments, with values similar to reported NOAELs. Specifically, the median ratio of BMDs10/1SD:NOAELs was 1.96, and the median ratio of BMDLs10/1SD:NOAELs was 0.89. We also observed a significant trend of increasing model viability with increasing number of dose groups. BMD/L10/1SD values can be calculated in a standardized way for use in health assessments of a large number of chemicals and critical effects. This facilitates the exploration of health effects across multiple studies of a given chemical, or comparison across chemicals, with greater transparency and efficiency than current approaches.
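The core computation named in the abstract's first sentence, finding the dose that produces a prespecified response level, can be sketched for the dichotomous 10%-extra-risk case. This is an illustrative sketch, not the authors' workflow or any regulatory software; the log-logistic model and every parameter value below are made up.

```python
# Sketch: for a dichotomous dose-response model p(d), the benchmark dose at a
# benchmark response (BMR) of 10% extra risk solves
#     (p(d) - p(0)) / (1 - p(0)) = 0.10.

def extra_risk(p, d):
    """Extra risk at dose d relative to the background response p(0)."""
    return (p(d) - p(0.0)) / (1.0 - p(0.0))

def bmd(p, bmr=0.10, lo=0.0, hi=1000.0):
    """Bisection solve for the dose whose extra risk equals the BMR
    (assumes extra risk increases monotonically over [lo, hi])."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if extra_risk(p, mid) < bmr:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def p_logistic(d, background=0.05, slope=1.5, ed50=50.0):
    """Hypothetical log-logistic dose-response model, made-up parameters."""
    if d <= 0:
        return background
    frac = 1.0 / (1.0 + (ed50 / d) ** slope)
    return background + (1.0 - background) * frac

bmd10 = bmd(p_logistic)  # dose giving 10% extra risk under this model
```

In practice both the model parameters and the BMD's lower confidence limit (the BMDL actually used as a POD) come from fitting candidate models to study data rather than from assumed values as here.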
Addressing Human Variability in Next-Generation Human Health Risk Assessments of Environmental Chemicals
Characterizing variability in the extent and nature of responses to environmental exposures is a critical aspect of human health risk assessment. Our goal was to explore how next-generation human health risk assessments may better characterize variability in the context of the conceptual framework for the source-to-outcome continuum. This review was informed by a National Research Council workshop titled "Biological Factors that Underlie Individual Susceptibility to Environmental Stressors and Their Implications for Decision-Making." We considered current experimental and in silico approaches, and emerging data streams (such as genetically defined human cell lines, genetically diverse rodent models, human omic profiling, and genome-wide association studies) that are providing new types of information and models relevant for assessing interindividual variability for application to human health risk assessments of environmental chemicals. One challenge for characterizing variability is the wide range of sources of inherent biological variability (e.g., genetic and epigenetic variants) among individuals. A second challenge is that each particular pair of health outcomes and chemical exposures involves combinations of these sources, which may be further compounded by extrinsic factors (e.g., diet, psychosocial stressors, other exogenous chemical exposures). A third challenge is that different decision contexts present distinct needs regarding the identification, and extent of characterization, of interindividual variability in the human population. Despite these inherent challenges, opportunities exist to incorporate evidence from emerging data streams for addressing interindividual variability in a range of decision-making contexts.
Integrative QTL analysis of gene expression and chromatin accessibility identifies multi-tissue patterns of genetic regulation
Gene transcription profiles across tissues are largely defined by the activity of regulatory elements, most of which correspond to regions of accessible chromatin. Regulatory element activity is in turn modulated by genetic variation, resulting in variable transcription rates across individuals. The interplay of these factors, however, is poorly understood. Here we characterize expression and chromatin state dynamics across three tissues (liver, lung, and kidney) in 47 strains of the Collaborative Cross (CC) mouse population, examining the regulation of these dynamics by expression quantitative trait loci (eQTL) and chromatin QTL (cQTL). QTL whose allelic effects were consistent across tissues were detected for 1,101 genes and 133 chromatin regions. Also detected were eQTL and cQTL whose allelic effects differed across tissues, including local-eQTL for Pik3c2g detected in all three tissues but with distinct allelic effects. Leveraging overlapping measurements of gene expression and chromatin accessibility on the same mice from multiple tissues, we used mediation analysis to identify chromatin and gene expression intermediates of eQTL effects. Based on QTL and mediation analyses over multiple tissues, we propose a causal model for the distal genetic regulation of Akr1e1, a gene involved in glycogen metabolism, through the zinc finger transcription factor Zfp985 and chromatin intermediates. This analysis demonstrates the complexity of transcriptional and chromatin dynamics and their regulation over multiple tissues, as well as the value of the CC and related genetic resource populations for identifying specific regulatory mechanisms within cells and tissues.
Chemical Safety Assessment Using Read-Across: Assessing the Use of Novel Testing Methods to Strengthen the Evidence Base for Decision Making
Safety assessment for repeated dose toxicity is one of the largest challenges in the process to replace animal testing. This is also one of the proof-of-concept ambitions of SEURAT-1, the largest ever European Union research initiative on alternative testing, co-funded by the European Commission and Cosmetics Europe. This review is based on the discussion and outcome of a workshop organized on the initiative of the SEURAT-1 consortium, joined by a group of international experts with complementary knowledge to further develop traditional read-across and include new-approach data. The aim of the suggested strategy for chemical read-across is to show how a traditional read-across based on structural similarities between source and target substance can be strengthened with additional evidence from new-approach data (for example, information from in vitro molecular screening, "-omics" assays, and computational models) to reach regulatory acceptance. We identified four read-across scenarios that cover typical human health assessment situations. For each such decision context, we suggested several chemical groups as examples to demonstrate when read-across between group members is possible, considering both chemical and biological similarities. We agreed to carry out the complete read-across exercise for at least one chemical category per read-across scenario in the context of SEURAT-1, and the results of this exercise will be completed and presented by the end of the research initiative in December 2015.