160 results for "Emmerich, Michael"
A tutorial on multiobjective optimization: fundamentals and evolutionary methods
In almost no other field of computer science has the idea of using bio-inspired search paradigms been as useful as in solving multiobjective optimization problems. The idea of using a population of search agents that collectively approximate the Pareto front resonates well with processes in natural evolution, immune systems, and swarm intelligence. Methods such as NSGA-II, SPEA2, SMS-EMOA, MOPSO, and MOEA/D have become standard solvers for multiobjective optimization problems. This tutorial will review some of the most important fundamentals of multiobjective optimization and then introduce representative algorithms, illustrate their working principles, and discuss their application scope. In addition, the tutorial will discuss statistical performance assessment. Finally, it highlights recent important trends and closely related research fields. The tutorial is intended for readers who want to acquire basic knowledge of the mathematical foundations of multiobjective optimization and of state-of-the-art methods in evolutionary multiobjective optimization. The aim is to provide a starting point for research in this active area, and it should also help the advanced reader to identify open research topics.
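The Pareto-dominance relation at the heart of all the solvers named above can be sketched in a few lines of Python (an illustrative stand-alone snippet, not taken from the tutorial; the objective vectors are invented):

```python
def dominates(a, b):
    """True if a Pareto-dominates b under minimization: a is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(points))  # -> [(1, 5), (2, 3), (4, 1)]
```

Non-dominated sorting, as used in NSGA-II, repeatedly extracts such fronts and assigns each one a successive rank.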
DrugEx v2: de novo design of drug molecules by Pareto-based multi-objective reinforcement learning in polypharmacology
In polypharmacology, drugs are required to bind to multiple specific targets, for example to enhance efficacy or to reduce resistance formation. Although deep learning has achieved a breakthrough in de novo design in drug discovery, most of its applications only focus on a single drug target to generate drug-like active molecules. However, in reality drug molecules often interact with more than one target, which can have desired (polypharmacology) or undesired (toxicity) effects. In a previous study we proposed a new method named DrugEx that integrates an exploration strategy into RNN-based reinforcement learning to improve the diversity of the generated molecules. Here, we extended our DrugEx algorithm with multi-objective optimization to generate drug-like molecules towards multiple targets or one specific target while avoiding off-targets (in this study, the two adenosine receptors, A1AR and A2AAR, and the potassium ion channel hERG). In our model, we applied an RNN as the agent and machine learning predictors as the environment. Both the agent and the environment were pre-trained in advance and then interacted within a reinforcement learning framework. The concept of evolutionary algorithms was merged into our method such that crossover and mutation operations were implemented by the same deep learning model as the agent. During the training loop, the agent generates a batch of SMILES-based molecules. Subsequently, scores for all objectives provided by the environment are used to construct Pareto ranks of the generated molecules. For this ranking, a non-dominated sorting algorithm and a Tanimoto-based crowding-distance algorithm using chemical fingerprints are applied. Here, we adopted GPU acceleration to speed up the process of Pareto optimization. The final reward of each molecule is calculated from the Pareto ranking with the ranking selection algorithm. The agent is trained under the guidance of the reward to ensure that it can generate desired molecules after convergence of the training process. All in all, we demonstrate the generation of compounds with a diverse predicted selectivity profile towards multiple targets, offering the potential of high efficacy and low toxicity.
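The Tanimoto coefficient used for the fingerprint-based crowding distance is easy to sketch on binary fingerprints represented as sets of "on" bit positions (an illustrative snippet; the fingerprints below are made up and do not come from the paper):

```python
def tanimoto(fp1, fp2):
    """Tanimoto (Jaccard) similarity between two binary fingerprints,
    each given as the set of its 'on' bit positions."""
    if not fp1 and not fp2:
        return 1.0
    inter = len(fp1 & fp2)
    return inter / (len(fp1) + len(fp2) - inter)

mol_a = {1, 4, 7, 9}        # hypothetical fingerprint bits of molecule A
mol_b = {1, 4, 8, 9, 12}    # hypothetical fingerprint bits of molecule B
print(tanimoto(mol_a, mol_b))  # 3 shared bits / 6 distinct bits -> 0.5
```

Using Tanimoto distance instead of the usual objective-space crowding distance spreads the selection pressure over chemical diversity rather than score diversity.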
Efficient computation of expected hypervolume improvement using box decomposition algorithms
In the field of multi-objective optimization algorithms, multi-objective Bayesian Global Optimization (MOBGO) is an important branch, in addition to evolutionary multi-objective optimization algorithms. MOBGO utilizes Gaussian process models learned from previous objective function evaluations to decide the next evaluation site by maximizing or minimizing an infill criterion. A commonly used criterion in MOBGO is the Expected Hypervolume Improvement (EHVI), which shows good performance on a wide range of problems with respect to exploration and exploitation. However, so far it has been a challenge to calculate exact EHVI values efficiently. This paper proposes an efficient algorithm for the exact calculation of the EHVI in the generic case. The algorithm is based on partitioning the integration volume into a set of axis-parallel slices. Theoretically, the upper-bound time complexities can be improved from the previous O(n^2) and O(n^3), for two- and three-objective problems respectively, to Θ(n log n), which is asymptotically optimal. This article generalizes the scheme to higher-dimensional cases by utilizing a new hyperbox decomposition technique proposed by Dächert et al. (Eur J Oper Res 260(3):841–855, 2017). It also utilizes a generalization of the multilayered integration scheme that scales linearly in the number of hyperboxes of the decomposition. The speed comparison shows that the proposed algorithm significantly reduces computation time. Finally, the decomposition technique is applied to the calculation of the Probability of Improvement (PoI).
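The hypervolume indicator that EHVI builds on is easiest to see in the two-objective case, where the dominated region decomposes into axis-parallel slices, one per front point (a toy sketch of that decomposition, not the paper's EHVI algorithm; the front and reference point are invented):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a mutually non-dominated 2-D front (minimization),
    measured against a reference point that is worse in both objectives."""
    # Sorting by f1 makes f2 strictly decreasing, so each point
    # contributes one axis-parallel rectangle (slice).
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # -> 12.0
```

EHVI then integrates the hypervolume gain of a candidate point over the predictive distribution of a Gaussian process, which is where the box decomposition pays off.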
A Systematic Approach to Identify Shipping Emissions Using Spatio-Temporally Resolved TROPOMI Data
Stringent global regulations aim to reduce nitrogen dioxide (NO2) emissions from maritime shipping. However, the lack of a global monitoring system makes compliance verification challenging. To address this issue, we propose a systematic approach to monitor shipping emissions using unsupervised clustering techniques on spatio-temporal georeferenced data, specifically NO2 measurements obtained from the TROPOspheric Monitoring Instrument (TROPOMI) on board the Copernicus Sentinel-5 Precursor satellite. Our method involves partitioning spatio-temporally resolved measurements based on the similarity of NO2 column levels. We demonstrate the reproducibility of our approach through rigorous testing and validation using data collected from multiple regions and time periods. Our approach improves the spatial correlation coefficients between NO2 column clusters and shipping traffic frequency. Additionally, we identify a temporal correlation between NO2 column levels along shipping routes and the global container throughput index. We expect that our approach may serve as a prototype for a tool to identify anthropogenic maritime emissions, distinguishing them from background sources.
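The core clustering step, grouping measurements by similarity of their NO2 column values, can be illustrated with a toy one-dimensional k-means (a generic sketch only; the paper's actual clustering method, data, and values are not specified here, and the numbers below are invented):

```python
def kmeans_1d(values, k, iters=20):
    """Minimal 1-D k-means: partition scalar values into k clusters by
    iteratively reassigning each point to its nearest centroid.
    Assumes len(values) >= k."""
    # Spread initial centroids across the sorted value range.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

no2 = [0.10, 0.12, 0.11, 0.80, 0.85, 0.78]   # invented NO2 column values
centroids, clusters = kmeans_1d(no2, k=2)
print(sorted(centroids))  # low background cluster vs. high plume cluster
```

In the paper's setting, the clustering runs on spatio-temporally georeferenced measurements, so high-value clusters that align with shipping lanes become candidate emission signatures.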
Sensor placement in water distribution networks using centrality-guided multi-objective optimisation
This paper introduces a multi-objective optimisation approach for the challenging problem of efficient sensor placement in water distribution networks for contamination detection. An important question is how to identify the minimal number of required sensors without losing the capacity to monitor the system as a whole. In this study, we adapted the NSGA-II multi-objective optimisation method by applying centrality mutation. The approach, with two objectives, namely the minimisation of Expected Time of Detection and the maximisation of Detection Network Coverage (which computes the number of detected water contamination events), is tested on a moderate-sized benchmark problem (129 nodes). The resulting Pareto front shows that detection network coverage can improve dramatically by deploying only a few sensors (e.g. increasing from one sensor to three). However, after a certain number of sensors is reached (e.g. 20 sensors), the benefit of further increasing the number of sensors is not apparent. Further, the results confirm that 40–45 sensors (i.e. 31–35% of the total number of nodes) will be sufficient for fully monitoring the benchmark network, i.e. for detection of any contaminant intrusion event no matter where it appears in the network.
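A centrality-guided mutation, biasing mutated sensor positions toward structurally important network nodes, might be sketched as follows. This is a guess at the mechanism based only on the abstract: degree centrality stands in for whichever centrality measure the authors actually used, and the graph, placement, and function names are all invented:

```python
import random

def degree_centrality(adj):
    """Degree centrality of each node in an adjacency-list graph."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def centrality_mutation(placement, adj, rng=random):
    """Replace one randomly chosen sensor with a currently unused node,
    sampled with probability proportional to its degree centrality."""
    cent = degree_centrality(adj)
    unused = [v for v in adj if v not in placement]
    weights = [cent[v] for v in unused]
    new_node = rng.choices(unused, weights=weights, k=1)[0]
    mutated = list(placement)
    mutated[rng.randrange(len(mutated))] = new_node
    return mutated

adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}  # toy network
print(centrality_mutation([1, 2], adj, random.Random(42)))
```

The intuition is that sensors at high-centrality junctions intercept contamination from more intrusion points, so steering mutation toward such nodes should speed up convergence of NSGA-II.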
Cluster-based Kriging approximation algorithms for complexity reduction
Kriging, or Gaussian process regression, is applied in many fields as a non-linear regression model, as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, which are cubic and quadratic in the number of data points respectively, become a major bottleneck as ever larger data sets become available. In this paper, we propose a general methodology for complexity reduction, called cluster Kriging, in which the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods in a well-defined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. Moreover, some practical suggestions are provided for using the proposed algorithms.
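The complexity argument behind cluster Kriging is plain arithmetic: fitting one Gaussian process on n points costs on the order of n^3 (a Cholesky factorization of the n-by-n kernel matrix), while fitting k independent models on n/k points each costs k * (n/k)^3 = n^3 / k^2 (an illustrative back-of-the-envelope calculation, not a figure from the paper):

```python
def kriging_cost(n):
    """Proxy for the cubic cost of factorizing an n x n kernel matrix."""
    return n ** 3

def cluster_kriging_cost(n, k):
    """Proxy for the total cost of k independent Kriging models,
    each fitted on n // k points."""
    return k * (n // k) ** 3

n = 10_000
print(kriging_cost(n))              # -> 1000000000000 (proxy operations)
print(cluster_kriging_cost(n, 10))  # -> 10000000000, a k^2 = 100x reduction
```

The trade-off is that each local model only sees its own cluster, which is why the paper's four variants differ in how predictions from the cluster models are recombined.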
Climate targets in European timber-producing countries conflict with goals on forest ecosystem services and biodiversity
The European Union (EU) set clear climate change mitigation targets to reach climate neutrality, accounting for forests and their woody biomass resources. We investigated the consequences of increased harvest demands resulting from EU climate targets. We analysed the impacts on national policy objectives for forest ecosystem services and biodiversity through empirical forest simulation and multi-objective optimization methods. We show that key European timber-producing countries – Finland, Sweden, and Germany (Bavaria) – cannot fulfil the increased harvest demands linked to the ambitious 1.5°C target. Potential for a harvest increase exists only in the studied region of Norway. However, focusing on EU climate targets conflicts with several national policies and causes adverse effects on multiple ecosystem services and biodiversity. We argue that the role of forests and their timber resources in achieving climate targets and societal decarbonization should not be overstated. Our study provides insights for other European countries challenged by conflicting policies and supports policymakers.
A novel chemogenomics analysis of G protein-coupled receptors (GPCRs) and their ligands: a potential strategy for receptor de-orphanization
Background G protein-coupled receptors (GPCRs) represent a family of well-characterized drug targets with significant therapeutic value. Phylogenetic classifications may help to understand the characteristics of individual GPCRs and their subtypes. Previous phylogenetic classifications were all based on the sequences of receptors, adding only minor information about the ligand binding properties of the receptors. In this work, we compare a sequence-based classification of receptors to a ligand-based classification of the same group of receptors, and evaluate the potential to use sequence relatedness as a predictor for ligand interactions, thus aiding the quest for ligands of orphan receptors. Results We present a classification of GPCRs that is purely based on their ligands, complementing sequence-based phylogenetic classifications of these receptors. Targets were hierarchically classified into phylogenetic trees, for both sequence space and ligand (substructure) space. The overall organization of the sequence-based tree and the substructure-based tree was similar; in particular, the adenosine receptors cluster together, as do most peptide receptor subtypes (e.g. opioid, somatostatin) and adrenoceptor subtypes. In ligand space, the prostanoid and cannabinoid receptors are more distant from the other targets, whereas the tachykinin receptors, the oxytocin receptor, and serotonin receptors are closer to the other targets, which is indicative of ligand promiscuity. In 93% of the receptors studied, de-orphanization of a simulated orphan receptor using the ligands of related receptors performed better than random (AUC > 0.5), and for 35% of receptors the de-orphanization performance was good (AUC > 0.7). Conclusions We constructed a phylogenetic classification of GPCRs that is solely based on the ligands of these receptors. The similarities and differences with traditional sequence-based classifications were investigated: our ligand-based classification uncovers relationships among GPCRs that are not apparent from the sequence-based classification. This will shed light on potential cross-reactivity of GPCR ligands and will aid the design of new ligands with the desired activity profiles. In addition, we linked the ligand-based classification with a ligand-focused sequence-based classification described in the literature and demonstrated the potential of this method for de-orphanization of GPCRs.
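The AUC criterion used above to judge de-orphanization can be computed directly from how known ligands rank against decoys, via the Mann-Whitney formulation (a generic sketch; the scores below are invented and have no connection to the study's data):

```python
def auc(pos_scores, neg_scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly drawn positive (true ligand) outscores a random negative,
    counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical similarity scores of candidates against a simulated orphan:
true_ligands = [0.9, 0.7, 0.6]
decoys = [0.8, 0.4, 0.3, 0.2]
print(auc(true_ligands, decoys))  # 10 of 12 pairs won -> ~0.83
```

An AUC of 0.5 corresponds to random ranking, which is why values above 0.5 (and especially above 0.7) indicate that ligands of related receptors carry a useful signal for the orphan.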