167 result(s) for "Pareto frontier"
Bi-objective optimization for multi-modal transportation routing planning problem based on Pareto optimality
Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network. The optimization is conducted from two viewpoints: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by gaining its Pareto frontier. The Pareto frontier of the model, gained by the normalized normal constraint method, can provide the multi-modal transportation operator (MTO) and customers with better decision support. Then, an experimental case study is designed to verify the feasibility of the model and Pareto optimality by using the mathematical programming software Lingo. Finally, a sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality perform well in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of variation in demand and supply on the multi-modal transportation organization. Therefore, this method can be further promoted to practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. A Pareto frontier based sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case.
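The operation underlying Pareto frontiers throughout these results is the non-dominated filter. A minimal Python sketch with invented (cost, time) route data; the paper itself solves a mixed integer model in Lingo, so this is only an illustration of the concept:

```python
# Hypothetical (cost, time) scores for candidate multi-modal routes.
# Both objectives are minimized; a route is Pareto-optimal if no
# other distinct route is at least as good on both objectives.

def pareto_frontier(points):
    """Return the non-dominated subset of (cost, time) tuples."""
    frontier = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

routes = [(120, 30), (100, 45), (90, 60), (130, 50), (105, 55)]
print(sorted(pareto_frontier(routes)))
# (130, 50) and (105, 55) are dominated and drop out.
```

A decision maker (here, the MTO or customer) would then pick one point from the returned frontier according to their preferred cost/time balance.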
Pareto Frontier Based Concept Selection Under Uncertainty, with Visualization
Issue Title: Special Issue on Multidisciplinary Design Optimization (Guest Editor: Natalia Alexandrov) In a recent publication, we presented a new multiobjective decision-making tool for use in conceptual engineering design. In the present paper, we provide important developments that support the next phase in the evolution of the tool. These developments, together with those of our previous work, provide a concept selection approach that capitalizes on the benefits of computational optimization. Specifically, the new approach uses the efficiency and effectiveness of optimization to rapidly compare numerous designs and characterize the tradeoff properties within the multiobjective design space. As such, the new approach differs significantly from traditional (non-optimization-based) concept selection approaches where, comparatively speaking, significant time is often spent evaluating only a few points in the design space. Under the new approach, design concepts are evaluated using a so-called s-Pareto frontier; this frontier originates from the Pareto frontiers of various concepts, and is the Pareto frontier for the set of design concepts. An important characteristic of the s-Pareto frontier is that it provides a foundation for analyzing tradeoffs between design objectives and the tradeoffs between design concepts. The new developments presented in this paper include: (i) the notion of minimally representing the s-Pareto frontier, (ii) the quantification of concept goodness using s-Pareto frontiers, (iii) the development of an interactive design space exploration approach that can be used to visualize n-dimensional s-Pareto frontiers, and (iv) s-Pareto frontier-based approaches for considering uncertainty in concept selection. Simple structural examples are presented that illustrate representative applications of the proposed method.
A method for developing systems that traverse the Pareto frontiers of multiple system concepts through modularity
Natural changes in customer needs over time often necessitate the development of new systems that satisfy the new needs. In a previous work by the authors, a 5-step multiobjective optimization-based method was presented to identify systems that anticipate, account for, and allow for these changes by moving from one Pareto design to another through module addition. Recognizing the potential for changes in needs to exceed the limits of a single Pareto frontier, the present paper introduces important advancements that extend development to modules connecting multiple disparate system concepts. As such, the search for suitable system designs is extended from a Pareto frontier that characterizes one system concept to a Pareto frontier that characterizes a set of system concepts. An expanded methodology is described, and a tri-objective hurricane and flood resistant residential structure example is used to demonstrate the method. The authors conclude that the developed method provides a new methodology for selecting platform and module designs in the presence of multiple system concepts, and is capable of identifying a set of modular system designs that are well-suited to satisfy changing needs over time.
Data‐Driven Equation Discovery of a Cloud Cover Parameterization
A promising method for improving the representation of clouds in climate models, and hence climate projections, is to develop machine learning‐based parameterizations using output from global storm‐resolving models. While neural networks (NNs) can achieve state‐of‐the‐art performance within their training distribution, they can make unreliable predictions outside of it. Additionally, they often require post‐hoc tools for interpretation. To avoid these limitations, we combine symbolic regression, sequential feature selection, and physical constraints in a hierarchical modeling framework. This framework allows us to discover new equations diagnosing cloud cover from coarse‐grained variables of global storm‐resolving model simulations. These analytical equations are interpretable by construction and easily transferable to other grids or climate models. Our best equation balances performance and complexity, achieving a performance comparable to that of NNs (R2 = 0.94) while remaining simple (with only 11 trainable parameters). It reproduces cloud cover distributions more accurately than the Xu‐Randall scheme across all cloud regimes (Hellinger distances < 0.09), and matches NNs in condensate‐rich regimes. When applied and fine‐tuned to the ERA5 reanalysis, the equation exhibits superior transferability to new data compared to all other optimal cloud cover schemes. Our findings demonstrate the effectiveness of symbolic regression in discovering interpretable, physically‐consistent, and nonlinear equations to parameterize cloud cover. Plain Language Summary: In climate models, cloud cover is usually expressed as a function of coarse, pixelated variables. Traditionally, this functional relationship is derived from physical assumptions. In contrast, machine learning (ML) approaches, such as neural networks, sacrifice interpretability for performance. In our approach, we use high‐resolution climate model output to learn a hierarchy of cloud cover schemes from data.
To bridge the gap between simple statistical methods and ML algorithms, we employ a symbolic regression method. Unlike classical regression, which requires providing a set of basis functions from which the equation is composed, symbolic regression only requires mathematical operators (such as + and ×) that it learns to combine. By using a genetic algorithm, inspired by the process of natural selection, we discover an interpretable, nonlinear equation for cloud cover. This equation is simple, performs well, satisfies physical principles, and outperforms other algorithms when applied to new observationally‐informed data. Key Points:
  • We systematically derive and evaluate cloud cover parameterizations of various complexity from global storm‐resolving simulation output
  • Using symbolic regression combined with physical constraints, we find a new interpretable equation balancing performance and simplicity
  • Our data‐driven cloud cover equation can be retuned with few samples, facilitating transfer learning to generalize to other realistic data
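A toy sketch of the symbolic regression idea (an assumed, minimal setup; the paper's method uses a genetic algorithm, physical constraints, and a far richer operator set). Here small expression trees over {+, ×} and the terminals {x, 1} are enumerated exhaustively rather than evolved, and the best fit to samples of a hidden target y = x² + x is selected:

```python
from itertools import product

# Samples of the hidden target the search should rediscover.
XS = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
TARGET = [x * x + x for x in XS]

def exprs(depth):
    """All expression trees over {+, *, x, 1} up to the given depth."""
    pool = ["x", "1"]
    if depth > 0:
        sub = exprs(depth - 1)
        pool += [(op, a, b) for op, a, b in product("+*", sub, sub)]
    return pool

def evaluate(e, x):
    if e == "x":
        return x
    if e == "1":
        return 1.0
    op, a, b = e
    va, vb = evaluate(a, x), evaluate(b, x)
    return va + vb if op == "+" else va * vb

def mse(e):
    return sum((evaluate(e, x) - y) ** 2
               for x, y in zip(XS, TARGET)) / len(XS)

best = min(exprs(2), key=mse)
print(best, mse(best))  # recovers an exact form of x*x + x, error 0.0
```

A genetic algorithm replaces this exhaustive enumeration with mutation and crossover of trees, which is what makes the search tractable for realistic operator sets and data.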
A survey on handling computationally expensive multiobjective optimization problems using surrogates: non-nature inspired methods
Computationally expensive multiobjective optimization problems arise, e.g., in many engineering applications, where several conflicting objectives are to be optimized simultaneously while satisfying constraints. In many cases, the lack of explicit mathematical formulas for the objectives and constraints may necessitate conducting computationally expensive and time-consuming experiments and/or simulations. As another challenge, these problems may have a convex, nonconvex, or even disconnected Pareto frontier consisting of Pareto optimal solutions. Because of the existence of many such solutions, typically, a decision maker is required to select the most preferred one. In order to deal with the high computational cost, surrogate-based methods are commonly used in the literature. This paper surveys surrogate-based methods proposed in the literature, where the methods are independent of the underlying optimization algorithm and mitigate the computational burden to capture different types of Pareto frontiers. The methods considered are classified, discussed and then compared. These methods are divided into two frameworks: the sequential and the adaptive frameworks. Based on the comparison, we recommend the adaptive framework to tackle the aforementioned challenges.
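The surrogate idea the survey organizes can be sketched in a single objective (a generic illustration, not any specific surveyed method): evaluate the expensive function at a few points, fit a cheap model through them, and let the model's minimiser propose the next expensive evaluation.

```python
def expensive(x):           # stands in for a costly simulation
    return (x - 0.37) ** 2

xs = [0.0, 0.5, 1.0]        # initial design of experiments
ys = [expensive(x) for x in xs]

# Fit the surrogate y = a*x^2 + b*x + c exactly through the three
# samples using divided differences (closed form, no solver needed).
x0, x1, x2 = xs
y0, y1, y2 = ys
a = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
b = (y1 - y0) / (x1 - x0) - a * (x0 + x1)
c = y0 - a * x0 ** 2 - b * x0

# The surrogate's minimiser (a > 0 here) is the next point worth
# spending a real evaluation on.
next_x = -b / (2 * a)
print(round(next_x, 6), expensive(next_x))
```

In the sequential framework this loop runs once on a fixed sample; in the adaptive framework the new evaluation is folded back in and the surrogate is refitted, which is why the survey recommends the latter for difficult frontiers.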
Pareto-Front Optimization of Variance-Added Expected Loss with Interrelated Qualities
In industries, particularly in quality optimization, the trade-off between model bias and variance is inevitable, reflecting the tension between accuracy and uncertainty. Traditional methods often address these aspects separately, potentially leading to suboptimal decisions. This study proposes a Pareto-front optimization framework for a variance-added expected loss function within the context of interrelated quality characteristics. By integrating multivariate quadratic loss with a variance term, our approach simultaneously captures deviation from targets (bias) and system uncertainty (variance). Unlike sequential approaches that first minimize bias and then variance—often increasing total risk—our weighted formulation flexibly adjusts for their trade-offs. This enables a more balanced and efficient optimization process that identifies solutions with lower overall risk. Through Pareto-front analysis, we reveal trade-offs between expected loss and variance, allowing users to select optimal quality designs based on their preferred bias–variance balance. Representative examples and a case study adopted from the literature validate the effectiveness and practical applicability of the proposed method.
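The weighted formulation described above can be sketched as follows (the design data are hypothetical): each candidate carries a squared deviation from target (bias term) and a variance term, and sweeping the weight traces the trade-off instead of minimizing bias first and variance second.

```python
# Hypothetical candidate designs with bias^2 and variance terms.
designs = {
    "A": {"bias2": 0.10, "var": 0.90},
    "B": {"bias2": 0.30, "var": 0.40},
    "C": {"bias2": 0.70, "var": 0.10},
    "D": {"bias2": 0.80, "var": 0.85},  # dominated: never selected
}

def weighted_loss(d, w):
    """Variance-added expected loss with trade-off weight w."""
    return w * d["bias2"] + (1 - w) * d["var"]

selected = set()
for i in range(11):                     # sweep w = 0.0, 0.1, ..., 1.0
    w = i / 10
    best = min(designs, key=lambda k: weighted_loss(designs[k], w))
    selected.add(best)
print(sorted(selected))                 # ['A', 'B', 'C']
```

The set of designs that win for some weight is the (convex part of the) Pareto front; a user then picks from it according to their preferred bias-variance balance.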
The explanation game
We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on a (conditionally) optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions.
Pareto-optimal reinsurance policies in the presence of individual risk constraints
The notion of Pareto optimality is commonly employed to formulate decisions that reconcile the conflicting interests of multiple agents with possibly different risk preferences. In the context of a one-period reinsurance market comprising an insurer and a reinsurer, both of which perceive risk via distortion risk measures, also known as dual utilities, this article characterizes the set of Pareto-optimal reinsurance policies analytically and visualizes the insurer–reinsurer trade-off structure geometrically. The search of these policies is tackled by translating it mathematically into a functional minimization problem involving a weighted average of the insurer’s risk and the reinsurer’s risk. The resulting solutions not only cast light on the structure of the Pareto-optimal contracts, but also allow us to portray the resulting insurer–reinsurer Pareto frontier graphically. In addition to providing a pictorial manifestation of the compromise reached between the insurer and reinsurer, an enormous merit of developing the Pareto frontier is the considerable ease with which Pareto-optimal reinsurance policies can be constructed even in the presence of the insurer’s and reinsurer’s individual risk constraints. A strikingly simple graphical search of these constrained policies is performed in the special cases of Value-at-Risk and Tail Value-at-Risk.
A multi-objective framework for Pareto frontier exploration of lattice structures
Multi-scale topology optimisation has received renewed research interest in the last decade due to the potential for increased mechanical performance and improved additive manufacturing capabilities. Most multi-scale routines rely on homogenization to bridge the scale difference, simulate part performance and eventually drive it towards an optimum. Key to macroscale performance is the search for optimal metamaterials. In this work, a multi-objective framework is proposed to reformulate this classical problem, which is to the authors’ knowledge the first work to do so. The use of multiple objectives implies that the underlying structure of the optimal metamaterial performance space is a Pareto frontier: a manifold of solutions which cannot be improved upon without compromising on either stiffness or weight. The proposed framework is applied to a lattice unit cell and the map between the optimal design and performance space, through the so-called compromise space, is studied numerically. Deficiencies that cause a collapse of the Pareto frontier are resolved and the effect of design constraints is examined. In the end, it is shown that a 14-dimensional compromise space is capable of accurately capturing every Pareto-optimal performance while also ensuring a bijective map to the design space. Therefore, these properties make this lattice material model an attractive target for usage inside multi-scale routines.
Performance and Energy Trade-Offs for Parallel Applications on Heterogeneous Multi-Processing Systems
This work proposes a methodology to find performance and energy trade-offs for parallel applications running on Heterogeneous Multi-Processing systems with a single instruction-set architecture. These offer flexibility in the form of different core types and voltage and frequency pairings, defining a vast design space to explore. Therefore, for a given application, choosing a configuration that optimizes the performance and energy consumption is not straightforward. Our method proposes novel analytical models for performance and power consumption whose parameters can be fitted using only a few strategically sampled offline measurements. These models are then used to estimate an application’s performance and energy consumption for the whole configuration space. In turn, these offline predictions define the choice of estimated Pareto-optimal configurations of the model, which are used to inform the selection of the configuration that the application should be executed on. The methodology was validated on an ODROID-XU3 board for eight programs from the PARSEC Benchmark, Phoronix Test Suite and Rodinia applications. The generated Pareto-optimal configuration space represented a 99% reduction of the universe of all available configurations. Energy savings of up to 59.77%, 61.38% and 17.7% were observed when compared to the performance, ondemand and powersave Linux governors, respectively, with higher or similar performance.