1,660 result(s) for "uncertainty quantification"
Solving Stochastic Inverse Problems for Property–Structure Linkages Using Data-Consistent Inversion and Machine Learning
Determining process–structure–property linkages is one of the key objectives in materials science, and uncertainty quantification plays a critical role in understanding both process–structure and structure–property linkages. In this work, we seek to learn a distribution of microstructure parameters that is consistent in the sense that the forward propagation of this distribution through a crystal plasticity finite element model matches a target distribution on material properties. This stochastic inversion formulation infers a distribution of acceptable/consistent microstructures, as opposed to a deterministic solution, which expands the range of feasible designs in a probabilistic manner. To solve this stochastic inverse problem, we employ a recently developed uncertainty quantification framework based on push-forward probability measures, which combines techniques from measure theory and Bayes’ rule to define a unique and numerically stable solution. This approach requires making an initial prediction using an initial guess for the distribution on model inputs and solving a stochastic forward problem. To reduce the computational burden of solving both the stochastic forward and stochastic inverse problems, we combine this approach with a machine learning Bayesian regression model based on Gaussian processes and demonstrate the proposed methodology on two representative case studies in structure–property linkages.
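
The push-forward construction described above can be sketched in a few lines. The following is a minimal illustration, assuming a scalar quantity-of-interest map Q (a stand-in for the crystal plasticity model or its Gaussian process surrogate), a uniform initial guess, and a Gaussian target density; all names and numbers are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)

# Hypothetical scalar forward map (stand-in for a crystal plasticity model
# or its Gaussian process surrogate).
def Q(lam):
    return lam**2 + 0.1 * lam

# 1) Initial guess on the input: uniform on [0, 2].
lam_init = rng.uniform(0.0, 2.0, size=20000)

# 2) Stochastic forward problem: push the initial samples through Q and
#    estimate the push-forward density with a KDE.
q_init = Q(lam_init)
pf_density = gaussian_kde(q_init)

# 3) Target ("observed") density on the output, assumed N(1.5, 0.2^2).
obs_density = norm(loc=1.5, scale=0.2).pdf

# 4) Rejection sampling with the ratio r = observed / push-forward, which
#    draws samples from the data-consistent updated distribution.
r = obs_density(q_init) / pf_density(q_init)
accept = rng.uniform(size=r.size) < r / r.max()
lam_update = lam_init[accept]

print(f"accepted {lam_update.size} consistent samples")
```
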
On the Brittleness of Bayesian Inference
With the advent of high-performance computing, Bayesian methods are becoming increasingly popular tools for the quantification of uncertainty throughout science and industry. Since these methods can impact the making of sometimes critical decisions in increasingly complicated contexts, the sensitivity of their posterior conclusions with respect to the underlying models and prior beliefs is a pressing question to which there currently exist positive and negative answers. We report new results suggesting that, although Bayesian methods are robust when the number of possible outcomes is finite or when only a finite number of marginals of the data-generating distribution are unknown, they could be generically brittle when applied to continuous systems (and their discretizations) with finite information on the data-generating distribution. If closeness is defined in terms of the total variation (TV) metric or the matching of a finite system of generalized moments, then (1) two practitioners who use arbitrarily close models and observe the same (possibly arbitrarily large amount of) data may reach opposite conclusions; and (2) any given prior and model can be slightly perturbed to achieve any desired posterior conclusion. The mechanism causing brittleness/robustness suggests that learning and robustness are antagonistic requirements, which raises the possibility of a missing stability condition when using Bayesian inference in a continuous world under finite information.
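
A toy illustration of the brittleness mechanism (not the paper's construction): on a fine discretization of the data space, two likelihood models that are close in total variation can produce markedly different posteriors after conditioning on the same observation, because the whole perturbation budget can be concentrated on the observed bin.

```python
import numpy as np

# Toy illustration only: two likelihoods whose total variation distance is
# eps = 5e-5, yet the posteriors they induce differ substantially, because
# the small TV budget is spent entirely on the observed bin.
N = 10_000                       # observation bins (fine discretization)
eps = 5e-5                       # TV perturbation, much smaller than 1
prior = {0: 0.5, 1: 0.5}

# Model A: both parameter values assign the observed bin mass 1/N.
likA = {0: 1.0 / N, 1: 1.0 / N}
# Model B: shift eps of mass toward/away from the observed bin.
likB = {0: 1.0 / N + eps, 1: 1.0 / N - eps}

def posterior(lik):
    z = sum(prior[t] * lik[t] for t in prior)
    return {t: prior[t] * lik[t] / z for t in prior}

print("model A:", posterior(likA))   # {0: 0.5, 1: 0.5}
print("model B:", posterior(likB))   # {0: 0.75, 1: 0.25}
```

Refining the discretization (larger N) lets the same tiny eps drive the posterior arbitrarily close to certainty, matching the qualitative claim above.
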
Developing hazard scenarios from monitoring data, historical chronicles, and expert elicitation: a case study of Sangay volcano, Ecuador
Sangay volcano is considered one of the most active volcanoes worldwide. Nevertheless, due to its remote location and low-impact eruptions, its eruptive history and hazard scenarios are poorly constrained. In this work, we address this issue by combining an analysis of monitoring data and historical chronicles with expert elicitation. During the last 400 years, we recognize periods of quiescence, weak eruptive activity, and enhanced eruptive activity, lasting from several months to several years, punctuated by eruptive pulses lasting from a few hours to a few days. Sangay volcano has been mainly active since the seventeenth century, with weak eruptive activity as the most common regime, although there have also been several periods of quiescence. During this period, eruptive pulses with VEI 1–3 occurred mainly during enhanced eruptive activity and produced far-reaching impacts due to ash fallout to the west and long-runout lahars to the south-east. Four eruptive pulse scenarios are considered in the expert elicitation: strong ash venting (SAV, VEI 1–2), violent Strombolian (VS, VEI 2–3), sub-Plinian (SPL, VEI 3–4), and Plinian (PL, VEI 4–5). SAV is identified as the most likely scenario, while PL has the smallest probability of occurrence. The elicitation results show high uncertainty about the probability of occurrence of VS and SPL. Large uncertainties are also observed for eruption duration and bulk fallout volume for all eruptive scenarios, while average column height is better characterized, particularly for SAV and VS. We interpret these results as a consequence of the lack of volcano-physical data, which could be reduced with further field studies. This study shows how historical reconstruction and expert elicitation can help to develop hazard scenarios with uncertainty assessment for poorly known volcanoes, representing a first step towards the development of appropriate hazard maps and subsequent planning.
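
The study's elicitation follows a structured protocol; purely as an illustration of how expert assessments of scenario probabilities can be aggregated, the sketch below applies a performance-weighted linear opinion pool with invented weights and probabilities.

```python
import numpy as np

# Illustrative only: aggregate expert-elicited probabilities for the four
# eruptive pulse scenarios with a weighted linear opinion pool. Weights and
# probabilities are made up; the study's actual elicitation protocol and
# numbers are not reproduced here.
scenarios = ["SAV", "VS", "SPL", "PL"]
# Each row: one expert's probability of occurrence per scenario.
assessments = np.array([
    [0.55, 0.25, 0.15, 0.05],
    [0.60, 0.20, 0.15, 0.05],
    [0.45, 0.35, 0.15, 0.05],
])
weights = np.array([0.5, 0.3, 0.2])   # e.g., calibration-based weights

pooled = weights @ assessments        # convex combination of the rows
for name, p in zip(scenarios, pooled):
    print(f"{name}: {p:.3f}")
```
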
Two sources of uncertainty in estimating tephra volumes from isopachs: perspectives and quantification
Calculating tephra volume is important for estimating eruption intensity and magnitude. Traditionally, tephra volumes are estimated by integrating the area under curves fit to the square root of isopach areas. In this work, we study two sources of uncertainty in estimating tephra volumes from isopachs. The first is model uncertainty: no fitted curve perfectly describes the tephra thinning pattern, and the fitting is done on the log-transformed square root of isopach area. The second arises because thickness must be extrapolated beyond the available data, which makes it impossible to validate the extrapolated thickness. We demonstrate the importance of the two sources of uncertainty at a theoretical level, and we use six isopach datasets with different characteristics to demonstrate their presence and the effect they can have on volume estimation. Measures to better represent the uncertainty are proposed and tested. For the model uncertainty, we propose (i) a better-informed and stricter way to report and evaluate goodness-of-fit, and (ii) that uncertainty estimates be based on the envelope defined by different well-fitted curves, rather than on volumes estimated from individual curves. For the extrapolation uncertainty, we recommend reporting the interpolated and extrapolated portions of the volume separately, and we propose testing how sensitive the total volume is to variability in the extrapolated volume. Neither source of uncertainty should be ignored, as both can introduce additional bias and uncertainty into the volume estimate.
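
For the classical single-segment exponential thinning model, T = T0·exp(-k·sqrt(A)), the fit and the interpolated/extrapolated split discussed above can be made concrete as follows. The isopach values are invented, and real workflows fit several candidate curves rather than just one.

```python
import numpy as np

# Minimal sketch of a Pyle-type exponential fit, T = T0 * exp(-k * sqrt(A)),
# with the total volume split into the part constrained by data and the
# extrapolated tail. Isopach values are invented for illustration.
A = np.array([2.0, 10.0, 45.0, 180.0, 700.0])   # isopach areas, km^2
T = np.array([1.60, 1.25, 0.73, 0.27, 0.04])    # thicknesses, m
T_km = T / 1000.0                                # work in km

x = np.sqrt(A)                                   # sqrt(area), km
slope, intercept = np.polyfit(x, np.log(T_km), 1)
k, T0 = -slope, np.exp(intercept)

# Total volume of the fitted exponential: V = 2*T0/k^2 (in km^3), from
# V = integral of T dA with the substitution A = x^2.
V_total = 2.0 * T0 / k**2
# Tail beyond the thinnest mapped isopach (x > x_max): the extrapolated part,
# V_extrap = 2*T0*exp(-k*a)*(a/k + 1/k^2) with a = x_max.
a = x.max()
V_extrap = 2.0 * T0 * np.exp(-k * a) * (a / k + 1.0 / k**2)

print(f"k = {k:.3f} 1/km, T0 = {T0 * 1000:.2f} m")
print(f"total volume: {V_total:.4f} km^3 "
      f"({100 * V_extrap / V_total:.1f}% extrapolated)")
```

Reporting the extrapolated percentage alongside the total, as the abstract recommends, makes it immediately visible how much of the estimate rests on unvalidated thinning behavior.
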
Uncertainty Quantification of Material Properties in Ballistic Impact of Magnesium Alloys
The design and development of cutting-edge lightweight materials for extreme conditions, including high-speed impact, remains a significant and ongoing challenge in spite of steady advances. Magnesium (Mg) and its alloys have gained much attention due to their high strength-to-weight ratio and the potential for further improvements in material properties such as strength and ductility. In this paper, a recently developed computational framework is adopted to quantify the effects of material uncertainties on the ballistic performance of Mg alloys. The framework is able to determine the largest deviation in the performance measure resulting from a finite variation in the corresponding material properties. It can also provide rigorous upper bounds on the probability of failure using known information about the uncertainties and the system, enabling conservative safety design and certification. This work focuses specifically on AZ31B Mg alloys, and it is assumed that the material is well characterized by the Johnson–Cook constitutive and failure models but that the model parameters are uncertain. The ordering of the uncertainty contributions of the model parameters, and the corresponding behavior regimes in which those parameters play a crucial role, are determined. Finally, it is shown how this ordering provides insight into improving ballistic performance and developing new material models for Mg alloys.
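
The abstract does not spell out the framework's internals, but one standard way to obtain both a "largest deviation" and a rigorous failure-probability bound is a concentration-of-measure (McDiarmid) certification. The sketch below applies that idea to the Johnson–Cook flow stress as a cheap stand-in for a full ballistic simulation; the parameter intervals, the design margin, and the one-at-a-time subdiameter shortcut are all illustrative assumptions, not calibrated AZ31B values.

```python
import numpy as np

def jc_stress(A, B, n, C, m, eps=0.10, eps_rate=1e3, T_star=0.3,
              eps0_rate=1.0):
    """Johnson-Cook flow stress (MPa) at fixed strain, rate, temperature."""
    return ((A + B * eps**n)
            * (1.0 + C * np.log(eps_rate / eps0_rate))
            * (1.0 - T_star**m))

# Illustrative parameter intervals (lo, hi); not calibrated values.
bounds = {"A": (150.0, 190.0), "B": (250.0, 310.0), "n": (0.15, 0.25),
          "C": (0.010, 0.016), "m": (1.3, 1.7)}
nominal = {k: 0.5 * (lo + hi) for k, (lo, hi) in bounds.items()}

# One-at-a-time swings: a cheap surrogate for the true McDiarmid
# subdiameters (which take the sup over all other parameters).
subdiam = {}
for name, (lo, hi) in bounds.items():
    vals = [jc_stress(**{**nominal, name: v}) for v in (lo, hi)]
    subdiam[name] = abs(vals[1] - vals[0])

D = np.sqrt(sum(d**2 for d in subdiam.values()))   # McDiarmid diameter
margin = 40.0                                      # design margin, MPa
p_fail_bound = np.exp(-2.0 * margin**2 / D**2)     # McDiarmid's inequality

print({k: round(v, 2) for k, v in subdiam.items()})
print(f"D = {D:.2f} MPa, P[failure] <= {p_fail_bound:.3f}")
```

The ranked subdiameters play the role of the "ordering of uncertainty contributions" the abstract refers to: the parameters with the largest swings are the ones worth characterizing more tightly.
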
Metamodeling on uncertainty quantification in the behavior of the tire/road interaction of vehicles
In recent years, the automotive industry has been conducting applied research to meet customers' needs regarding safety, vehicle comfort, and energy efficiency. Automotive tires in particular have a prominent position in this research area, as they are key to good vehicle handling, comfort, and safety. For vehicle dynamics performance, excellent grip and reduced rolling resistance are crucial to reliably maximizing energy efficiency. However, to ensure such a level of reliability in vehicle operation, the inherent uncertainties of tires, as well as other factors subject to variability, must be taken into account in the vehicle design. Accordingly, the present paper analyzes in detail the effect of variations in parameters such as ambient temperature, ground conditions, vertical load, speed, and tire inflation pressure on vehicle dynamics. A metamodeling approach combined with Monte Carlo simulation was employed to develop mathematical models for analyzing the effect of uncertain parameters on tire rolling resistance and on the traction, centripetal, and lateral forces in longitudinal and lateral vehicle dynamics, using experimental data from the literature. The innovation of the present research is an integrated approach that links the system's input parameters to rolling resistance through the developed metamodels. The maximum traction force showed substantial variability of up to 15% in both directions in response to variations in the vehicle's weight and the tire rolling resistance coefficient. In contrast, the lateral force exhibited greater variability, with a 25% downward and 10% upward variation associated with variability in the vehicle's weight and friction coefficient. A further sensitivity analysis highlights the significant influence of the friction coefficient and temperature on the vehicle's traction forces.
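
The metamodel-plus-Monte-Carlo pattern is simple to sketch: fit a cheap response surface to model or experimental data, then propagate input uncertainty through the surrogate. Everything below (the "true" model, the feature set, and the input distributions) is invented for illustration and is not the paper's tire model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-in for the expensive model or the experimental campaign:
# rolling resistance as a function of load, speed, inflation pressure.
def true_model(load_kN, speed_kmh, pressure_bar):
    return 0.01 * load_kN + 2e-5 * speed_kmh**2 / pressure_bar

def features(X):
    """Quadratic response-surface basis (the metamodel's regressors)."""
    L, v, p = X.T
    return np.column_stack([np.ones(len(X)), L, v, p, v**2, v**2 / p])

# Training design (e.g., from a design of experiments).
X_train = rng.uniform([2.0, 30.0, 1.8], [6.0, 120.0, 2.6], size=(60, 3))
y_train = true_model(*X_train.T)
coef, *_ = np.linalg.lstsq(features(X_train), y_train, rcond=None)

# Monte Carlo propagation of input uncertainty through the cheap metamodel.
n = 100_000
X_mc = np.column_stack([
    rng.normal(4.0, 0.4, n),      # vertical load, kN
    rng.normal(80.0, 10.0, n),    # speed, km/h
    rng.normal(2.2, 0.1, n),      # inflation pressure, bar
])
y_mc = features(X_mc) @ coef

print(f"rolling resistance: mean={y_mc.mean():.4f}, "
      f"95% band=({np.quantile(y_mc, 0.025):.4f}, "
      f"{np.quantile(y_mc, 0.975):.4f})")
```
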
Bayesian Parameter Determination of a CT-Test Described by a Viscoplastic-Damage Model Considering the Model Error
The state of materials, and accordingly the properties of structures, changes over the period of use, which may influence the reliability and quality of the structure during its lifetime. Identification of the model parameters of the system is therefore a topic that has attracted attention in the context of structural health monitoring. The parameters of a constitutive model are usually identified by minimizing the difference between the model response and experimental data. However, measurement errors and differences between specimens lead to deviations in the determined parameters. In this article, the Chaboche model with damage is used, and a stochastic simulation technique is applied to generate artificial data that exhibit the same stochastic behavior as experimental data. The model and damage parameters are then identified by applying the sequential Gauss-Markov-Kalman filter (SGMKF) approach, since among filtering and random-walk approaches this method proves the most efficient for time-consuming finite element model updating problems. The parameters identified using this Bayesian approach are compared with the true parameters of the simulation, and the efficiency of the identification method is discussed. The aims of this study are to determine whether the method is suitable and efficient for identifying the model and damage parameters of a highly nonlinear material model for a real structural specimen, using a limited surface displacement measurement vector obtained by Digital Image Correlation (DIC); to establish how much information is needed to estimate the parameters accurately even when the model error is considered; and to assess whether this approach can also be used in practice for health monitoring purposes before the occurrence of severe damage and collapse.
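
In its sampling form, a Gauss-Markov-Kalman update is a linear-in-the-data correction of a prior parameter ensemble. The sketch below uses a toy two-parameter forward map standing in for the finite element response at the DIC measurement points; it is a generic ensemble variant under those assumptions, not the authors' exact SGMKF implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy nonlinear forward model: parameters -> predicted measurements.
def forward(q):
    return np.array([q[0] * np.tanh(q[1]), q[0] + q[1]**2])

q_true = np.array([2.0, 0.8])
R = 0.02**2 * np.eye(2)                           # measurement noise cov
y_obs = forward(q_true) + rng.multivariate_normal(np.zeros(2), R)

# Prior ensemble of parameters and predicted (perturbed) measurements.
N = 5000
Q = rng.normal([1.5, 1.0], [0.5, 0.3], size=(N, 2))
Y = np.array([forward(q) for q in Q])
Y += rng.multivariate_normal(np.zeros(2), R, size=N)

# Kalman gain from ensemble covariances: q_a = q_f + K (y_obs - y_f).
Cqy = np.cov(Q.T, Y.T)[:2, 2:]                    # cross-covariance C_qy
Cyy = np.cov(Y.T)                                 # C_yy (noise included)
K = Cqy @ np.linalg.inv(Cyy)

Q_post = Q + (y_obs - Y) @ K.T                    # updated ensemble
print("posterior mean:", Q_post.mean(axis=0), " true:", q_true)
```

Applied sequentially over load steps or time windows, this same update becomes the "sequential" filter the abstract describes, with the finite element model (or a surrogate) replacing the toy forward map.
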
On the Mechanical Properties and Uncertainties of Jute Yarns
Products made from natural materials are eco-friendly, so it is important to supply product developers with reliable information regarding the properties of natural materials. In this study, we consider a widely used natural material called jute, which grows in Bangladesh, India, and China. We describe the results of tensile tests on jute yarns, as well as the energy absorption patterns leading to yarn failure. We also use statistical analyses and possibility distributions to quantify the uncertainty associated with the following properties of jute yarn: tensile strength, modulus of elasticity, and strain to failure. The uncertainty and energy absorption patterns of jute yarns are compared with those of jute fibers. We conclude that, in order to ensure the reliability and durability of a product made from jute, it is good practice to examine the material properties of yarns rather than those of fibers.
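
One simple way to build a possibility distribution from a small sample is to take a trapezoidal fuzzy number whose support is the sample range and whose core is the interquartile range. The sketch below does this for a hypothetical set of yarn tensile strengths; the paper's exact construction may differ.

```python
import numpy as np

# Invented yarn tensile strengths (MPa); a small sample, as is typical
# when testing natural materials.
strength = np.array([310., 290., 355., 330., 305., 340., 298., 322.])

# Trapezoid (a, b, c, d): support = sample range, core = interquartile range.
a, d = strength.min(), strength.max()
b, c = np.quantile(strength, [0.25, 0.75])

def possibility(x):
    """Membership of x in the trapezoidal fuzzy number (a, b, c, d)."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0, 1)

for x in (295.0, 320.0, 350.0):
    print(f"possibility(strength = {x} MPa) = {possibility(x):.2f}")
```
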
A Nonlinear Approach in the Quantification of Numerical Uncertainty by High-Order Methods for Compressible Turbulence with Shocks
This is a comprehensive overview of our research work linking interdisciplinary modeling and simulation techniques to improve the predictability and reliability of simulations (PARs) of compressible turbulence with shock waves, written for general audiences who are not familiar with our nonlinear approach. This focused nonlinear approach integrates our “nonlinear dynamical approach” with our “newly developed high-order entropy-conserving, momentum-conserving and kinetic energy-preserving methods” in the quantification of numerical uncertainty in highly nonlinear flow simulations. The central issue is that the solution space of discrete genuinely nonlinear systems is much larger than that of the corresponding genuinely nonlinear continuous systems, so the numerical solutions obtained might not be solutions of the continuous systems. Traditional uncertainty quantification (UQ) approaches in numerical simulations commonly employ linearized analysis, which might not capture the true behavior of genuinely nonlinear physical fluid flows. Due to the rapid development of high-performance computing, the last two decades have been an era in which computation is ahead of analysis and in which very large-scale practical computations are increasingly used for poorly understood, multiscale, data-limited, complex nonlinear physical problems and in non-traditional fields. This is compounded by the fact that the numerical schemes used in production computational fluid dynamics (CFD) codes often do not take into consideration the genuinely nonlinear behavior of numerical methods needed for more realistic modeling and simulation. Often, the numerical methods used were developed for weakly nonlinear flows or for flow types other than the flow being investigated. In addition, some of these methods are not discretely physics-preserving (structure-preserving); this includes, but is not limited to, entropy-conserving, momentum-conserving and kinetic energy-preserving methods. Employing theories of nonlinear dynamics to guide the construction of more appropriate, stable and accurate numerical methods could help, e.g., (a) delineate solutions of the discretized counterparts that are not solutions of the governing equations; (b) prevent numerical chaos or numerical “turbulence” leading to false prediction of transition to turbulence; (c) provide more reliable numerical simulations of nonlinear fluid dynamical systems, especially by direct numerical simulation (DNS), large eddy simulation (LES), and implicit large eddy simulation (ILES); and (d) prevent incorrectly computed shock speeds for problems containing stiff nonlinear source terms, if present. For computation-intensive turbulent flows, the desirable methods should also be efficient and exhibit scalable parallelism on current high-performance computers. Selected numerical examples are included to illustrate the genuinely nonlinear behavior of numerical methods and our integrated approach to improving PARs.
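
As a concrete taste of the structure-preserving methods mentioned above, the sketch below implements a Tadmor-type entropy-conservative two-point flux for the inviscid Burgers equation on a periodic grid and checks that the semi-discrete entropy production vanishes; this is a generic textbook example under those assumptions, not taken from the paper.

```python
import numpy as np

# Entropy-conservative two-point flux for inviscid Burgers,
# f*(uL, uR) = (uL^2 + uL*uR + uR^2) / 6, which satisfies Tadmor's
# condition (uR - uL) f* = psi(uR) - psi(uL) with psi = u^3 / 6.
# With it, the semi-discrete scheme conserves sum(u^2/2) exactly on a
# periodic grid; a plain central flux average does not.
N = 200
dx = 1.0 / N
x = np.arange(N) * dx
u = np.sin(2 * np.pi * x) + 0.5

def rhs(u):
    uL, uR = u, np.roll(u, -1)                    # periodic interfaces
    f_star = (uL**2 + uL * uR + uR**2) / 6.0      # entropy-conservative
    return -(f_star - np.roll(f_star, 1)) / dx    # du_i/dt

# Entropy production of the semi-discrete operator: sum(u * du/dt) * dx.
# This telescopes to zero (machine precision) for the EC flux.
print("entropy rate (EC flux):", np.sum(u * rhs(u)) * dx)
```
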
Drilling High Precision Holes in Ti6Al4V Using Rotary Ultrasonic Machining and Uncertainties Underlying Cutting Force, Tool Wear, and Production Inaccuracies
Ti6Al4V alloys are difficult-to-cut materials with extensive applications in the automotive and aerospace industries. A great deal of effort has been made to develop and improve machining operations for Ti6Al4V alloys. This paper presents an experimental study that systematically analyzes the effects of the machining conditions (ultrasonic power, feed rate, spindle speed, and tool diameter) on the performance parameters (cutting force, tool wear, overcut error, and cylindricity error) while drilling high-precision holes in Ti6Al4V workpieces using rotary ultrasonic machining (RUM). Numerical results were obtained by conducting experiments following a design-of-experiments procedure. The effects of the machining conditions on each performance parameter were determined by constructing a set of possibility distributions (i.e., trapezoidal fuzzy numbers) from the experimental data. A possibility distribution is a probability-distribution-neutral representation of uncertainty and is effective in quantifying the uncertainty underlying physical quantities when only a limited number of data points is available, as is the case here. Lastly, the optimal machining conditions were identified using these possibility distributions.
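
As an illustration of how trapezoidal possibility distributions can support the final selection step, the sketch below summarizes each machining condition's cutting-force uncertainty as a trapezoid (a, b, c, d) and ranks conditions by centroid defuzzification; the condition names and numbers are invented, not the study's data.

```python
import numpy as np

# Each machining condition's cutting-force uncertainty as a trapezoidal
# fuzzy number (a, b, c, d), in newtons. Values are invented; the study
# derives its trapezoids from the experimental data.
conditions = {
    "low feed / high power":  (38.0, 42.0, 47.0, 55.0),
    "high feed / low power":  (50.0, 58.0, 66.0, 80.0),
}

def centroid(a, b, c, d):
    """Centroid of a trapezoidal fuzzy number (defuzzified value)."""
    num = (d**2 + c**2 + c * d) - (a**2 + b**2 + a * b)
    return num / (3.0 * ((d + c) - (a + b)))

# Rank conditions by defuzzified cutting force (lower is better here).
for name, trap in sorted(conditions.items(), key=lambda kv: centroid(*kv[1])):
    print(f"{name}: centroid force = {centroid(*trap):.1f} N")
```
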