711 result(s) for "multi-scale modeling"
Slip Tendency Analysis From Sparse Stress and Satellite Data Using Physics‐Guided Deep Neural Networks
The significant risk associated with fault reactivation often necessitates slip tendency analyses for effective risk assessment. However, such analyses are challenging, particularly in large areas with limited or absent reliable stress measurements and where the cost of extensive geomechanical analyses or simulations is prohibitive. In this paper, we propose a novel approach using a physics‐informed neural network that integrates stress orientation and satellite displacement observations in a top‐down multi‐scale framework to perform two‐dimensional slip tendency analyses even in regions lacking comprehensive stress data. Our study demonstrates that velocities derived from a continental‐scale analysis, combined with reliable stress orientation averages, can effectively guide models at smaller scales to generate qualitative slip tendency maps. By offering customizable data selection and stress resolution options, this method presents a robust solution to data scarcity, as exemplified through a case study of the South Australian Eyre Peninsula.

Plain Language Summary
Fault reactivation poses significant risks, often requiring slip tendency analyses for thorough risk assessment. Yet such analyses face challenges, especially in large areas lacking reliable stress measurements or where extensive geomechanical analyses are too costly. Our paper suggests a new method using a physics‐based neural network. This approach combines compressive-direction and satellite displacement observations to estimate slip tendencies in two dimensions, even where stress data are lacking. Our study shows that by using displacements from a continental‐scale analysis and reliable averages of compressive directions, we can guide models to create smaller‐scale maps indicating where faults are more likely to reactivate. This method allows for customizable data selection and stress resolution, offering a strong solution to data scarcity issues. We demonstrate its effectiveness through a case study of South Australia's Eyre Peninsula.

Key Points
Physics‐based neural networks allow two‐dimensional slip tendency analyses without prior full‐stress information
A multi‐scale approach provides the required displacement constraints when inferring full stresses from global navigation satellite system (GNSS) and stress orientation data
We present a new application for GNSS data that would welcome more stations, even in seismically stable areas
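Slip tendency itself has a standard definition: the ratio of resolved shear stress to normal stress acting on the fault plane. A minimal sketch of that definition (the stress state and fault orientation below are illustrative, not values from the study):

```python
import numpy as np

def slip_tendency(stress, normal):
    """Slip tendency T_s = tau / sigma_n for a fault plane.

    stress: 3x3 symmetric stress tensor (compression taken positive).
    normal: (unnormalized) normal vector of the fault plane.
    """
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    traction = stress @ n               # traction vector on the plane
    sigma_n = float(n @ traction)       # normal stress component
    shear_vec = traction - sigma_n * n  # shear component of the traction
    tau = float(np.linalg.norm(shear_vec))
    return tau / sigma_n

# Illustrative principal stress state (MPa) with a fault plane
# oriented 45 degrees to the maximum principal stress.
stress = np.diag([30.0, 20.0, 10.0])
normal = np.array([1.0, 0.0, 1.0])
ts = slip_tendency(stress, normal)  # tau = 10, sigma_n = 20, T_s = 0.5
```

Faults whose slip tendency approaches the frictional strength of the rock are the ones flagged as likely to reactivate, which is why the maps described above are useful even when only qualitative.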
Machine Learning Approaches in Soft Matter Molecular Simulation and Materials Characterization: Challenges and Perspectives
Machine learning (ML) techniques are currently investigated for their potential applicability in a wide range of disciplines and scientific domains as a powerful extension to existing state‐of‐the‐art experimental and computational methods. The diverse scientific areas within the materials science field can largely benefit from the development of data‐driven methods in the present era of advanced ML computational algorithms, efficient and optimized hardware, and large amounts of produced information. In this perspective, basic concepts are introduced and representative advances are showcased from the standpoint of materials characterization and soft matter molecular simulation. Prerequisites and challenges are discussed toward the construction of sound and efficient ML‐aided approaches that can contribute via new auxiliary routes to fundamental understanding and thus facilitate scientific discovery and technological applications. Rigorous framework construction supports the development of science‐based ML schemes: statistical learning and data‐driven methods are invoked across the diverse materials science fields, from materials characterization to molecular modeling, utilizing domain knowledge to facilitate fundamental understanding and scientific discovery.
Multiscale Computational and Artificial Intelligence Models of Linear and Nonlinear Composites: A Review
Herein, state‐of‐the‐art multiscale modeling methods are described. This research includes notable molecular, micro‐, meso‐, and macroscale models for hard (polymer, metal, yarn, fiber, fiber‐reinforced polymer, and polymer matrix composites) and soft (biological tissues such as brain white matter [BWM]) composite materials. These numerical models range from molecular dynamics simulations to finite‐element (FE) analyses and machine learning/deep learning surrogate models. Constitutive material models are summarized, such as viscoelastic and hyperelastic, along with emerging models like fractional viscoelastic. Key challenges, such as meshing, data variability and material-nonlinearity‐driven uncertainty, limited availability of computational resources, model fidelity, and repeatability, are outlined for the state‐of‐the‐art models. The latest advancements in FE modeling, involving meshless methods, hybrid ML and FE models, and constitutive material models (linear and nonlinear), aim to provide readers with a clear outlook on futuristic trends in composite multiscale modeling research and development. The data‐driven models presented here span varying length and time scales and were developed using advanced mathematical and numerical methods together with huge volumes of experimental results as data for digital models. An in‐depth discussion of data‐driven models would provide researchers with the necessary tools to build real‐time composite structure monitoring and lifecycle prediction models.
Multi-scale Modeling of the Cardiovascular System: Disease Development, Progression, and Clinical Intervention
Cardiovascular diseases (CVDs) are the leading cause of death in the western world. With the current development of clinical diagnostics to more accurately measure the extent and specifics of CVDs, a laudable goal is a better understanding of the structure–function relation in the cardiovascular system. Much of this fundamental understanding comes from the development and study of models that integrate biology, medicine, imaging, and biomechanics. Information from these models provides guidance for developing diagnostics, and implementation of these diagnostics to the clinical setting, in turn, provides data for refining the models. In this review, we introduce multi-scale and multi-physical models for understanding disease development, progression, and designing clinical interventions. We begin with multi-scale models of cardiac electrophysiology and mechanics for diagnosis, clinical decision support, personalized and precision medicine in cardiology with examples in arrhythmia and heart failure. We then introduce computational models of vasculature mechanics and associated mechanical forces for understanding vascular disease progression, designing clinical interventions, and elucidating mechanisms that underlie diverse vascular conditions. We conclude with a discussion of barriers that must be overcome to provide enhanced insights, predictions, and decisions in pre-clinical and clinical applications.
Improving Stratocumulus Cloud Amounts in a 200‐m Resolution Multi‐Scale Modeling Framework Through Tuning of Its Interior Physics
High‐Resolution Multi‐scale Modeling Frameworks (HR)—global climate models that embed separate, convection‐resolving models with high enough resolution to resolve boundary layer eddies—have exciting potential for investigating low cloud feedback dynamics due to reduced parameterization and the ability for multidecadal throughput on modern computing hardware. However, low clouds in past HRs have suffered a stubborn problem of over‐entrainment due to an uncontrolled source of mixing across the marine subtropical inversion, manifesting as stratocumulus dim biases in present‐day climate and limiting their scientific utility. We report new results showing that this over‐entrainment can be partly offset by using hyperviscosity and cloud droplet sedimentation. Hyperviscosity damps small‐scale momentum fluctuations associated with the formulation of the momentum solver of the embedded large eddy simulation. By considering the sedimentation process adjacent to default one‐moment microphysics in HR, condensed-phase particles can be removed from the entrainment zone, which further reduces entrainment efficiency. The result is an HR that can produce more low clouds with a higher liquid water path and a reduced stratocumulus dim bias. Associated improvements in the explicitly simulated sub‐cloud eddy spectrum are observed. We report these sensitivities in multi‐week tests and then explore their operational potential alongside microphysical retuning in decadal simulations at operational 1.5° exterior resolution. The result is a new HR with the desired improvements in the baseline present‐day low cloud climatology and a reduced global mean bias and root mean squared error of absorbed shortwave radiation. We suggest it should be promising for examining low cloud feedbacks with minimal approximation.

Plain Language Summary
Stratocumulus clouds cover a large fraction of the globe but are very challenging to reproduce in computer simulations of Earth's atmosphere because of their unique complexity. Previous studies find that such models produce too few stratocumulus clouds as the resolution increases, even though higher resolution should, in theory, improve the simulation of the motions that matter for these clouds. This is because the clouds are exposed to more conditions that make them evaporate away. On Earth, stratocumulus clouds reflect a lot of sunlight. In the computer model of Earth, too much sunlight reaches the surface because of too few stratocumulus clouds, which makes it warmer. This study tests two methods to thicken stratocumulus clouds in the computer model of Earth. The first method smooths out some winds, which helps reduce the exposure of clouds to the conditions that make them evaporate. The second method moves water droplets in the cloud away from the conditions that would otherwise make them evaporate. In long simulations, combining these methods helps the model produce thicker stratocumulus clouds with more water.

Key Points
We improve a long‐standing stratocumulus (Sc) dim bias in a high‐resolution Multiscale Modeling Framework
Incorporating intra‐CRM hyperviscosity hedges against the numerics of its momentum solver, reducing entrainment
Further adding sedimentation boosts Sc brightness close to observed, opening a path to more faithful low cloud feedback analysis
Multi-scale habitat modelling identifies spatial conservation priorities for mainland clouded leopards (Neofelis nebulosa)
Aim
Deforestation is rapidly altering Southeast Asian landscapes, resulting in some of the highest rates of habitat loss worldwide. Among the many species facing declines in this region, clouded leopards rank notably for their ambassadorial potential and capacity to act as powerful levers for broader forest conservation programmes. Thus, identifying core habitat and conservation opportunities is critical for curbing further Neofelis declines and extending umbrella protection to the diverse forest biota similarly threatened by widespread habitat loss. Furthermore, a recent comprehensive habitat assessment of Sunda clouded leopards (N. diardi) highlights the lack of such information for the mainland species (N. nebulosa) and facilitates a comparative assessment.

Location
Southeast Asia.

Methods
Species–habitat relationships are scale‐dependent, yet <5% of all recent habitat modelling papers apply robust approaches to optimize multivariate scale relationships. Using one of the largest camera trap datasets ever collected, we developed scale‐optimized species distribution models for two congeneric carnivores and quantitatively compared their habitat niches.

Results
We identified core habitat, connectivity corridors, and ranked remaining habitat patches for conservation prioritization. Closed‐canopy forest was the strongest predictor, with ~25% lower Neofelis detections when forest cover declined from 100% to 65%. A strong, positive association with increasing precipitation suggests ongoing climate change as a growing threat along drier edges of the species' range. While deforestation and land use conversion were deleterious for both species, N. nebulosa was uniquely associated with shrublands and grasslands. We identified 800 km2 as a minimum patch size for supporting clouded leopard conservation.

Main conclusions
We illustrate the utility of multi‐scale modelling for identifying key habitat requirements, optimal scales of use and critical targets for guiding conservation prioritization. Curbing deforestation and development within remaining core habitat and dispersal corridors, particularly in Myanmar, Laos and Malaysia, is critical for supporting the evolutionary potential of clouded leopards and the conservation of associated forest biodiversity.
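The scale optimization the Methods describe is commonly done by measuring each habitat covariate at several buffer radii around each camera site and keeping the radius with the strongest univariate relationship before building the multivariate model. A minimal sketch, using correlation as a stand‑in for a proper model‑selection criterion such as AIC (all names and data below are illustrative, not the authors' pipeline):

```python
import numpy as np

def best_scale(detections, covariate_by_scale):
    """Pick, for one covariate, the buffer radius whose values relate
    most strongly to detections across survey sites.

    detections: 1D array of detection counts per camera site.
    covariate_by_scale: dict mapping radius (m) -> 1D array of the
        covariate measured within that radius at each site.
    """
    scores = {
        radius: abs(np.corrcoef(detections, values)[0, 1])
        for radius, values in covariate_by_scale.items()
    }
    return max(scores, key=scores.get)

# Illustrative data: forest cover measured at two candidate radii.
y = np.array([0, 1, 3, 5, 8.0])               # detections per site
cov = {
    500:  np.array([5, 2, 7, 1, 4.0]),        # weakly related
    2000: np.array([10, 20, 35, 50, 80.0]),   # tracks detections closely
}
radius = best_scale(y, cov)  # selects 2000
```

The scale selected for each covariate then enters the multivariate species distribution model, which is what makes the resulting habitat maps "scale‑optimized".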
Addressing planar solid oxide cell degradation mechanisms: A critical review of selected components
In this review paper, a critical assessment of the main degradation processes in three key components of solid oxide fuel cells and electrolysers (the negative and positive electrodes and the interconnect) is undertaken, prioritizing the respective degradation effects and recommending the best approaches for their experimental ascertainment and numerical modeling. Besides different approaches to quantifying the degradation rate of an operating solid oxide cell (SOC), the latest advancements in microstructural representation (3D imaging and reconstruction) of SOC electrodes are reviewed, applied to the quantification of triple‐phase boundary (TPB) lengths and morphology evolution over time. The intrinsic degradation processes in the negative (fuel) electrode and the positive (oxygen) electrode are discussed, covering first the composition and governing mechanisms of the respective electrodes, followed by a comprehensive evaluation of the most important factors of degradation during operation. This systematic identification of dominant degradation processes, measurement techniques, and modeling approaches lays the foundations for defining meaningful accelerated stress testing of SOC cells and stacks, which will help the technology achieve the increasingly demanding durability targets in market applications.
A systems biology view of blood vessel growth and remodelling
Blood travels throughout the body in an extensive network of vessels – arteries, veins and capillaries. This vascular network is not static, but instead dynamically remodels in response to stimuli from cells in the nearby tissue. In particular, the smallest vessels – arterioles, venules and capillaries – can be extended, expanded or pruned, in response to exercise, ischaemic events, pharmacological interventions, or other physiological and pathophysiological events. In this review, we describe the multi‐step morphogenic process of angiogenesis – the sprouting of new blood vessels – and the stability of vascular networks in vivo. In particular, we review the known interactions between endothelial cells and the various blood cells and plasma components they convey. We describe progress that has been made in applying computational modelling, quantitative biology and high‐throughput experimentation to the angiogenesis process.
Load‐Balancing Intense Physics Calculations to Embed Regionalized High‐Resolution Cloud Resolving Models in the E3SM and CESM Climate Models
We design a new strategy to load‐balance high‐intensity sub‐grid atmospheric physics calculations restricted to a small fraction of a global climate simulation's domain. We show why the current parallel load balancing infrastructure of the Community Earth System Model (CESM) and the Energy Exascale Earth System Model (E3SM) cannot efficiently handle this scenario at large core counts. As an example, we study an unusual configuration of the E3SM Multiscale Modeling Framework (MMF) that embeds a binary mixture of two separate cloud‐resolving model grid structures, which is attractive for low cloud feedback studies. Less than a third of the planet uses high resolution (MMF‐HR; sub‐km horizontal grid spacing) relative to standard low‐resolution (MMF‐LR) cloud superparameterization elsewhere. To enable MMF runs with Multi‐Domain cloud resolving models (CRMs), our load balancing theory predicts the most efficient computational scale as a function of the high‐intensity work's relative overhead and its fractional coverage. The scheme successfully maximizes model throughput and minimizes model cost relative to precursor infrastructure, effectively by devoting the vast majority of the processor pool to the few high‐intensity (and rate‐limiting) high‐resolution (HR) grid columns. Two examples prove the concept, showing that minor artifacts can be introduced near the HR/low‐resolution CRM grid transition boundary on idealized aquaplanets, but are minimal in operationally relevant real‐geography settings. As intended, within the high (low) resolution area, our Multi‐Domain CRM simulations exhibit cloud fraction and shortwave reflection convergent to standard baseline tests that use globally homogeneous MMF‐LR and MMF‐HR. We suggest this approach can open up a range of creative multi‐resolution climate experiments without requiring unduly large allocations of computational resources.

Plain Language Summary
The atmospheric physics parameterizations of traditional climate models do not spend radically different amounts of computational power and time across different parts of the globe. However, some physical processes, such as low cloud formation, require much higher resolution than we currently use to model climate change. Using the traditional framework, this becomes too expensive because of computational limitations. In this work, we develop a way to efficiently enhance the interior resolution of a multi‐scale climate model over only parts of the world. Specifically, we develop a way for a supercomputer to best split up the work of simulating the cloud physics over the parts of the world with coarser versus higher resolution so that it runs much faster. We also show that using coarse resolution in some places and high resolution in others doesn't produce unwanted side effects.

Key Points
We adapt the Energy Exascale Earth System Model (E3SM)/Community Earth System Model so most of a processor pool can operate on just a subset of demanding physics columns
Load balancing theory finds the optimal computational scale for regionalized high-intensity physics work
Multi‐Domain cloud resolving model tests with large‐eddy-simulation resolution embedded in 30% of the superparameterized E3SM succeed with few artifacts
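The load‑balancing idea can be illustrated with a toy cost model; this is a sketch under assumed numbers, not the paper's actual scheme. If a fraction f of the grid columns carries high‑resolution (HR) physics that costs r times as much per column as the low‑resolution (LR) work, then both groups finish at the same time when the HR share of the processor pool matches the HR share of the total work:

```python
def hr_processor_share(f, r):
    """Fraction of the processor pool to devote to HR columns so that
    HR and LR column groups finish simultaneously.

    f: fraction of grid columns running HR physics (0 < f < 1).
    r: per-column cost of HR work relative to LR work (r > 1).
    """
    hr_work = f * r      # total HR work, in LR-column cost units
    lr_work = 1.0 - f    # total LR work
    return hr_work / (hr_work + lr_work)

# Illustrative numbers: ~30% HR coverage at 20x per-column cost.
# The HR columns dominate, so they should get ~90% of the processors.
share = hr_processor_share(0.3, 20.0)
```

This captures why "the vast majority of the processor pool" ends up on the few rate‑limiting HR columns: even modest HR coverage dominates total cost once the per‑column overhead is large.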
Fates of Emitted Particles Depending on Mask Wearing Using an Approach Validated Across Spatial Scales
The spread of emitted, potentially virus‐laden aerosol particles is known to be highly dependent on whether a mask is worn by an infected person and on the emission scenario, i.e., whether the person is coughing, speaking, or breathing. The aim of this work is to investigate in detail the fates of particles emitted by a person wearing a perfectly fitting mask, a naturally fitted mask with leakage, or no mask, depending on the emission scenario. To this end, a two‐scale numerical workflow is proposed in which parameters are carried from a micro‐scale, where the fibers of the mask filter medium and the aerosol particles are resolved, to a macro‐scale, and validated by comparison to experimental measurements of the fractional filtration efficiency and pressure drop of the filter medium as well as the pressure drop of the mask. It turns out that masks reduce the number of both emitted and inhaled particles significantly, even with leakage. While without a mask the person opposite an infected person is generally at the highest risk of being infected, a mask worn by an infected person speaking or coughing will deflect the flow, so that the person behind the infected person may inhale the largest number of aerosol particles.
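The fractional filtration efficiency used for validation above is the size‑resolved fraction of particles removed by the filter medium, conventionally computed per particle‑size bin from upstream and downstream concentrations. A minimal sketch of that definition (the particle counts below are illustrative):

```python
def fractional_filtration_efficiency(c_up, c_down):
    """Size-resolved filtration efficiency per particle-size bin:
    FE_i = 1 - C_downstream_i / C_upstream_i.

    c_up, c_down: particle concentrations (or counts) per size bin,
    measured upstream and downstream of the filter medium.
    """
    return [1.0 - down / up for up, down in zip(c_up, c_down)]

# Illustrative counts for three size bins (e.g. 0.3, 1, 3 micrometers):
upstream   = [1000.0, 800.0, 600.0]
downstream = [100.0, 40.0, 6.0]
fe = fractional_filtration_efficiency(upstream, downstream)
# approximately [0.9, 0.95, 0.99]: efficiency rises with particle size
```

Matching this curve and the pressure drop at the micro‑scale is what lets the workflow carry validated filter-medium parameters up to the macro‑scale room simulation.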