65 result(s) for "Sapsis, Themistoklis P."
Output-weighted optimal sampling for Bayesian regression and rare event statistics using few samples
For many important problems the quantity of interest is an unknown function of the parameters, which form a random vector with known statistics. Since the dependence of the output on this random vector is unknown, the challenge is to identify its statistics using the minimum number of function evaluations. This problem can be seen in the context of active learning or optimal experimental design. We employ Bayesian regression to represent the model uncertainty that arises from the finite and small number of input–output pairs. In this context we evaluate existing methods for optimal sample selection, such as model-error minimization and mutual-information maximization. We show that for the case of known output variance, the criteria commonly employed in the literature do not take into account the output values of the existing input–output pairs, while for the case of unknown output variance this dependence can be very weak. We introduce a criterion that takes into account the values of the output for the existing samples and adaptively selects inputs from regions of the parameter space that have an important contribution to the output. The new method allows for application to high-dimensional inputs, paving the way for optimal experimental design in high dimensions.
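A minimal numerical sketch of the output-weighted idea described above, using a numpy-only Gaussian process on a toy one-dimensional map; the function, kernel length scale, and acquisition form are all invented for illustration, not taken from the paper:

```python
import numpy as np

def f(x):
    # Toy 1-D map with a localized, large response; a hypothetical
    # stand-in for the expensive black-box model.
    return np.exp(-40.0 * (x - 0.7) ** 2) + 0.1 * np.sin(6.0 * x)

def rbf(a, b, ell=0.1):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # Standard Gaussian-process regression with an RBF kernel.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xq)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 0.0, None)
    return mu, var

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 4)            # small initial design
y = f(X)
grid = np.linspace(0.0, 1.0, 200)

for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    # Output-weighted acquisition: posterior variance scaled by the
    # predicted output magnitude, so new samples concentrate where the
    # response is both uncertain and large.
    acq = var * np.abs(mu)
    x_next = grid[np.argmax(acq)]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

mu, _ = gp_posterior(X, y, grid)
err = np.max(np.abs(mu - f(grid)))
```

Unlike plain variance-based sampling, the `var * |mu|` weight uses the output values of the existing pairs, which is the distinction the abstract draws.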
Reduced-order precursors of rare events in unidirectional nonlinear water waves
We consider the problem of short-term prediction of rare, extreme water waves in irregular unidirectional fields, a critical topic for ocean structures and naval operations. One possible mechanism for the occurrence of such rare, unusually intense waves is nonlinear wave focusing. Recent results have demonstrated that random localizations of energy, induced by the linear dispersive mixing of different harmonics, can grow significantly due to modulation instability. Here we show how the interplay between (i) modulation instability properties of localized wave groups and (ii) statistical properties of wave groups that follow a given spectrum defines a critical length scale associated with the formation of extreme events. The energy that is locally concentrated over this length scale acts as the ‘trigger’ of nonlinear focusing for wave groups and the formation of subsequent rare events. We use this property to develop inexpensive, short-term predictors of large water waves, circumventing the need for solving the governing equations. Specifically, we show that merely tracking the energy of the wave field over the critical length scale allows for the robust, inexpensive prediction of the location of intense waves with a prediction window of 25 wave periods. We demonstrate our results in numerical experiments of unidirectional water wave fields described by the modified nonlinear Schrödinger equation. The presented approach introduces a new paradigm for understanding and predicting intermittent and localized events in dynamical systems characterized by uncertainty and potentially strong nonlinear mechanisms.
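The core of the predictor, tracking wave-field energy over a critical length scale, can be sketched as a moving-window average of the squared surface elevation; the synthetic wave field, critical length scale, and alarm threshold below are illustrative choices, not values from the paper:

```python
import numpy as np

# Synthetic unidirectional surface elevation: a weak broadband
# background plus one localized, energetic wave group near x = 60.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 100.0, 2000)
background = sum(0.05 * np.sin(2 * np.pi * k * x / 100 + rng.uniform(0, 2 * np.pi))
                 for k in range(5, 25))
group = 0.8 * np.exp(-0.5 * ((x - 60.0) / 2.0) ** 2) * np.cos(4.0 * x)
eta = background + group

L_c = 5.0                          # assumed critical length scale
dx = x[1] - x[0]
w = int(L_c / dx)
# Local energy: moving average of eta^2 over the critical length scale.
kernel = np.ones(w) / w
local_energy = np.convolve(eta ** 2, kernel, mode="same")

threshold = 4.0 * np.mean(eta ** 2)   # illustrative alarm level
x_alarm = x[local_energy > threshold]  # locations flagged as precursors
```

The point of the construction is that no governing equations are solved: only the windowed energy of the measured field is monitored, and the flagged region coincides with the localized group.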
Machine learning the kinematics of spherical particles in fluid flows
Numerous efforts have been devoted to the derivation of equations describing the kinematics of finite-size spherical particles in arbitrary fluid flows. These approaches rely on asymptotic arguments to obtain a description of the particle motion in terms of a slow manifold. Here we present a novel approach that results in kinematic models with unprecedented accuracy compared with traditional methods. We apply a recently developed machine learning framework that relies on (i) an imperfect model, obtained through analytical arguments, and (ii) a long short-term memory recurrent neural network. The latter learns the mismatch between the analytical model and the exact velocity of the finite-size particle as a function of the fluid velocity that the particle has encountered along its trajectory. We show that training the model for one flow is sufficient to generate accurate predictions for any other arbitrary flow field. In particular, using the Maxey–Riley equation as the exact model for trajectories of spherical particles, we first train the proposed machine learning framework using trajectories from a cellular flow. We are then able to accurately reproduce the trajectories of particles having the same inertial parameters for completely different fluid flows: the von Kármán vortex street as well as a two-dimensional turbulent fluid flow. For the second example we also demonstrate that the machine-learned kinematic model successfully captures the spectrum of the particle velocity, as well as the extreme event statistics. The proposed scheme paves the way for machine learning kinematic models for bubbles and aerosols using high-fidelity DNS simulations and experiments.
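The hybrid structure, an imperfect analytical model plus a learned correction for its mismatch, can be sketched in a few lines. The paper uses an LSTM for the correction; here a ridge regression on hand-picked features plays that role, and both the "exact" and "analytical" velocity models are hypothetical toys:

```python
import numpy as np

rng = np.random.default_rng(2)

def exact_velocity(u):
    # Hypothetical "true" particle velocity as a function of the
    # encountered fluid velocity u.
    return 0.9 * u + 0.2 * np.tanh(2.0 * u)

def analytical_model(u):
    # Imperfect model: keeps only the leading-order linear term.
    return 0.9 * u

# Train the correction on data from one "flow" (one velocity range).
u_train = rng.uniform(-2.0, 2.0, 200)
mismatch = exact_velocity(u_train) - analytical_model(u_train)

def features(u):
    return np.column_stack([u, u ** 3, np.tanh(2.0 * u)])

Phi = features(u_train)
coef = np.linalg.solve(Phi.T @ Phi + 1e-8 * np.eye(3), Phi.T @ mismatch)

def hybrid_model(u):
    # Analytical part plus learned mismatch correction.
    return analytical_model(u) + features(u) @ coef

# Evaluate on a different "flow": a velocity range unseen in training,
# mimicking the train-on-one-flow / predict-on-another test.
u_test = rng.uniform(-1.5, 1.5, 100)
err = np.max(np.abs(hybrid_model(u_test) - exact_velocity(u_test)))
```

The design choice mirrored here is that the learner only has to represent the (small, structured) model mismatch, not the full dynamics, which is why training on one flow can transfer to others.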
Statistical modeling of fully nonlinear hydrodynamic loads on offshore wind turbine monopile foundations using wave episodes and targeted CFD simulations through active sampling
Accurately determining hydrodynamic force statistics is crucial for designing offshore engineering structures, including offshore wind turbine foundations, due to the significant impact of nonlinear wave–structure interactions. However, obtaining precise load statistics often involves computationally intensive simulations. Furthermore, the estimation of statistics using current practices is subject to ongoing discussion due to the inherent uncertainty involved. To address these challenges, we present a novel machine learning framework that leverages data-driven surrogate modeling to predict hydrodynamic loads on monopile foundations while reducing reliance on costly simulations and facilitating the reconstruction of load statistics. The primary advantage of our approach is the significant reduction in evaluation time compared to traditional modeling methods. The novelty of our framework lies in its efficient construction of the surrogate model, utilizing the Gaussian process regression machine learning technique and a Bayesian active learning method to sequentially sample wave episodes that contribute to accurate predictions of extreme hydrodynamic forces. Additionally, a spectrum transfer technique combines computational fluid dynamics (CFD) results from both quiescent and extreme waves, further reducing data requirements. This study focuses on reducing the dimensionality of stochastic irregular wave episodes and their associated hydrodynamic force time series. Although the dimensionality reduction is linear, Gaussian process regression successfully captures high-order correlations. Furthermore, our framework incorporates built-in uncertainty quantification capabilities, facilitating efficient parameter sampling using traditional CFD tools.
This paper provides comprehensive implementation details and demonstrates the effectiveness of our approach in delivering reliable statistics for hydrodynamic loads while overcoming the computational cost constraints associated with classical modeling methods.
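The linear dimensionality reduction of wave episodes mentioned above can be sketched with an SVD (PCA) of an episode ensemble; the episode generator below is synthetic, and the 99% variance target is an illustrative choice, not the paper's setting:

```python
import numpy as np

# Synthetic ensemble of irregular wave episodes: three spectral
# components with random phases per episode (stand-in for metocean data).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 60.0, 300)
episodes = np.array([
    sum(a * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
        for f, a in [(0.08, 1.0), (0.10, 0.8), (0.12, 0.5)])
    for _ in range(50)
])

mean = episodes.mean(axis=0)
U, s, Vt = np.linalg.svd(episodes - mean, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes for 99% variance
coords = U[:, :r] * s[:r]                    # low-dimensional coordinates
recon = coords @ Vt[:r] + mean               # reconstructed episodes
rel_err = np.linalg.norm(recon - episodes) / np.linalg.norm(episodes)
```

Each 300-sample episode is compressed to a handful of coordinates; a surrogate (Gaussian process regression in the paper) then maps these low-dimensional coordinates to the quantities of interest.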
Statistically accurate low-order models for uncertainty quantification in turbulent dynamical systems
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.
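The generic structure of such second-order statistical models can be sketched as follows; the notation is schematic for a quadratic system and is not copied verbatim from the paper:

```latex
% State decomposition in a reduced basis:
%   u(t) = \bar{u}(t) + \sum_i Z_i(t;\omega)\, v_i(t),
% for a quadratic system \dot{u} = (L + D)\,u + B(u,u) + F(t).
\frac{d\bar{u}}{dt} = (L + D)\,\bar{u} + B(\bar{u},\bar{u})
    + \sum_{i,j} R_{ij}\, B(v_i, v_j) + F,
\qquad
\frac{dR}{dt} = L_v R + R\, L_v^{*} + Q_F,
```

where R is the covariance of the stochastic coefficients Z, L_v is the operator linearized about the mean and projected onto the reduced subspace, and Q_F represents the nonlinear (third-order) energy fluxes. The ROMQG calibration described in the abstract amounts to fixing Q_F so that the unperturbed equilibrium statistics are a stable fixed point of these equations.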
Multifidelity digital twin for real-time monitoring of structural dynamics in aquaculture net cages
As the global population grows, ensuring sustainable food production has become critical. Marine aquaculture provides a sustainable and scalable source of protein; however, its continued expansion requires the development of novel technologies that enable remote management and autonomous operations. Digital twin technology emerges as a transformative tool for realizing this goal, yet its adoption remains limited. Fish net cages, flexible floating structures, are critical but vulnerable components of aquaculture systems. Exposed to harsh and dynamic marine conditions, they experience substantial hydrodynamic loads that can cause structural damage leading to fish escapes, environmental impacts, and financial losses. We propose a multifidelity surrogate modeling framework for integration into a digital twin that enables real-time monitoring of net cage structural dynamics under stochastic marine conditions. At the core of the framework lies the nonlinear autoregressive Gaussian process method, which captures complex, nonlinear cross-correlations between models of varying fidelity. It combines low-fidelity simulation data with a limited set of high-fidelity field sensor measurements, which, although accurate, are costly and spatially sparse. The framework was validated at the SINTEF ACE fish farm in Norway, where the digital twin assimilates online metocean data to accurately predict net cage displacements and mooring line loads, closely matching field measurements. This approach is especially valuable in data-scarce environments, offering rapid predictions and real-time structural representation. Beyond monitoring, the developed digital twin enables proactive assessment of structural integrity and supports remote operations with unmanned underwater vehicles. Finally, we compare Gaussian processes and graph convolutional networks for predicting net cage deformation, demonstrating the superior ability of the latter to capture complex structural behaviors.
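The multifidelity idea, learning the cross-correlation between a cheap biased model and sparse accurate samples, can be sketched compactly. The paper uses nonlinear autoregressive Gaussian processes; here a least-squares fit of hand-chosen features of the low-fidelity output stands in, and both fidelity levels are invented toy functions:

```python
import numpy as np

def f_lo(x):
    # Cheap, biased low-fidelity model (hypothetical).
    return np.sin(2 * np.pi * x)

def f_hi(x):
    # Accurate, expensive high-fidelity response (hypothetical).
    return (x - 0.5) * np.sin(2 * np.pi * x) ** 2 + 0.2 * np.sin(2 * np.pi * x)

# Sparse "sensor" locations where the expensive response is available.
x_hi = np.linspace(0.0, 1.0, 8)

def features(x):
    # Nonlinear map g(x, f_lo(x)) from low- to high-fidelity output,
    # mimicking the learned cross-correlation between fidelities.
    return np.column_stack([f_lo(x), (x - 0.5) * f_lo(x) ** 2, np.ones_like(x)])

coef, *_ = np.linalg.lstsq(features(x_hi), f_hi(x_hi), rcond=None)

# Dense, cheap prediction of the expensive response everywhere.
x_test = np.linspace(0.0, 1.0, 101)
pred = features(x_test) @ coef
err = np.max(np.abs(pred - f_hi(x_test)))
```

Only eight expensive evaluations are used: the low-fidelity model supplies the shape of the response, and the few high-fidelity samples pin down the nonlinear correction.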
Information FOMO: The Unhealthy Fear of Missing Out on Information—A Method for Removing Misleading Data for Healthier Models
Misleading or unnecessary data can have out-sized impacts on the health or accuracy of Machine Learning (ML) models. We present a Bayesian sequential selection method, akin to Bayesian experimental design, that identifies critically important information within a dataset while ignoring data that are either misleading or bring unnecessary complexity to the surrogate model of choice. Our method improves sample-wise error convergence and eliminates instances where more data lead to worse performance and instabilities of the surrogate model, often termed sample-wise “double descent”. We find these instabilities are a result of the complexity of the underlying map and are linked to extreme events and heavy tails. Our approach has two key features. First, the selection algorithm dynamically couples the chosen model and data. Data is chosen based on its merits towards improving the selected model, rather than being compared strictly against other data. Second, a natural convergence of the method removes the need for dividing the data into training, testing, and validation sets. Instead, the selection metric inherently assesses testing and validation error through global statistics of the model. This ensures that key information is never wasted in testing or validation. The method is applied using both Gaussian process regression and deep neural network surrogate models.
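A toy sketch of sequential selection that prefers informative, model-consistent points and skips a misleading one. The scoring rule below (predictive variance gated by predictive likelihood) is a simplified stand-in for the paper's Bayesian selection metric, and the dataset is synthetic:

```python
import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp(Xs, ys, Xq, noise=1e-4):
    # Gaussian-process posterior mean and variance at query points.
    K = rbf(Xs, Xs) + noise * np.eye(len(Xs))
    Ks = rbf(Xs, Xq)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ ys
    var = np.clip(1.0 + noise - np.sum(Ks * sol, axis=0), 1e-12, None)
    return mu, var

X_pool = np.linspace(0.0, 1.0, 40)
y_pool = np.sin(2 * np.pi * X_pool)
y_pool[17] += 3.0                      # one misleading, corrupted label

sel = [0, 39]                          # start from the two end points
for _ in range(10):
    rest = [i for i in range(40) if i not in sel]
    mu, var = gp(X_pool[sel], y_pool[sel], X_pool[rest])
    z = (y_pool[rest] - mu) ** 2 / var
    # High score = informative (large var) AND consistent (small z);
    # the corrupted point always has huge z, so it is never selected.
    score = var * np.exp(-0.5 * z)
    sel.append(rest[int(np.argmax(score))])
```

The corrupted index 17 stays out of the selected set: its surprise term z is large whenever the model is confident, and its likelihood factor crushes its score whenever the model is not. No held-out validation split is needed for this decision, which echoes the abstract's point.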
New perspectives for the prediction and statistical quantification of extreme events in high-dimensional dynamical systems
We discuss extreme events as random occurrences of strongly transient dynamics that lead to nonlinear energy transfers within a chaotic attractor. These transient events are the result of finite-time instabilities and therefore are inherently connected with both statistical and dynamical properties of the system. We consider two classes of problems related to extreme events and nonlinear energy transfers, namely (i) the derivation of precursors for the short-term prediction of extreme events, and (ii) the efficient sampling of random realizations for the fastest convergence of the probability density function in the tail region. We summarize recent methods on these problems that rely on the simultaneous consideration of the statistical and dynamical characteristics of the system. This is achieved by combining available data, in the form of second-order statistics, with dynamical equations that provide information for the transient events that lead to extreme responses. We present these methods through two high-dimensional, prototype systems that exhibit strongly chaotic dynamics and extreme responses due to transient instabilities, the Kolmogorov flow and unidirectional nonlinear water waves. This article is part of the theme issue 'Nonlinear energy transfer in dynamical and acoustical systems'.
Blended particle filters for large-dimensional chaotic dynamical systems
A major challenge in contemporary data science is the development of statistically accurate particle filters to capture non-Gaussian features in large-dimensional chaotic dynamical systems. Blended particle filters that capture non-Gaussian features in an adaptively evolving low-dimensional subspace through particles interacting with evolving Gaussian statistics on the remaining portion of phase space are introduced here. These blended particle filters are constructed in this paper through a mathematical formalism involving conditional Gaussian mixtures combined with statistically nonlinear forecast models compatible with this structure developed recently with high skill for uncertainty quantification. Stringent test cases for filtering involving the 40-dimensional Lorenz 96 model with a 5-dimensional adaptive subspace for nonlinear blended filtering in various turbulent regimes with at least nine positive Lyapunov exponents are used here. These cases demonstrate the high skill of the blended particle filter algorithms in capturing both highly non-Gaussian dynamical features as well as crucial nonlinear statistics for accurate filtering in extreme filtering regimes with sparse infrequent high-quality observations. The formalism developed here is also useful for multiscale filtering of turbulent systems and a simple application is sketched below.
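A heavily simplified sketch of the blended idea: particles track a non-Gaussian coordinate while a Kalman filter handles a Gaussian one. In the paper the two pieces are coupled through conditional Gaussian mixtures in an adaptively evolving subspace; here they are kept independent purely for illustration, and all dynamics and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

def step(x, dt=0.1, q=0.3):
    # Bistable, non-Gaussian coordinate: x <- x + dt*(x - x^3) + noise.
    return x + dt * (x - x ** 3) + np.sqrt(dt) * q * rng.standard_normal(x.shape)

true_x = 1.0
parts = rng.standard_normal(500)          # particle ensemble for x

# Gaussian coordinate: stable linear dynamics handled by a Kalman filter.
a, qg, r = 0.9, 0.1, 0.2
true_z, mz, pz = 0.5, 0.0, 1.0

for _ in range(50):
    # --- particle filter on the non-Gaussian coordinate ---
    true_x = step(np.array([true_x]))[0]
    obs_x = true_x + 0.2 * rng.standard_normal()
    parts = step(parts)
    w = np.exp(-0.5 * (obs_x - parts) ** 2 / 0.2 ** 2)
    w /= w.sum()
    parts = rng.choice(parts, size=parts.size, p=w)   # resample

    # --- Kalman filter on the Gaussian coordinate ---
    true_z = a * true_z + qg * rng.standard_normal()
    obs_z = true_z + r * rng.standard_normal()
    mz, pz = a * mz, a * a * pz + qg ** 2             # predict
    k = pz / (pz + r ** 2)
    mz, pz = mz + k * (obs_z - mz), (1 - k) * pz      # update

err_x = abs(parts.mean() - true_x)
err_z = abs(mz - true_z)
```

The computational motivation is visible even in this caricature: the expensive particle representation is spent only on the low-dimensional, strongly non-Gaussian part of the state, while the rest is carried by cheap Gaussian statistics.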
Sequential sampling strategy for extreme event statistics in nonlinear dynamical systems
We develop a method for the evaluation of extreme event statistics associated with nonlinear dynamical systems from a small number of samples. From an initial dataset of design points, we formulate a sequential strategy that provides the “next-best” data point (set of parameters) that when evaluated results in improved estimates of the probability density function (pdf) for a scalar quantity of interest. The approach uses Gaussian process regression to perform Bayesian inference on the parameter-to-observation map describing the quantity of interest. We then approximate the desired pdf along with uncertainty bounds using the posterior distribution of the inferred map. The next-best design point is sequentially determined through an optimization procedure that selects the point in parameter space that maximally reduces uncertainty between the estimated bounds of the pdf prediction. Since the optimization process uses only information from the inferred map, it has minimal computational cost. Moreover, the special form of the metric emphasizes the tails of the pdf. The method is practical for systems where the dimensionality of the parameter space is of moderate size and for problems where each sample is very expensive to obtain. We apply the method to estimate the extreme event statistics for a very high-dimensional system with millions of degrees of freedom: an offshore platform subjected to 3D irregular waves. It is demonstrated that the developed approach can accurately determine the extreme event statistics using a limited number of samples.
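A minimal sketch of the sequential loop described above, assuming a toy scalar quantity of interest and a numpy-only Gaussian process; the acquisition rule (posterior uncertainty weighted by the input density) is a simplification of the paper's pdf-discrepancy metric, and every function and setting is illustrative:

```python
import numpy as np

def rbf(a, b, ell=0.15):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp(Xs, ys, Xq, noise=1e-6):
    K = rbf(Xs, Xs) + noise * np.eye(len(Xs))
    Ks = rbf(Xs, Xq)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ ys
    sd = np.sqrt(np.clip(1.0 - np.sum(Ks * sol, axis=0), 0.0, None))
    return mu, sd

def qoi(x):
    # Hypothetical parameter-to-observation map (stand-in for an
    # expensive simulation).
    return np.exp(-10.0 * (x - 0.75) ** 2)

rng = np.random.default_rng(6)
theta = np.clip(rng.normal(0.4, 0.2, 5000), 0.0, 1.0)  # known input statistics
X = np.array([0.05, 0.5, 0.95])                        # initial design points
y = qoi(X)
grid = np.linspace(0.0, 1.0, 200)

for _ in range(12):
    mu_g, sd_g = gp(X, y, grid)
    # Simplified "next-best" criterion: surrogate uncertainty weighted
    # by the input density, so uncertainty is removed where it distorts
    # the estimated output pdf the most.
    dens = np.exp(-0.5 * ((grid - 0.4) / 0.2) ** 2)
    x_next = grid[np.argmax(sd_g * dens)]
    X = np.append(X, x_next)
    y = np.append(y, qoi(x_next))

# Push the input ensemble through the surrogate mean to estimate the
# output pdf; the posterior sd would give uncertainty bounds on it.
mu_t, sd_t = gp(X, y, theta)
pdf_est, edges = np.histogram(mu_t, bins=30, range=(0.0, 1.0), density=True)
```

Because each iteration only queries the inferred map, the selection step is essentially free; the expensive model is evaluated once per loop, at the chosen next-best point.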