4,685 results for "Adaptive sampling"
Markov state modeling reveals alternative unbinding pathways for peptide–MHC complexes
Peptide binding to major histocompatibility complexes (MHCs) is a central component of the immune system, and understanding the mechanism behind stable peptide–MHC binding will aid the development of immunotherapies. While MHC binding is mostly influenced by the identity of the so-called anchor positions of the peptide, secondary interactions from nonanchor positions are known to play a role in complex stability. However, current MHC-binding prediction methods lack an analysis of the major conformational states and might underestimate the impact of secondary interactions. In this work, we present an atomically detailed analysis of peptide–MHC binding that can reveal the contributions of any interaction toward stability. We propose a simulation framework that uses both umbrella sampling and adaptive sampling to generate a Markov state model (MSM) for a coronavirus-derived peptide (QFKDNVILL), bound to one of the most prevalent MHC receptors in humans (HLA-A*24:02). While our model reaffirms the importance of the anchor positions of the peptide in establishing stable interactions, it also reveals the underestimated importance of position 4 (p4), a nonanchor position. We confirmed our results by simulating the impact of specific peptide mutations and validated these predictions through competitive binding assays. By comparing the MSM of the wild-type system with those of the D4A and D4P mutations, our modeling reveals stark differences in unbinding pathways. The analysis presented here can be applied to any peptide–MHC complex of interest with a structural model as input, representing an important step toward comprehensive modeling of the MHC class I pathway.
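As a rough illustration of the Markov state modeling step described in this abstract (not the authors' actual pipeline, which couples umbrella sampling and adaptive sampling of atomistic trajectories), the Python sketch below estimates a transition matrix and stationary distribution from trajectories assumed to be already discretized into states; the state assignments, lag time, and toy data are placeholders.

    import numpy as np

    def estimate_msm(dtrajs, n_states, lag=1):
        """Estimate a Markov state model from discrete state trajectories.

        dtrajs: list of 1-D integer arrays (state assignment per frame);
        lag: lag time in frames. Returns (transition matrix, stationary dist).
        """
        counts = np.zeros((n_states, n_states))
        for dtraj in dtrajs:
            for i, j in zip(dtraj[:-lag], dtraj[lag:]):
                counts[i, j] += 1                       # count i -> j transitions
        counts += 1e-8                                  # guard against empty rows
        T = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic matrix
        # Stationary distribution: left eigenvector of T for eigenvalue ~1.
        evals, evecs = np.linalg.eig(T.T)
        pi = np.real(evecs[:, np.argmax(np.real(evals))])
        pi = np.abs(pi) / np.abs(pi).sum()
        return T, pi

    # Toy example with two short trajectories over three states.
    T, pi = estimate_msm([np.array([0, 0, 1, 2, 2, 1, 0]),
                          np.array([2, 2, 1, 1, 0, 0, 0])], n_states=3)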
Nanopore adaptive sampling to identify the NLR gene family in melon (Cucumis melo L.)
Background: Nanopore adaptive sampling (NAS) offers a promising approach for assessing genetic diversity in targeted genomic regions. Here we designed and validated an experiment to enrich a set of resistance genes in several melon cultivars as a proof of concept. Results: Using the same reference to guide read acceptance or rejection with NAS, we successfully and accurately reconstructed the 15 regions in two newly assembled ssp. melo genomes and in a third ssp. agrestis cultivar. We obtained fourfold enrichment for all tested samples, but with some variation across the enriched regions. The accuracy of our assembly was further confirmed by PCR in the agrestis cultivar. We discussed parameters that could influence the enrichment and accuracy of NAS-generated assemblies. Conclusions: Overall, we demonstrated that NAS is a simple and efficient approach for exploring complex genomic regions, such as clusters of nucleotide-binding site leucine-rich repeat (NLR) resistance genes. These regions are characterized by a high number of copy number variations, presence-absence polymorphisms and repetitive elements. These features make accurate assembly challenging, but the regions are crucial to study due to their central role in plant immunity and disease resistance. This approach facilitates resistance gene characterization in a large number of individuals, as required when breeding new cultivars suitable for the agroecological transition.
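The read accept/reject decision at the heart of nanopore adaptive sampling can be sketched as follows. This is a conceptual illustration only, assuming the minimap2 Python binding (mappy) is installed and that "targets.fa", the prefix length, and the mapping-quality threshold are placeholder choices; real NAS runs against the sequencer's streaming API rather than as a standalone script.

    import mappy as mp

    # Hypothetical reference containing only the target regions (e.g. NLR clusters).
    aligner = mp.Aligner("targets.fa", preset="map-ont")
    if not aligner:
        raise RuntimeError("failed to load or index targets.fa")

    def keep_read(read_prefix, min_mapq=20):
        """Return True to keep sequencing the read, False to eject it.

        The decision uses only the first few hundred bases streamed so far:
        keep the read if that prefix maps to a target with decent quality.
        """
        for hit in aligner.map(read_prefix):
            if hit.mapq >= min_mapq:
                return True
        return False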
Compressed Adaptive-Sampling-Rate Image Sensing Based on Overcomplete Dictionary
In this paper, a compressed adaptive image-sensing method based on an overcomplete ridgelet dictionary is proposed. Some low-complexity operations are designed to distinguish between smooth blocks and texture blocks in the compressed domain, and adaptive sampling is performed by assigning different sampling rates to different types of blocks. The efficient, sparse representation of images is achieved by using an overcomplete ridgelet dictionary; at the same time, a reasonable dictionary-partitioning method is designed, which effectively reduces the number of candidate dictionary atoms and greatly improves the speed of classification. Unlike existing methods, the proposed method does not rely on the original signal, and its computation is simple, making it particularly suitable for scenarios where a device’s computing power is limited. In addition, the proposed method can accurately identify smooth image blocks and more reasonably allocate sampling rates to obtain a reconstructed image with better quality. The experimental results show that our method achieves better image reconstruction quality than existing ARCS methods while maintaining low computational complexity.
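A minimal sketch of the general block-adaptive idea, not the paper's ridgelet-dictionary classifier: take a few random compressed measurements of each block, use their spread as a crude smooth/texture indicator, and assign a lower sampling rate to smooth blocks. Block size, threshold, and rates below are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def adaptive_block_rates(image, block=16, probe_m=8, thresh=5.0,
                             rate_smooth=0.05, rate_texture=0.3):
        """Assign a per-block sampling rate from a few compressed measurements.

        A fixed Gaussian matrix takes `probe_m` measurements of each block;
        their standard deviation serves as a crude smooth/texture indicator.
        """
        n = block * block
        phi = rng.standard_normal((probe_m, n)) / np.sqrt(probe_m)
        rates = {}
        h, w = image.shape
        for r in range(0, h - block + 1, block):
            for c in range(0, w - block + 1, block):
                x = image[r:r + block, c:c + block].astype(float).ravel()
                y = phi @ (x - x.mean())        # measurements of the AC component
                rates[(r, c)] = rate_smooth if y.std() < thresh else rate_texture
        return rates

    rates = adaptive_block_rates(rng.integers(0, 256, size=(64, 64)))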
Deep Adaptive Sampling for Surrogate Modeling Without Labeled Data
Surrogate modeling is of great practical significance for parametric differential equation systems. In contrast to classical numerical methods, using physics-informed deep learning-based methods to construct simulators for such systems is a promising direction due to its potential to handle high dimensionality, which requires minimizing a loss over a training set of random samples. However, the random samples introduce statistical errors, which may become the dominant errors for the approximation of low-regularity and high-dimensional problems. In this work, we present a deep adaptive sampling method for surrogate modeling of low-regularity parametric differential equations and illustrate the necessity of adaptive sampling for constructing surrogate models. In the parametric setting, the residual loss function can be regarded as an unnormalized probability density function (PDF) of the spatial and parametric variables. In contrast to the non-parametric setting, factorized joint density models can be employed to alleviate the difficulties induced by the parametric space. The PDF is approximated by a deep generative model, from which new samples are generated and added to the training set. Since the new samples match the residual-induced distribution, the refined training set can further reduce the statistical error in the current approximate solution through variance reduction. We demonstrate the effectiveness of the proposed method with a series of numerical experiments, including the physics-informed operator learning problem, the parametric optimal control problem with geometrical parametrization, and the parametric lid-driven 2D cavity flow problem with a continuous range of Reynolds numbers from 100 to 3200.
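The core mechanism, treating the residual as an unnormalized density and drawing new training points from it, can be illustrated with a one-dimensional toy example. The paper fits a deep generative model to this density; the sketch below simply resamples a candidate pool with probability proportional to the squared residual, and the residual function itself is a placeholder.

    import numpy as np

    rng = np.random.default_rng(0)

    def residual(x):
        """Placeholder PDE residual of the current surrogate, peaked near x = 0."""
        return np.exp(-50.0 * x**2)

    def refine_training_set(train_x, n_new=64, n_candidates=5000):
        """Add points where the squared residual (an unnormalized density) is large."""
        cand = rng.uniform(-1.0, 1.0, size=n_candidates)
        w = residual(cand)**2
        p = w / w.sum()                                   # residual-induced distribution
        new_x = rng.choice(cand, size=n_new, replace=False, p=p)
        return np.concatenate([train_x, new_x])

    train_x = rng.uniform(-1.0, 1.0, size=256)            # initial random training set
    train_x = refine_training_set(train_x)                # refined toward high residual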
An Adaptive Sampling Algorithm with Dynamic Iterative Probability Adjustment Incorporating Positional Information
Physics-informed neural networks (PINNs) have garnered widespread use for solving a variety of complex partial differential equations (PDEs). Nevertheless, when addressing certain specific problem types, traditional sampling algorithms still exhibit deficiencies in efficiency and precision. In response, this paper builds upon the progress of adaptive sampling techniques, addressing the failure of existing algorithms to fully leverage the spatial location information of sample points, and introduces an innovative adaptive sampling method. This approach incorporates the Dual Inverse Distance Weighting (DIDW) algorithm, embedding the spatial characteristics of sampling points within the probability sampling process. Furthermore, it introduces reward factors derived from reinforcement learning principles to dynamically refine the probability sampling formula. This strategy more effectively captures the essential characteristics of PDEs with each iteration. We employ sparsely connected networks and adjust the sampling process, which effectively reduces training time. In numerical experiments on fluid mechanics problems, such as the two-dimensional Burgers’ equation with sharp solutions, pipe flow, flow around a circular cylinder, lid-driven cavity flow, and Kovasznay flow, our proposed adaptive sampling algorithm markedly enhances accuracy over conventional PINN methods, validating the algorithm’s efficacy.
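A simplified sketch of combining residual magnitude with spatial information when choosing new collocation points; this is not the paper's DIDW or reinforcement-learning formulation, and the exponents, pool sizes, and stand-in residual field are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_with_distance_weighting(residuals, candidates, existing,
                                       n_new=32, alpha=1.0, beta=1.0):
        """Pick candidates with probability ~ |residual|**alpha * d_min**beta,
        where d_min is the distance to the nearest existing point, so isolated
        high-residual regions are preferred."""
        d = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=-1)
        d_min = d.min(axis=1)
        w = np.abs(residuals)**alpha * d_min**beta
        p = w / w.sum()
        idx = rng.choice(len(candidates), size=n_new, replace=False, p=p)
        return candidates[idx]

    existing = rng.uniform(0.0, 1.0, size=(100, 2))        # current collocation points
    candidates = rng.uniform(0.0, 1.0, size=(2000, 2))     # dense candidate pool
    residuals = np.sin(4 * np.pi * candidates[:, 0])       # stand-in residual field
    new_pts = sample_with_distance_weighting(residuals, candidates, existing)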
An adaptive sampling method for Kriging surrogate model with multiple outputs
The sample distribution has a vital influence on the quality of a Kriging surrogate model, which may further influence the required cost or convergence of surrogate-model-based design and optimization problems. Adaptive sampling methods utilize the information from existing samples to reasonably allocate the sequential samples, which can generally build a more accurate Kriging surrogate model under the same computational budget. However, most of the existing adaptive sampling methods for the Kriging surrogate model are only available for single-output problems, and there are few studies on problems with multiple responses. In this paper, an adaptive sampling method based on Delaunay triangulation and the technique for order preference by similarity to ideal solution (TOPSIS) is proposed for the Kriging surrogate model with multiple outputs (mKMDT). In the proposed mKMDT, Delaunay triangulation is used to partition the design space into multiple triangular regions, whose areas denote the dispersion of the sample points. The prediction error at each triangle’s centroid represents the local approximation error. Specifically, three different strategies are developed for allocating weights to the area and the prediction error of each triangle using the entropy method and the TOPSIS method. The performance of the proposed method is illustrated through numerical examples with different numbers of outputs and a collision problem between the missile and the adapter. Results show that the proposed method can construct an accurate surrogate model with few samples, which is useful for practical engineering design problems with multiple outputs.
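A single-output simplification of the sampling step described above: triangulate the current samples with scipy, score each triangle by area times a predicted error at its centroid, and return the best centroid as the next sample. The TOPSIS/entropy weighting over multiple outputs is replaced by a plain product, and the error estimate is a placeholder for the Kriging prediction error.

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(2)

    def next_sample(points, predicted_error):
        """Return the centroid of the triangle with the largest area * error score.

        `predicted_error(x)` stands in for the Kriging prediction error at x.
        """
        tri = Delaunay(points)
        best_score, best_centroid = -np.inf, None
        for simplex in tri.simplices:
            a, b, c = points[simplex]
            area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                             - (b[1] - a[1]) * (c[0] - a[0]))
            centroid = (a + b + c) / 3.0
            score = area * predicted_error(centroid)
            if score > best_score:
                best_score, best_centroid = score, centroid
        return best_centroid

    pts = rng.uniform(0.0, 1.0, size=(30, 2))
    x_new = next_sample(pts, predicted_error=lambda x: 1.0)   # uniform-error placeholder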
Adaptive infill sampling criterion for multi-fidelity gradient-enhanced kriging model
The multi-fidelity surrogate (MFS) method is very promising for the optimization of complex problems. The optimization capability of MFS can be improved by infilling samples in the optimization process. Furthermore, once gradient information is provided, gradient-enhanced kriging (GEK) can be utilized to construct a more accurate MFS model. However, with existing infill sampling criteria, it is difficult to improve the optimization speed without sacrificing the optimization gains. In this paper, a novel infill sampling criterion named Adaptive Multi-fidelity Expected Improvement (AMEI) is proposed, in which the prediction accuracy and the optimization potential of the surrogate model are both considered. With a set of extra samples calculated, the AMEI determines at which fidelity level the new sample is to be added. Through two numerical examples and two engineering examples, the AMEI is found to always provide the best optimization result with the fewest analysis calls, with good robustness. The optimization capability and efficiency of the AMEI are demonstrated in comparison with traditional criteria.
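AMEI itself additionally chooses the fidelity level; the sketch below only shows the standard single-fidelity expected-improvement criterion it builds on, computed from a surrogate's predictive mean and standard deviation under a minimization convention.

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best, xi=0.0):
        """Standard EI for minimization, given surrogate mean/std at candidates."""
        mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
        improve = f_best - mu - xi
        with np.errstate(divide="ignore", invalid="ignore"):
            z = improve / sigma
            ei = improve * norm.cdf(z) + sigma * norm.pdf(z)
        return np.where(sigma > 0, ei, np.maximum(improve, 0.0))

    ei = expected_improvement(mu=[0.2, 0.5], sigma=[0.1, 0.3], f_best=0.4)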
System Maintainability Estimation with Multi-Peak Time Distribution based on the Bayesian Melding Method
In some equipment test phases, the small sample size of the test data and the absence of some maintenance operations may lead to a multi-peak phenomenon in the data distribution, which poses a challenge for maintainability assessment based on Bayesian information fusion. In this paper, prior information at two levels, the system level and the maintenance operation level, is integrated with the field test data via the Bayesian melding method (BMM). Mixture priors are used to avoid prior-data conflicts in the Bayesian framework, and a Bayesian posterior distribution is used to estimate system maintainability. Adaptive sampling importance resampling (ASIR) is used to overcome computational difficulties in simulation algorithms. Compared with other methods, the proposed method draws on more information sources for maintainability estimation, and its estimation performance is shown to be satisfactory in two validation cases.
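A minimal sampling-importance-resampling step, illustrating the kind of computation ASIR accelerates rather than the full BMM/ASIR machinery: prior draws of a repair-time parameter are reweighted by the likelihood of (placeholder) field data and resampled to approximate the posterior. The lognormal model and all numbers are assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Placeholder field data: observed repair times in hours.
    data = np.array([0.8, 1.1, 2.5, 0.6, 3.2, 1.9])

    # Prior draws of the log-mean of a lognormal repair-time model (sigma fixed).
    prior_mu = rng.normal(loc=0.0, scale=1.0, size=20000)
    sigma = 0.8

    # Importance weights: likelihood of the data under each prior draw.
    log_w = np.array([stats.lognorm.logpdf(data, s=sigma, scale=np.exp(mu)).sum()
                      for mu in prior_mu])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Resample to obtain an approximate posterior sample of the log-mean.
    posterior_mu = rng.choice(prior_mu, size=5000, replace=True, p=w)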
Adaptive multi-fidelity sparse polynomial chaos-Kriging metamodeling for global approximation of aerodynamic data
The multi-fidelity metamodeling method can dramatically improve the efficiency of metamodeling for computationally expensive engineering problems when data at multiple fidelity levels are available. In this paper, an efficient and novel adaptive multi-fidelity sparse polynomial chaos-Kriging (AMF-PCK) metamodeling method is proposed for accurate global approximation. This approach, by first using low-fidelity computations, builds the PCK model as a model trend for the high-fidelity function and captures the relative importance of the significant sparse polynomial bases selected by least angle regression (LAR). Then, by using high-fidelity model evaluations, the developed method utilizes the trend information to adaptively refine a scaling PCK model using an adaptive correction polynomial expansion combined with Gaussian process modeling. Here, the most relevant sparse polynomial basis set and the optimal correction expansion are adaptively identified and constructed based on a nested LAR procedure driven by leave-one-out cross-validation. As a result, the optimal AMF-PCK metamodel is adaptively established, which combines the advantages of high flexibility and strong nonlinear modeling ability. Moreover, an adaptive sequential sampling approach is specially developed to further improve the multi-fidelity metamodeling efficiency. The developed method is evaluated by several benchmark functions and two practically challenging transonic aerodynamic modeling applications. A comprehensive comparison with the popular hierarchical Kriging, universal Kriging, and LAR-PCK approaches demonstrates that the proposed method is the most efficient and provides the best global approximation accuracy, with particular superiority for quantities of interest with multimodal and highly nonlinear landscapes. This novel method is very promising for efficient uncertainty analysis and surrogate-based optimization of expensive engineering problems.
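A drastically simplified stand-in for the low-fidelity-trend-plus-correction structure: fit a cheap polynomial trend to plentiful low-fidelity data, then model the high-fidelity discrepancy with a Gaussian process via scikit-learn. None of the sparse-PCE/LAR machinery of AMF-PCK is reproduced; the toy functions and sample sizes are assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(4)

    def f_lo(x):                                     # cheap, biased model
        return np.sin(8 * x) + 0.3 * x

    def f_hi(x):                                     # expensive "truth"
        return np.sin(8 * x) + 0.3 * x**2 + 0.1

    x_lo = rng.uniform(0.0, 1.0, 200)
    x_hi = rng.uniform(0.0, 1.0, 12)

    # Low-fidelity trend: a plain polynomial fit stands in for the sparse PCK trend.
    trend = np.polynomial.Polynomial.fit(x_lo, f_lo(x_lo), deg=8)

    # High-fidelity correction: a GP on the discrepancy between truth and trend.
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.2),
                                  normalize_y=True)
    gp.fit(x_hi.reshape(-1, 1), f_hi(x_hi) - trend(x_hi))

    def predict(x):
        x = np.asarray(x, float)
        return trend(x) + gp.predict(x.reshape(-1, 1))

    y_pred = predict(np.linspace(0.0, 1.0, 5))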
Safe Adaptive Importance Sampling
This paper investigates adaptive importance sampling algorithms for which the policy, the sequence of distributions used to generate the particles, is a mixture between a flexible kernel density estimate (based on the previous particles) and a “safe” heavy-tailed density. When the share of samples generated according to the safe density goes to zero but not too quickly, two results are established: (i) uniform convergence rates are derived for the policy toward the target density; (ii) a central limit theorem is obtained for the resulting integral estimates. The fact that the asymptotic variance is the same as the variance of an “oracle” procedure with variance-optimal policy illustrates the benefits of the approach. In addition, a subsampling step (among the particles) can be conducted before constructing the kernel estimate in order to decrease the computational effort without altering the performance of the method. The practical behavior of the algorithms is illustrated in a simulation study.
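A one-dimensional sketch of the mixture policy described in the abstract: at each stage, proposals come from a Gaussian KDE of the previous particles with probability 1 - eps and from a heavy-tailed Student-t "safe" density with probability eps, and importance weights use the mixture density. The target, the eps schedule, and the sample sizes are placeholders.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    def target(x):
        """Target density (bimodal Gaussian mixture); normalization is not needed
        for the self-normalized estimate below."""
        return 0.5 * stats.norm.pdf(x, -2.0, 0.5) + 0.5 * stats.norm.pdf(x, 3.0, 1.0)

    safe = stats.t(df=2)                               # heavy-tailed "safe" density

    particles = safe.rvs(size=500, random_state=rng)
    weights = target(particles) / safe.pdf(particles)

    for stage in range(5):
        eps = 0.2 / (stage + 1)                        # slowly shrinking safe share
        kde = stats.gaussian_kde(particles, weights=weights)
        n = 1000
        n_safe = rng.binomial(n, eps)
        draws = np.concatenate([safe.rvs(size=n_safe, random_state=rng),
                                kde.resample(n - n_safe)[0]])
        mix_pdf = eps * safe.pdf(draws) + (1 - eps) * kde.evaluate(draws)
        particles, weights = draws, target(draws) / mix_pdf

    # Self-normalized importance-sampling estimate of E[X] under the target.
    estimate = np.sum(weights * particles) / np.sum(weights)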