Catalogue Search | MBRL
Explore the vast range of titles available.
3,151 result(s) for "sampling strategy"
Optimal sampling strategies for darunavir and external validation of the underlying population pharmacokinetic model
by Belkhir Leila, Stillemans Gabriel, Elens Laure
in Antiretroviral drugs, Datasets, Human immunodeficiency virus
2021
Purpose: A variety of diagnostic methods are available to validate the performance of population pharmacokinetic models. Internal validation, which applies these methods to the model building dataset and to additional data generated through Monte Carlo simulations, is often sufficient, but external validation, which requires a new dataset, is considered a more rigorous approach, especially if the model is to be used for predictive purposes. Our first objective was to validate a previously published population pharmacokinetic model of darunavir, an HIV protease inhibitor boosted with ritonavir or cobicistat. Our second objective was to use this model to derive optimal sampling strategies that maximize the amount of information collected with as few pharmacokinetic samples as possible.
Methods: A dataset comprising 164 sparsely sampled individuals using ritonavir-boosted darunavir was used for validation. Standard plots of predictions and residuals, NPDE, visual predictive check, and bootstrapping were applied to both the validation set and the combined learning/validation set in NONMEM to assess model performance. D-optimal designs for darunavir were then calculated in PopED and further evaluated in NONMEM through simulations.
Results: External validation confirmed model robustness and accuracy in most scenarios but also highlighted several limitations. The best one-, two-, and three-point sampling strategies were determined to be pre-dose (0 h); 0 and 4 h; and 1, 4, and 19 h, respectively. A combination of samples at 0, 1, and 4 h was comparable to the optimal three-point strategy. These could be used to reliably estimate individual pharmacokinetic parameters, although with fewer samples, precision decreased and the number of outliers increased significantly.
Conclusions: Optimal sampling strategies derived from this model could be used in clinical practice to enhance therapeutic drug monitoring or to conduct additional pharmacokinetic studies.
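For readers unfamiliar with D-optimal design, the sketch below illustrates the underlying idea: candidate sparse-sampling schedules are ranked by the determinant of the Fisher information matrix they yield under a pharmacokinetic model. Everything here is an assumption for illustration (a generic steady-state one-compartment model with first-order absorption, made-up parameter values, and a proportional error model), not the published darunavir model, whose designs were computed in PopED and evaluated in NONMEM.

```python
# Hypothetical sketch: rank candidate sampling schedules by a D-optimality criterion.
import numpy as np

def c_ss(t, ka=0.7, cl=11.0, v=130.0, dose=800.0, tau=24.0):
    """Steady-state one-compartment model with first-order absorption (assumed values)."""
    ke = cl / v
    coef = dose * ka / (v * (ka - ke))
    return coef * (np.exp(-ke * t) / (1 - np.exp(-ke * tau))
                   - np.exp(-ka * t) / (1 - np.exp(-ka * tau)))

def log_det_fim(times, theta=(0.7, 11.0, 130.0), cv=0.2, eps=1e-4):
    """log-determinant of the Fisher information matrix under a proportional-error model."""
    t = np.asarray(times, dtype=float)
    base = c_ss(t, *theta)
    J = np.empty((t.size, len(theta)))              # sensitivities dC/dtheta (finite differences)
    for i, p in enumerate(theta):
        pert = list(theta)
        pert[i] = p * (1 + eps)
        J[:, i] = (c_ss(t, *pert) - base) / (p * eps)
    w = 1.0 / (cv * base) ** 2                      # weights = inverse residual variance
    _, logdet = np.linalg.slogdet(J.T @ (J * w[:, None]))
    return logdet

# Compare the reported optimal three-point schedule with two alternatives.
for name, ts in {"1, 4, 19 h": [1, 4, 19], "0, 1, 4 h": [0, 1, 4], "0, 2, 4 h": [0, 2, 4]}.items():
    print(f"{name:>10}: log det(FIM) = {log_det_fim(ts):.2f}")
```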
Journal Article
Forecasting SMEs’ credit risk in supply chain finance with a sampling strategy based on machine learning techniques
2023
Exploring the value of multi-source information fusion to predict small and medium-sized enterprises’ (SMEs) credit risk in supply chain finance (SCF) is a popular yet challenging task, as the two issues of key variable selection and class imbalance must be addressed simultaneously. To this end, we develop new forecasting models that adopt an imbalance sampling strategy based on machine learning techniques and apply them to predict the credit risk of SMEs in China, using financial information, operation information, innovation information, and negative events as predictors. The empirical results show that financial information, such as TOC and NIR, is the most useful for predicting SMEs’ credit risk in SCF, and that multi-source information fusion is meaningful for better predicting this risk. In addition, based on the preferred CSL-RF model, which extends cost-sensitive learning to a random forest, we also present the varying mechanisms of key predictors of SMEs’ credit risk using partial dependency analysis. The strategic insights obtained may be helpful for market participants, such as SMEs’ managers, investors, and market regulators.
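As a rough illustration of the cost-sensitive learning idea behind a model like CSL-RF, the sketch below trains a random forest with asymmetric class weights on a synthetic, imbalanced default/non-default dataset; the data, features, and 10:1 cost ratio are assumptions, not the paper's Chinese SME dataset or its exact model. Partial dependence of predictions on individual features, as used in the paper, could afterwards be inspected with scikit-learn's PartialDependenceDisplay.

```python
# Hypothetical sketch: cost-sensitive random forest for imbalanced credit-risk classification.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data: ~5% "default" class; features stand in for financial,
# operational, innovation, and negative-event predictors.
X, y = make_classification(n_samples=5000, n_features=12, n_informative=6,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cost-sensitive weighting: misclassifying a defaulting SME is assumed 10x as costly.
clf = RandomForestClassifier(n_estimators=300, class_weight={0: 1, 1: 10}, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["non-default", "default"]))
```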
Journal Article
Surrogate-assisted global sensitivity analysis: an overview
by Lu, Zhenzhou; Ling, Chunyan; Cheng, Kai
in Computational Mathematics and Numerical Analysis, Computer simulation, Engineering
2020
Surrogate models are a popular tool for approximating the functional relationship of expensive simulation models in many scientific and engineering disciplines. Successful use of surrogate models can provide significant savings in computational cost. However, with the variety of surrogate model approaches available in the literature, selecting an appropriate one for the task at hand is difficult. In this paper, we present an overview of surrogate model approaches with an emphasis on their application to variance-based global sensitivity analysis, including polynomial regression, high-dimensional model representation, state-dependent parameter modelling, polynomial chaos expansion, Kriging/Gaussian process, support vector regression, radial basis functions, and low-rank tensor approximation. The accuracy and efficiency of these approaches are compared on several benchmark examples. The strengths and weaknesses of these surrogate models are discussed, and recommendations are provided for different types of applications. For ease of implementation, packages and toolboxes for surrogate modelling techniques and their application to global sensitivity analysis are collected.
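To make the surrogate-assisted workflow concrete, here is a minimal sketch that fits a Gaussian-process surrogate to a modest design of an inexpensive stand-in for the simulator (the Ishigami benchmark) and then estimates first-order Sobol indices from cheap surrogate evaluations with a pick-and-freeze estimator; the sample sizes, kernel, and test function are illustrative assumptions rather than recommendations from the paper.

```python
# Hypothetical sketch: Gaussian-process surrogate + first-order Sobol indices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def ishigami(x, a=7.0, b=0.1):                       # common global-sensitivity benchmark
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
d, lo, hi = 3, -np.pi, np.pi

# 1) Fit the surrogate on a small design (stands in for expensive simulation runs).
X_train = rng.uniform(lo, hi, size=(200, d))
gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(np.ones(d)), normalize_y=True)
gp.fit(X_train, ishigami(X_train))

# 2) Pick-and-freeze estimate of first-order indices using surrogate evaluations only.
n = 20000
A, B = rng.uniform(lo, hi, (n, d)), rng.uniform(lo, hi, (n, d))
fA, fB = gp.predict(A), gp.predict(B)
var = np.var(np.concatenate([fA, fB]))
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                              # freeze all inputs except the i-th
    Si = np.mean(fB * (gp.predict(ABi) - fA)) / var
    print(f"S{i + 1} ~ {Si:.2f}")
```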
Journal Article
Accurate Determination of Uranium Content in Uranium-Bearing Powders via Optimized Sampling Strategies
2025
Accurate analysis of uranium content in powders is of great significance for environmental protection and for supporting clean energy. However, precise analysis of uranium-containing powders faces numerous challenges, such as high radioactivity and uneven powder distribution. This investigation employs a rigorously designed experimental-comparison approach to quantitatively elucidate the influence of particle size on blending homogeneity, and to systematically delineate the mechanisms by which powder homogeneity, spatial sampling location (dimensional distribution), and sample quantity affect the accuracy of uranium-content determination. The results demonstrate that the blending homogeneity of the powder increases as particle size decreases. Powder homogeneity is the decisive determinant of analytical accuracy: the higher the homogeneity, the greater the accuracy. Sampling strategy directly governs representativeness: concentrating sampling within a localized zone decreases accuracy, whereas increasing the number of samples markedly enhances the reliability of the result. On the basis of these findings, an optimized sampling protocol for uranium-bearing powders has been established and validated: the powder is comminuted until it completely passes through an 80-mesh sieve (aperture ≈ 180 µm); nine sampling points are then selected uniformly along three mutually orthogonal axes (length, width and height) within the container. Validation experiments confirm that this protocol achieves analytical accuracy approaching 100%.
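The nine-point layout described in the protocol can be pictured with a short sketch: three positions spaced uniformly along each of the three orthogonal axes (length, width, height) through the centre of the container. The container dimensions and the exact fractional positions are assumptions for illustration; the abstract does not specify them.

```python
# Hypothetical sketch: nine sampling points along three mutually orthogonal axes.
import numpy as np

L, W, H = 30.0, 20.0, 10.0                  # assumed container dimensions (cm)
centre = np.array([L, W, H]) / 2.0
fractions = (0.25, 0.50, 0.75)              # assumed uniform positions along each axis

points = []
for axis, extent in enumerate((L, W, H)):   # vary one axis at a time, keep the others central
    for frac in fractions:
        p = centre.copy()
        p[axis] = frac * extent
        points.append(p)

for p in points:                            # nine (x, y, z) sampling locations
    print(np.round(p, 1))
```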
Journal Article
New bubble sampling method for reliability analysis
by Li, Changquan; He, Wanxin; Pang, Yongsheng
in Bubbles, Computational efficiency, Computational Mathematics and Numerical Analysis
2023
In recent years, the safety of engineering systems has been seriously threatened by increasingly complex and uncertain engineering environments, and the reliability analysis of engineering structures has attracted increasing attention. Sampling methods are widely used because of their simplicity and universality. However, their application is limited by expensive computational cost. To ease the computational burden, a new bubble sampling method (BSM) is proposed in this study. Its core idea is to generate several bubbles in the safe and failure domains; the sign of the performance function for any sample located inside these bubbles can be determined directly and need not be computed. In this way, the number of function calls is greatly reduced. Moreover, a new bubble optimization model is developed, in which a uniform sampling strategy is adopted. Several numerical and engineering applications are used to demonstrate the performance of the proposed BSM, confirming its computational efficiency and accuracy.
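The core idea of the bubble construction (samples falling inside a bubble have a sign that can be assigned without evaluation) can be illustrated with a much simpler analogue: if the performance function is Lipschitz, every sample within a radius |g(x)|/L of an evaluated point x must share its sign, where L is an assumed Lipschitz bound. The toy limit state, sample size, and bound below are assumptions; this is not the authors' bubble optimization model or uniform sampling strategy.

```python
# Hypothetical sketch: Monte Carlo reliability analysis with a crude "bubble" shortcut.
import numpy as np

def g(x):                                    # toy performance function: failure when g < 0
    return 3.0 - 0.5 * x[:, 0] ** 2 - x[:, 1]

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 2))               # Monte Carlo sample of the random inputs
LIP = 4.5                                    # assumed Lipschitz bound of g over the sampled region

signs = np.full(len(X), np.nan)              # +1 safe, -1 failure, NaN = not yet determined
n_calls = 0
for i in range(len(X)):
    if np.isnan(signs[i]):
        gi = g(X[i:i + 1])[0]
        n_calls += 1
        signs[i] = np.sign(gi) if gi != 0 else 1.0
        # "Bubble": every sample closer than |g|/LIP to the evaluated point shares its sign.
        near = np.linalg.norm(X - X[i], axis=1) < abs(gi) / LIP
        signs[near] = signs[i]

pf_bubble = np.mean(signs < 0)
pf_crude = np.mean(g(X) < 0)                 # reference that evaluates every sample
print(f"pf = {pf_bubble:.4f} with {n_calls} evaluations (crude MC: {pf_crude:.4f} with {len(X)})")
```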
Journal Article
Ferries and environmental DNA: underway sampling from commercial vessels provides new opportunities for systematic genetic surveys of marine biodiversity
by Simon J. Goodman, Roberto Lombardi, Elena Valsecchi
in 12S and 16S ribosomal RNA genes; citizen science; marine conservation; marine mammals; MarVer; metabarcoding; sampling strategy; spatial planning; General. Including nature conservation, geographical distribution
2021
Journal Article
Methodology series module 5: Sampling strategies
by Setia, Maninder
in Biometry, Module on Biostatistics and Research Methodology for the Dermatologist, Non-probability sampling
2016
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice and on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (for example, using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use purposive sampling for the study.
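To make the distinction concrete, the short sketch below draws both a simple random sample and a proportionally stratified random sample from a toy sampling frame of 10,000 patients split across two clinics; the frame, strata, and sample size are illustrative assumptions only.

```python
# Hypothetical sketch: simple random sampling vs. stratified random sampling.
import numpy as np

rng = np.random.default_rng(42)

# Toy sampling frame: 10,000 patients, roughly 30% attending clinic A and 70% clinic B.
clinic = np.where(rng.random(10_000) < 0.3, "A", "B")
ids = np.arange(clinic.size)

# 1) Simple random sample: every unit has the same chance of selection.
srs = rng.choice(ids, size=200, replace=False)

# 2) Stratified random sample: sample within each stratum in proportion to its size.
strat = np.concatenate([
    rng.choice(ids[clinic == c], size=int(round(200 * np.mean(clinic == c))), replace=False)
    for c in ("A", "B")
])

for name, s in (("simple random", srs), ("stratified", strat)):
    print(f"{name:>14}: n = {s.size}, share from clinic A = {np.mean(clinic[s] == 'A'):.2f}")
```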
Journal Article
A novel dynamic-candidate-pool-based sequential metamodel method for slope reliability analysis: Insights from optimization methodology
2024
Sequential metamodel-based slope reliability analysis can significantly reduce the number of calls to a slope stability model, thereby enhancing computational efficiency. However, most existing methods rely on static candidate pools created by random sampling, either locally or globally, which often results in pools that are excessively large and lead to untargeted sampling. To address this issue, this paper proposes a novel dynamic-candidate-pool-based sequential metamodel (DCP-SM) method for slope reliability analysis informed by an optimization methodology. This method optimally utilizes the information provided by a dynamically updated metamodel Ĝ_T. Our recently proposed normal search particle swarm optimization algorithm is utilized to optimize |Ĝ_T|. Solution sets that are evenly distributed and proximal to the predicted limit state function are used to construct a DCP. A novel sequential sampling strategy is proposed to identify the most informative points efficiently by taking full advantage of the characteristics of the DCP. The efficacy of DCP-SM is validated by benchmarking it against nine state-of-the-art methods on three explicit performance functions and two typical examples of slope engineering. The results confirm the superiority of DCP-SM in terms of computational efficiency, accuracy, and stability.
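For orientation, the sketch below shows the generic pattern that sequential metamodel methods build on: a Gaussian-process surrogate is refitted as points are added from a candidate pool, and the next point is the candidate whose prediction is closest to the limit state relative to its uncertainty (the classic U learning function). The dynamic candidate pool and the normal search particle swarm optimization used by DCP-SM are not reproduced here, and the performance function, pool size, and budget are assumptions.

```python
# Hypothetical sketch: sequential candidate-pool sampling with a Gaussian-process metamodel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def g(x):                                            # toy limit state: failure when g < 0
    return x[:, 0] ** 3 + x[:, 1] + 6.0

rng = np.random.default_rng(0)
pool = rng.normal(size=(2000, 2))                    # static candidate pool, for simplicity

idx = list(rng.choice(len(pool), size=10, replace=False))  # initial design
gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(1.0), normalize_y=True)

for step in range(20):                               # sequential enrichment
    gp.fit(pool[idx], g(pool[idx]))
    mu, sd = gp.predict(pool, return_std=True)
    u = np.abs(mu) / np.maximum(sd, 1e-12)           # U learning function
    u[idx] = np.inf                                  # never reselect training points
    idx.append(int(np.argmin(u)))                    # most informative candidate next

gp.fit(pool[idx], g(pool[idx]))                      # final refit with all selected points
pf = np.mean(gp.predict(pool) < 0)                   # metamodel-based failure probability
print(f"Estimated pf on the pool after {len(idx)} g-evaluations: {pf:.3f}")
```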
Journal Article
Heavy Minerals for Junior Woodchucks
2019
In the last two centuries, since the dawn of modern geology, heavy minerals have been used to investigate sediment provenance and for many other scientific or practical applications. Not always, however, with the correct approach. Difficulties are diverse, not just technical and related to the identification of tiny grains, but also procedural and conceptual. Even the definition of “heavy minerals” is elusive, and possibly impossible. Sampling is critical. In many environments (e.g., beaches), both absolute and relative heavy mineral abundances invariably increase or decrease locally to different degrees owing to hydraulic-sorting processes, so that samples close to “neutral composition” are hard to obtain. Several widely shared opinions are misleading. Choosing a narrow size-window for analysis leads to increased bias, not to increased accuracy or precision. Only point-counting provides real volume percentages, whereas grain-counting distorts results in favor of smaller minerals. This paper also briefly reviews the heavy mineral associations typically found in diverse plate-tectonic settings. A mineralogical assemblage, however, only reproduces the mineralogy of source rocks, which does not correlate univocally with the geodynamic setting in which those source rocks were formed and assembled. Moreover, it is affected by environmental bias, and by diagenetic bias on top in the case of ancient sandstones. One fruitful way to extract information on both provenance and sedimentological processes is to look for anomalies in mineralogical–textural relationships (e.g., denser minerals bigger than lower-density minerals; harder minerals better rounded than softer minerals; less durable minerals increasing with stratal age and stratigraphic depth). To minimize mistakes, it is necessary to invariably combine heavy mineral investigations with the petrographic analysis of bulk sand. Analysis of thin sections allows us to see also those source rocks that do not shed significant amounts of heavy minerals, such as limestone or granite, and helps us to assess heavy mineral concentration, the “outer” message carrying the key to decipher the “inner message” contained in the heavy mineral suite. The task becomes thorny indeed when dealing with samples with strong diagenetic overprint, which is, unfortunately, the case of most ancient sandstones. Diagenesis is the Moloch that devours all grains that are not chemically resistant, leaving a meager residue difficult or even impossible to interpret when diagenetic effects accumulate through multiple sedimentary cycles. We have conceived this friendly little handbook to help the student facing these problems, hoping that it may serve the purpose.
Journal Article
A facial structure sampling contrastive learning method for sketch facial synthesis
2025
Sketch face synthesis aims to generate sketch images from photos. Recently, contrastive learning, which maps and aligns information across diverse modalities, has found extensive application in image translation. However, when traditional contrastive learning is applied to sketch face synthesis, the random sampling strategy and the imbalance between positive and negative samples lead to poor local detail in the synthesized sketches. To address these challenges, we propose A Facial Structure Sampling Contrastive Learning Method for Sketch Facial Synthesis. First, we propose a region-constrained sampling module that uses a facial-structure distribution map, obtained by a dual-branch attention mechanism, to segment the input photo into distinct regions, thereby providing regional constraints for sample selection. Next, we propose a dynamic sampling strategy that adjusts the sampling frequency according to the feature density in the distribution map, thereby alleviating sample imbalance. Additionally, to diminish the influence of the background and enhance the delineation of character contours, we introduce a mask derived from the input photo as an additional input. Finally, to further improve the quality of the synthesized sketches, we introduce pixel-wise and perceptual losses. Experiments on the CUFS dataset demonstrate that our method generates high-quality sketch images, outperforming existing state-of-the-art methods in both subjective and objective evaluations.
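The dynamic, density-driven sampling idea can be pictured with a small sketch: patch centres are drawn with probability proportional to a facial-structure density map, so structurally rich regions contribute more contrastive samples than flat background. The synthetic density map, image size, and patch count below are assumptions for illustration, not the paper's region-constrained sampling module or attention mechanism.

```python
# Hypothetical sketch: density-proportional sampling of patch centres for contrastive learning.
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

# Toy "facial structure" density map: two Gaussian blobs standing in for eye/mouth regions.
yy, xx = np.mgrid[0:H, 0:W]
density = (np.exp(-((yy - 20) ** 2 + (xx - 24) ** 2) / 60.0)
           + np.exp(-((yy - 44) ** 2 + (xx - 32) ** 2) / 120.0))
p = density.ravel() / density.sum()

# Density-weighted sampling of 256 patch centres vs. the usual uniform random sampling.
dyn_idx = rng.choice(H * W, size=256, replace=False, p=p)
uni_idx = rng.choice(H * W, size=256, replace=False)

for name, idx in (("density-weighted", dyn_idx), ("uniform", uni_idx)):
    print(f"{name:>16}: mean structure density at sampled centres = "
          f"{density.ravel()[idx].mean():.3f}")
```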
Journal Article