Catalogue Search | MBRL
52,106 result(s) for "random methods"
A Random‐XFEM technique modeling of hydraulic fracture interaction with natural void
2022
In this paper, a Random-XFEM technique was employed to simulate hydraulic fracture interaction with a natural void in a random field of Young's modulus. The random distribution of Young's modulus is characterized by random field theory, while the stress and pressure fields during fracturing were solved by the extended finite element method. A Random-XFEM iteration algorithm was proposed to solve the hydraulic fracture propagation problem, and the proposed model was verified against the KGD model and a numerical study. A series of numerical examples were presented to analyze the interaction mechanism of hydraulic fracture and void in a random field. The numerical results show that the random field of Young's modulus has a great impact on the hydraulic fracture propagation path: under a random distribution of Young's modulus, the hydraulic fracture can deviate from the direction of maximum horizontal principal stress. The random field parameters (Young's modulus mean and point variance) have different effects on the propagation path, and the interaction patterns of hydraulic fracture and void are also greatly affected by the random field.
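The entry describes the method only in prose; a minimal sketch of its first ingredient, a spatially correlated random field of Young's modulus, might look like the following (grid size, mean, point variance, and correlation length are assumed illustrative values, not the paper's):

```python
# Sketch (not the authors' code): a 2D log-normal random field of Young's
# modulus with exponential covariance, sampled via Cholesky factorization.
import numpy as np

rng = np.random.default_rng(0)

n = 20                      # 20 x 20 grid of material points (assumed)
mean_E, std_E = 20e9, 2e9   # assumed mean and point standard deviation (Pa)
corr_len = 5.0              # assumed correlation length (grid units)

# Coordinates of every grid point
xs, ys = np.meshgrid(np.arange(n), np.arange(n))
pts = np.column_stack([xs.ravel(), ys.ravel()])

# Exponential covariance between all pairs of points
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
cov = np.exp(-dists / corr_len)

# Sample a correlated Gaussian field via Cholesky factorization
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n * n))
gauss = L @ rng.standard_normal(n * n)

# Map to a log-normal Young's modulus field (keeps E positive)
sigma_ln = np.sqrt(np.log(1 + (std_E / mean_E) ** 2))
mu_ln = np.log(mean_E) - 0.5 * sigma_ln ** 2
E_field = np.exp(mu_ln + sigma_ln * gauss).reshape(n, n)

print(E_field.mean(), E_field.std())  # close to mean_E and std_E
```

Each realization of `E_field` would then feed one XFEM solve; the paper's iteration algorithm and fracture mechanics are not reproduced here.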
Journal Article
Improving network inference algorithms using resampling methods
2018
Background
Relatively small changes to gene expression data can dramatically affect the co-expression networks inferred from those data, which, in turn, can significantly alter the subsequent biological interpretation. This error propagation is an underappreciated problem that, while hinted at in the literature, has not yet been thoroughly explored. Resampling methods (e.g. bootstrap aggregation, the random subspace method) are hypothesized to alleviate variability in network inference by minimizing outlier effects and distilling persistent associations in the data. But the efficacy of the approach assumes that this generalization from statistical theory holds true in biological network inference applications.
Results
We evaluated the effect of bootstrap aggregation on networks inferred by commonly applied network inference methods, in terms of stability (resilience to perturbations in the underlying expression data), a metric for accuracy, and functional enrichment of edge interactions.
Conclusion
Bootstrap aggregation results in improved stability and, depending on the size of the input dataset, a marginal improvement to accuracy assessed by each method’s ability to link genes in the same functional pathway.
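A minimal sketch of bootstrap aggregation applied to network inference (the data shapes are assumed, and a simple Pearson-correlation network stands in for the paper's inference methods):

```python
# Sketch: bag a co-expression network by averaging networks inferred from
# bootstrap resamples of the samples (columns) of an expression matrix.
import numpy as np

def bagged_network(expr: np.ndarray, n_boot: int = 100, seed: int = 0) -> np.ndarray:
    """expr: genes x samples expression matrix.
    Returns the element-wise average of correlation networks inferred
    from bootstrap resamples of the samples."""
    rng = np.random.default_rng(seed)
    n_genes, n_samples = expr.shape
    agg = np.zeros((n_genes, n_genes))
    for _ in range(n_boot):
        cols = rng.integers(0, n_samples, size=n_samples)  # resample with replacement
        agg += np.corrcoef(expr[:, cols])                  # one inferred network
    return agg / n_boot                                    # aggregated edge weights

# Toy usage: 50 genes, 30 samples of random expression data
expr = np.random.default_rng(1).normal(size=(50, 30))
net = bagged_network(expr)
print(net.shape)  # (50, 50)
```

Outlier samples enter only a fraction of the resamples, so spurious edges they induce are averaged down while persistent associations survive, which is the stabilizing effect the paper tests.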
Journal Article
Landslide susceptibility mapping using random forest, boosted regression tree, classification and regression tree, and general linear models and comparison of their performance at Wadi Tayyah Basin, Asir Region, Saudi Arabia
by Pourghasemi, Hamid Reza; Youssef, Ahmed Mohamed; Pourtaghi, Zohre Sadat
in Agriculture; Carts; Civil Engineering
2016
The purpose of the current study is to produce landslide susceptibility maps using different data mining models. Four modeling techniques, namely random forest (RF), boosted regression tree (BRT), classification and regression tree (CART), and the general linear model (GLM), are used, and their results are compared for landslide susceptibility mapping at the Wadi Tayyah Basin, Asir Region, Saudi Arabia. Landslide locations were identified and mapped from the interpretation of different data types, including high-resolution satellite images, topographic maps, historical records, and extensive field surveys. In total, 125 landslide locations were mapped using ArcGIS 10.2, and the locations were divided into two groups: training (70 %) and validation (30 %). Eleven layers of landslide-conditioning factors were prepared, including slope aspect, altitude, distance from faults, lithology, plan curvature, profile curvature, rainfall, distance from streams, distance from roads, slope angle, and land use. The relationships between the landslide-conditioning factors and the landslide inventory map were calculated using the four mentioned models (RF, BRT, CART, and GLM). The models' results were compared against the landslide locations that were not used during the models' training. The receiver operating characteristic (ROC) curve, including the area under the curve (AUC), was used to assess the accuracy of the models, and the success (training data) and prediction (validation data) rate curves were calculated. The results showed that the AUC values for the success rates are 0.783 (78.3 %), 0.958 (95.8 %), 0.816 (81.6 %), and 0.821 (82.1 %) for the RF, BRT, CART, and GLM models, respectively. The prediction rates are 0.812 (81.2 %), 0.856 (85.6 %), 0.862 (86.2 %), and 0.769 (76.9 %) for the RF, BRT, CART, and GLM models, respectively. Subsequently, the landslide susceptibility maps were divided into four classes: low, moderate, high, and very high susceptibility. The results revealed that the RF, BRT, CART, and GLM models all produced reasonable accuracy in landslide susceptibility mapping. The outcome maps would be useful for planned development activities in the future, such as choosing new urban areas and infrastructure, as well as for environmental protection.
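As a rough illustration of the RF step only (synthetic data, not the study's inventory or conditioning factors), a random forest can be fitted and scored with success and prediction AUCs, mirroring the 70/30 split described above:

```python
# Sketch: random forest on 11 synthetic conditioning factors, scored by ROC AUC
# on the training (success rate) and validation (prediction rate) sets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_locations, n_factors = 250, 11           # e.g. slope, altitude, lithology, ...
X = rng.normal(size=(n_locations, n_factors))
# Synthetic landslide/no-landslide labels driven by the first two factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_locations) > 0).astype(int)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=0.7, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("success AUC:   ", roc_auc_score(y_tr, rf.predict_proba(X_tr)[:, 1]))
print("prediction AUC:", roc_auc_score(y_va, rf.predict_proba(X_va)[:, 1]))
```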
Journal Article
Designs to Improve Capability of Neural Networks to Make Structural Predictions
by Li, Jian-Feng; Chen, Jeff Z. Y.; Wang, Tian-Yao
in Artificial neural networks; Characterization and Evaluation of Materials; Chemistry
2023
A deep neural network model generally consists of different modules that play essential roles in performing a task, and the optimal design of a module for modeling a physical problem is directly related to the success of the model. In this work, the effectiveness of several special modules is numerically studied: the self-attention mechanism, for recognizing the importance of molecular sequence information in a polymer, and the big-stride representation and conditional random field, for enhancing the network's ability to produce desired local configurations. Network models containing these modules are trained on the well-documented data of the native structures of the HP model and assessed according to their capability to make structural predictions on unseen data. The specific self-attention design adopted here is modified from a similar idea in natural language recognition. The big-stride representation module introduced in this work is shown to drastically improve the network's capability to model polymer segments with strong lattice-position correlations.
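A single-head self-attention module of the general kind described can be sketched in PyTorch (the dimensions and toy input are assumed; this is not the authors' architecture):

```python
# Sketch: single-head self-attention over a sequence of monomer embeddings,
# letting every position weigh information from the whole chain.
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** 0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, d_model), one embedding per monomer
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return attn @ v  # each position is a weighted mix of the whole sequence

x = torch.randn(2, 36, 64)       # batch of two HP-like sequences of length 36
print(SelfAttention()(x).shape)  # torch.Size([2, 36, 64])
```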
Journal Article
Gradient-Based Monte Carlo Methods for Relaxation Approximations of Hyperbolic Conservation Laws
by Caflisch, Russel E.; Pareschi, Lorenzo; Bertaglia, Giulia
in Algorithms; Approximation; Asymptotic methods
2024
Particle methods based on evolving the spatial derivatives of the solution were originally introduced to simulate reaction-diffusion processes, inspired by vortex methods for the Navier–Stokes equations. Such methods, referred to as gradient random walk methods, were extensively studied in the 1990s and have several interesting features: they are grid-free, they automatically adapt to the solution by concentrating elements where the gradient is large, and they significantly reduce the variance of the standard random walk approach. In this work, we revive these ideas by showing how to generalize the approach to a larger class of partial differential equations, including hyperbolic systems of conservation laws. To achieve this goal, we first extend the classical Monte Carlo method to relaxation approximations of systems of conservation laws, and subsequently consider a novel particle dynamics based on the spatial derivatives of the solution. The methodology, combined with an asymptotic-preserving splitting discretization, yields a new class of gradient-based Monte Carlo methods for hyperbolic systems of conservation laws. Several results in one spatial dimension, for scalar equations and for systems of conservation laws, show that the new methods are very promising and yield remarkable improvements over standard Monte Carlo approaches, both in variance reduction and in describing the shock structure.
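For orientation, the baseline these methods improve on, a standard random walk Monte Carlo solution of the diffusion equation u_t = D u_xx, can be sketched as follows (all parameters are assumed; the gradient variant would instead evolve particles carrying the spatial derivative of the solution, which is not implemented here):

```python
# Sketch: standard random walk Monte Carlo for 1D diffusion. Each particle
# takes Gaussian steps of variance 2*D*dt; the particle histogram estimates u.
import numpy as np

rng = np.random.default_rng(0)
D, dt, n_steps, n_particles = 0.1, 1e-3, 500, 100_000

x = np.zeros(n_particles)            # all particles at the origin: u0 = delta(x)
for _ in range(n_steps):
    x += rng.normal(scale=np.sqrt(2 * D * dt), size=n_particles)

# Compare the histogram estimate of u(x, T) with the exact Gaussian solution
T = n_steps * dt
hist, edges = np.histogram(x, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
exact = np.exp(-centers**2 / (4 * D * T)) / np.sqrt(4 * np.pi * D * T)
print("max abs error:", np.abs(hist - exact).max())
```

The O(1/sqrt(N)) statistical noise visible in this estimate is exactly the variance that gradient-based particle dynamics are designed to reduce.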
Journal Article
Real eigenvector distributions of random tensors with backgrounds and random deviations
2023
As in random matrix theories, eigenvector/value distributions are important quantities of random tensors in their applications. Recently, real eigenvector/value distributions of Gaussian random tensors have been explicitly computed by expressing them as partition functions of quantum field theories with quartic interactions. This procedure for computing distributions of random tensors is general, powerful, and intuitive, because one can take advantage of well-developed techniques and knowledge from quantum field theory. In this paper we extend the procedure to the cases in which random tensors have mean backgrounds and eigenvector equations have random deviations. In particular, we study in detail the case in which the background is a rank-one tensor, namely the case of a spiked tensor. We discuss the condition under which the background rank-one tensor has a visible peak in the eigenvector distribution, and we obtain a threshold value that agrees with a previous result in the literature.
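The eigenvector equation in question is T_{ijk} v_j v_k = lambda * v_i. A numerical sketch of one such real eigenvector for a spiked symmetric 3-tensor, found by plain tensor power iteration rather than the paper's field-theory computation (dimension and spike strength are assumed values, and convergence of the plain iteration is only heuristic):

```python
# Sketch: real eigenvector of a spiked symmetric random 3-tensor via power
# iteration, T(v, v) -> v, checking overlap with the rank-one background.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 10, 10.0                     # dimension and assumed spike strength

# Symmetrized Gaussian tensor plus rank-one background beta * w (x) w (x) w
G = rng.normal(size=(n, n, n))
T = sum(np.transpose(G, p) for p in
        [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]) / 6
w = rng.normal(size=n)
w /= np.linalg.norm(w)
T += beta * np.einsum('i,j,k->ijk', w, w, w)

v = rng.normal(size=n)
v /= np.linalg.norm(v)
for _ in range(200):                   # power iteration: v <- T(v, v), normalized
    v = np.einsum('ijk,j,k->i', T, v, v)
    v /= np.linalg.norm(v)

lam = np.einsum('ijk,i,j,k->', T, v, v, v)
print("eigenvalue:", lam, " overlap with spike:", abs(v @ w))
```

With a strong enough spike the recovered eigenvector overlaps the background direction w; below the threshold the paper computes, the peak disappears into the bulk of the distribution.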
Journal Article
A novel TD3 for solving multi-level imperfect maintenance optimization problem
2024
A multi-level imperfect maintenance strategy is usually more effective than a single-level strategy for actual production machines. At the same time, most previous multi-level maintenance models consider only constant maintenance thresholds, while variable maintenance thresholds are usually ignored. In this context, a novel multi-level imperfect maintenance model is established with variable preventive maintenance (PM) thresholds, variable overhaul maintenance (OM) thresholds, and a variable number of PMs in each OM cycle. To deal with this problem, a novel twin delayed deep deterministic policy gradient (TD3) algorithm, a kind of reinforcement learning, is designed and named NTD3. Finally, numerical simulation shows that (1) the average improvements of the proposed maintenance strategy over three traditional strategies in average cost rate (ACR) are 11.50%, 595.91% and 5.16%, respectively; and (2) the average improvement of the proposed NTD3 over a random search method is 5.53%. Thus, the effectiveness of the proposed maintenance strategy and the superiority of the proposed NTD3 are both demonstrated.
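The three ingredients that distinguish standard TD3 from its predecessor DDPG, twin critics, target policy smoothing, and delayed actor updates, can be sketched as follows (network sizes, noise parameters, and the toy batch are assumed; the replay buffer, environment, maintenance model, and the paper's NTD3 modifications are all omitted):

```python
# Sketch: the TD3 critic target. In a full implementation the *_t networks
# are Polyak-averaged copies of the online networks, not fresh initializations.
import torch
import torch.nn as nn

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out))

obs_dim, act_dim, gamma = 8, 2, 0.99
actor, actor_t = mlp(obs_dim, act_dim), mlp(obs_dim, act_dim)
q1_t, q2_t = mlp(obs_dim + act_dim, 1), mlp(obs_dim + act_dim, 1)

def td3_critic_target(r, s2, done):
    """Clipped double-Q target with target policy smoothing."""
    noise = (0.2 * torch.randn(s2.shape[0], act_dim)).clamp(-0.5, 0.5)
    a2 = (actor_t(s2) + noise).clamp(-1.0, 1.0)     # smoothed target action
    sa2 = torch.cat([s2, a2], dim=-1)
    q_min = torch.min(q1_t(sa2), q2_t(sa2))         # twin critics: take the min
    return r + gamma * (1 - done) * q_min.squeeze(-1)

# Delayed policy update: the actor is refreshed only every `policy_delay`
# critic steps; here we just show the target computation on a toy batch.
policy_delay = 2
s2, r, done = torch.randn(4, obs_dim), torch.randn(4), torch.zeros(4)
print(td3_critic_target(r, s2, done))
```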
Journal Article
Applying Information Theory and GIS-based quantitative methods to produce landslide susceptibility maps in Nancheng County, China
2017
The main objective of the present study was to produce a landslide susceptibility map by implementing a novel methodology that combines Information Theory and GIS-based methods for Nancheng County, China, an area with numerous reported landslide events. Specifically, the information coefficient estimated from Shannon's entropy index was used to determine the number of classes of each landslide-related variable that maximizes the information coefficient, while three methods, logistic regression, weight of evidence, and the random forest algorithm, were implemented to produce the landslide susceptibility map. The comparison of the models was based on a database of 112 past landslide events, which were divided randomly into a training dataset (70 %) and a validation dataset (30 %). The affected areas were identified by analyzing airborne imagery, extensive field investigation, and previous research studies, while the morphometric variables were derived using remote sensing technology. The geo-environmental conditions at those locations were analyzed with regard to their susceptibility to sliding. In particular, 11 variables were analyzed: lithology, altitude, slope, aspect, topographic wetness index, sediment transport index, profile curvature, plan curvature, distance to rivers, distance to faults, and distance to roads. The comparison and validation of the outcomes of each model were achieved using statistical evaluation measures, the receiver operating characteristic, and the area under the success and predictive rate curves. The models gave similar outcomes; however, the random forest model had a slightly higher predictive performance in terms of area under the curve (0.9220) than the weight of evidence (0.9090) and the logistic regression model (0.8940). The same pattern held for the success power of the models: random forest was slightly better in terms of area under the curve (0.9350) than the weight of evidence (0.9255) and logistic regression (0.9097). The predictive performance was estimated using the validation dataset, and the success power using the training dataset. From visual inspection of the produced landslide susceptibility maps, the most susceptible areas are located in the western and eastern mountainous areas, while moderate to low susceptibility values characterize the central area.
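One plausible reading of the class-selection step (the abstract does not spell out the scoring, so the coefficient below is an assumption, applied to synthetic data): bin each variable at several candidate class counts and keep the count that maximizes a normalized Shannon-entropy information coefficient.

```python
# Sketch: choose the number of classes of a landslide-related variable by
# maximizing a normalized Shannon-entropy score over candidate class counts.
import numpy as np

rng = np.random.default_rng(0)
slope = rng.uniform(0, 60, size=112)              # variable at 112 event locations

def information_coefficient(values, n_classes):
    counts, _ = np.histogram(values, bins=n_classes)
    p = counts / counts.sum()
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()              # Shannon entropy of the classes
    return entropy / np.log(n_classes)            # normalized to [0, 1]

scores = {k: information_coefficient(slope, k) for k in range(2, 11)}
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```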
Journal Article
A Method to Automate the Prediction of Student Academic Performance from Early Stages of the Course
by Francisci, Giacomo; Duque, Rafael; Nieto-Reyes, Alicia
in Academic achievement; Artificial intelligence; Automation
2021
The objective of this work is to present a methodology that automates the prediction of students' academic performance at the end of the course using data recorded in the first tasks of the academic year. Analyzing early student records is helpful in predicting their later results, which is useful, for instance, for early intervention. With this aim, we propose a methodology based on the random Tukey depth and a non-parametric kernel. This methodology allows teachers and evaluators to define the variables that they consider most appropriate for measuring the aspects related to students' academic performance. The methodology is applied to a real case study, carried out in the field of Human-Computer Interaction, obtaining a success rate in the predictions of over 80 %. The results indicate that the methodology could be of special interest for developing software systems that process the data generated by computer-supported learning systems and warn the teacher of the need to adopt intervention mechanisms when low academic performance is predicted.
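The random Tukey depth itself is straightforward to sketch (conventions and the toy data are assumed): project the sample onto a few random directions and take the minimum univariate halfspace depth of the query point's projections.

```python
# Sketch: random Tukey depth of a query point with respect to a sample,
# approximating halfspace depth with a finite set of random directions.
import numpy as np

def random_tukey_depth(x, data, n_dirs=100, seed=0):
    """x: query point (d,); data: sample (n, d). Returns depth in [0, 1]."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    depth = 1.0
    for _ in range(n_dirs):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                     # random unit direction
        proj, xp = data @ u, x @ u
        # univariate halfspace depth of the projected query point
        depth = min(depth, min((proj <= xp).mean(), (proj >= xp).mean()))
    return depth

data = np.random.default_rng(1).normal(size=(200, 5))   # e.g. 5 early-task scores
print(random_tukey_depth(data.mean(axis=0), data))      # central point: high depth
print(random_tukey_depth(data.mean(axis=0) + 3, data))  # outlying point: low depth
```

A depth-based classifier of the kind the paper builds would then compare a new student's depth within each performance group, assigning the group in which the student is most central.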
Journal Article