Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
60 result(s) for "Song, Xueguan"
A radial basis function-based multi-fidelity surrogate model: exploring correlation between high-fidelity and low-fidelity models
by Sun, Wei; Lv, Liye; Zhang, Jie
in Accuracy; Computational efficiency; Computational Mathematics and Numerical Analysis
2019
In computational simulation, a high-fidelity (HF) model is generally more accurate than a low-fidelity (LF) model, while the latter is generally more computationally efficient than the former. To take advantage of both HF and LF models, a multi-fidelity surrogate model based on radial basis functions (MFS-RBF) is developed in this paper by combining HF and LF models. To determine the scaling factor between HF and LF models, a correlation matrix is augmented by further integrating LF responses. The scaling factor and the relevant basis-function weights are then calculated from the corresponding HF responses. MFS-RBF is compared with the Co-Kriging model, the multi-fidelity surrogate based on linear regression (LR-MFS), the CoRBF model, and three single-fidelity surrogates. The impact of key factors, such as the cost ratio of LF to HF models and different combinations of HF and LF samples, is also investigated. The results show that (i) MFS-RBF presents better accuracy and robustness than the three benchmark MFS models and the single-fidelity surrogates in about 90% of the cases in this paper; (ii) MFS-RBF is less sensitive to the correlation between HF and LF models than the three MFS models; (iii) for a fixed total computational cost, the cost ratio of LF to HF models is suggested to be less than 0.2, and 10–80% of the total cost should be used for LF samples; (iv) the MFS-RBF model is able to save an average of 50–70% of the computational cost if the HF and LF models are highly correlated. (An illustrative code sketch follows this record.)
Journal Article
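The entry above describes combining an LF model with an RBF correction and a scaling factor. As a rough illustration of that general idea (not the paper's augmented-correlation-matrix formulation), the sketch below fits the scaling factor and Gaussian-RBF weights jointly by least squares on the HF responses; the toy lf/hf functions, the Gaussian kernel, and the eps width are assumptions made purely for the example.

```python
import numpy as np

def gaussian_rbf(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)

def fit_mfs_rbf(x_hf, y_hf, lf_model, eps=1.0):
    """Fit y_hf(x) ~ rho * lf_model(x) + sum_j w_j * phi(||x - x_j||).
    The scaling factor rho and RBF weights w are solved jointly by least
    squares on the HF responses (illustrative sketch only)."""
    r = np.linalg.norm(x_hf[:, None, :] - x_hf[None, :, :], axis=-1)
    phi = gaussian_rbf(r, eps)                      # n_hf x n_hf kernel matrix
    A = np.hstack([lf_model(x_hf)[:, None], phi])   # design matrix [rho | w]
    coef, *_ = np.linalg.lstsq(A, y_hf, rcond=None)
    rho, w = coef[0], coef[1:]

    def predict(x_new):
        r_new = np.linalg.norm(x_new[:, None, :] - x_hf[None, :, :], axis=-1)
        return rho * lf_model(x_new) + gaussian_rbf(r_new, eps) @ w

    return predict

# Toy 1-D example: a cheap LF model plus a few expensive HF samples (assumed).
lf = lambda x: 0.5 * np.sin(8 * x[:, 0])                  # hypothetical LF model
hf = lambda x: np.sin(8 * x[:, 0]) + 0.2 * x[:, 0]        # hypothetical HF truth
x_hf = np.linspace(0, 1, 6)[:, None]
surrogate = fit_mfs_rbf(x_hf, hf(x_hf), lf, eps=3.0)
print(surrogate(np.linspace(0, 1, 5)[:, None]))
```

In practice the kernel width and the placement of HF samples would be chosen far more carefully than in this toy setting.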
A multi-fidelity surrogate model based on support vector regression
by Sun, Wei; Lv, Liye; Song, Xueguan
in Accuracy; Algorithms; Computational Mathematics and Numerical Analysis
2020
Computational simulations with different fidelities have been widely used in engineering design and optimization. A high-fidelity (HF) model is generally more accurate but also more time-consuming than the corresponding low-fidelity (LF) model. To take advantage of both HF and LF models, a number of multi-fidelity surrogate (MFS) models based on different surrogate models (e.g., Kriging, response surface, and radial basis function) have been developed, but MFS models based on support vector regression are rarely reported. In this paper, a new MFS model based on support vector regression, named Co_SVR, is developed. In the proposed method, the HF and LF samples are mapped into a high-dimensional feature space through a kernel function, and a linear model is then used to evaluate the relationship between inputs and outputs. The root mean square error (RMSE) of the HF responses of interest is used to express the training error of Co_SVR, and a heuristic algorithm, the grey wolf optimizer, is used to obtain the optimal parameters. For verification, the Co_SVR model is compared with four popular multi-fidelity surrogate models and four single-fidelity surrogate models through a number of numerical cases and a pressure relief valve design problem. The results show that Co_SVR provides competitive performance in both the numerical cases and the practical case. Moreover, the effects of key factors (i.e., the correlation between HF and LF models, the cost ratio of HF to LF models, and the combination of HF and LF samples) on the performance of Co_SVR are also explored. (An illustrative code sketch follows this record.)
Journal Article
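As a hedged sketch of fusing fidelities with support vector regression, the snippet below feeds the LF prediction to an SVR as an extra input feature and selects C and gamma by random search on the training RMSE of the HF responses. The random search stands in for the grey wolf optimizer, and the Forrester-style f_hf/f_lf functions are assumptions, so this is not necessarily the paper's exact Co_SVR formulation.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical HF/LF functions and samples (placeholders for real simulators).
def f_hf(x): return (6 * x - 2) ** 2 * np.sin(12 * x - 4)       # HF "truth"
def f_lf(x): return 0.5 * f_hf(x) + 10 * (x - 0.5) - 5          # cheaper, biased LF

x_hf = np.linspace(0, 1, 8)[:, None]
y_hf = f_hf(x_hf).ravel()

# Use the LF prediction as an extra input feature for an SVR trained on HF data,
# a common way to fuse fidelities with kernel regressors.
def make_features(x):
    return np.hstack([x, f_lf(x)])

best_rmse, best_model = np.inf, None
rng = np.random.default_rng(0)
for _ in range(50):                      # random search stands in for the paper's
    C = 10 ** rng.uniform(-1, 3)         # grey wolf optimizer (hedged substitution)
    gamma = 10 ** rng.uniform(-2, 2)
    model = SVR(C=C, gamma=gamma).fit(make_features(x_hf), y_hf)
    rmse = np.sqrt(np.mean((model.predict(make_features(x_hf)) - y_hf) ** 2))
    if rmse < best_rmse:
        best_rmse, best_model = rmse, model

print(best_model.predict(make_features(np.linspace(0, 1, 5)[:, None])))
```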
The Digital Twin in Medicine: A Key to the Future of Healthcare?
by Tianze Sun; Xiwang He; Xueguan Song
in Accuracy; Artificial intelligence; artificial intelligence (AI)
2022
In recent years there has been a growing need for precise diagnosis and personalized treatment of disease. Providing treatment tailored to each patient and maximizing efficacy and efficiency are broad goals of the healthcare system. As an engineering concept that connects a physical entity with its digital space, the digital twin (DT) entered our lives at the beginning of Industry 4.0. It is regarded as a revolution in many industrial fields and has shown the potential to be widely used in medicine. This technology can offer innovative solutions for precise diagnosis and personalized treatment processes. Although there are difficulties in data collection, data fusion, and accurate simulation at this stage, we anticipate that the DT will see increasing use in the future and will become a new platform for personal health management and healthcare services. We introduce DT technology and discuss the advantages and limitations of its applications in the medical field. This article aims to provide the perspective that, by combining Big Data, the Internet of Things (IoT), and artificial intelligence (AI) technologies, the DT will help establish high-resolution models of patients to achieve precise diagnosis and personalized treatment.
Journal Article
A multi-fidelity surrogate model based on moving least squares: fusing different fidelity data for engineering design
2021
In numerical simulations, a high-fidelity (HF) simulation is generally more accurate than a low-fidelity (LF) simulation, while the latter is generally more computationally efficient than the former. To take advantage of both HF and LF simulations, an adaptive multi-fidelity surrogate (MFS) model based on moving least squares (MLS), termed MFS-MLS, is proposed. MFS-MLS calculates the LF scaling factors and the unknown coefficients of the discrepancy function simultaneously using an extended MLS model. In the proposed method, HF samples are not treated as equally important when constructing MFS-MLS models, and adaptive weightings are given to different HF samples. Moreover, both the size of the influence domain and the scaling factors can be determined adaptively according to the training samples. The MFS-MLS model is compared with three state-of-the-art MFS models and three single-fidelity surrogate models in terms of prediction accuracy through multiple benchmark numerical cases and an engineering problem. In addition, the effects of key factors on the performance of the MFS-MLS model, such as the correlation between HF and LF models, the cost ratio of HF to LF samples, and the combination of HF and LF samples, are also investigated. The results show that MFS-MLS is able to provide competitive performance with high computational efficiency. (An illustrative code sketch follows this record.)
Journal Article
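To make the moving-least-squares idea concrete, here is a generic sketch (not the adaptive MFS-MLS of the paper): at each query point a locally weighted least-squares problem is solved for an LF scaling factor plus a linear discrepancy term, with Gaussian weights acting as the influence domain. The toy hf/lf functions and the fixed radius are assumptions.

```python
import numpy as np

def mls_mfs_predict(x_query, x_hf, y_hf, lf_model, radius=0.3):
    """Illustrative moving-least-squares multi-fidelity prediction:
    at each query point, solve a locally weighted least-squares fit of
    y_hf ~ rho * lf_model(x) + c0 + c1 * x, with Gaussian weights that
    emphasise HF samples near the query point."""
    preds = []
    y_lf_hf = lf_model(x_hf)
    for xq in np.atleast_1d(x_query):
        w = np.exp(-((x_hf - xq) / radius) ** 2)          # influence-domain weights
        A = np.column_stack([y_lf_hf, np.ones_like(x_hf), x_hf])
        W = np.diag(w)
        coef, *_ = np.linalg.lstsq(W @ A, W @ y_hf, rcond=None)
        rho, c0, c1 = coef
        preds.append(rho * lf_model(np.array([xq]))[0] + c0 + c1 * xq)
    return np.array(preds)

# Toy 1-D check with hypothetical HF/LF functions.
hf = lambda x: np.sin(2 * np.pi * x) + 0.3 * x
lf = lambda x: 0.8 * np.sin(2 * np.pi * x)
x_hf = np.linspace(0, 1, 7)
print(mls_mfs_predict(np.array([0.25, 0.6]), x_hf, hf(x_hf), lf))
```

The paper adapts both the weights and the influence-domain size per sample; here they are fixed for brevity.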
Robust optimization of foam-filled thin-walled structure based on sequential Kriging metamodel
by Li, Qing; Sun, Guangyong; Song, Xueguan
in Aerospace industry; Automobile industry; Automotive engineering
2014
Deterministic optimization has been successfully applied to a range of design problems involving foam-filled thin-walled structures and has, to some extent, gained significant confidence for the application of such structures in the automotive, aerospace, transportation, and defense industries. However, a conventional deterministic design can become less meaningful, or even unacceptable, when the perturbations of design variables and the noise of system parameters are considered. To overcome this drawback, a robust design methodology is presented in this paper to address the effects of parametric uncertainties of a foam-filled thin-walled structure on design optimization, in which different sigma criteria are adopted to measure the variations. The Kriging modeling technique is used to construct surrogate models of the mean and standard deviation of the different crashworthiness criteria. A sequential sampling approach is introduced to improve the fitting accuracy of these surrogate models. Finally, a gradient-based sequential quadratic programming (SQP) method is employed from 20 different initial points to obtain a quasi-global robust optimum solution. The optimal solutions were verified using Monte Carlo simulation. The results show that the presented robust optimization method is fairly effective and efficient, and that the crashworthiness and robustness of the foam-filled thin-walled structure can be improved significantly. (An illustrative code sketch follows this record.)
Journal Article
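The sigma-criterion idea can be illustrated with a small sketch: a Gaussian-process (Kriging-type) surrogate is fitted to a toy response, a k-sigma robust objective (mean plus k standard deviations under design-variable perturbations) is estimated by Monte Carlo, and SLSQP is run from 20 starting points. The response function, the noise level sigma_x, and k = 6 are assumptions; the real study builds separate surrogates for the mean and standard deviation of crashworthiness criteria.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from scipy.optimize import minimize

# Hypothetical 1-D crashworthiness-style response standing in for an FE simulation.
def response(x):
    return np.sin(3 * x) + 0.1 * x ** 2

# Kriging (Gaussian process) surrogate fitted to a handful of samples.
x_train = np.linspace(0, 3, 12)[:, None]
gp = GaussianProcessRegressor(normalize_y=True).fit(x_train, response(x_train.ravel()))

def robust_objective(x, k=6.0, sigma_x=0.05, n_mc=200, seed=0):
    """k-sigma robust objective: mean + k * std of the surrogate response
    under Gaussian perturbations of the design variable (illustrative only)."""
    rng = np.random.default_rng(seed)
    xs = x[0] + sigma_x * rng.standard_normal(n_mc)
    y = gp.predict(xs[:, None])
    return y.mean() + k * y.std()

# Multistart gradient-based search (SLSQP) for a quasi-global robust optimum.
starts = np.linspace(0.1, 2.9, 20)
best = min((minimize(robust_objective, [s], method='SLSQP', bounds=[(0, 3)])
            for s in starts), key=lambda r: r.fun)
print(best.x, best.fun)
```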
Industrial Data Denoising via Low-Rank and Sparse Representations and Its Application in Tunnel Boring Machine
2022
The operation data of a tunnel boring machine (TBM) reflect its geological conditions and working status and can provide critical references and essential information for TBM designers and operators. In practice, however, operation data may become corrupted due to equipment failures or data-management errors. Moreover, the working state of a TBM system usually changes, so the patterns in the operation data vary considerably. This paper proposes a denoising approach for processing the corrupted data that combines low-rank matrix recovery (LRMR) with sparse representation (SR) theory. The classical LRMR model requires the noise to be sparse, but this sparsity cannot be fully guaranteed. In the proposed model, a weighted nuclear norm is utilized to enhance the sparsity of the sparse components, and a constraint on the condition number is applied to ensure the stability of the model solution. The approach is coupled with a fuzzy c-means (FCM) algorithm to find the natural partitioning using the TBM operation data as input. The performance of the proposed approach is illustrated through an application to the Shenzhen metro. Experimental results show that the proposed approach performs well in denoising corrupted TBM data, and the recognition accuracy of the different TBM excavation statuses is improved remarkably after denoising. (An illustrative code sketch follows this record.)
Journal Article
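For intuition about splitting operation data into a clean low-rank part and sparse corruption, the sketch below implements the classical principal-component-pursuit baseline with alternating singular-value and soft thresholding. It does not include the weighted nuclear norm, the condition-number constraint, or the FCM clustering described in the entry, and the toy rank-1 data are an assumption.

```python
import numpy as np

def svd_shrink(M, tau):
    """Singular-value soft-thresholding."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft_shrink(M, tau):
    """Element-wise soft-thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def rpca(D, lam=None, mu=None, n_iter=200):
    """Minimal principal-component-pursuit sketch: split a data matrix D into
    a low-rank part L (clean signal) and a sparse part S (gross corruption)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.abs(D).sum()
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svd_shrink(D - S + Y / mu, 1.0 / mu)
        S = soft_shrink(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
    return L, S

# Toy example: rank-1 "operation data" plus sparse spikes.
rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(60), rng.standard_normal(20))
spikes = (rng.random((60, 20)) < 0.05) * 10.0
L, S = rpca(clean + spikes)
print(np.abs(L - clean).max())
```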
A Novel Hybrid Transfer Learning Framework for Dynamic Cutterhead Torque Prediction of the Tunnel Boring Machine
by Fu, Tao; Song, Xueguan; Zhang, Tianci
in Algorithms; Clustering; Construction accidents & safety
2022
A tunnel boring machine (TBM) is an important large-scale engineering machine that is widely applied in tunnel construction. Precise cutterhead torque prediction plays an essential role in estimating energy consumption and ensuring safe operation in the tunneling process, since it directly influences the adaptive adjustment of excavation parameters. Complicated and variable geological conditions cause the operational and status parameters of the TBM to exhibit spatio-temporally varying characteristics, which poses a serious challenge to conventional data-based methods for dynamic cutterhead torque prediction. In this study, a novel hybrid transfer learning framework, namely TRLS-SVR, is proposed to transfer knowledge from a historical dataset that may contain multiple working patterns and to alleviate noise interference from fresh data when addressing dynamic cutterhead torque prediction. Compared with conventional data-driven algorithms, TRLS-SVR considers long-term historical data and can effectively extract and leverage the shared latent knowledge implied in historical datasets for the current prediction. A collection of in situ TBM operation data from a tunnel project located in China is utilized to evaluate the performance of the proposed framework. (An illustrative code sketch follows this record.)
Journal Article
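As a simple, hedged stand-in for transferring knowledge from historical to fresh data with SVR (not the TRLS-SVR formulation itself), the sketch below trains a source SVR on a large synthetic "historical" dataset and then fits a second SVR on a small "fresh" dataset augmented with the source model's prediction as a feature. All datasets and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical stand-ins for TBM operation data: a large historical dataset from
# earlier tunnelling sections and a small, noisier fresh dataset from the current one.
x_hist = rng.uniform(0, 1, size=(300, 3))
y_hist = np.sin(2 * np.pi * x_hist[:, 0]) + x_hist[:, 1]           # "historical pattern"
x_fresh = rng.uniform(0, 1, size=(20, 3))
y_fresh = (np.sin(2 * np.pi * x_fresh[:, 0]) + x_fresh[:, 1]
           + 0.3 * x_fresh[:, 2] + 0.05 * rng.standard_normal(20))  # shifted + noisy

# Simple transfer baseline: learn the shared pattern from history, then feed its
# prediction to a second SVR as an extra feature and fit only the fresh data on top.
source = SVR(C=10.0, gamma='scale').fit(x_hist, y_hist)
aug = lambda x: np.hstack([x, source.predict(x)[:, None]])
target = SVR(C=10.0, gamma='scale').fit(aug(x_fresh), y_fresh)

x_new = rng.uniform(0, 1, size=(5, 3))
print(target.predict(aug(x_new)))
```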
Novel Hybrid Physics-Informed Deep Neural Network for Dynamic Load Prediction of Electric Cable Shovel
by Fu, Tao; Zhang, Tianci; Cui, Yunhao
in Artificial neural networks; Dynamic load prediction; Dynamic loads
2022
The electric cable shovel (ECS) is a complex piece of production equipment that is widely utilized in open-pit mines. Rational load estimation is the foundation for the development of intelligent or unmanned ECSs, since it directly influences the planning of digging trajectories and energy consumption. Load prediction for an ECS mainly relies on two types of methods: physics-based modeling and data-driven methods. The former is based on known physical laws but is usually a necessary approximation of reality, owing to incomplete knowledge of certain processes, which introduces bias. The latter captures features and patterns from data in an end-to-end manner without relying on domain expertise, but it requires a large amount of accurately labeled data to achieve generalization, which introduces variance. In addition, some parts of the load are non-observable and latent and cannot be measured by the actual system's sensors, so they cannot be predicted by purely data-driven methods. Herein, an innovative hybrid physics-informed deep neural network (HPINN) architecture, which combines physics-based models and data-driven methods to predict the dynamic load of an ECS, is presented. In the proposed framework, some parts of the theoretical model are incorporated directly, while the difficult-to-model part is captured by training a highly expressive approximator on data. Prior physics knowledge, such as Lagrangian mechanics and the conservation of energy, is treated as extra constraints and embedded in the overall loss function to keep model training in a feasible solution space. The satisfactory performance of the proposed framework is verified on both synthetic and actual measurement datasets. (An illustrative code sketch follows this record.)
Journal Article
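The hybrid idea of "known physics term plus learned residual, trained with a physics penalty in the loss" can be sketched with a tiny parametric model in place of a deep network. Below, a polynomial residual is added to a known dynamics term and fitted by minimizing a composite loss: data misfit on sparse load labels plus a penalty when the implied mechanical power exceeds the available motor power. The toy dynamics, the power-limit constraint, and the polynomial basis are all assumptions, not the HPINN architecture itself.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical digging cycle: position q, velocity v, acceleration a, available power.
t = np.linspace(0, 10, 200)
q, v, a = np.sin(t), np.cos(t), -np.sin(t)
p_avail = 40.0 + 5.0 * np.cos(0.5 * t)                 # available motor power (toy)

m_eff, c_eff = 20.0, 3.0                               # "known" physics parameters
unmodeled = 6.0 * np.sin(2 * q)                        # difficult-to-model load part
f_true = m_eff * a + c_eff * v + unmodeled
labeled = rng.choice(len(t), size=25, replace=False)   # sparse load measurements

def predict(theta):
    """Hybrid model: known dynamics term + learned polynomial residual in q."""
    residual = theta[0] + theta[1] * q + theta[2] * q ** 2 + theta[3] * np.sin(2 * q)
    return m_eff * a + c_eff * v + residual

def loss(theta, lam=1.0):
    """Composite physics-informed loss: data misfit on sparse labels plus a
    penalty whenever the implied power F*v exceeds the available motor power."""
    f_pred = predict(theta)
    data = np.mean((f_pred[labeled] - f_true[labeled]) ** 2)
    physics = np.mean(np.maximum(f_pred * v - p_avail, 0.0) ** 2)
    return data + lam * physics

theta_opt = minimize(loss, x0=np.zeros(4), method='L-BFGS-B').x
print(np.sqrt(np.mean((predict(theta_opt) - f_true) ** 2)))   # full-trajectory RMSE
```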
A fast active learning method in design of experiments: multipeak parallel adaptive infilling strategy based on expected improvement
by Wang, Shuo; Zhang, Yang; Lv, Liye
in Computational Mathematics and Numerical Analysis; Correlation analysis; Design engineering
2021
Surrogate models are widely used in simulation-based engineering design. The distribution of samples directly determines the quality and efficiency of surrogate models and has a significant influence on follow-up work. This paper proposes a multipeak parallel adaptive infilling (MPEI) strategy based on expected improvement (EI), which consists of two stages: the construction of candidate peak areas and the selection of appropriate candidates within those areas. In the first stage, the candidates are assigned to the corresponding subspaces according to their EI values and positions to construct the candidate peak areas. In the second stage, a Gaussian function is used to extract the uncorrelated parent point and the corresponding offspring points in each candidate peak area. Through these stages, the MPEI strategy selects multiple new samples in regions containing both local optima and areas of large uncertainty, which balances global exploration and local exploitation. In addition, the samples selected in each candidate peak area are concise and locally uniform, which effectively reduces the computational cost. Seven benchmark cases and one engineering problem are used to validate the performance of the MPEI strategy. The results show that the MPEI strategy can efficiently reach the desired prediction accuracy of surrogate models at the small cost of a few samples, confirming the feasibility and robustness of the presented methodology. (An illustrative code sketch follows this record.)
Journal Article
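The acquisition function that MPEI builds on, expected improvement under a Gaussian-process surrogate, is easy to sketch. The snippet below computes EI on a dense candidate set and then picks a few mutually distant high-EI points as a naive parallel infill; the distance-based filter is only a stand-in for the paper's candidate peak areas and parent/offspring selection, and the toy objective and 0.3 spacing threshold are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, x_cand, y_best):
    """EI for minimisation under a GP surrogate."""
    mu, std = gp.predict(x_cand, return_std=True)
    std = np.maximum(std, 1e-12)
    z = (y_best - mu) / std
    return (y_best - mu) * norm.cdf(z) + std * norm.pdf(z)

# Toy 1-D objective and a small initial design (assumed).
f = lambda x: np.sin(3 * x) + 0.5 * x
x_train = np.array([[0.1], [0.4], [0.9], [1.6], [2.8]])
gp = GaussianProcessRegressor(normalize_y=True).fit(x_train, f(x_train.ravel()))

x_cand = np.linspace(0, 3, 500)[:, None]
ei = expected_improvement(gp, x_cand, y_best=f(x_train.ravel()).min())

# Naive "parallel infill": take several high-EI candidates that are not too
# close to each other (a stand-in for the paper's candidate peak areas).
order = np.argsort(-ei)
picked = []
for idx in order:
    if all(abs(x_cand[idx, 0] - x_cand[j, 0]) > 0.3 for j in picked):
        picked.append(idx)
    if len(picked) == 3:
        break
print(x_cand[picked].ravel())
```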
Physics-Informed Neural Networks-Based Online Excavation Trajectory Planning for Unmanned Excavator
by Bi, Qiushi; Fu, Tao; Hu, Zhengguo
in Autonomous excavation; Constraints; Electrical Machines and Networks
2024
As a large-scale mining excavator, the electric shovel (ES) has been extensively employed in open-pit mines for overburden removal and mineral loading. In the development of unmanned operation of the ES, dynamic excavation trajectory planning is essential, as it directly influences operational efficiency and energy consumption by guiding the dipper during excavation. However, conventional optimization-based methods for excavation trajectory planning typically start from scratch, resulting in a time-consuming process that fails to meet real-time requirements. To address this challenge, we propose an innovative online trajectory planning framework based on physics-informed neural networks (PINNOTP) that utilizes advanced data-driven techniques. The input to PINNOTP consists of the on-site working conditions, including the initial state of the ES and the material surface being excavated. The output is a smooth, polynomial-based curve that serves as the reference trajectory for the dipper. To ensure smooth execution of the generated trajectory, prior domain knowledge, such as physics-based target-oriented constraints, essential system dynamics, and mechanical constraints, is explicitly incorporated into the loss function during training. A case study validates the proposed method, demonstrating that PINNOTP effectively addresses the challenges of online excavation trajectory planning. (An illustrative code sketch follows this record.)
Journal Article
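To illustrate the flavour of embedding constraints in a trajectory-planning loss, the sketch below optimizes the coefficients of a quintic dipper path directly, penalizing endpoint error, curvature, and violation of a clearance constraint against a toy material surface. This optimizes a single trajectory rather than training a network that maps working conditions to trajectories, as PINNOTP does, and the surface, targets, and penalty weights are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical set-up: plan a dipper-tip path y(x) on [0, 1] as a quintic
# polynomial, from a start point on the material surface to a target dump height.
surface = lambda x: 0.3 + 0.2 * np.sin(3 * x)          # material surface (toy)
x_path = np.linspace(0.0, 1.0, 100)
start, target = np.array([0.0, surface(0.0)]), np.array([1.0, 0.9])

def path(coef, x):
    return np.polyval(coef, x)                          # quintic: 6 coefficients

def loss(coef, lam_c=50.0, lam_s=1.0):
    """Trajectory loss: endpoint targets + smoothness + a penalty when the
    path dips below the material surface (stand-in mechanical constraint)."""
    y = path(coef, x_path)
    endpoint = (y[0] - start[1]) ** 2 + (y[-1] - target[1]) ** 2
    curvature = np.mean(np.gradient(np.gradient(y, x_path), x_path) ** 2)
    below = np.maximum(surface(x_path) - y, 0.0)        # constraint violation
    return endpoint + lam_s * curvature + lam_c * np.mean(below ** 2)

coef = minimize(loss, x0=np.zeros(6), method='L-BFGS-B').x
print(path(coef, np.array([0.0, 0.5, 1.0])))
```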