Catalogue Search | MBRL
423 result(s) for "Peng, Zhiping"
Estimation of Paddy Rice Nitrogen Content and Accumulation Both at Leaf and Plant Levels from UAV Hyperspectral Imagery
2021
Remote sensing-based mapping of crop nitrogen (N) status is beneficial for precision N management over large geographic regions. Both leaf- and canopy-level nitrogen content and accumulation are valuable for crop nutrient diagnosis. However, previous studies have mainly focused on leaf nitrogen content (LNC) estimation, and the effects of growth stage on modeling accuracy have not been widely discussed. This study aimed to estimate four paddy rice N traits from unmanned aerial vehicle (UAV)-based hyperspectral images: LNC, plant nitrogen content (PNC), leaf nitrogen accumulation (LNA) and plant nitrogen accumulation (PNA). Additionally, the effects of growth stage were evaluated. Univariate regression models on vegetation indices (VIs), the traditional multivariate calibration method of partial least squares regression (PLSR), and modern machine learning (ML) methods, including the artificial neural network (ANN), random forest (RF) and support vector machine (SVM), were evaluated both over the whole growing season and within each single growth stage (tillering, jointing, booting and heading). The results indicate that the correlations between the four nitrogen traits and three other biochemical traits (leaf chlorophyll content, canopy chlorophyll content and aboveground biomass) are affected by growth stage. Within a single growth stage, the performance of the selected VIs is relatively constant; for the full-growth-stage models, the performance of the VI-based models is more varied. For the full-growth-stage models, the ratio of the transformed chlorophyll absorption in reflectance index to the optimized soil-adjusted vegetation index (TCARI/OSAVI) performs best for LNC, PNC and PNA estimation, while the three-band vegetation index TBVI_Tian performs best for LNA estimation. No obvious pattern emerges as to which of PLSR, ANN, RF and SVM performs best, in either the growth-stage-specific or the full-growth-stage models.
For the growth-stage-specific models, a lower mean relative error (MRE) and higher R2 are obtained at the tillering and jointing growth stages. The PLSR and ML methods yield clearly better estimation accuracy than the VI-based models for the full-growth-stage models; for the growth-stage-specific models, the performance of the VI-based models is near-optimal and is not clearly surpassed. These results suggest that building linear regression models on VIs for paddy rice nitrogen trait estimation is still a reasonable choice when only a single growth stage is involved. However, when multiple growth stages are involved or phenology information is missing, PLSR or ML methods are the better option.
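The best-performing index ratio above, TCARI/OSAVI, has a standard published formulation. A minimal sketch, computing it from narrow-band reflectance; the band centers (550, 670, 700, 800 nm) follow the common definition and are not necessarily the exact bands used in this study:

```python
# TCARI/OSAVI from narrow-band reflectance values in [0, 1].
# Band choices (550/670/700/800 nm) follow the widely used formulation;
# this study's exact band centers may differ.

def tcari(r550: float, r670: float, r700: float) -> float:
    """Transformed Chlorophyll Absorption in Reflectance Index."""
    return 3.0 * ((r700 - r670) - 0.2 * (r700 - r550) * (r700 / r670))

def osavi(r670: float, r800: float) -> float:
    """Optimized Soil-Adjusted Vegetation Index (soil adjustment factor 0.16)."""
    return (1.0 + 0.16) * (r800 - r670) / (r800 + r670 + 0.16)

def tcari_osavi(r550: float, r670: float, r700: float, r800: float) -> float:
    """The combined ratio used for LNC, PNC and PNA estimation."""
    return tcari(r550, r670, r700) / osavi(r670, r800)

# Example pixel with typical healthy-canopy reflectances:
print(round(tcari_osavi(0.08, 0.05, 0.15, 0.45), 4))
```

A univariate regression model then maps this ratio to the nitrogen trait of interest.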
Journal Article
Generalized transformer in fault diagnosis of Tennessee Eastman process
by Song, Zhihuan; Zhang, Qinghua; Peng, Zhiping
in Ablation; Artificial Intelligence; Belief networks
2022
Fault diagnosis is an important yet challenging task. Because of the powerful feature representation capabilities of deep models, intelligent fault diagnosis based on deep learning has become a research hotspot in the field. Although many deep models, such as the sparse autoencoder and the deep belief network, have been developed for fault diagnosis with encouraging performance, fully integrating the merits of deep learning into fault diagnosis still has a long way to go. In this paper, we propose a novel method, the generalized transformer. Compared with previous deep models, the generalized transformer captures relations among inputs and the nonlinearity between inputs and outputs through an attention mechanism. To handle structured data, the generalized transformer further borrows ideas from the graph attention network: replacing the dot product between query and key information in the transformer, we introduce a feed-forward network with a learned weight vector to compute the similarity. By limiting similarity calculations to a neighboring region, prior knowledge can be injected into the generalized transformer. On the Tennessee Eastman process dataset, the new model achieves performance that is better than, or competitive with, state-of-the-art models. Extensive ablation studies validate the effectiveness of the proposed model.
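The attention variant described in the abstract (a learned feed-forward scorer in place of the query-key dot product, restricted to a prior-knowledge neighborhood) follows the graph-attention pattern. A minimal NumPy sketch; all shapes, the LeakyReLU slope, and the band-diagonal mask are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def neighborhood_attention(x, Wq, Wk, a, mask):
    """GAT-style attention: score(i, j) = a . LeakyReLU([Wq x_i ; Wk x_j]),
    evaluated only where mask[i, j] is True (the prior-knowledge neighborhood)."""
    n = x.shape[0]
    q, k = x @ Wq, x @ Wk                        # projected queries / keys
    scores = np.full((n, n), -np.inf)            # -inf => zero weight after softmax
    for i in range(n):
        for j in range(n):
            if mask[i, j]:
                z = np.concatenate([q[i], k[j]])
                z = np.where(z > 0, z, 0.2 * z)  # LeakyReLU, slope 0.2
                scores[i, j] = a @ z
    # Row-wise softmax over the allowed neighbors only
    scores -= scores.max(axis=1, keepdims=True)
    w = np.exp(scores)
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                      # 4 inputs, 3 features each
Wq = rng.normal(size=(3, 2))
Wk = rng.normal(size=(3, 2))
a = rng.normal(size=4)                           # weight vector over [q_i ; k_j]
# Band-diagonal mask: each input attends only to itself and adjacent inputs
mask = (np.eye(4, dtype=bool) | np.eye(4, k=1, dtype=bool)
        | np.eye(4, k=-1, dtype=bool))
att = neighborhood_attention(x, Wq, Wk, a, mask)
```

Entries outside the neighborhood receive exactly zero attention weight, which is how the prior knowledge enters the model.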
Journal Article
A multi-objective trade-off framework for cloud resource scheduling based on the Deep Q-network algorithm
2020
Cloud computing, now a mature computing service, provides an efficient and economical solution for processing big data. As such, it attracts considerable attention in academia and plays an important role in industrial applications. With the recent growth in the scale of cloud computing data centers and rising user service quality requirements, the structure of the whole cloud system has become more complex, which makes resource scheduling management more challenging. The goal of this research was therefore to resolve the conflict between cloud service providers (CSPs), who aim to minimize energy costs, and users, who seek optimal service quality. Building on the environmental awareness and online adaptive decision-making ability of deep reinforcement learning (DRL), we propose an online resource scheduling framework based on the Deep Q-network (DQN) algorithm. The framework trades off the two optimization objectives, energy consumption and task makespan, by adjusting the proportion of the reward assigned to each objective. Experimental results show that the framework effectively balances energy consumption and task makespan and exhibits clear optimization gains over the baseline algorithm. The proposed framework can thus dynamically adjust the system's optimization objective according to the varying requirements of the cloud system.
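The trade-off mechanism described (adjusting the proportion of the reward given to each objective) can be sketched as a weighted scalar reward for the DQN agent. The linear combination and sign convention below are illustrative assumptions, not the paper's exact reward function:

```python
def tradeoff_reward(energy: float, makespan: float, w: float) -> float:
    """Scalar reward for the DQN agent: w weights energy against makespan.
    w = 1.0 optimizes energy consumption only; w = 0.0 optimizes makespan only.
    (Illustrative linear combination; the paper's exact reward may differ.)"""
    assert 0.0 <= w <= 1.0
    return -(w * energy + (1.0 - w) * makespan)

# Shifting w changes which objective dominates the learned policy:
energy_heavy = tradeoff_reward(energy=10.0, makespan=2.0, w=0.9)   # penalizes energy
latency_heavy = tradeoff_reward(energy=10.0, makespan=2.0, w=0.1)  # penalizes makespan
```

Because the agent maximizes expected reward, raising w steers the learned policy toward energy savings at the expense of makespan, and vice versa.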
Journal Article
A novel cloud task scheduling framework using hierarchical deep reinforcement learning for cloud computing
by Deng, Xiangwu; Li, Kaibin; He, Jieguang
in Algorithms; Biology and Life Sciences; Cloud computing
2025
With the increasing popularity of cloud computing services, their large and dynamic load characteristics have rendered task scheduling an NP-complete problem. To address the challenges of large-scale task scheduling in a cloud computing environment, this paper proposes a novel cloud task scheduling framework using hierarchical deep reinforcement learning (DRL). The framework defines a set of virtual machines (VMs) as a VM cluster and employs hierarchical scheduling to allocate tasks first to a cluster and then to individual VMs. The scheduler, designed using DRL, adapts to dynamic changes in the cloud environment by continuously learning and updating network parameters. In low-load situations, costs are reduced by using low-cost nodes within the Service Level Agreement (SLA) range; in high-load situations, resource utilization is improved through load balancing. Compared with classical heuristic algorithms, the framework effectively optimizes load balancing, cost and overdue time, achieving a 10% overall improvement, and the experimental results demonstrate that it balances cost and performance. One potential shortcoming of the proposed hierarchical DRL framework is its complexity and computational overhead: implementing and maintaining a DRL-based scheduler requires significant computational resources and machine learning expertise. The method also has remaining limitations. First, the continuous learning and updating of network parameters may introduce latency, which could impact real-time task scheduling efficiency.
Furthermore, the framework’s performance heavily depends on the quality and quantity of training data, which might be challenging to obtain and maintain in a dynamic cloud environment.
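The two-level allocation described (task to VM cluster, then to an individual VM) can be sketched without the DRL machinery; greedy least-loaded selection stands in for the two learned policies here, purely for illustration:

```python
def schedule(task_size: float, clusters: list[list[float]]) -> tuple[int, int]:
    """Hierarchical placement: pick the least-loaded cluster, then the
    least-loaded VM inside it. (Greedy stand-in for the two DRL schedulers;
    clusters[c][v] is the current load of VM v in cluster c.)"""
    c = min(range(len(clusters)), key=lambda i: sum(clusters[i]))
    v = min(range(len(clusters[c])), key=lambda j: clusters[c][j])
    clusters[c][v] += task_size       # commit the placement
    return c, v

clusters = [[0.6, 0.2], [0.1, 0.3]]   # two clusters of two VMs each
placement = schedule(0.5, clusters)   # cluster 1 has the lower total load
```

In the paper's framework each `min` would be replaced by a trained DRL policy, but the decision structure (cluster first, then VM) is the same.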
Journal Article
Structural Alteration of Gut Microbiota during the Amelioration of Human Type 2 Diabetes with Hyperlipidemia by Metformin and a Traditional Chinese Herbal Formula: a Multicenter, Randomized, Open Label Clinical Trial
2018
Accumulating evidence implicates the gut microbiota as a promising target for the treatment of type 2 diabetes mellitus (T2DM). In a randomized clinical trial, we tested the hypothesis that alteration of the gut microbiota may be involved in the alleviation of T2DM with hyperlipidemia by metformin and a specifically designed herbal formula (AMC). Four hundred fifty patients with T2DM and hyperlipidemia were randomly assigned to either the metformin- or the AMC-treated group. After 12 weeks of treatment, 100 patients were randomly selected from each group and assessed for clinical improvement. The effects of the two drugs on the intestinal microbiota were evaluated by analyzing the V3 and V4 regions of the 16S rRNA gene by Illumina sequencing and multivariate statistical methods. Both metformin and AMC significantly alleviated hyperglycemia and hyperlipidemia and shifted the gut microbiota structure in diabetic patients. Both significantly increased a coabundant group represented by Blautia spp., which correlated significantly with the improvements in glucose and lipid homeostasis. However, AMC showed better efficacy in improving homeostasis model assessment of insulin resistance (HOMA-IR) and plasma triglyceride and also exerted a larger effect on the gut microbiota. Furthermore, only AMC increased the coabundant group represented by Faecalibacterium spp., which was previously reported to be associated with the alleviation of T2DM in a randomized clinical trial. Metformin and the Chinese herbal formula may therefore ameliorate type 2 diabetes with hyperlipidemia by enriching beneficial bacteria such as Blautia and Faecalibacterium spp. IMPORTANCE Metabolic diseases such as T2DM and obesity have become a worldwide public health threat. Accumulating evidence indicates that the gut microbiota can causally contribute to metabolic diseases, and thus the gut microbiota serves as a promising target for disease control.
In this study, we evaluated the role of gut microbiota during improvements in hyperglycemia and hyperlipidemia by two drugs: metformin and a specifically designed Chinese herbal formula (AMC) for diabetic patients with hyperlipidemia. Both drugs significantly ameliorated blood glucose and lipid levels and shifted the gut microbiota. Blautia spp. were identified as being associated with improvements in glucose and lipid homeostasis for both drugs. AMC exerted larger effects on the gut microbiota together with better efficacies in improving HOMA-IR and plasma triglyceride levels, which were associated with the enrichment of Faecalibacterium spp. In brief, these data suggest that gut microbiota might be involved in the alleviation of diabetes with hyperlipidemia by metformin and the AMC herbal formula.
Journal Article
Hair keratin promotes wound healing in rats with combined radiation-wound injury
2020
Keratins derived from human hair have been suggested to be particularly effective in general surgical wound healing. However, the healing of a combined radiation-wound injury is a multifaceted regenerative process. Here, hydrogels fabricated from human hair keratins were used to test wound healing effects on rats suffering from combined radiation-wound injuries. Briefly, the keratin extracts were verified by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and amino acid analysis, and the keratin hydrogels were then characterized by morphological observation, Fourier transform infrared spectroscopy and rheological analysis. The results of the cell viability assay indicated that the keratin hydrogels could enhance cell growth after radiation exposure. Furthermore, the keratin hydrogels could accelerate wound repair and improve the survival rate in vivo. The results demonstrate that keratin hydrogels possess a strong ability to accelerate the repair of a combined radiation-wound injury, which opens up new tissue regeneration applications for keratins.
Journal Article
A Sequence Prediction Algorithm Integrating Knowledge Graph Embedding and Dynamic Evolution Process
2025
Sequence prediction is widely applied and has significant theoretical and practical value in fields such as meteorology and medicine. Traditional models such as LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) may outperform the proposed model on short-term dependencies, but their performance can decline on long sequences and complex data, especially when sequence fluctuations are large. The Transformer, in turn, requires large amounts of computing resources for parallel computation on long sequences. To address the problems of existing sequence prediction models, such as insufficient modeling of long-sequence dependencies, limited interpretability and inefficient fusion of heterogeneous multi-source information, this study embeds sequential data into a knowledge graph, enabling the model to draw on contextual information when processing complex data and providing more reasonable decision support for the prediction results. Given a historical sequence and a predicted future sequence, three groups of sequence lengths were set in the experiments, and MAE (Mean Absolute Error) and MSE (Mean Square Error) were used as prediction indicators. In sequence prediction, dynamic evolution helps the prediction model capture the changing patterns of the current time series and significantly improves the reliability of the prediction results. Experiments on five datasets from different application fields verify the effectiveness of the prediction model. The experimental results show that, with randomized prediction time steps, the proposed model significantly improves performance on stationary sequences and addresses shortcomings of the traditional methods, such as maintaining good performance on short sequences with large fluctuations.
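The two evaluation indicators named in the abstract, MAE and MSE, have standard definitions; a minimal sketch:

```python
def mae(y_true: list[float], y_pred: list[float]) -> float:
    """Mean Absolute Error: mean of |y_true - y_pred|."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true: list[float], y_pred: list[float]) -> float:
    """Mean Square Error: mean of (y_true - y_pred)^2."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.5, 2.0, 2.0]
# mae = (0.5 + 0 + 1) / 3 = 0.5; mse = (0.25 + 0 + 1) / 3
```

MSE penalizes large errors more heavily than MAE, which matters on sequences with large fluctuations.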
Journal Article
Phenology Effects on Physically Based Estimation of Paddy Rice Canopy Traits from UAV Hyperspectral Imagery
2021
Radiative transfer models such as PROSAIL are widely used for crop canopy reflectance simulation and biophysical parameter inversion. The PROSAIL model basically assumes that the canopy is a turbid homogeneous medium with a bare-soil background. However, the canopy structure changes as crop growth stages develop, which departs more or less from this assumption. In addition, a paddy rice field is inundated most of the time, with a flooded-soil background. In this study, field-scale paddy rice leaf area index (LAI), leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) were retrieved from unmanned-aerial-vehicle-based hyperspectral images with the PROSAIL radiative transfer model using a lookup table (LUT) strategy, with a special focus on the effects of growth-stage development and soil-background signature selection. Results show that using flooded-soil reflectance as the background reflectance for PROSAIL improves estimation accuracy. When using a LUT with the flooded-soil reflectance signature (LUTflooded), the coefficients of determination (R2) between observed and estimated variables are 0.70, 0.11 and 0.79 for LAI, LCC and CCC, respectively, over the entire growing season (from the tillering to the heading growth stage), and the corresponding mean absolute errors (MAEs) are 21.87%, 16.27% and 12.52%. For LAI and LCC, high model bias mainly occurs in the tillering growth stage, with an obvious overestimation of LAI and underestimation of LCC. The estimation accuracy of CCC is relatively consistent from tillering to heading.
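LUT inversion as described works by simulating canopy reflectance for many candidate parameter values and picking the entry whose spectrum best matches the observation. A conceptual sketch with a toy forward model standing in for PROSAIL (the real model couples leaf and canopy radiative transfer and is far richer; the spectra and the Beer's-law mixing below are illustrative assumptions):

```python
import numpy as np

def toy_forward(lai: float, soil: np.ndarray) -> np.ndarray:
    """Stand-in for PROSAIL: canopy reflectance as a soil/vegetation mixture
    governed by LAI via a Beer's-law gap fraction. Purely illustrative."""
    veg = np.array([0.05, 0.08, 0.45, 0.48])   # toy 4-band vegetation spectrum
    gap = np.exp(-0.5 * lai)                   # fraction of background visible
    return gap * soil + (1.0 - gap) * veg

flooded_soil = np.array([0.03, 0.04, 0.06, 0.07])  # dark, wet background

# Build the LUT over candidate LAI values, using the flooded-soil background
lai_grid = np.linspace(0.5, 8.0, 200)
lut = np.stack([toy_forward(l, flooded_soil) for l in lai_grid])

def invert(observed: np.ndarray) -> float:
    """Return the LUT entry's LAI whose simulated spectrum minimizes RMSE."""
    rmse = np.sqrt(((lut - observed) ** 2).mean(axis=1))
    return float(lai_grid[rmse.argmin()])

obs = toy_forward(3.0, flooded_soil)   # synthetic "measurement"
lai_hat = invert(obs)                  # recovers roughly 3.0
```

Swapping `flooded_soil` for a dry bare-soil spectrum in the LUT construction is exactly the background-signature choice the study evaluates.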
Journal Article
SLA-DQTS: SLA Constrained Adaptive Online Task Scheduling Based on DDQN in Cloud Computing
2021
Task scheduling is key to performance optimization and resource management in cloud computing systems. Because of its complexity, it has been defined as an NP-hard problem. We introduce an online scheme to solve the problem of task scheduling under dynamic load in the cloud environment. After analyzing the process, we propose a service level agreement constrained adaptive online task scheduling algorithm based on double deep Q-learning (SLA-DQTS) to reduce the makespan, cost and average overdue time under the constraints of virtual machine (VM) resources and deadlines. In the algorithm, we keep the model's input dimension independent of the number of VMs by taking the Gaussian distribution of related parameters as part of the state space. Through the design of the reward function, the model can be optimized for different goals and task loads. We evaluate the performance of the algorithm by comparing it with three heuristic algorithms (Min-Min, random and round robin) under different loads. The results show that our algorithm achieves similar or better results than the comparison algorithms at a lower cost.
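Keeping the model's input dimension independent of VM count by summarizing VM parameters with Gaussian statistics can be sketched like this; which parameters and statistics actually enter the state is an assumption for illustration:

```python
import statistics

def vm_state(vm_loads: list[float], vm_costs: list[float]) -> list[float]:
    """Fixed-length state vector: mean and standard deviation of each VM
    parameter, so the DQN input size does not change with the number of VMs.
    (Illustrative choice of parameters and statistics.)"""
    state = []
    for values in (vm_loads, vm_costs):
        state.append(statistics.mean(values))
        state.append(statistics.pstdev(values))
    return state

s5 = vm_state([0.2, 0.5, 0.9, 0.1, 0.4], [1.0, 2.0, 1.5, 1.0, 3.0])
s50 = vm_state([0.3] * 50, [1.0] * 50)
# len(s5) == len(s50): the state dimension is 4 regardless of VM count
```

This is what lets a single fixed-architecture network schedule over a VM pool whose size changes at runtime.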
Journal Article
UDL: a cloud task scheduling framework based on multiple deep neural networks
by Cui, Delong; Zhang, Hao; Peng, Zhiping
in Algorithms; Artificial neural networks; Cloud computing
2023
Cloud task scheduling and resource allocation (TSRA) constitute a core issue in cloud computing. Batch submission is a common user task deployment mode in cloud computing systems. In this mode, it has been a challenge for cloud systems to balance the quality of user service and the revenue of the cloud service provider (CSP). To this end, with the multi-objective optimization (MOO) goal of minimizing task latency and energy consumption, we propose a cloud TSRA framework based on deep learning (DL). The system solves the TSRA problem for multiple task queues and virtual machine (VM) clusters by uniting multiple deep neural networks (DNNs) as the task scheduler of the cloud system. The DNNs are divided into an exploration part and an exploitation part. At each scheduling time step, the model saves the best outputs among the scheduling policies produced by the DNNs to an experienced sample memory pool (SMP), and periodically draws random training samples from the SMP to train each DNN of the exploitation part. We designed a united deep learning (UDL) algorithm based on this framework. Experimental results show that the UDL algorithm effectively solves the MOO problem of TSRA for cloud tasks and outperforms benchmark algorithms such as heterogeneous distributed deep learning (HDDL) in task scheduling performance.
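The ensemble mechanism described (several DNNs propose schedules, the best proposal is kept and stored in a shared memory pool for periodic retraining) can be sketched with simple random linear scorers standing in for trained networks; the cost proxy and every function here are illustrative assumptions:

```python
import random

random.seed(0)

def make_scorer(n_vms: int):
    """Stand-in for one DNN: a random linear scoring of VMs for a task."""
    w = [random.uniform(-1, 1) for _ in range(n_vms)]
    def scorer(task_load: float) -> int:
        return max(range(n_vms), key=lambda v: w[v] * task_load)
    return scorer

def cost(vm: int, task_load: float, vm_speed: list[float]) -> float:
    """Latency proxy used to rank the DNNs' competing proposals."""
    return task_load / vm_speed[vm]

vm_speed = [1.0, 2.0, 0.5]
scorers = [make_scorer(len(vm_speed)) for _ in range(4)]   # the DNN ensemble
memory_pool = []                                           # the experienced SMP

for task_load in [3.0, 1.0, 2.5]:
    proposals = [s(task_load) for s in scorers]            # each DNN proposes a VM
    best_vm = min(proposals, key=lambda v: cost(v, task_load, vm_speed))
    memory_pool.append((task_load, best_vm))               # keep only the best decision
    # (periodically: sample random minibatches from memory_pool to retrain
    #  the exploitation-part DNNs)
```

The key idea survives the simplification: only the best decision among the ensemble's proposals becomes a training sample, so the exploitation networks are trained toward the ensemble's best behavior.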
Journal Article