Search Results: 1,820 results for "statistical process monitoring"
Monitoring of group-structured high-dimensional processes via sparse group LASSO
In a general high-dimensional process, a large number of process parameters or quality characteristics are characterized by their mutual dependencies and relevance. Features with similar characteristics or behaviors in process operation can be categorized into multiple groups. Thus, when a few quality characteristics in the process change, it is highly probable that the process shift has occurred in a few relevant groups. Recently, several advanced statistical process control techniques have been developed to monitor changes in high-dimensional processes under sparsity. However, monitoring schemes that exploit the grouped pattern of the quality characteristics remain scarce. This paper proposes a new method to monitor a high-dimensional process when a grouped structure of the process data is observed. The proposed method identifies the potentially changed groups, and individual variables within those groups, based on a modified sparse group LASSO (MSGL) model. A monitoring statistic is then obtained from the MSGL-based likelihood function to test the abnormality of the process. Extensive numerical studies demonstrate the effectiveness and efficiency of the proposed method. In addition, a real-life application to a liquefied natural gas process illustrates the proposed method.
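The abstract's core recipe can be illustrated compactly. Below is a minimal sketch, not the paper's exact MSGL formulation: estimate a sparse, group-structured mean shift from a standardized observation using the closed-form sparse group LASSO proximal operator (identity design and covariance assumed), then plug the estimate into a likelihood-ratio-style monitoring statistic. The group layout, the penalty weights lam1/lam2, and the 2*mu'x - mu'mu statistic are illustrative assumptions, not the paper's tuned choices.

```python
import numpy as np

def sgl_shift_estimate(x, groups, lam1=0.5, lam2=0.5):
    """Sparse group LASSO proximal step: element-wise soft-thresholding
    (lam1) followed by group-wise shrinkage (lam2), which zeroes out
    entire unchanged groups."""
    mu = np.zeros_like(x)
    for g in groups:                      # g is an index array for one group
        s = np.sign(x[g]) * np.maximum(np.abs(x[g]) - lam1, 0.0)
        norm = np.linalg.norm(s)
        if norm > 0:
            mu[g] = max(0.0, 1.0 - lam2 * np.sqrt(len(g)) / norm) * s
    return mu

def monitoring_statistic(x, groups):
    """Plug the estimated shift into a Gaussian log-likelihood ratio
    (identity covariance assumed for simplicity)."""
    mu = sgl_shift_estimate(x, groups)
    return 2.0 * mu @ x - mu @ mu         # 2*mu'x - mu'mu, large under a shift

rng = np.random.default_rng(0)
groups = [np.arange(i, i + 10) for i in range(0, 100, 10)]  # 10 groups of 10
x_ic = rng.standard_normal(100)           # in-control observation
x_oc = x_ic.copy()
x_oc[:5] += 2.0                           # shift in 5 variables of group 0
print(monitoring_statistic(x_ic, groups), monitoring_statistic(x_oc, groups))
```

The group-wise shrinkage is what encodes the abstract's premise that shifts concentrate in a few relevant groups: groups whose soft-thresholded norm is small are set exactly to zero.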
Bayesian sequential update for monitoring and control of high-dimensional processes
Simultaneous monitoring of multi-dimensional processes becomes much more challenging as the dimension increases, especially when only a few or a moderate number of process variables are responsible for the process change and when the size of the change is particularly small. In this paper, we develop an efficient statistical process monitoring methodology for high-dimensional processes based on the Bayesian approach. The key idea is to sequentially update a posterior distribution of the process parameter of interest through Bayes' rule. In particular, a sparsity-promoting prior distribution of the parameter is applied under sparsity and is sequentially updated in online processing. A Bayesian hierarchical model with a data-driven way of determining the hyperparameters makes the monitoring scheme effective at detecting process shifts and computationally efficient in high-dimensional processes. Comparison with recently proposed methods for monitoring high-dimensional processes demonstrates the superiority of the proposed method in detecting small shifts. In addition, graphical presentations tracking the process parameter provide information for deciding whether a process needs to be adjusted before it triggers an alarm.
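To make the sequential-update idea concrete, here is a minimal sketch, not the paper's hierarchical model or its data-driven hyperparameters: each coordinate of the process mean carries a spike-and-slab posterior (a point mass at zero versus a Gaussian slab), updated in closed form from the running sum of observations. Unit observation noise, the prior change probability pi0, the slab variance tau2, and the 0.99 flagging threshold are all assumptions.

```python
import numpy as np

class SequentialSpikeSlab:
    def __init__(self, p, pi0=0.05, tau2=1.0):
        self.n = 0
        self.s = np.zeros(p)     # running sum of observations per coordinate
        self.pi0, self.tau2 = pi0, tau2

    def update(self, x):
        self.n += 1
        self.s += x
        n, s, tau2 = self.n, self.s, self.tau2
        # Log Bayes factor of slab (theta ~ N(0, tau2)) versus spike
        # (theta = 0), given the sufficient statistic s; unit noise assumed.
        var_slab = n + 1.0 / tau2
        log_bf = 0.5 * (s**2 / var_slab) - 0.5 * np.log(tau2 * var_slab)
        odds = self.pi0 / (1 - self.pi0) * np.exp(log_bf)
        post_incl = odds / (1 + odds)          # P(coordinate has shifted)
        post_mean = post_incl * s / var_slab   # model-averaged mean estimate
        return post_incl, post_mean

mon = SequentialSpikeSlab(p=200)
rng = np.random.default_rng(1)
for t in range(30):
    x = rng.standard_normal(200)
    if t >= 15:
        x[:3] += 2.0                       # sparse shift of size 2 at t = 15
    incl, _ = mon.update(x)
print("max posterior inclusion probability:", incl.max())
print("flagged coordinates:", np.where(incl > 0.99)[0])
```

The per-coordinate inclusion probabilities also serve as the kind of graphical tracking the abstract mentions: plotting them over time shows which variables are drifting before a hard alarm is raised.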
Statistical process monitoring in a specified period for the image data of fused deposition modeling parts with consistent layers
Statistical process monitoring (SPM) methods have been adopted and studied in recent years to detect variations in the fused deposition modeling (FDM) process. The FDM process builds parts layer by layer and is accomplished within a specified manufacturing period (a number of layers) without interruption or suspension. Thus, traditional SPM methods, in which the average run length is used to calculate the control limits and measure performance, are no longer applicable to the FDM process. In this paper, an SPM method is proposed based on the surface image data of FDM parts with consistent layers and a specified period. The probability of alarm in a specified period (PASP) and the cumulative PASP are introduced to determine the control limits and evaluate the monitoring performance. Regions of interest are determined in a fixed way to cover the sizes and locations of different defects. The statistics are calculated based on the generalized likelihood ratio, and the control limit is determined from the specified period and the nominal in-control PASP. A simulation study covering different locations, sizes, and magnitudes of the mean shift of defects is presented. In the case study, the proposed SPM method is applied to monitor the FDM process of a cuboid, which verifies the effectiveness of the proposed method.
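Two ingredients from this abstract can be sketched directly: a generalized likelihood ratio statistic for a mean shift inside fixed regions of interest, and a control limit calibrated by simulation so that the probability of a false alarm within a specified period of N layers matches a nominal in-control PASP. The image size, the ROI grid, N = 50, and pasp0 = 0.05 below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def glr_max(img, mu0, sigma, roi_size=8):
    """Max over fixed ROIs of the GLR for a region mean shift, known sigma."""
    h, w = img.shape
    stat = 0.0
    for i in range(0, h - roi_size + 1, roi_size):
        for j in range(0, w - roi_size + 1, roi_size):
            r = img[i:i+roi_size, j:j+roi_size] - mu0
            stat = max(stat, r.sum()**2 / (sigma**2 * r.size))
    return stat

def calibrate_limit(mu0, sigma, shape, N=50, pasp0=0.05, reps=500, seed=0):
    """Pick h so that P(max statistic over N in-control layers > h) = pasp0."""
    rng = np.random.default_rng(seed)
    maxima = [max(glr_max(rng.normal(mu0, sigma, shape), mu0, sigma)
                  for _ in range(N)) for _ in range(reps)]
    return np.quantile(maxima, 1.0 - pasp0)

h = calibrate_limit(mu0=0.0, sigma=1.0, shape=(32, 32))
print("control limit:", h)
```

Because the process runs for a fixed number of layers, the limit is set against the distribution of the period-wise maximum rather than against a per-layer false-alarm rate, which is exactly why an ARL-based limit is not applicable here.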
Tensor-Based ECG Anomaly Detection toward Cardiac Monitoring in the Internet of Health Things
Advanced heart monitors, especially those enabled by the Internet of Health Things (IoHT), provide a great opportunity for the continuous collection of the electrocardiogram (ECG), which contains rich information about underlying cardiac conditions. Realizing the full potential of IoHT-enabled cardiac monitoring hinges, to a great extent, on the detection of disease-induced anomalies from collected ECGs. However, challenges exist in the current literature on IoHT-based cardiac monitoring: (1) most existing methods are based on supervised learning, which requires both normal and abnormal samples for training; this is impractical because it is generally unknown when and what kind of anomalies will occur during cardiac monitoring. (2) Furthermore, it is difficult to leverage advanced machine learning approaches for information processing of 1D ECG signals, as most of them are designed for 2D images and higher-dimensional data. To address these challenges, a new sensor-based unsupervised framework is developed for IoHT-based cardiac monitoring. First, a high-dimensional tensor is generated from the multi-channel ECG signals through the Gramian Angular Difference Field (GADF). Then, multilinear principal component analysis (MPCA) is employed to unfold the ECG tensor and delineate the disease-altered patterns. The obtained principal components are used as features for anomaly detection with machine learning models (e.g., deep support vector data description (deep SVDD)) as well as statistical control charts (e.g., the Hotelling T² chart). The developed framework is evaluated and validated using real-world ECG datasets. Compared to state-of-the-art approaches, the developed framework with deep SVDD achieves superior performance in detecting abnormal ECG patterns induced by various types of cardiac disease; e.g., an F-score of 0.9771 is achieved for detecting atrial fibrillation, 0.9986 for right bundle branch block, and 0.9550 for ST-depression. Additionally, the developed framework with the T² control chart facilitates personalized cycle-to-cycle monitoring with timely detection of abnormal ECG patterns. The developed framework has great potential to be implemented in IoHT-enabled cardiac monitoring and smart management of cardiac health.
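A minimal single-channel sketch of this pipeline's flavor, standing in for the paper's multi-channel tensor, MPCA, and deep SVDD: build a GADF image per beat, extract ordinary PCA features from normal beats, and monitor new beats with a Hotelling T² statistic. The synthetic beat shape, the beat length, the number of components k, and the chi-square-based 99% limit are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def gadf(x):
    """Gramian Angular Difference Field of a 1-D signal rescaled to [-1, 1]."""
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1, 1))
    return np.sin(phi[:, None] - phi[None, :])

def fit_t2(beats, k=5):
    """PCA on flattened GADF images of normal beats; returns a T^2 scorer."""
    X = np.array([gadf(b).ravel() for b in beats])
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    comps, var = Vt[:k], (s[:k]**2) / (len(X) - 1)
    def t2(beat):
        z = comps @ (gadf(beat).ravel() - mu)   # scores on top-k components
        return float(np.sum(z**2 / var))
    return t2, chi2.ppf(0.99, df=k)             # approximate 99% control limit

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 64)
normal = [np.sin(2*np.pi*t) + 0.05*rng.standard_normal(64) for _ in range(200)]
t2, limit = fit_t2(normal)
abnormal = np.sin(2*np.pi*t) + 0.6*np.exp(-((t-0.5)/0.05)**2)  # distorted beat
print("T2 normal beat:", t2(normal[0]), " limit:", limit)
print("T2 abnormal beat:", t2(abnormal))
```

The key unsupervised point survives even in this toy version: the scorer is fit on normal beats only, so no labeled anomalies are needed at training time.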
Employing machine learning techniques in monitoring autocorrelated profiles
In profile monitoring, it is usually assumed that observations between or within each profile are independent of each other. However, this assumption is often violated in manufacturing practice, so it is of utmost importance to carefully account for autocorrelation effects in the underlying models for profile monitoring. For this reason, various statistical control charts have been proposed to monitor profiles in Phase II when between- or within-profile data are correlated, the main aim being to develop control charts with quicker detection ability. As a novel approach, this study employs machine learning techniques as control charts, instead of statistical approaches, for monitoring profiles with between-profile autocorrelation. Specifically, new input features based on conventional statistical control chart statistics and normalized estimated parameters are defined that adequately account for the between-profile autocorrelation effect. In addition, six machine learning techniques are extended and compared by means of Monte Carlo simulations. The simulation results indicate that machine learning techniques can obtain more accurate results than statistical control charts. Moreover, adaptive neuro-fuzzy inference systems outperform the other machine learning techniques and the conventional statistical control charts.
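As a toy version of this setup, not the study's six techniques or exact feature set: simulate linear profiles whose intercepts follow an AR(1) process between profiles, use the least-squares parameter estimates of each profile as input features, and train a single classifier (a random forest here) to separate in-control from shifted profiles. The AR(1) coefficient phi, the shift of 0.5, and the profile design are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def make_profiles(n, phi=0.5, shift=0.0, seed=0):
    """Linear profiles y = (3 + shift + 0.2*a) + 2x + noise, where the
    latent state a follows an AR(1) process across successive profiles."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 1, 20)
    a = 0.0
    feats = []
    for _ in range(n):
        a = phi * a + rng.standard_normal()        # between-profile AR(1)
        y = (3.0 + shift + 0.2 * a) + 2.0 * x + 0.1 * rng.standard_normal(20)
        slope, intercept = np.polyfit(x, y, 1)     # per-profile LS estimates
        feats.append([intercept, slope])
    return np.array(feats)

X = np.vstack([make_profiles(2000, shift=0.0, seed=1),
               make_profiles(2000, shift=0.5, seed=2)])
y = np.r_[np.zeros(2000), np.ones(2000)]
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

X_test = np.vstack([make_profiles(500, shift=0.0, seed=3),
                    make_profiles(500, shift=0.5, seed=4)])
y_test = np.r_[np.zeros(500), np.ones(500)]
print("held-out accuracy:", clf.score(X_test, y_test))
```

The classifier's decision plays the role of a control chart signal; the study's richer feature set (chart statistics plus normalized parameter estimates) is what lets such a model absorb the autocorrelation that breaks conventional chart assumptions.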
Improved EWMA and CUSUM Charts Under Modified Successive Sampling for Monitoring Process Dispersion
The Statistical Process Control (SPC) toolkit is extensively utilized to identify variations in processes, with control charts serving as the most efficient and commonly employed instrument for real-time process monitoring. Control charts evaluate whether a process is stable or unstable, detecting special cause fluctuations. Monitoring process variability is generally prioritized over location characteristics. Although quality evaluation samples are typically obtained via simple random sampling (SRS), the modified successive sampling (MSS) method is favored to reduce sampling duration and expenses. This research formulates CUSUM and EWMA control charts employing the MSS methodology to assess process variability. Performance criteria, such as run length measurements, are employed to evaluate the efficacy of CUSUM and EWMA charts in comparison to Shewhart charts. The results demonstrate that the EWMA chart surpasses both the Shewhart and CUSUM charts. A practical illustration from fertilizer production is provided to exemplify the proposed methodology.
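The run-length machinery in this abstract is easy to sketch. Below is a minimal EWMA chart for dispersion that monitors the log of the subgroup variance, with the average run length (ARL) estimated by Monte Carlo; subgroups are drawn by simple random sampling here, since reproducing the modified successive sampling (MSS) scheme is beyond a short sketch. The smoothing constant lam, the limit multiplier L, and the subgroup size n are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                    # subgroup size (assumption)
# In-control mean/sd of the charted statistic, estimated by simulation.
ref = np.log(rng.standard_normal((20000, n)).var(axis=1, ddof=1))
MU0, SD0 = ref.mean(), ref.std()

def ewma_runlength(lam=0.1, L=2.7, sigma=1.0, max_t=10000):
    """Steps until the EWMA of log subgroup variance exceeds its
    time-varying limits; sigma > 1 means increased process dispersion."""
    z = MU0
    for t in range(1, max_t + 1):
        v = np.log((sigma * rng.standard_normal(n)).var(ddof=1))
        z = lam * v + (1 - lam) * z
        sig_z = SD0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam)**(2 * t)))
        if abs(z - MU0) > L * sig_z:
            return t
    return max_t

arl0 = np.mean([ewma_runlength(sigma=1.0) for _ in range(200)])
arl1 = np.mean([ewma_runlength(sigma=1.3) for _ in range(200)])
print(f"ARL0 ~ {arl0:.0f}   ARL1 (sigma up 30%) ~ {arl1:.0f}")
```

Comparisons like the abstract's (EWMA versus CUSUM versus Shewhart, under SRS versus MSS) reduce to running this kind of simulation for each chart and sampling scheme and comparing the resulting run-length profiles.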
On guaranteed in-control performance for the Shewhart X and X̄ control charts
Recently, two methods have been published in this journal to determine adjusted control limits for the Shewhart control chart in order to guarantee a pre-specified in-control performance. One is based on the bootstrap approach (Saleh et al. (2015)), and the other is an analytical approach (Goedhart, Schoonhoven, and Does (2017)). Although both methods lead to the desired control chart performance, they are still difficult to implement by the practitioner. The bootstrap is rather computationally intensive, while the analytical approach consists of multiple integrals and derivatives. In this letter to the editor we simplify the analytical expressions provided in Goedhart, Schoonhoven, and Does (2017) by using the theory on tolerance intervals for individual observations as given in Krishnamoorthy and Mathew (2009).
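The tolerance-interval simplification can be illustrated with a standard Howe-type two-sided tolerance factor for individual observations, not necessarily the letter's exact expression: the usual factor 3 is replaced by a factor k chosen so that, with confidence 1 - alpha over the Phase I estimation error, the limits cover at least a fraction p of the in-control distribution. The Phase I size m = 100, p = 0.9973, and alpha = 0.10 are assumptions.

```python
import numpy as np
from scipy.stats import norm, chi2

def adjusted_factor(m, p=0.9973, alpha=0.10):
    """Two-sided tolerance factor (Howe-type approximation) for m Phase I
    individual observations: coverage >= p with confidence 1 - alpha."""
    return norm.ppf((1 + p) / 2) * np.sqrt(
        (m - 1) * (1 + 1 / m) / chi2.ppf(alpha, m - 1))

m = 100                                   # Phase I sample size (assumption)
rng = np.random.default_rng(3)
phase1 = rng.standard_normal(m)
mu_hat, sd_hat = phase1.mean(), phase1.std(ddof=1)
k = adjusted_factor(m)
print(f"k = {k:.3f} (vs 3.000 unadjusted)")
print("limits:", mu_hat - k * sd_hat, mu_hat + k * sd_hat)
```

The factor exceeds 3 and shrinks toward 3 as m grows, which is the whole point of the adjustment: widened limits compensate for estimation error in the Phase I parameters so the in-control performance is guaranteed rather than merely nominal.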
On Enhanced GLM-Based Monitoring: An Application to Additive Manufacturing Process
Innovations in technology assist manufacturing processes in producing high-quality products and hence pose a greater challenge for quality engineers. Control charts are frequently used to examine production operations and maintain product quality. Traditional charting structures rely on a response variable and do not incorporate any auxiliary data. To resolve this issue, one popular approach is to design charts based on a linear regression model, usually when the response variable shows a symmetric pattern (i.e., normality). The present work proposes new generalized linear model (GLM)-based homogeneously weighted moving average (HWMA) and double homogeneously weighted moving average (DHWMA) charting schemes to monitor count processes, employing the deviance residuals (DRs) and standardized residuals (SRs) of the Poisson regression model. The symmetric limits of the HWMA and DHWMA structures are derived, as the SR and DR statistics show a symmetric pattern. The performance of the proposed and established methods (i.e., EWMA charts) is assessed using run-length characteristics. The results reveal that SR-based schemes perform relatively better than DR-based schemes. In particular, the proposed SR-DHWMA chart outperforms the other two charts, SR-EWMA and SR-HWMA, in detecting shifts. To illustrate the practical features of the proposal, a real application from an additive manufacturing process is presented.
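A minimal sketch of one branch of this proposal, assuming a fixed fitted Poisson mean rather than a full regression model: compute signed Poisson deviance residuals and chart them with an HWMA statistic, which weights the newest residual by lam and the average of all previous residuals by 1 - lam. The fitted mean mu_fit, lam, the limit multiplier L, and the omission of the DHWMA extension are all simplifications.

```python
import numpy as np

def deviance_residual(y, mu):
    """Signed Poisson deviance residual of count y against fitted mean mu."""
    ylog = np.where(y > 0, y * np.log(np.maximum(y, 1e-12) / mu), 0.0)
    return np.sign(y - mu) * np.sqrt(2.0 * (ylog - (y - mu)))

def hwma_chart(stats, lam=0.1, L=3.0):
    """HWMA: weight lam on the newest residual, 1 - lam on the mean of all
    past ones; returns the first signalling index (1-based) or None.
    Residuals are treated as approximately N(0, 1) in control."""
    for i in range(1, len(stats) + 1):
        past = stats[:i-1]
        h = lam * stats[i-1] + (1 - lam) * (past.mean() if i > 1 else 0.0)
        var = lam**2 + ((1 - lam)**2 / (i - 1) if i > 1 else 0.0)
        if abs(h) > L * np.sqrt(var):
            return i
    return None

rng = np.random.default_rng(4)
mu_fit = 5.0                               # fitted Poisson mean (assumption)
y = rng.poisson(mu_fit, 100).astype(float)
y[60:] = rng.poisson(2 * mu_fit, 40)       # mean doubles at t = 61
dr = deviance_residual(y, mu_fit)
print("first signal at t =", hwma_chart(dr))
```

Feeding residuals rather than raw counts into the chart is what lets the scheme use symmetric limits, as the abstract notes: the SR and DR statistics are roughly symmetric even though the counts themselves are not.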
On Approaching Normality Through Rectangular Distribution: Industrial Applications to Monitor Electron Gun and File Server Processes
The normal probability distribution is central to most statistical methods and their applications. In many real scenarios, the normality of the underlying phenomenon is not obvious; however, a deeper investigation can lead to normality through useful links among various models. The current study presents one such approach to the Gaussian model by connecting it with the cumulative distribution function of the rectangular distribution. Some characteristics of the rectangular distribution, such as its quantiles, are used to achieve this objective. Further, the derived distributional results are used to design a mechanism for the real-time monitoring of dependent electron gun and file server processes. The performance of the proposed monitoring methodology is evaluated in terms of the probability of signal, average run length, extra quadratic loss, and cumulative extra quadratic loss. The expressions for the probability of signal are derived mathematically and supported by tabular results. The results advocate the usefulness of the proposed methodology for effectively monitoring real-life processes.
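The link this abstract exploits is, at its root, the probability integral transform. A minimal sketch of the standard connection, not the paper's specific quantile-based construction: if U follows the rectangular (uniform) distribution on (0, 1), then applying the standard normal quantile function to U yields an exactly standard normal variable.

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(5)
u = rng.uniform(0.0, 1.0, 100_000)        # rectangular on (0, 1)
z = norm.ppf(u)                            # inverse-CDF (quantile) transform
print(kstest(z, "norm"))                   # should not reject normality
```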