Catalogue Search | MBRL
Explore the vast range of titles available.
167 result(s) for "Woodall, William H."
An Overview of Phase I Analysis for Process Improvement and Monitoring
by Steiner, Stefan H.; Woodall, William H.; Jones-Farmer, L. Allison
in Assignable Cause, Best practice, Business process reengineering
2014
We provide an overview and perspective on the Phase I collection and analysis of data for use in process improvement and control charting. In Phase I, the focus is on understanding the process variability, assessing the stability of the process, investigating process-improvement ideas, selecting an appropriate in-control model, and providing estimates of the in-control model parameters. In our article, we review and synthesize many of the important developments that pertain to the analysis of process data in Phase I. We give our view of the major issues and developments in Phase I analysis. We identify the current best practices and some opportunities for future research in this area.
Journal Article
The effect of temporal aggregation level in social network monitoring
2018
Social networks have become ubiquitous in modern society, which makes social network monitoring a research area of significant practical importance. Social network data consist of social interactions between pairs of individuals that are temporally aggregated over a certain interval of time, and the level of such temporal aggregation can have a substantial impact on social network monitoring. There have been several studies on the effect of temporal aggregation in the process monitoring literature, but none on its effect in social network monitoring. We use the degree-corrected stochastic block model (DCSBM) to simulate social networks and network anomalies and analyze these networks in the context of both count and binary network data. In conjunction with this model, we use the Priebe scan method as the monitoring method. We demonstrate that temporal aggregation at high levels leads to a considerable decrease in the ability to detect an anomaly within a specified time period. Moreover, converting social network communication data from counts to binary indicators can result in a significant loss of information, hindering detection performance. Aggregation at an appropriate level with count data, however, can amplify the anomalous signal generated by network anomalies and improve detection performance. Our results provide both insights on the practical effects of temporal aggregation and a framework for the study of other combinations of network models, surveillance methods, and types of anomalies.
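The count-versus-binary aggregation step the abstract describes can be sketched in a few lines (toy data; the article itself works with DCSBM-simulated networks and the Priebe scan method, neither of which is reproduced here):

```python
import numpy as np

def aggregate_counts(events, window):
    """Sum per-unit-time interaction counts over non-overlapping windows."""
    n = (len(events) // window) * window          # drop a ragged tail, if any
    return events[:n].reshape(-1, window).sum(axis=1)

def to_binary(counts):
    """Keep only whether any interaction occurred in each window."""
    return (counts > 0).astype(int)

# Toy example: 12 time units of communication counts on one edge
events = np.array([0, 2, 0, 1, 0, 0, 3, 0, 0, 0, 1, 0])
counts_w4 = aggregate_counts(events, window=4)    # -> array([3, 3, 1])
binary_w4 = to_binary(counts_w4)                  # -> array([1, 1, 1])
```

Aggregating with larger windows smooths the series, while the binary conversion discards the interaction counts that carry much of the anomalous signal.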
Journal Article
Effects of Parameter Estimation on Control Chart Properties: A Literature Review
by Woodall, William H.; Jensen, Willis A.; Jones-Farmer, L. Allison
in Applied sciences, Computer science; control theory; systems, Conditional Distribution
2006
Control charts are powerful tools used to monitor the quality of processes. In practice, control chart limits are often calculated using parameter estimates from an in-control Phase I reference sample. In Phase II of the monitoring scheme, statistics based on new samples are compared with the estimated control limits to monitor for departures from the in-control state. Many studies that evaluate control chart performance in Phase II rely on the assumption that the in-control parameters are known. Although the additional variability introduced into the monitoring scheme through parameter estimation is known to affect the chart performance, many studies do not consider the effect of estimation on the performance of the chart. This paper contains a review of the literature that explicitly considers the effect of parameter estimation on control chart properties. Some recommendations are made and future research ideas in this area are provided.
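A minimal Monte Carlo sketch of the estimation effect the abstract describes, assuming an individuals chart with 3-sigma limits and a standard normal in-control process (an illustration, not the paper's own computations):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
# Standard normal CDF via the error function (avoids a SciPy dependency)
norm_cdf = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))

def conditional_far(m, n_practitioners=20000):
    """False-alarm rate of an individuals chart whose 3-sigma limits are
    estimated from a Phase I sample of size m (true process is N(0,1))."""
    phase1 = rng.normal(size=(n_practitioners, m))
    mu_hat = phase1.mean(axis=1)
    sigma_hat = phase1.std(axis=1, ddof=1)
    lcl, ucl = mu_hat - 3.0 * sigma_hat, mu_hat + 3.0 * sigma_hat
    return norm_cdf(lcl) + (1.0 - norm_cdf(ucl))  # signal prob. per practitioner

far_30 = conditional_far(m=30)
far_500 = conditional_far(m=500)
# Known-parameter rate is about 0.0027; the spread of the conditional rate
# across "practitioners" shrinks as the Phase I sample grows.
```

Each simulated practitioner gets a different Phase I data set and therefore different limits, so the false-alarm rate itself becomes a random variable.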
Journal Article
Can long-term historical data from electronic medical records improve surveillance for epidemics of acute respiratory infections? A systematic evaluation
by Zheng, Hongzhang; Carlson, Abigail L.; Woodall, William H.
in Aberration, Alarm systems, Algorithms
2018
As the deployment of electronic medical records (EMR) expands, so does the availability of long-term datasets that could serve to enhance public health surveillance. We hypothesized that EMR-based surveillance systems that incorporate seasonality and other long-term trends would discover outbreaks of acute respiratory infections (ARI) sooner than systems that only consider the recent past.
We simulated surveillance systems aimed at discovering modeled influenza outbreaks injected into backgrounds of patients with ARI. Backgrounds of daily case counts were either synthesized or obtained by applying one of three previously validated ARI case-detection algorithms to authentic EMR entries. From the time of outbreak injection, detection statistics were applied daily on paired background+injection and background-only time series. The relationship between the detection delay (the time from injection to the first alarm uniquely found in the background+injection data) and the false-alarm rate (FAR) was determined by systematically varying the statistical alarm threshold. We compared this relationship for outbreak detection methods that utilized either 7 days of past data (the Early Aberration Reporting System, EARS) or 2-4 years of past data (seasonal autoregressive integrated moving average (SARIMA) time series modeling).
In otherwise identical surveillance systems, SARIMA detected epidemics sooner than EARS at any FAR below 10%. The algorithms used to detect single ARI cases impacted both the feasibility and marginal benefits of SARIMA modeling. Under plausible real-world conditions, SARIMA could reduce detection delay by 5-16 days. It also was more sensitive at detecting the summer wave of the 2009 influenza pandemic.
Time series modeling of long-term historical EMR data can reduce the time it takes to discover epidemics of ARI. Realistic surveillance simulations may prove invaluable to optimize system design and tuning.
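A short-baseline detector in the spirit of the EARS C2 statistic (a 7-day baseline with a 2-day guard band) can be sketched as follows; the data and threshold here are illustrative, and the study's SARIMA modeling is not reproduced:

```python
import statistics

def c2_statistic(series, t, baseline=7, lag=2):
    """C2-style statistic: today's count standardized against the mean and
    SD of a 7-day baseline window ending 2 days before today."""
    window = series[t - lag - baseline : t - lag]
    mu = statistics.mean(window)
    sd = statistics.stdev(window) or 1.0   # guard against a flat baseline
    return (series[t] - mu) / sd

# Toy daily ARI counts with a jump on the last day
counts = [10, 12, 11, 9, 10, 12, 11, 10, 11, 30]
score = c2_statistic(counts, t=9)
alarm = score > 3.0   # a common style of threshold; tune to the desired FAR
```

Because the baseline covers only one recent week, a detector like this cannot exploit seasonality, which is the gap the SARIMA approach in the study addresses.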
Journal Article
Some Current Directions in the Theory and Application of Statistical Process Monitoring
2014
The purpose of this paper is to provide an overview and our perspective of recent research and applications of statistical process monitoring. The focus is on work done over the past decade or so. We review briefly a number of important areas, including health-related monitoring, spatiotemporal surveillance, profile monitoring, use of autocorrelated data, the effect of estimation error, and high-dimensional monitoring, among others. We briefly discuss the choice of performance metrics. We provide references and offer some directions for further research.
Journal Article
Assessing the Statistical Analyses Used in Basic and Applied Social Psychology After Their p-Value Ban
by Fricker, Ronald D.; Woodall, William H.; Burke, Katherine
in Applied psychology, Data, Effect size
2019
In this article, we assess the 31 articles published in Basic and Applied Social Psychology (BASP) in 2016, which is one full year after the BASP editors banned the use of inferential statistics. We discuss how the authors collected their data, how they reported and summarized their data, and how they used their data to reach conclusions. We found multiple instances of authors overstating conclusions beyond what the data would support if statistical significance had been considered. Readers would be largely unable to recognize this because the necessary information to do so was not readily available.
Journal Article
Estimating the Standard Deviation in Quality-Control Applications
by Henderson, G. Robin; Mahmoud, Mahmoud A.; Woodall, William H.
in Applications, Applied sciences, Control Charts
2010
In estimating the standard deviation of a normally distributed random variable, a multiple of the sample range is often used instead of the sample standard deviation in view of the range's computational simplicity. Although it is well known that use of the sample standard deviation is more efficient if the sample size exceeds 2, many statistical quality-control textbooks argue that the loss in efficiency when using the sample range to estimate the process standard deviation is very small with relatively small sample sizes. In this paper, we show that this loss in efficiency can be relatively large even for very small sample sizes and thus strongly advise against using range-based methods. We found that some previously published tables of relative efficiencies were either mislabeled or inaccurate. We also make some recommendations when a number of samples have been taken over time.
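The range-versus-standard-deviation comparison can be sketched with a small simulation (the unbiasing constants d2 and c4 for samples of size 5 come from standard SPC tables; this is an illustration, not the paper's own tables):

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 5, 200_000
d2, c4 = 2.326, 0.9400       # unbiasing constants for samples of size n = 5

x = rng.normal(size=(reps, n))                    # true sigma = 1
sigma_r = (x.max(axis=1) - x.min(axis=1)) / d2    # range-based estimator
sigma_s = x.std(axis=1, ddof=1) / c4              # sample-SD-based estimator

# Efficiency of the range estimator relative to the sample SD:
eff = sigma_s.var() / sigma_r.var()
# For n = 5 this is roughly 0.95, and it degrades quickly as n grows.
```

Repeating the simulation for larger n shows the range estimator falling further behind, which is the direction of the paper's advice against range-based methods.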
Journal Article
A critical note on the exponentiated EWMA chart
2024
In this short note, we reevaluate the run-length performance of the EWMA and exponentiated EWMA (Exp-EWMA) charts using the conditional expected delay metric. It is found that the enhancements offered by the Exp-EWMA chart over the EWMA chart in the zero-state setup are marginal. Given its simplicity in implementation and its ability to encompass the functionality of the Exp-EWMA chart in detecting delayed shifts in the process mean, the EWMA chart remains the preferred choice over the Exp-EWMA chart.
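For reference, the EWMA statistic and its time-varying limits discussed in this note can be sketched as follows (a minimal illustration with assumed design values lam = 0.2 and L = 2.7):

```python
def ewma_chart(xs, mu0=0.0, sigma=1.0, lam=0.2, L=2.7):
    """Return (statistic, alarm) pairs for an EWMA chart:
    z_i = lam*x_i + (1-lam)*z_{i-1}, with exact time-varying limits
    mu0 +/- L*sigma*sqrt(lam/(2-lam) * (1-(1-lam)^(2i)))."""
    z, out = mu0, []
    for i, x in enumerate(xs, start=1):
        z = lam * x + (1.0 - lam) * z
        half_width = L * sigma * (lam / (2.0 - lam)
                                  * (1.0 - (1.0 - lam) ** (2 * i))) ** 0.5
        out.append((z, abs(z - mu0) > half_width))
    return out

points = ewma_chart([1.0, 1.0, 1.0])
# With mu0 = 0 and lam = 0.2 the statistics are 0.2, 0.36, 0.488.
```

The smoothing constant lam controls how quickly past observations are forgotten; smaller values make the chart more sensitive to small sustained shifts.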
Journal Article
Another Look at the EWMA Control Chart with Estimated Parameters
by Saleh, Nesma A.; Mahmoud, Mahmoud A.; Woodall, William H.
in Bootstrap, Bootstrap method, Constants
2015
When in-control process parameters are estimated, Phase II control chart performance will vary among practitioners due to the use of different Phase I data sets. The typical measure of Phase II control chart performance, the average run length (ARL), becomes a random variable due to the selection of a Phase I data set for estimation. Aspects of the ARL distribution, such as the standard deviation of the average run length (SDARL), can be used to quantify the between-practitioner variability in control chart performance. In this article, we assess the in-control performance of the exponentially weighted moving average (EWMA) control chart in terms of the SDARL and percentiles of the ARL distribution when the process parameters are estimated. Our results show that the EWMA chart requires a much larger amount of Phase I data than previously recommended in the literature in order to sufficiently reduce the variation in the chart performance. We show that larger values of the EWMA smoothing constant result in higher levels of variability in the in-control ARL distribution; thus, more Phase I data are required for charts with larger smoothing constants. Because it could be extremely difficult to lower the variation in the in-control ARL values sufficiently due to practical limitations on the amount of the Phase I data, we recommend an alternative design criterion and a procedure based on the bootstrap approach.
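The SDARL idea can be illustrated with a chart whose conditional in-control ARL has a closed form; the sketch below uses a Shewhart individuals chart for simplicity rather than the EWMA chart studied in the article:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
# Standard normal CDF via the error function (avoids a SciPy dependency)
norm_cdf = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))

def arl_distribution(m, n_practitioners=5000):
    """Conditional in-control ARLs of an individuals chart whose 3-sigma
    limits come from a Phase I sample of size m (true process N(0,1))."""
    phase1 = rng.normal(size=(n_practitioners, m))
    mu, sd = phase1.mean(axis=1), phase1.std(axis=1, ddof=1)
    p = norm_cdf(mu - 3.0 * sd) + 1.0 - norm_cdf(mu + 3.0 * sd)
    return 1.0 / p      # geometric run-length mean for each practitioner

arls = arl_distribution(m=50)
avg_arl, sdarl = arls.mean(), arls.std()
# The known-parameter ARL is about 370; with m = 50 the conditional ARL
# varies widely from one Phase I sample to the next.
```

The standard deviation of these conditional ARLs is the SDARL; driving it down requires far more Phase I data than matching the average ARL alone, which is the article's central point.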
Journal Article