18 results for "Knoth, Sven"
Dissociating selectivity adjustments from temporal learning–introducing the context-dependent proportion congruency effect
The list-level proportion congruency effect (PCE) and the context-specific PC (CSPC) effect are typical findings in experimental conflict protocols, which competing explanations attribute to different mechanisms. Of these mechanisms, stimulus-unspecific conflict-induced selectivity adjustments have attracted the most interest from various disciplines. Recent methodological advances have yielded an experimental procedure for entirely ruling out all stimulus-specific alternatives. However, there is a stimulus-unspecific alternative, temporal learning, which cannot be ruled out as the sole cause of either effect with any established experimental procedure. That is because it is very difficult to create a scenario in which selectivity adjustments and temporal learning make different predictions; with traditional approaches, it is arguably impossible. Here, we take a step towards solving this problem and experimentally dissociating the two mechanisms. First, we present our novel approach, which combines abstract experimental conditions with theoretical assumptions. As we illustrate with two computational models, given this particular combination, the two mechanisms predict opposite modulations of an as yet unexplored hybrid form of the list-level PCE and the CSPC effect, which we term the context-dependent PCE (CDPCE). With experimental designs that implement the abstract conditions properly, it is therefore possible to rule out temporal learning as the sole cause of stimulus-unspecific adaptations to PC, and to unequivocally attribute the latter, at least partially, to selectivity adjustments. Second, we evaluate methodological and theoretical aspects of the presented approach. Finally, we report two experiments that illustrate both the promise of and a potential challenge to this approach.
Dating the start of the US house price bubble: an application of statistical process control
An exact dating of the onset of financial crises is important for learning which factors caused or contributed to the financial turmoil. While most economists agree that the recent worldwide financial crisis evolved as a consequence of a bursting bubble on the US housing market, the related literature has yet to deliver a consensus on when exactly the bubble started developing. The estimates in the literature range between 1997 and 2002, while applications of market-based procedures deliver even later dates. In this paper, we employ the methods of statistical process control to date the likely beginning of the bubble. The results support the view that the bubble on the US housing market emerged as early as 1996/1997.
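Dating a change point with statistical process control typically comes down to running a control chart over the series and recording when it signals. The sketch below is a minimal one-sided CUSUM on synthetic Gaussian data; the series, the in-control mean `mu0`, the reference value `k`, and the threshold `h` are all illustrative choices of ours, not the paper's housing data or design:

```python
import random

def cusum_upper(x, mu0, k, h):
    """One-sided upper CUSUM: S_t = max(0, S_{t-1} + x_t - mu0 - k).

    Returns the 1-based index of the first sample with S_t > h, or None.
    """
    s = 0.0
    for t, xt in enumerate(x, start=1):
        s = max(0.0, s + xt - mu0 - k)
        if s > h:
            return t
    return None

rng = random.Random(1)
# 50 in-control points (mean 0), then 50 points with the mean shifted to 1.
data = [rng.gauss(0.0, 1.0) for _ in range(50)] + [rng.gauss(1.0, 1.0) for _ in range(50)]
alarm = cusum_upper(data, mu0=0.0, k=0.5, h=4.0)
```

The alarm index locates the detected change; dating the *start* of the bubble would additionally require backtracking from the alarm, which this sketch omits.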
The Case Against the Use of Synthetic Control Charts
The synthetic chart principle proposed by Wu and Spedding (2000) initiated a stream of publications in the control charting literature. Originally, it was claimed that the new chart has superior average run length (ARL) properties. Davis and Woodall (2002) showed that the synthetic chart is nothing else than a particular runs-rule chart. Moreover, they criticized the design of the performance evaluation and advocated use of the steady-state ARL. The latter measure was then used, e.g., in Wu et al. (2010). In most of the papers on synthetic charts that actually used the steady-state framework, it was not rigorously described; see Khoo et al. (2011) as an exception, where it was revealed that the cyclical steady-state design was considered. The aim of this paper is to carefully analyze the steady-state ARL (cyclical and the more popular conditional) for the synthetic chart, the original "2 of L + 1" (L ≥ 1) runs-rule chart, and competing EWMA charts with two types of control limits. It turns out that the EWMA chart has a uniformly (over a large range of potential shifts) better steady-state ARL performance than the synthetic chart. Furthermore, the synthetic control chart exhibits the poorest performance among all considered competitors. Thus, we advise against applying synthetic control charts.
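The "2 of L + 1" runs-rule reading of the synthetic chart can be illustrated with a small simulation: signal when two nonconforming samples fall within L samples of each other. The convention below (counter starts at the first nonconforming sample, no head start, iid sampling) is our simplification for illustration, not the exact zero-state or steady-state design evaluated in the paper; the closed-form ARL is derived under that same convention:

```python
import random

def synthetic_run_length(p, L, rng):
    """Samples until two nonconforming points occur within L samples of each other."""
    t, last_nc = 0, None
    while True:
        t += 1
        if rng.random() < p:              # nonconforming sample
            if last_nc is not None and t - last_nc <= L:
                return t                  # second nonconforming within L: signal
            last_nc = t

def synthetic_arl_exact(p, L):
    """Closed-form ARL under the same convention, iid Bernoulli(p) nonconformities."""
    return 1.0 / p + 1.0 / (p * (1.0 - (1.0 - p) ** L))

rng = random.Random(7)
sim = sum(synthetic_run_length(0.2, 5, rng) for _ in range(20000)) / 20000
```

As a sanity check, for p = 0.5 and L = 1 the formula gives 6, the classic expected waiting time for two heads in a row with a fair coin.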
ARL Numerics for MEWMA Charts
The FORTRAN code in Bodden and Rigdon (1999) for the in-control average run length (ARL) of multivariate exponentially weighted moving average (MEWMA) charts became quite popular and is widely used in statistical software systems such as MINITAB and STATISTICA. We find that the algorithm's accuracy is poor for low-dimensional processes. The Markov chain approximation described in Runger and Prabhu (1996) is not able to resolve the issue. The same holds for the calculation of the out-of-control ARL as proposed in Rigdon (1995b). We present two concepts that achieve higher accuracy for all dimensions. The competing numerical procedures are implemented in the R package spc.
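The Markov chain approximation mentioned here (Brook-and-Evans-style discretization of the continuation region) is easiest to see in the univariate EWMA case. The sketch below computes the ARL of a two-sided EWMA chart for N(mu, 1) data this way; the state count `n` and the design `lam = 0.1`, `c = 2.7` are illustrative choices of ours, not values from the paper or the spc package:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ewma_arl_markov(lam, c, mu, n=101):
    """ARL of a two-sided EWMA chart for N(mu, 1) data via an n-state
    Markov chain discretization of the continuation region [-h, h]."""
    h = c * sqrt(lam / (2.0 - lam))      # asymptotic control limit
    w = 2.0 * h / n                      # cell width
    centers = [-h + (i + 0.5) * w for i in range(n)]
    # Q[i][j] = P(next EWMA value lands in cell j | current value = centers[i])
    Q = [[phi((cj + w / 2 - (1 - lam) * ci) / lam - mu)
          - phi((cj - w / 2 - (1 - lam) * ci) / lam - mu)
          for cj in centers] for ci in centers]
    # ARL vector solves (I - Q) a = 1; start from the middle cell (z0 = 0).
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] for i in range(n)]
    return solve(A, [1.0] * n)[n // 2]
```

The abstract's point is precisely that such fixed-grid discretizations can lose accuracy in the multivariate setting; this univariate version only illustrates the mechanics.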
On ARL-unbiased c-charts for INAR(1) Poisson counts
Counts of nonconformities are frequently assumed to have a Poisson distribution. The integer and asymmetrical character of this distribution and the value of its target mean may prevent the quality control operator from dealing with a chart with a pre-specified in-control average run length (ARL) and the ability to promptly detect both increases and decreases in the mean of those counts. Moreover, as far as we know, the c-chart proposed to monitor the mean of first-order integer-valued autoregressive [INAR(1)] Poisson counts tends to be ARL-biased, in the sense that it takes longer, on average, to detect some shifts in the process mean than to trigger a false alarm. In this paper, we capitalize on the randomization of the emission of a signal and on a nested secant rule search procedure not only to eliminate the bias of the ARL function of the c-chart for the mean of INAR(1) Poisson counts, but also to bring its in-control ARL exactly to a pre-specified and desired value. Striking illustrations of the resulting ARL-unbiased c-chart are provided.
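The ARL bias the abstract describes is easy to reproduce numerically even in the simpler iid Poisson setting (ignoring the INAR(1) dependence treated in the paper); the target mean 10 and the roughly 3-sigma limits below are illustrative choices of ours:

```python
from math import exp, factorial

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam); naive summation is fine for small lam."""
    if k < 0:
        return 0.0
    return sum(exp(-lam) * lam ** i / factorial(i) for i in range(k + 1))

def c_chart_arl(lam, lcl, ucl):
    """ARL of a c-chart that signals when a count falls below lcl or above ucl.

    For iid counts the ARL is 1 / P(signal); the INAR(1) dependence from the
    paper is deliberately ignored in this sketch.
    """
    p_signal = poisson_cdf(lcl - 1, lam) + (1.0 - poisson_cdf(ucl, lam))
    return 1.0 / p_signal

# Target mean 10 with roughly 3-sigma limits: signal if X < 1 or X > 19.
arl_in_control = c_chart_arl(10.0, 1, 19)
arl_small_decrease = c_chart_arl(9.0, 1, 19)   # mean shifted down, yet slower to detect
```

Here `arl_small_decrease` exceeds `arl_in_control`: the chart takes longer on average to flag a downward shift to a mean of 9 than to raise a false alarm, which is exactly the bias that the randomization and nested secant search in the paper are designed to remove.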
Fast initial response features for EWMA control charts
The exponentially weighted moving average (EWMA) control chart became very popular during the last decade. It is characterized by simple handling and good performance. It turns out, however, that the most popular EWMA scheme with fixed-width control limits (the asymptotic control limits are taken and do not change over time) detects early changes rather slowly. For the competing CUSUM chart, the so-called fast initial response (head-start) feature was developed, which permits rapid response to an initial out-of-control situation. Meanwhile, similar modifications for EWMA schemes have been described in several papers. We compare these approaches by using precise computation techniques, which are based on numerical quadrature rules and allow higher accuracy than earlier studies. Moreover, previous comparisons were restricted to the evaluation of the detection speed by comparing average run lengths (ARL); that is, the parameter of interest is constant during the whole monitoring period. Here, we consider more possible change point locations, which gives the EWMA control chart user a better insight into the scheme's performance for early changes.
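The slow response of fixed-width limits to early changes can be seen with a fully deterministic example: one common EWMA modification uses the exact time-varying limit width, which starts narrow and grows toward the asymptotic width. The design values `lam = 0.1`, `c = 2.7` below are illustrative choices of ours, and the CUSUM-style head-start itself is not implemented, only exact versus asymptotic limits:

```python
def ewma_run_length(xs, lam, c, varying_limits):
    """First t (1-based) where |z_t| exceeds the control limit, else None.

    z_t = (1 - lam) * z_{t-1} + lam * x_t, z_0 = 0, for standardized data.
    Asymptotic width: c * sqrt(lam / (2 - lam)); the exact width at time t
    multiplies this by sqrt(1 - (1 - lam)^(2t)).
    """
    z = 0.0
    width_inf = c * (lam / (2.0 - lam)) ** 0.5
    for t, x in enumerate(xs, start=1):
        z = (1.0 - lam) * z + lam * x
        width = width_inf
        if varying_limits:
            width = width_inf * (1.0 - (1.0 - lam) ** (2 * t)) ** 0.5
        if abs(z) > width:
            return t
    return None

# A process that is out of control from the very first sample (mean shift of 1):
shifted = [1.0] * 50
```

With these numbers the time-varying limits signal at sample 8 while the fixed asymptotic limits need 10, illustrating the faster initial response the abstract compares.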
The Steady-State Behavior of Multivariate Exponentially Weighted Moving Average Control Charts
Multivariate exponentially weighted moving average (MEWMA) charts are popular, handy, and effective procedures to detect distributional changes in a stream of multivariate data. For an appropriate performance analysis, dealing with the steady-state behavior of the MEWMA statistic is essential. Going beyond early papers, we derive quite accurate approximations of the respective steady-state densities of the MEWMA statistic. It turns out that these densities can be rewritten as the product of two functions depending on one argument only, which makes the calculation feasible. To prove the related statements, a representation of the noncentral chi-square density in terms of the confluent hypergeometric limit function is applied. Using the new methods, it was found that for large dimensions the steady-state behavior differs from what one might expect from the univariate monitoring field. Based on the integral equation driven methods, steady-state and worst-case average run lengths are calculated with higher accuracy than before. Eventually, optimal MEWMA smoothing constants are derived for all considered measures.
Another look at synthetic-type control charts
During the last two decades, plentiful new methods have appeared in statistical process monitoring, with synthetic-type control charts being a prominent constituent. These charts became popular designs for several reasons, the two most important ones being simplicity and proclaimed excellent change point detection performance. Whereas there is no doubt about the former, we deal here with the latter. We will demonstrate that their performance is questionable. Expanding on some previous skeptical articles, we critically reflect upon recently developed variants of synthetic-type charts in order to emphasize that there is little reason to apply and to push this special class of control charts.
Controlling the EWMA \(S^2\) control chart false alarm behavior when the in-control variance level must be estimated
Investigating the problem of setting control limits in the case of parameter uncertainty is more accessible when monitoring the variance, because only one parameter has to be estimated. Simply ignoring the induced uncertainty frequently leads to control charts with poor false alarm performance. Adjusting the unconditional in-control (IC) average run length (ARL) makes the situation even worse. Guaranteeing a minimum conditional IC ARL with some given probability is another very popular approach to solving these difficulties. However, it is very conservative, as well as more complex and more difficult to communicate. We utilize the probability of a false alarm within the planned number of points to be plotted on the control chart. It turns out that adjusting this probability produces notably different limit adjustments compared to controlling the unconditional IC ARL. We then develop numerical algorithms to determine the respective modifications of the upper and two-sided exponentially weighted moving average (EWMA) charts based on the sample variance for normally distributed data. These algorithms are made available within an R package. Finally, the impacts of the EWMA smoothing constant and the size of the preliminary sample on the control chart design and its performance are studied.
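The contrast between the two design criteria (unconditional IC ARL versus false alarm probability within the planned number of points) is easiest to see for a chart whose plotting statistics are independent; EWMA statistics are dependent, so the following is only a back-of-the-envelope sketch with hypothetical numbers, not the paper's algorithm:

```python
def false_alarm_prob(m, alpha):
    """P(at least one false alarm among m points), independent points, rate alpha each."""
    return 1.0 - (1.0 - alpha) ** m

def alpha_for(m, beta):
    """Per-point false alarm rate alpha so that false_alarm_prob(m, alpha) == beta."""
    return 1.0 - (1.0 - beta) ** (1.0 / m)

# An unconditional in-control ARL of 500 corresponds to alpha = 1/500 per point,
# yet over m = 100 plotted points that already means an ~18% chance of a false alarm.
p100 = false_alarm_prob(100, 1.0 / 500.0)
# Capping that chance at 5% instead forces a much smaller alpha (a longer implied ARL),
# i.e. notably wider limits than the ARL-based design.
alpha_5 = alpha_for(100, 0.05)
```

The two targets thus translate into visibly different per-point rates, mirroring the abstract's finding that the resulting limit adjustments differ notably.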
Structural Health Monitoring with Functional Data: Two Case Studies
Structural Health Monitoring (SHM) is increasingly used in civil engineering. One of its main purposes is to detect and assess changes in infrastructure conditions to reduce possible maintenance downtime and increase safety. Ideally, this process should be automated and implemented in real time. Recent advances in sensor technology facilitate data collection and process automation, resulting in massive data streams. Functional data analysis (FDA) can be used to model and aggregate the data obtained transparently and interpretably. In two real-world case studies of bridges in Germany and Belgium, this paper demonstrates how a function-on-function regression approach, combined with profile monitoring, can be applied to SHM data to adjust sensor/system outputs for environmentally induced variation and detect changes in the construction. Specifically, we consider the R package funcharts and discuss some challenges when using this software on real-world SHM data. For instance, we show that pre-smoothing of the data can improve and extend its usability.