85 results for "normal probability plot"
Biostatistics series module 3: Comparing groups: Numerical variables
Numerical data that are normally distributed can be analyzed with parametric tests, that is, tests based on the parameters that define a normal distribution curve. If the distribution is uncertain, the data can be plotted as a normal probability plot and visually inspected, or tested for normality using one of a number of goodness-of-fit tests, such as the Kolmogorov-Smirnov test. The widely used Student's t-test has three variants. The one-sample t-test is used to assess if a sample mean (as an estimate of the population mean) differs significantly from a given population mean. The means of two independent samples may be compared for a statistically significant difference by the unpaired or independent samples t-test. If the data sets are related in some way, their means may be compared by the paired or dependent samples t-test. The t-test should not be used to compare the means of more than two groups. Although it is possible to compare more than two groups in pairs, doing so increases the probability of a Type I error. The one-way analysis of variance (ANOVA) is employed to compare the means of three or more independent data sets that are normally distributed. Multiple measurements from the same set of subjects cannot be treated as separate, unrelated data sets. Comparison of means in such a situation requires repeated measures ANOVA. It is to be noted that while a multiple group comparison test such as ANOVA can point to a significant difference, it does not identify exactly between which two groups the difference lies. To do this, multiple group comparison needs to be followed up by an appropriate post hoc test. An example is Tukey's honestly significant difference test following ANOVA. If the assumptions for parametric tests are not met, there are nonparametric alternatives for comparing data sets. These include the Mann-Whitney U-test as the nonparametric counterpart of the unpaired Student's t-test, the Wilcoxon signed-rank test as the counterpart of the paired Student's t-test, the Kruskal-Wallis test as the nonparametric equivalent of ANOVA, and the Friedman test as the counterpart of repeated measures ANOVA.
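As a rough illustration of the workflow described above (not part of the module itself), the following Python sketch uses SciPy with made-up data: check each group for normality, then fall back to the nonparametric counterpart when the assumption fails.

    # Sketch: choosing between parametric and nonparametric two-group comparisons.
    # The data arrays are invented for illustration; only NumPy and SciPy are assumed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.normal(loc=120, scale=15, size=40)   # e.g. systolic BP under drug A
    group_b = rng.normal(loc=128, scale=15, size=40)   # e.g. systolic BP under drug B

    # Check normality of each group (Shapiro-Wilk; Kolmogorov-Smirnov is another option).
    normal_a = stats.shapiro(group_a).pvalue > 0.05
    normal_b = stats.shapiro(group_b).pvalue > 0.05

    if normal_a and normal_b:
        # Independent-samples (unpaired) Student's t-test.
        result = stats.ttest_ind(group_a, group_b)
    else:
        # Nonparametric counterpart: Mann-Whitney U-test.
        result = stats.mannwhitneyu(group_a, group_b)
    print(result)

    # For three or more independent, normally distributed groups use one-way ANOVA,
    # e.g. stats.f_oneway(g1, g2, g3), followed by a post hoc test such as Tukey's HSD;
    # the Kruskal-Wallis test (stats.kruskal) is the nonparametric alternative.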
Randomized Quantile Residuals
In this article we give a general definition of residuals for regression models with independent responses. Our definition produces residuals that are exactly normal, apart from sampling variability in the estimated parameters, by inverting the fitted distribution function for each response value and finding the equivalent standard normal quantile. Our definition includes some randomization to achieve continuous residuals when the response variable is discrete. Quantile residuals are easily computed in computer packages such as SAS, S-Plus, GLIM, or LispStat, and allow residual analyses to be carried out in many commonly occurring situations in which the customary definitions of residuals fail. Quantile residuals are applied in this article to three example data sets.
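The construction can be sketched in a few lines of Python; the observed counts and fitted Poisson means below are invented for illustration, and the code is not the authors' implementation. For a discrete response, a uniform draw between F(y-1) and F(y) under the fitted distribution is mapped through the standard normal quantile function.

    # Sketch of randomized quantile residuals for Poisson responses.
    # The fitted means `mu_hat` are assumed to come from some previously fitted model.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    y = np.array([0, 2, 1, 4, 3, 0, 5])                       # observed counts (illustrative)
    mu_hat = np.array([0.8, 1.9, 1.2, 3.5, 2.8, 0.5, 4.6])    # fitted means (illustrative)

    # Randomize between the CDF just below y and at y.
    lower = stats.poisson.cdf(y - 1, mu_hat)    # F(y - 1); equals 0 when y == 0
    upper = stats.poisson.cdf(y, mu_hat)        # F(y)
    u = rng.uniform(lower, upper)

    quantile_residuals = stats.norm.ppf(u)      # approximately N(0, 1) under a correct model
    print(quantile_residuals)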
A Suggestion for Using Powerful and Informative Tests of Normality
For testing that an underlying population is normally distributed, the skewness and kurtosis statistics, √b₁ and b₂, and the D'Agostino-Pearson K² statistic that combines these two statistics have been shown to be powerful and informative tests. Their use, however, has not been as prevalent as their usefulness. We review these tests and show how readily available and popular statistical software can be used to implement them. Their relationship to deviations from linearity in normal probability plotting is also presented.
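The paper demonstrates these tests in the statistical software of its day; one modern option (an assumption of this sketch, with hypothetical sample data) is SciPy, where the skewness, kurtosis, and combined K² tests are available directly.

    # Sketch: skewness, kurtosis, and D'Agostino-Pearson K^2 tests of normality.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.gamma(shape=2.0, scale=1.0, size=200)   # skewed sample, for illustration

    print(stats.skewtest(x))      # test based on sqrt(b1), the sample skewness
    print(stats.kurtosistest(x))  # test based on b2, the sample kurtosis
    print(stats.normaltest(x))    # D'Agostino-Pearson K^2, combining the two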
Analyzing Unreplicated Factorial Experiments: A Review with Some New Proposals
Recently, there have been many proposals for objectively analyzing unreplicated factorial experiments. We review these methods along with some earlier and perhaps lesser known ones. New methods are also proposed. The primary aim of this paper is to compare these methods and their variants via an extensive simulation study. Robustness of the various methods to non-normality is also considered. Many methods are comparable, but clearly some cannot be recommended. The results from the study also suggest some basic principles for evaluating new methods. Finally, we outline some issues that this study has raised and which might benefit from work in other areas such as multiple comparisons, outlier detection, ranking and selection, and robust statistics.
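One widely cited approach in this area is Lenth's pseudo standard error. The sketch below is included only as an illustration of how such methods screen for active effects; the effect estimates are invented, and this is not presented as one of the paper's new proposals.

    # Sketch of Lenth's pseudo standard error (PSE) for an unreplicated factorial.
    # `effects` would hold the estimated factorial contrasts; values here are illustrative.
    import numpy as np

    effects = np.array([11.2, -0.4, 0.9, 7.8, -1.1, 0.3, -0.6])   # hypothetical

    s0 = 1.5 * np.median(np.abs(effects))
    # Drop effects that look active, then re-estimate the scale from the rest.
    trimmed = np.abs(effects)[np.abs(effects) < 2.5 * s0]
    pse = 1.5 * np.median(trimmed)

    # Effects that are large relative to the PSE are candidates for "active".
    t_like = effects / pse
    print(np.round(t_like, 2))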
Deterministic/Stochastic Wavelet Decomposition for Recovery of Signal From Noisy Data
In a series of recent articles on nonparametric regression, Donoho and Johnstone developed wavelet-shrinkage methods for recovering unknown piecewise-smooth deterministic signals from noisy data. Wavelet shrinkage based on the Bayesian approach involves specifying a prior distribution on the wavelet coefficients, which is usually assumed to have a distribution with zero mean. There is no a priori reason why all prior means should be 0; indeed, one can imagine certain types of signals in which this is not a good choice of model. In this article, we take an empirical Bayes approach in which we propose an estimator for the prior mean that is "plugged into" the Bayesian shrinkage formulas. Another way we are more general than previous work is that we assume that the underlying signal is composed of a piecewise-smooth deterministic part plus a zero-mean stochastic part; that is, the signal may contain a reasonably large number of nonzero wavelet coefficients. Our goal is to predict this signal from noisy data. We also develop a new estimator for the noise variance based on a geostatistical method that considers the behavior of the variogram near the origin. Simulation studies show that our method (DecompShrink) outperforms the well-known VisuShrink and SureShrink methods for recovering a wide variety of signals. Moreover, it is insensitive to the choice of the lowest-scale cut-off parameter, which is typically not the case for other wavelet-shrinkage methods.
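DecompShrink itself is not reproduced here; the following PyWavelets sketch shows only the generic wavelet-shrinkage baseline it is compared against (VisuShrink-style soft thresholding with a MAD noise estimate). The test signal, the 'db8' wavelet, and the decomposition depth are assumptions of this illustration, not choices made in the article.

    # Sketch of generic wavelet shrinkage (VisuShrink-style), not DecompShrink itself.
    import numpy as np
    import pywt

    rng = np.random.default_rng(3)
    n = 1024
    t = np.linspace(0, 1, n)
    signal = np.piecewise(t, [t < 0.5, t >= 0.5], [lambda t: np.sin(8 * np.pi * t), 1.0])
    noisy = signal + 0.3 * rng.standard_normal(n)

    coeffs = pywt.wavedec(noisy, 'db8', level=5)

    # Estimate the noise level from the finest-detail coefficients (median absolute deviation).
    sigma_hat = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma_hat * np.sqrt(2 * np.log(n))   # universal threshold

    shrunk = [coeffs[0]] + [pywt.threshold(c, threshold, mode='soft') for c in coeffs[1:]]
    recovered = pywt.waverec(shrunk, 'db8')
    print(float(np.mean((recovered[:n] - signal) ** 2)))   # rough reconstruction error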
Pre‐Op TJR Process Improvement – Part 2
As a continuation of the total joint replacement process improvement case in Chapter 12, this case seeks to identify potential causes of avoidable delays in the preoperative process for total knee replacement surgeries. The primary metric is the elapsed time of the total preoperative process. The data also includes dates of the intermediate steps in the preoperative process. The objective in this case is to identify the steps of the preoperative process that take the most time and target them for improvement going forward.
Normal Scores, Normal Plots, and Tests for Normality
In this article we develop new plotting positions for normal plots. The use of the plots usually centers on detecting irregular tail behavior or outliers. Along with the normal plot, we develop tests for various departures from normality, especially for skewness and heavy tails. The tests can be considered as components of a Shapiro-Wilk type test that has been decomposed into different sources of nonnormality. Convergence to the limiting distributions is slow, so finite sample corrections are included to make the tests useful for small sample sizes.
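The article's new plotting positions and decomposed tests are not reproduced here. As a generic illustration only, a normal plot built from conventional Blom plotting positions (an assumption of this sketch, not the paper's proposal) can be drawn as follows.

    # Sketch of a normal probability plot using conventional (Blom) plotting positions.
    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(4)
    x = np.sort(rng.standard_t(df=3, size=100))    # heavy-tailed sample, for illustration

    n = len(x)
    i = np.arange(1, n + 1)
    p = (i - 0.375) / (n + 0.25)                   # Blom plotting positions
    normal_scores = stats.norm.ppf(p)              # expected normal quantiles

    plt.plot(normal_scores, x, 'o')
    plt.xlabel('Normal score')
    plt.ylabel('Ordered data')
    plt.title('Normal probability plot (heavy tails bend away from a straight line)')
    plt.show()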
Graphical Data Analysis
In statistics, graphs or plots provide a very powerful means to visualize the meaning of data. This chapter discusses the plotting of data and particularly techniques for making probability plots. The display of data in graphs may allow the comparison of distributions and the identification of data that likely do or do not belong in the distribution under consideration. The chapter discusses graphs that represent the cumulative distribution or the reliability function versus time. It describes various parametric plots, such as Weibull plot, exponential plot, and normal probability plot, together with their confidence intervals. Finally, as for power law reliability growth, the Duane model and the Crow AMSAA model are discussed. These are connected to the Poisson distribution and are very useful for monitoring improvement programmes and checking the overall trend in reliability development.
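As a small illustration of one of the parametric plots mentioned, a Weibull probability plot can be sketched as follows; the median-rank plotting positions and the complete, uncensored sample are assumptions of this example rather than prescriptions from the chapter.

    # Sketch of a Weibull probability plot: Weibull-distributed data should fall
    # roughly on a straight line on these transformed axes.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(5)
    times = np.sort(rng.weibull(a=1.8, size=60) * 500.0)   # illustrative failure times

    n = len(times)
    i = np.arange(1, n + 1)
    f = (i - 0.3) / (n + 0.4)                              # median-rank plotting positions

    plt.plot(np.log(times), np.log(-np.log(1.0 - f)), 'o')
    plt.xlabel('ln(time to failure)')
    plt.ylabel('ln(-ln(1 - F))')
    plt.title('Weibull probability plot (slope indicates the shape parameter)')
    plt.show()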
Full Factorial Experiments at Two Levels
In many scientific investigations, the interest lies in the study of effects of two or more factors simultaneously. Factorial designs are most commonly used for this type of investigation. This chapter considers the important class of factorial designs for factors at two levels. It also considers the estimation and testing of factorial effects for location and dispersion models for replicated and unreplicated experiments. The chapter discusses optimal blocking schemes for full factorial designs. It describes how the factorial effects can be computed using regression analysis. The chapter also discusses three fundamental principles: effect hierarchy principle, effect sparsity principle, and effect heredity principle. These principles are often used to justify the development of factorial design theory and data analysis strategies. The chapter also describes a graphical method that uses the normal probability plot for assessing the normality assumption.
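A minimal sketch of the two computations the chapter describes, estimating factorial effects by regression on ±1 coded factors and screening them on a normal probability plot, is given below; the 2^3 response values are invented, and the convention that an effect equals twice the regression coefficient is the usual one for two-level factorials.

    # Sketch: estimate the effects of a 2^3 full factorial by regression on +/-1 codes,
    # then inspect them on a normal probability plot. Response values are illustrative.
    import itertools
    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    # All eight runs of a 2^3 design in factors A, B, C (coded -1/+1).
    design = np.array(list(itertools.product([-1, 1], repeat=3)))
    A, B, C = design[:, 0], design[:, 1], design[:, 2]

    # Model matrix: intercept, main effects, and all interactions.
    X = np.column_stack([np.ones(8), A, B, C, A*B, A*C, B*C, A*B*C])
    y = np.array([60, 72, 54, 68, 52, 83, 45, 80], dtype=float)   # hypothetical responses

    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    effects = 2 * beta[1:]            # factorial effect = 2 x regression coefficient
    labels = ['A', 'B', 'C', 'AB', 'AC', 'BC', 'ABC']
    for lab, e in zip(labels, effects):
        print(f'{lab}: {e:+.2f}')

    # Normal probability plot of the effects; inert effects line up, active ones stand out.
    stats.probplot(effects, dist='norm', plot=plt)
    plt.show()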
Examining Associations between Occupation and Health by using Routinely Collected Data
When examining a large number of associations simultaneously, as happens when routinely collected data are used to assess associations between occupation and health, it is not obvious how best to identify associations requiring further investigation since some risks may be high, or low, by chance alone. We have developed an approach to deal with this problem which is relatively easy to apply and appropriate to applications where data are not too sparse. Observed to expected ratios are estimated using an empirical Bayes procedure. Anomalous associations can be seen as outliers in a normal probability plot of the log‐ratios. The method is illustrated in the analysis of 252 000 cancers registered in men in England during 1981–87.
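The authors' empirical Bayes procedure is not reproduced here; the rough sketch below only illustrates the general idea of inspecting log observed-to-expected ratios on a normal probability plot, with invented counts and no shrinkage for sparse cells.

    # Rough sketch: log(O/E) ratios on a normal probability plot, flagging extreme
    # associations. This is NOT the authors' empirical Bayes procedure; counts are
    # invented and no allowance is made for sparse cells.
    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(6)
    expected = rng.uniform(5, 200, size=300)               # expected case counts
    observed = rng.poisson(expected)                        # most associations are "null"
    observed[:3] = (expected[:3] * 2.5).astype(int)         # a few anomalous cells

    log_ratio = np.log(np.maximum(observed, 1) / expected)

    # Anomalous occupation-cancer pairs appear as outliers in the normal plot.
    stats.probplot(log_ratio, dist='norm', plot=plt)
    plt.title('Normal probability plot of log(O/E) ratios')
    plt.show()

    # A crude screen for the most extreme associations.
    z = (log_ratio - log_ratio.mean()) / log_ratio.std()
    print(np.where(np.abs(z) > 3)[0])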