38 results for "Loy, Adam"
Bringing Visual Inference to the Classroom
In the classroom, we traditionally visualize inferential concepts using static graphics or interactive apps. For example, there is a long history of using apps to visualize sampling distributions. The lineup protocol for visual inference is a recent development in statistical graphics that has created an opportunity to build student understanding. Lineups are created by embedding plots of observed data into a field of null (noise) plots. This arrangement facilitates comparison and helps build student intuition about the difference between signal and noise. Lineups can be used to visualize randomization/permutation tests, diagnose models, and even conduct valid inference when distributional assumptions break down. This article provides an overview of how the lineup protocol for visual inference can be used to build understanding of key statistical topics throughout the statistics curriculum. Supplementary materials for this article are available online.
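The lineup construction described in this abstract is simple enough to sketch directly. A minimal illustration, assuming null datasets are generated by permuting the response to destroy any x–y association; the function names are ours, not from the article's supplementary materials:

```python
import random

def build_lineup(observed, make_null, n_panels=20, seed=None):
    """Lineup protocol: hide the observed-data panel among null panels.
    Returns the panel list and the secret position of the real data."""
    rng = random.Random(seed)
    panels = [make_null() for _ in range(n_panels - 1)]
    pos = rng.randrange(n_panels)  # where the real data is hidden
    panels.insert(pos, observed)
    return panels, pos

# Null datasets: shuffle y so any signal between x and y is destroyed
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
null_rng = random.Random(0)

def permuted_null():
    ys = y[:]
    null_rng.shuffle(ys)
    return list(zip(x, ys))

panels, answer = build_lineup(list(zip(x, y)), permuted_null,
                              n_panels=20, seed=1)
```

A viewer who can pick the observed panel out of the 20 provides evidence of real structure; each panel would be rendered as a plot in practice.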
Climate change beliefs, concerns, and attitudes toward adaptation and mitigation among farmers in the Midwestern United States
A February 2012 survey of almost 5,000 farmers across a region of the U.S. that produces more than half of the nation’s corn and soybean revealed that 66 % of farmers believed climate change is occurring (8 % mostly anthropogenic, 33 % equally human and natural, 25 % mostly natural), while 31 % were uncertain and 3.5 % did not believe that climate change is occurring. Results of initial analyses indicate that farmers’ beliefs about climate change and its causes vary considerably, and the relationships between those beliefs, concern about the potential impacts of climate change, and attitudes toward adaptive and mitigative action differ in systematic ways. Farmers who believed that climate change is occurring and attributable to human activity were significantly more likely to express concern about impacts and support adaptive and mitigative action. On the other hand, farmers who attributed climate change to natural causes, were uncertain about whether it is occurring, or did not believe that it is occurring were less concerned, less supportive of adaptation, and much less likely to support government and individual mitigative action. Results suggest that outreach with farmers should account for these covariances in belief, concerns, and attitudes toward adaptation and mitigation.
Supporting Data Science in the Statistics Curriculum
This article describes a collaborative project across three institutions to develop, implement, and evaluate a series of tutorials and case studies that highlight fundamental tools of data science—such as visualization, data manipulation, and database usage—that instructors at a wide range of institutions can incorporate into existing statistics courses. The resulting materials are flexible enough to serve both introductory and advanced students, and aim to provide students with the skills to experiment with data, find their own patterns, and ask their own questions. In this article, we discuss in detail a tutorial on data visualization and a case study synthesizing data wrangling and visualization skills, and provide references to additional class-tested materials. R and R Markdown are used for all of the activities.
Questions (and Answers) for Incorporating Nontraditional Grading in Your Statistics Courses
Nontraditional grading methods have recently become more common, and as with any large pedagogical shift, there are a number of questions to consider when applying a new grading scheme to a course. This article summarizes four types of nontraditional grading and shares experiences from the authors, who have applied them to a variety of courses in statistics. This article is structured as a set of questions and answers, seeking to address many of the concerns and considerations that one may face when transitioning a course's grading structure. Supplementary materials for this article are available online.
Variations of Q-Q Plots: The Power of Our Eyes
In statistical modeling, we strive to specify models that resemble data collected in studies or observed from processes. Consequently, distributional specification and parameter estimation are central to parametric models. Graphical procedures, such as the quantile-quantile (Q-Q) plot, are arguably the most widely used method of distributional assessment, though critics find their interpretation to be overly subjective. Formal goodness-of-fit tests are available and are quite powerful, but only indicate whether there is a lack of fit, not why there is a lack of fit. In this article, we explore the use of the lineup protocol to inject rigor into graphical distributional assessment and compare its power to that of formal distributional tests. We find that lineup tests are considerably more powerful than traditional tests of normality. A further investigation into the design of Q-Q plots shows that de-trended Q-Q plots are more powerful than the standard approach, as long as the plot preserves equal distances in x and y. While we focus on diagnosing nonnormality, our approach is general and can be directly extended to the assessment of other distributions.
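A de-trended Q-Q plot, as discussed in this abstract, replaces the usual 45° reference line with a horizontal zero line by plotting the difference between sample and theoretical quantiles. A minimal sketch of that computation, assuming a normal reference fitted by the sample mean and SD (the article's own implementation is in R; these function names are illustrative):

```python
from statistics import NormalDist, mean, stdev

def qq_points(sample):
    """Normal Q-Q coordinates using (i - 0.5)/n plotting positions,
    with the reference normal fitted by the sample mean and SD."""
    xs = sorted(sample)
    n = len(xs)
    ref = NormalDist(mean(xs), stdev(xs))
    theo = [ref.inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]
    return theo, xs

def detrended_qq(sample):
    """De-trended version: plot sample minus theoretical quantiles,
    so departures from normality appear as departures from zero."""
    theo, xs = qq_points(sample)
    return theo, [x - t for x, t in zip(xs, theo)]
```

Plotting the second return value against the first gives the de-trended panel used in a lineup.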
Model Choice and Diagnostics for Linear Mixed-Effects Models Using Statistics on Street Corners
The complexity of linear mixed-effects (LME) models means that traditional diagnostics are rendered less effective. This is due to a breakdown of asymptotic results, boundary issues, and visible patterns in residual plots that are introduced by the model fitting process. Some of these issues are well known and adjustments have been proposed. Working with LME models typically requires that the analyst keep track of all the special circumstances that may arise. In this article, we illustrate a simpler but generally applicable approach to diagnosing LME models. We explain how to use new visual inference methods for these purposes. The approach provides a unified framework for diagnosing LME fits and for model selection. We illustrate the use of this approach on several commonly available datasets. A large-scale Amazon Mechanical Turk study was used to validate the methods. R code is provided for the analyses. Supplementary materials for this article are available online.
Are You Normal? The Problem of Confounded Residual Structures in Hierarchical Linear Models
We encounter hierarchical data structures in a wide range of applications. Regular linear models are extended by random effects to address correlation between observations in the same group. Inference for random effects is sensitive to distributional misspecifications of the model, making checks for (distributional) assumptions particularly important. The investigation of residual structures is complicated by the presence of different levels and corresponding dependencies. Ignoring these dependencies leads to erroneous conclusions using our familiar tools, such as Q-Q plots or normality tests. We first show the extent of the problem, then we introduce the fraction of confounding as a measure of the level of confounding in a model, and finally introduce rotated random effects as a solution for assessing distributional model assumptions. This article has supplementary materials online.
A tale of four cities: exploring the soul of State College, Detroit, Milledgeville and Biloxi
Can data help us explore and expose the soul of the community? This was the challenge posed by the 2013 Data Exposition. The Knight Foundation, in cooperation with Gallup, furnished data from 43,000 people over 3 years (2008–2010) in 26 communities, which we explored in an effort to discover variables associated with community attachment. Our analysis focused on four cities that stood out after our initial exploration of the data set: State College, PA; Detroit, MI; Milledgeville, GA; and Biloxi, MS. We present our use of survey-weighted binned scatterplots to graphically explore the association between an individual’s community attachment and perceived economic outlook. Additionally, we present a few other analyses we found interesting during our initial exploration which we view as a collection of “short stories”.
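The survey-weighted binned scatterplots mentioned in this abstract summarize a large survey by the weighted mean of the response within bins of the predictor. A minimal sketch under the assumption of equal-width bins; this is not the authors' code, and their analysis would also carry the survey design through the weights:

```python
def weighted_binned_means(x, y, w, n_bins=10):
    """For each equal-width bin of x, return the bin midpoint and the
    weighted mean of y among observations falling in that bin."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins
    sums = [0.0] * n_bins
    wts = [0.0] * n_bins
    for xi, yi, wi in zip(x, y, w):
        # clamp the max of x into the last bin
        b = min(int((xi - lo) / width), n_bins - 1)
        sums[b] += wi * yi
        wts[b] += wi
    mids = [lo + (b + 0.5) * width for b in range(n_bins)]
    means = [s / t if t > 0 else None for s, t in zip(sums, wts)]
    return mids, means
```

Plotting the bin midpoints against the weighted means replaces an overplotted cloud of 43,000 points with one readable trend.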
Bringing Visual Inference to the Classroom
In the classroom, we traditionally visualize inferential concepts using static graphics or interactive apps. For example, there is a long history of using apps to visualize sampling distributions. Recent developments in statistical graphics have created an opportunity to bring additional visualizations into the classroom to hone student understanding. Specifically, the lineup protocol for visual inference provides a framework for students to see the difference between signal and noise by embedding a plot of observed data in a field of null (noise) plots. Lineups have proven valuable in visualizing randomization/permutation tests, diagnosing models, and even conducting valid inference when distributional assumptions break down. This paper provides an overview of how the lineup protocol for visual inference can be used to hone understanding of key statistical topics throughout the statistics curriculum.
Bootstrapping Clustered Data in R using lmeresampler
Linear mixed-effects models are commonly used to analyze clustered data structures. There are numerous packages to fit these models in R and conduct likelihood-based inference. The implementation of resampling-based procedures for inference is more limited. In this paper, we introduce the lmeresampler package for bootstrapping nested linear mixed-effects models fit via lme4 or nlme. Bootstrap estimation allows for bias correction, adjusted standard errors, and confidence intervals for small sample sizes and when distributional assumptions break down. We also illustrate how bootstrap resampling can be used to diagnose this model class. In addition, lmeresampler makes it easy to construct interval estimates of functions of model parameters.
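lmeresampler itself is an R package built around lme4 and nlme model objects. As a language-neutral illustration of the underlying idea, here is a minimal case (cluster) bootstrap sketch, resampling whole clusters with replacement before recomputing a statistic; the function and variable names are ours, not from the package:

```python
import random
from statistics import mean

def cluster_bootstrap(data, stat, B=1000, seed=0):
    """Case bootstrap for clustered data: resample whole clusters with
    replacement, pool their observations, and recompute the statistic."""
    rng = random.Random(seed)
    ids = list(data)  # cluster labels
    reps = []
    for _ in range(B):
        draw = [rng.choice(ids) for _ in ids]
        pooled = [y for cid in draw for y in data[cid]]
        reps.append(stat(pooled))
    return reps

# Usage: bootstrap distribution of the overall mean across three clusters
data = {"a": [1.0, 1.2], "b": [2.0, 2.1, 1.9], "c": [0.8, 1.1]}
reps = cluster_bootstrap(data, mean, B=200, seed=42)
lo, hi = sorted(reps)[4], sorted(reps)[194]  # rough 95% percentile interval
```

Resampling at the cluster level, rather than the observation level, preserves the within-cluster dependence that motivates mixed-effects models in the first place.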