1,546 results for "Measurement, Statistics, and Research Design"
Examining the Rule of Thumb of Not Using Multilevel Modeling: The "Design Effect Smaller Than Two" Rule
Educational researchers commonly use the rule of thumb of "design effect smaller than 2" as justification for not accounting for the multilevel or clustered structure in their data. The rule, however, has not yet been systematically studied in previous research. In the present study, we generated data from three different models (which differ in the location of the clustering effect). With a 3 (design effect) × 5 (cluster size) × 4 (number of clusters) Monte Carlo simulation study, we found that the rule should not be applied when researchers (a) are interested in the effects of higher-level predictors, or (b) have a cluster size of less than 10. Implications of the findings and limitations of the study are discussed.
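The rule in question rests on the standard design-effect formula. A minimal sketch, assuming equal cluster sizes and the usual formula deff = 1 + (m − 1) × ICC; the cluster sizes and ICC values below are illustrative, not taken from the study:

```python
# Design effect for cluster-sampled data under the standard formula
# deff = 1 + (m - 1) * ICC (equal cluster sizes assumed; values illustrative).

def design_effect(cluster_size: int, icc: float) -> float:
    return 1 + (cluster_size - 1) * icc

for m in (5, 10, 25):
    for icc in (0.05, 0.10, 0.20):
        deff = design_effect(m, icc)
        verdict = "rule says ignore clustering" if deff < 2 else "model the clustering"
        print(f"m={m:3d}  ICC={icc:.2f}  deff={deff:.2f}  -> {verdict}")
```

Note how the same ICC can land on either side of the threshold depending on cluster size, which is one reason the rule interacts with the simulation's cluster-size factor.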
The Surprisingly Modest Relationship Between SES and Educational Achievement
Measures of socioeconomic status (SES) are routinely used in analyses of achievement data to increase statistical power, statistically control for the effects of SES, and enhance causality arguments under the premise that the SES-achievement relationship is moderate to strong. Empirical evidence characterizing the strength of the SES-achievement relationship and its moderators suggests that this relationship is surprisingly modest, with an average SES-achievement correlation of .22, although it appears to have strengthened in the past 3 decades. The modest SES-achievement relationship has important implications for using SES measures in educational data analyses. We provide evidence of this relationship and of the need to use theoretical models to guide the construction and selection of SES measures in analyses of achievement data.
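To make "modest" concrete (my arithmetic, not a figure from the article), squaring the reported average correlation gives the share of achievement variance associated with SES:

```latex
r = .22 \quad\Longrightarrow\quad r^{2} = (.22)^{2} \approx .048
```

that is, roughly 5% of the variance in achievement, on average, is shared with measured SES.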
Doubly Latent Multilevel Analyses of Classroom Climate: An Illustration
Many classroom climate studies suffer from 2 critical problems: They (a) treat climate as a student-level (L1) variable in single-level analyses instead of a classroom-level (L2) construct in multilevel analyses; and (b) rely on manifest-variable models rather than on latent-variable models that control measurement error at L1 and L2, and sampling error in the aggregation of L1 ratings to form L2 constructs. On the basis of an analysis of 2,541 students in Grades 5 or 6 from 89 classrooms, the authors demonstrate doubly latent multilevel structural equation models that overcome both of these problems. The results show that L2 classroom climate (a higher-order factor representing classroom mastery goal orientation, challenge, and teacher caring) had positive effects on self-efficacy and achievement. The authors conclude with a discussion of related issues (e.g., the meaning of L2 constructs vs. L1 residuals, the dimensionality of climate constructs at L2) and guidelines for future research. [Supplementary materials are available for this article. Go to the publisher's online edition of The Journal of Experimental Education for the following free supplemental resource(s): Appendices and Supplemental Tables.]
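The sampling-error problem the authors target can be made concrete with the reliability of an aggregated classroom mean. A rough sketch using the standard one-way ANOVA estimates of ICC(1) and ICC(2) on simulated ratings (the class and sample sizes echo the article, but the variance components are invented, and this sketch does not attempt the full doubly latent model, which would be fit in SEM software):

```python
import numpy as np

# Simulated climate ratings: 89 classrooms, 28 students each (sizes echo the
# article's sample; the variance components are illustrative).
rng = np.random.default_rng(0)
n_class, n_per = 89, 28
class_effect = rng.normal(0, 0.5, n_class)                 # true L2 differences
ratings = class_effect[:, None] + rng.normal(0, 1.0, (n_class, n_per))

# One-way ANOVA decomposition -> ICC(1) and ICC(2).
grand = ratings.mean()
msb = n_per * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n_class - 1)
msw = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n_class * (n_per - 1))
icc1 = (msb - msw) / (msb + (n_per - 1) * msw)   # reliability of one student's rating
icc2 = (msb - msw) / msb                         # reliability of the classroom mean
print(f"ICC(1) = {icc1:.2f}   ICC(2) = {icc2:.2f}")
```

An ICC(2) below 1 is exactly the sampling error that manifest aggregation ignores and that the latent-aggregation part of the doubly latent model corrects.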
Application of Exploratory Structural Equation Modeling to Evaluate the Academic Motivation Scale
In this research, the authors examined the construct validity of scores of the Academic Motivation Scale using exploratory structural equation modeling. Study 1 and Study 2 involved 1,416 college students and 4,498 high school students, respectively. First, results of both studies indicated that the factor structure tested with exploratory structural equation modeling provides better fit to the data than the one tested with confirmatory factor analysis. Second, the factor structure was gender invariant in the exploratory structural equation modeling framework. Third, the pattern of convergent and divergent correlations among Academic Motivation Scale factors was more in line with theoretical expectations when computed with exploratory structural equation modeling rather than confirmatory factor analysis. Fourth, the configuration of convergent and divergent correlations connecting each Academic Motivation Scale factor to a validity criterion was more in line with theoretical expectations with exploratory structural equation modeling than with confirmatory factor analysis.
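The measurement-model difference driving these results can be sketched as follows (hypothetical: the third-party factor_analyzer package stands in for the ESEM software the authors used, and the 12-item, 3-factor data are simulated rather than actual AMS responses). CFA fixes every cross-loading to exactly zero; ESEM estimates the full loading matrix under a rotation, so small cross-loadings are absorbed instead of inflating factor correlations:

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Simulated 12-item, 3-factor scale with two small cross-loadings
# (stand-in data; the AMS itself has 28 items and 7 factors).
rng = np.random.default_rng(1)
true_loadings = np.zeros((12, 3))
for f in range(3):
    true_loadings[f * 4:(f + 1) * 4, f] = 0.7
true_loadings[1, 1] = true_loadings[5, 2] = 0.25          # small cross-loadings
factors = rng.multivariate_normal(np.zeros(3), np.eye(3), 1000)
items = factors @ true_loadings.T + rng.normal(0, 0.6, (1000, 12))

# ESEM-style measurement part: all loadings estimated, then obliquely rotated.
fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(items)
print(np.round(fa.loadings_, 2))  # cross-loadings surface as small nonzero values
                                  # rather than being forced to zero as in CFA
```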
Using the Modification Index and Standardized Expected Parameter Change for Model Modification
Model modification is often conducted after discovering a badly fitting structural equation model. During the modification process, the modification index (MI) and the standardized expected parameter change (SEPC) are 2 statistics that may be used to aid in the selection of parameters to add to a model to improve its fit. The purpose of this study was to extend the literature by examining the performance of the MI and the SEPC, used independently and in conjunction with one another, in arriving at the correct confirmatory factor model. The results indicated that, in general, the SEPC outperformed the MI in arriving at the correct confirmatory factor model. However, the two performed more similarly as factor loading size, sample size, and misspecified parameter size increased. The author provides recommendations on the conditions under which the MI and the SEPC perform best.
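For orientation (standard results about these statistics, not restated in the abstract): for a single parameter fixed at zero, both statistics derive from the same derivatives of the fit function,

```latex
\mathrm{MI} = \frac{g^{2}}{I}, \qquad \mathrm{EPC} = \frac{g}{I}, \qquad \text{so} \qquad \mathrm{MI} = \mathrm{EPC}^{2}\, I,
```

where g is the gradient of the fit function with respect to the fixed parameter and I is the corresponding information; the SEPC is the EPC rescaled to a standardized metric. This is why the two can disagree: a large MI can reflect either a large expected change or merely high estimation precision.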
Comparing Methods for Addressing Missingness in Longitudinal Modeling of Panel Data
Respondent attrition is a common problem in national longitudinal panel surveys. To make full use of the data, weights are provided to account for attrition. Weight adjustments are based on sampling design information and data from the base year; information from subsequent waves is typically not utilized. Alternative methods to address nonresponse bias are full information maximum likelihood (FIML) and multiple imputation (MI). The effects of these methods on the bias of growth parameter estimates are compared via a simulation study. The results indicate that caution is needed when using panel weights in the presence of missing data, and that methods such as FIML and MI, which are less susceptible to the omission of important auxiliary variables, should be considered.
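A minimal sketch of the MI workflow (my construction, not the authors' simulation design: scikit-learn's IterativeImputer provides the imputation model, attrition is made to depend on the wave-1 score, and only the point estimate is pooled, omitting Rubin's variance formula):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy three-wave panel with wave-3 attrition that depends on wave 1 (MAR).
rng = np.random.default_rng(2)
n = 500
y1 = rng.normal(50, 10, n)
y2 = y1 + rng.normal(2, 5, n)
y3 = y2 + rng.normal(2, 5, n)
y3[y1 < np.percentile(y1, 30)] = np.nan
data = np.column_stack([y1, y2, y3])

# m imputations, each analyzed separately, then pooled across imputations.
m, estimates = 20, []
for k in range(m):
    completed = IterativeImputer(sample_posterior=True, random_state=k).fit_transform(data)
    estimates.append(completed[:, 2].mean())   # analysis step: wave-3 mean
print(f"complete-case mean: {np.nanmean(y3):.2f}   pooled MI mean: {np.mean(estimates):.2f}")
```

Because the imputation model uses the earlier waves as auxiliary information, the pooled estimate recovers much of the attrition bias visible in the complete-case mean.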
Multilevel Modeling and Ordinary Least Squares Regression: How Comparable Are They?
Studies analyzing clustered data sets using both multilevel models (MLMs) and ordinary least squares (OLS) regression have generally concluded that the resulting point estimates, but not the standard errors, are comparable with each other. However, the accuracy of the estimates of OLS models is important to consider, as several alternative techniques (e.g., bootstrapping) used when analyzing clustered data sets adjust only the standard errors, not the regression coefficients. Using a Monte Carlo simulation, we analyzed 54,000 data sets with both MLM and OLS under varying conditions. We show that the coefficients of not just OLS models but of MLMs as well may be biased when relevant higher-level variables are omitted from a model, a situation likely to occur when using large-scale, secondary data sets. However, we demonstrate that by including aggregated level-one variables at the higher level, the resulting bias can be effectively removed.
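A minimal sketch of the fix the authors demonstrate (the simulated data and the statsmodels library are my choices; the point is the Mundlak-style cluster mean, an aggregated level-one variable entered at the higher level):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Clustered data where an omitted cluster-level variable z biases the slope on x.
rng = np.random.default_rng(3)
n_clusters, n_per = 100, 20
cluster = np.repeat(np.arange(n_clusters), n_per)
z = rng.normal(0, 1, n_clusters)                        # omitted L2 variable
x = z[cluster] + rng.normal(0, 1, n_clusters * n_per)   # x correlated with z
y = 0.5 * x + 1.0 * z[cluster] + rng.normal(0, 1, n_clusters * n_per)
df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})
df["x_mean"] = df.groupby("cluster")["x"].transform("mean")  # aggregated L1 variable

naive = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit()
adjusted = smf.mixedlm("y ~ x + x_mean", df, groups=df["cluster"]).fit()
print(f"true within-cluster slope: 0.50   naive MLM: {naive.params['x']:.2f}   "
      f"with cluster mean: {adjusted.params['x']:.2f}")
```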
The Development and Validation of the University Belonging Questionnaire
Although belonging in K-12 school settings has been abundantly researched and clearly defined, at the university level the research and construct definition are still in their infancy (Tovar & Simon, 2010). The present study sought to develop and validate an instrument measuring university belonging: the University Belonging Questionnaire (UBQ). In Study 1, an exploratory factor analysis was conducted with a sample of university students (N = 421), yielding a reliable scale with three factors: (a) university affiliation, (b) university support and acceptance, and (c) faculty and staff relations. In Study 2, a confirmatory factor analysis on a new sample (N = 290) confirmed the final 3-factor, 24-item model. Further analyses demonstrated the convergent and incremental validity of the UBQ, as it correlated positively with measures of perceived social support, social connectedness, and general belonging. Implications and recommendations for university belonging research are discussed.
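The Study 2 confirmation step might look like the sketch below (hypothetical throughout: the semopy package stands in for whatever SEM software was used, three items per factor replace the UBQ's 24 items, and the item names are invented shorthand for the three reported factors):

```python
import numpy as np
import pandas as pd
import semopy  # pip install semopy

# Made-up data standing in for UBQ responses (N = 290, 9 stand-in items).
rng = np.random.default_rng(4)
f = rng.multivariate_normal([0, 0, 0], [[1, .5, .4], [.5, 1, .5], [.4, .5, 1]], 290)
df = pd.DataFrame({f"i{j + 1}": 0.7 * f[:, j // 3] + rng.normal(0, 0.6, 290)
                   for j in range(9)})

desc = """
affiliation =~ i1 + i2 + i3
support     =~ i4 + i5 + i6
relations   =~ i7 + i8 + i9
"""
model = semopy.Model(desc)
model.fit(df)
print(semopy.calc_stats(model).T)   # chi-square, CFI, RMSEA, etc.
```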
Propensity Score Matching for Education Data: Worked Examples
Randomized controlled trials are not always feasible in educational research, so researchers must use alternative methods to study treatment effects. Propensity score matching is one such method for observational studies that has shown considerable growth in popularity since it was first introduced in the early 1980s. This paper outlines the concept of propensity scores by explaining their theoretical principles and providing two examples of their usefulness within the realm of educational research. Through worked examples, we highlight the effectiveness of propensity scores as a method for reducing bias and increasing the balance between treatment and comparison groups. To aid in the understanding and future use of propensity scores, we provide R syntax for all our analyses.
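The paper's worked examples are in R; the sketch below restates the same three steps in Python under simplifying assumptions (simulated data, logistic-regression propensity scores, greedy 1:1 nearest-neighbor matching with replacement, no caliper or balance diagnostics):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Simulated observational data: treatment assignment depends on covariates.
rng = np.random.default_rng(5)
n = 2000
X = rng.normal(0, 1, (n, 3))
p_treat = 1 / (1 + np.exp(-(X @ np.array([0.8, -0.5, 0.3]))))
treated = rng.random(n) < p_treat
y = X.sum(axis=1) + 2.0 * treated + rng.normal(0, 1, n)   # true effect = 2

# Step 1: estimate propensity scores.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the nearest control on the score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = y[~treated][idx.ravel()]

# Step 3: estimate the effect on the matched sample.
print(f"naive difference:  {y[treated].mean() - y[~treated].mean():.2f}")
print(f"matched estimate:  {(y[treated] - matched_controls).mean():.2f}")
```

The naive difference absorbs the covariate imbalance between groups; matching on the score restores balance and pulls the estimate toward the true effect.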
Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study
This article reports on a Monte Carlo simulation study evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and combining randomization test p values with the additive method (RTcombiP). Four factors were manipulated: mean intervention effect, number of cases included in a study, number of measurement occasions for each case, and between-case variance. Under the simulated conditions, the Type I error rate was under control at the nominal 5% level for both HLM and RTcombiP. Furthermore, for both procedures, a larger number of combined cases resulted in higher statistical power, with many realistic conditions reaching statistical power of 80% or higher. Smaller values of the between-case variance resulted in higher power for HLM. A larger number of data points resulted in higher power for RTcombiP.
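A compact sketch of the RTcombiP side (my simulation settings, chosen only to mirror the design's flavor: four cases, a randomized B-phase start point, and Edgington's additive method, whose combined p value is the Irwin-Hall CDF evaluated at the sum of the case p values):

```python
import numpy as np
from math import comb, factorial, floor

rng = np.random.default_rng(6)

def ab_randomization_p(y, start, possible_starts):
    """Randomization test for one AB case: proportion of admissible start
    points whose B-minus-A mean difference is at least the observed one."""
    def stat(s):
        return y[s:].mean() - y[:s].mean()
    observed = stat(start)
    return np.mean([stat(s) >= observed for s in possible_starts])

def additive_combined_p(pvals):
    """Edgington's additive method: P(sum of k independent uniforms <= sum
    of the observed p values), i.e., the Irwin-Hall CDF."""
    s, k = sum(pvals), len(pvals)
    return sum((-1) ** j * comb(k, j) * (s - j) ** k
               for j in range(floor(s) + 1)) / factorial(k)

# Four replicated AB cases, 20 occasions each, a 1 SD intervention effect,
# and the start point randomized among occasions 5 through 15.
starts = range(5, 16)
pvals = []
for _ in range(4):
    start = int(rng.choice(np.arange(5, 16)))
    y = rng.normal(0, 1, 20)
    y[start:] += 1.0                               # intervention effect
    pvals.append(ab_randomization_p(y, start, starts))
print(f"case p values: {np.round(pvals, 3)}   combined: {additive_combined_p(pvals):.4f}")
```

Because randomization p values are discrete, the continuous Irwin-Hall combination is slightly conservative, consistent with the controlled Type I error rates the study reports.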