660 result(s) for "ordinal data"
Generalized joint attribute modeling for biodiversity analysis: median-zero, multivariate, multifarious data
Probabilistic forecasts of species distribution and abundance require models that accommodate the range of ecological data, including a joint distribution of multiple species based on combinations of continuous and discrete observations, mostly zeros. We develop a generalized joint attribute model (GJAM), a probabilistic framework that readily applies to data that are combinations of presence-absence, ordinal, continuous, discrete, composition, zero-inflated, and censored types. It does so as a joint distribution over all species, providing inference on sensitivity to input variables, correlations between species on the data scale, prediction, sensitivity analysis, definition of community structure, and missing-data imputation. GJAM applications illustrate its flexibility across the range of species-abundance data. Applications to forest inventories demonstrate species relationships responding as a community to environmental variables, and show that the environment can be inversely predicted from the joint distribution of species. Application to microbiome data demonstrates how inverse prediction in the GJAM framework accelerates variable selection by isolating each input variable's influence across all species.
OpenMx 2.0: Extended Structural Equation and Statistical Modeling
The new software package OpenMx 2.0 for structural equation and other statistical modeling is introduced and its features are described. OpenMx is evolving in a modular direction and now allows a mix-and-match computational approach that separates model expectations from fit functions and optimizers. Major backend architectural improvements include a move to swappable open-source optimizers such as the newly written CSOLNP. Entire new methodologies such as item factor analysis and state space modeling have been implemented. New model expectation functions including support for the expression of models in LISREL syntax and a simplified multigroup expectation function are available. Ease-of-use improvements include helper functions to standardize model parameters and compute their Jacobian-based standard errors, access to model components through standard R $ mechanisms, and improved tab completion from within the R Graphical User Interface.
Best Practices for Binary and Ordinal Data Analyses
The measurement of many human traits, states, and disorders begins with a set of items on a questionnaire. The response format for these questions is often simply binary (e.g., yes/no) or ordered (e.g., high, medium or low). During data analysis, these items are frequently summed or used to estimate factor scores. In clinical applications, such assessments are often non-normally distributed in the general population because many respondents are unaffected, and therefore asymptomatic. As a result, in many cases these measures violate the statistical assumptions required for subsequent analyses. To reduce the influence of the non-normality and quasi-continuous assessment, variables are frequently recoded into binary (affected–unaffected) or ordinal (mild–moderate–severe) diagnoses. Ordinal data therefore present challenges at multiple levels of analysis. Categorizing continuous variables into ordered categories typically results in a loss of statistical power, which represents an incentive to the data analyst to assume that the data are normally distributed, even when they are not. Despite prior zeitgeists suggesting that, e.g., variables with more than 10 ordered categories may be regarded as continuous and analyzed as if they were, we show via simulation studies that this is not generally the case. In particular, using Pearson product-moment correlations instead of maximum likelihood estimates of polychoric correlations biases the estimated correlations towards zero. This bias is especially severe when a plurality of the observations fall into a single observed category, such as a score of zero. By contrast, estimating the ordinal correlation by maximum likelihood yields no estimation bias, although standard errors are (appropriately) larger. We also illustrate how odds ratios depend critically on the proportion or prevalence of affected individuals in the population, and therefore are sub-optimal for studies where comparisons of association metrics are needed. 
Finally, we extend these analyses to the classical twin model and demonstrate that treating binary data as continuous will underestimate genetic and common environmental variance components, and overestimate unique environment (residual) variance. These biases increase as prevalence declines. While modeling ordinal data appropriately may be more computationally intensive and time consuming, failing to do so will likely yield biased correlations and biased parameter estimates from modeling them.
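The attenuation described above is easy to reproduce. The sketch below is our own illustration in Python, not the authors' code: it simulates bivariate normal data with a known correlation, recodes it into ordered categories, and shows the Pearson correlation shrinking toward zero, most severely under a highly skewed binary coding where almost all observations fall in a single category.

```python
# Illustrative simulation: Pearson correlation on categorized data is
# biased toward zero relative to the continuous data, and the bias
# worsens when most observations fall into one category.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
rho = 0.6  # true correlation of the underlying continuous traits

# Bivariate normal with correlation rho
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Symmetric 3-category ordinal coding (cutpoints at -0.5 and 0.5)
x_ord = np.digitize(x, [-0.5, 0.5])
y_ord = np.digitize(y, [-0.5, 0.5])

# Highly skewed binary coding: most observations score zero
x_bin = (x > 1.5).astype(int)
y_bin = (y > 1.5).astype(int)

r_cont = pearson(x, y)         # close to the true 0.6
r_ord = pearson(x_ord, y_ord)  # attenuated toward zero
r_bin = pearson(x_bin, y_bin)  # severely attenuated
print(r_cont, r_ord, r_bin)
```

A maximum-likelihood polychoric estimate on the categorized data would recover a value near 0.6; the point of the sketch is only that the product-moment correlation on the observed categories cannot.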
A consistent test of independence based on a sign covariance related to Kendall's tau
The most popular ways to test for independence of two ordinal random variables are by means of Kendall's tau and Spearman's rho. However, such tests are not consistent, only having power for alternatives with "monotone" association. In this paper, we introduce a natural extension of Kendall's tau, called τ*, which is non-negative and zero if and only if independence holds, thus leading to a consistent independence test. Furthermore, normalization gives a rank correlation which can be used as a measure of dependence, taking values between zero and one. A comparison with alternative measures of dependence for ordinal random variables is given, and it is shown that, in a well-defined sense, τ* is the simplest, similarly to Kendall's tau being the simplest of ordinal measures of monotone association. Simulation studies show our test compares well with the alternatives in terms of average p-values.
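The inconsistency of the plain Kendall's tau test is easy to demonstrate. The check below is our own sketch (it illustrates the motivation for τ* rather than implementing it): tau readily detects a monotone association but is essentially blind to a strong non-monotone one.

```python
# Kendall's tau has power against monotone dependence only; a clear
# umbrella-shaped dependence yields a tau near zero.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)
n = 2_000

x = rng.normal(size=n)
y_monotone = x + 0.5 * rng.normal(size=n)     # monotone association
y_umbrella = x**2 + 0.5 * rng.normal(size=n)  # dependent but non-monotone

tau_mono, p_mono = kendalltau(x, y_monotone)
tau_umbr, p_umbr = kendalltau(x, y_umbrella)

print(tau_mono, p_mono)  # large tau, tiny p-value
print(tau_umbr, p_umbr)  # tau near zero despite strong dependence
```

A consistent test such as the τ* of this paper would, by contrast, reject independence in both scenarios given enough data.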
Guidelines for the use and statistical analysis of the Home Office fingermark grading scheme for comparing fingermark development techniques
Highlights:
  • Data generated using the Home Office fingermark grading scheme should not be analysed using averages.
  • The different degrees of fingermark development should be categorised as Class 0 to Class 4 and never numerically 0 to 4.
  • Summing the frequency of fingermarks which score a Class 3 or Class 4 allows for simple analysis and presentation.
  • Statistical tests are not mandatory, so long as researchers do not overstate their conclusions.
This paper provides guidance on how to properly analyse data generated from the Home Office fingermark grading scheme. The core of the issue is that it creates ordinal data and should therefore not be analysed using averages. To reduce confusion, it is recommended to label the different degrees of fingermark development as classes rather than numerical scores. Appropriate statistical tests are provided to properly analyse Home Office fingermark grading scheme data; however, not using statistical tests is perfectly acceptable so long as conclusions are worded appropriately and do not exaggerate the significance of the findings. Some guidance is provided on estimating sample size and optimal methods for presenting results.
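The recommended summary can be sketched in a few lines. This is our own illustration with hypothetical grades, not code from the guidance: the grades are kept as labelled classes, and each technique is summarised by the count of Class 3 or Class 4 marks rather than a (meaningless) average of 0-4 scores.

```python
# Treat grades as labelled classes and count high grades per technique,
# instead of averaging ordinal scores.
from collections import Counter

# Hypothetical grades for two development techniques (Class 0 .. Class 4)
technique_a = ["Class 0", "Class 2", "Class 3", "Class 4", "Class 4", "Class 3"]
technique_b = ["Class 0", "Class 1", "Class 1", "Class 2", "Class 3", "Class 0"]

def high_grade_count(grades):
    """Number of marks graded Class 3 or Class 4."""
    counts = Counter(grades)
    return counts["Class 3"] + counts["Class 4"]

print(high_grade_count(technique_a))  # 4
print(high_grade_count(technique_b))  # 1
```

Keeping the labels as strings makes it impossible to average them by accident, which is exactly the mistake the guidance warns against.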
Conducting Measurement Invariance Tests with Ordinal Data: A Guide for Social Work Researchers
Objective: The validity of measures across groups is a major concern for social work researchers and practitioners. Many social workers use scales, or sets of questionnaire items, with ordinal response options. However, a review of social work literature indicates the appropriate treatment of ordinal data in measurement invariance tests is rare; only 3 of 57 articles published in 26 social work journals over the past 12 years used proper testing procedures. This article synthesizes information from the literature and provides recommendations for appropriate measurement invariance procedures with ordinal data. Method: We use data from the Cebu Longitudinal Health and Nutrition Survey to demonstrate applications of invariance testing with ordinal data. Using a robust weighted least squares estimator and polychoric correlation matrix, we examine invariance of a 10-item Perceived Stress Scale (PSS) across 2 young adult groups defined by health status. We describe 2 competing approaches: a 4-step approach, in which factor loadings and thresholds are tested and constrained separately; and a 3-step approach, in which loadings and thresholds are tested and constrained in tandem. Results: Both approaches lead to the same conclusion that the 2 dimensions of the PSS are noninvariant across health status. In the absence of invariance, mean scores on the PSS factors cannot be validly compared across groups, nor should latent variables be used in the hypothesis testing across the 2 groups. Readers are directed to online resources. Conclusions: Careful examination of social work scales is likely to reveal fit or noninvariance problems across some groups. Use of appropriate methods for invariance testing will reduce misuse of measures in practice and improve the rigor and quality of social work publications.
The Relationship Between Subjective Wellbeing and Subjective Wellbeing Inequality: An Important Role for Skewness
We argue that the relationship between individual satisfaction with life (SWL) and SWL inequality is more complex than described by earlier research. Our measures of SWL inequality include indices designed specifically for ordinal data as well as often used (but inappropriate) measures suited to cardinal data. Using inequality indices derived by Cowell and Flachaire designed for use with ordinal data, our analysis shows that skewness of the SWL distribution, rather than inequality per se, matters for individual SWL outcomes. The empirical analysis is based on repeated cross-section data obtained from the World Values Survey. Our results are consistent with there being negative externalities for an individual’s SWL arising from people who are low in the SWL distribution, with positive externalities arising from people who are high in the SWL distribution.
Statistical analysis of Likert-based ordinal scales: a guide for clinical trialists
Background Likert-based scales are a popular tool in clinical trials for assessing patient-reported outcomes. A key analytical decision involves whether to treat these data as binary, continuous, or ordinal. Each approach has implications for statistical power, bias, and interpretation of the results. In this report, we examine methods for evaluating Likert scales, with a particular focus on ordinal approaches, including win probability methods and proportional odds models. Methods We examined the use of proportional odds logistic regression, win probability estimation, dichotomisation with binary logistic regression, and linear regression for analysis of Likert scale-based outcomes. We applied these analytical approaches to patient-reported discomfort data from MyTEMP, a randomised trial comparing personalised cooler dialysate to standard-temperature dialysate in patients undergoing hemodialysis. We also conducted a simulation study to evaluate bias, coverage, and statistical power for each method under proportional and non-proportional odds scenarios across varying sample sizes and outcome distributions. Results In the MyTEMP trial, ordinal analyses showed patients receiving personalised cooler dialysate reported greater discomfort related to feeling cold than those receiving standard dialysate (win probability 64%, win difference 28%, win ratio 1.70; all p ≤ 0.001). The proportional odds model suggested an average twofold increase in the odds of greater discomfort for the intervention (odds ratio 2.25), though the model assumption was violated. A partial proportional odds model revealed stronger intervention effects at higher discomfort thresholds (e.g., nearly sixfold odds at the highest discomfort scores).
Simulations demonstrated that ordinal methods (win probability and proportional odds models) generally had higher statistical power and lower bias than methods involving dichotomisation or treating ordinal data as continuous, particularly in the presence of skewed outcome distributions. Conclusion Our work demonstrated that analysing Likert-based outcomes using ordinal methods yields greater statistical power and more nuanced interpretations than dichotomisation or treating data as continuous.
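The win probability discussed above can be computed directly from the Mann-Whitney U statistic. Below is a minimal sketch with hypothetical Likert scores (our own illustration, not the trial's analysis code): with ties split evenly, win probability = U / (n1 × n2), and the win difference is 2 × win probability − 1.

```python
# Win probability: the chance that a randomly chosen treated patient
# reports a higher ordinal score than a randomly chosen control,
# counting ties as half a win. It equals U / (n1 * n2).
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert discomfort scores
treatment = np.array([3, 4, 5, 4, 3, 5, 2, 4, 5, 3])
control = np.array([1, 2, 3, 2, 1, 3, 2, 2, 4, 1])

u_stat, p_value = mannwhitneyu(treatment, control, alternative="two-sided")
win_probability = u_stat / (len(treatment) * len(control))
win_difference = 2 * win_probability - 1

print(win_probability, win_difference, p_value)
```

Because the estimate is rank-based, it uses the ordinal structure of the Likert scores without assuming the distances between categories are equal, which is what gives these methods their power advantage over dichotomisation or linear regression on skewed outcomes.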
A Novel Performance Measurement of Innovation and R&D Projects for Driving Digital Transformation in Construction Using Ordinal Priority Approach
“The COVID-19 pandemic” and “digital transformation” are prevailing mega-trends of a volatile, uncertain, complex, and ambiguous world. The pandemic forced organizations to adopt strategies for “surviving, resilience, and thriving” over different periods. Digital transformation motivates organizations to seize disruptive opportunities by conducting innovation and R&D projects. This study addresses the question: which innovation and R&D projects can drive digital transformation in a construction organization and meet its organizational criteria for survival, resilience, and thriving? A significant issue in evaluating R&D projects is the lack of quantitative data. The current study proposes a novel performance measurement framework based on the Ordinal Priority Approach (OPA), which treats decision-makers’ preferences as ordinal data. The proposed model assigns varying degrees of importance to periods during performance measurement and handles any number of criteria and decision-making units (DMUs), whereas Data Envelopment Analysis (DEA) has limitations on the number of DMUs relative to the number of criteria. Moreover, the proposed OPA-based framework accepts positive and negative criteria without the need to transform the data. Comparing OPA and ordinal DEA through a pilot experiment shows that the OPA-based framework is straightforward to use, with more reliable outputs.
Multiple Ordinal Correlation Based on Kendall’s Tau Measure: A Proposal
The joint analysis of various ordinal variables is necessary in many experimental studies within research fields such as sociology and psychology. Therefore, the necessary measures of multiple ordinal dependence must be easy to interpret and facilitate the interpretation of multivariate models that fit ordinal data. The main objective of this article is to propose a multiple ordinal correlation measure based on a bivariate correlation measure: Kendall’s tau. A sample version of the measure is proposed for its estimation. Furthermore, a confidence interval and a multiple ordinal independence test are proposed. The measure is applied to various simulations, covering a wide range of multiple ordinal dependency scenarios, in order to illustrate the adequacy of the measure and the proposed inferential techniques. Finally, the measure is applied to a real-world study based on a social survey of the levels of life satisfaction and the happiness index of a population.