37,261 results for "Psychological tests"
Adapting educational and psychological tests for cross-cultural assessment
Adapting Educational and Psychological Tests for Cross-Cultural Assessment critically examines and advances new methods and practices for adapting tests for cross-cultural assessment and research. The International Test Commission (ITC) guidelines for test adaptation and the conceptual and methodological issues in test adaptation are described in detail, and questions of ethics and concern for the validity of test scores in cross-cultural contexts are carefully examined. Advances in test translation and adaptation methodology are reviewed and evaluated, including statistical identification of flawed test items, establishing equivalence of different language versions of a test, and methodologies for comparing tests in multiple languages. The book also focuses on adapting ability, achievement, and personality tests for cross-cultural assessment in educational, industrial, and clinical settings. It furthers the ITC's mission of stimulating research on timely topics associated with assessment and provides an excellent resource for courses in psychometric methods, test construction, and educational and/or psychological assessment, testing, and measurement. Written by internationally known scholars in psychometric methods and cross-cultural psychology, the collection of chapters should also provide essential information for educators and psychologists involved in cross-cultural assessment, as well as students aspiring to such careers.

Contents: Preface. Part I: Cross-Cultural Adaptation of Educational and Psychological Tests: Theoretical and Methodological Issues. R.K. Hambleton, Issues, Designs, and Technical Guidelines for Adapting Tests Into Multiple Languages and Cultures. F.J.R. van de Vijver, Y.H. Poortinga, Conceptual and Methodological Issues in Adapting Tests. T. Oakland, Selected Ethical Issues Relevant to Test Adaptations. S.G. Sireci, L. Patsula, R.K. Hambleton, Statistical Methods for Identifying Flaws in the Test Adaptation Process. S.G. Sireci, Using Bilinguals to Evaluate the Comparability of Different Language Versions of a Test. L.L. Cook, A.P. Schmitt-Cascallar, Establishing Score Comparability for Tests Given in Different Languages. L.L. Cook, A.P. Schmitt-Cascallar, C. Brown, Adapting Achievement and Aptitude Tests: A Review of Methodological Issues.

Part II: Cross-Cultural Adaptation of Educational and Psychological Tests: Applications to Achievement, Aptitude, and Personality Tests. C.T. Fitzgerald, Test Adaptation in a Large-Scale Certification Program. C.Y. Maldonado, K.F. Geisinger, Conversion of the Wechsler Adult Intelligence Scale Into Spanish: An Early Test Adaptation Effort of Considerable Consequence. N.K. Tanzer, Developing Tests for Use in Multiple Languages and Cultures: A Plea for Simultaneous Development. F. Drasgow, T.M. Probst, The Psychometrics of Adaptation: Evaluating Measurement Equivalence Across Languages and Cultures. M. Beller, N. Gafni, P. Hanani, Constructing, Adapting, and Validating Admissions Tests in Multiple Languages: The Israeli Case. P.F. Merenda, Cross-Cultural Adaptation of Educational and Psychological Testing. C.D. Spielberger, M.S. Moscoso, T.M. Brunner, Cross-Cultural Assessment of Emotional States and Personality Traits.
Statistical approaches to measurement invariance
This book reviews the statistical procedures used to detect measurement bias. Measurement bias is examined from a general latent variable perspective so as to accommodate different forms of testing in a variety of contexts, including cognitive or clinical variables, attitudes, personality dimensions, and emotional states. The measurement models that underlie psychometric practice are described, along with their strengths and limitations. Practical strategies and examples for dealing with bias detection are provided throughout.

The book begins with an introduction to the general topic, followed by a review of the measurement models used in psychometric theory. Emphasis is placed on latent variable models, with introductions to classical test theory, factor analysis, and item response theory and the controversies associated with each. Chapter 3 defines measurement invariance and bias in the context of multiple populations. Chapter 4 describes the common factor model for continuous measures in multiple populations and its use in the investigation of factorial invariance; identification problems in confirmatory factor analysis are examined along with estimation and fit evaluation, illustrated with an example using WAIS-R data. The next chapter addresses the factor analysis model for discrete measures in multiple populations, with an emphasis on specification, identification, estimation, and fit evaluation, and provides an MMPI item data example. Chapter 6 reviews both dichotomous and polytomous item response scales, emphasizing estimation methods and model fit evaluation. The use of item response theory models in evaluating invariance across multiple populations is then described, including an example that uses data from a large-scale achievement test. Chapter 8 examines item bias evaluation methods that use observed scores to match individuals and provides an example that applies item response theory to data introduced earlier in the book. The book concludes with the implications of measurement bias for the use of tests in prediction in educational or employment settings. (DIPF/Orig.)
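Chapter 8's observed-score approach can be illustrated with the Mantel-Haenszel procedure, a standard item-bias statistic that matches examinees on total score and compares item performance across groups within each score band. The sketch below is a minimal, hypothetical illustration, not material from the book; the counts are invented, and the -2.35 rescaling onto the ETS delta metric with |delta| >= 1.5 as a "large DIF" flag is a common convention.

```python
from math import log

def mantel_haenszel_dif(strata):
    """Common odds ratio for one item across matched score bands.

    strata: list of (ref_correct, ref_wrong, focal_correct, focal_wrong)
    tuples, one per matched ability level (e.g. total-score band).
    """
    num = den = 0.0
    for a, b, c, d in strata:
        t = a + b + c + d
        if t == 0:
            continue
        num += a * d / t
        den += b * c / t
    alpha = num / den           # alpha > 1: item favours the reference group
    delta = -2.35 * log(alpha)  # ETS delta scale; |delta| >= 1.5 flags large DIF
    return alpha, delta

# Hypothetical counts for one translated item, stratified by total-score band
tables = [(40, 10, 30, 20), (60, 5, 45, 15), (80, 2, 70, 8)]
alpha, delta = mantel_haenszel_dif(tables)
```

Because examinees are matched on total score before groups are compared, a large |delta| points to item-level bias rather than a genuine group difference in ability.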
Assessment in counseling : practice and applications
"We focus on the application of the theoretical and measurement concepts of assessment in counseling. We use a conversational style of writing and emphasize the skills used in assessment. In this book we present the theoretical basis of assessment and emphasize the practical components to enhance practice in counseling" -- Provided by publisher.
Use of risk assessment instruments to predict violence and antisocial behaviour in 73 samples involving 24 827 people: systematic review and meta-analysis
Objective: To investigate the predictive validity of tools commonly used to assess the risk of violent, sexual, and criminal behaviour.
Design: Systematic review and tabular meta-analysis of replication studies following PRISMA guidelines.
Data sources: PsycINFO, Embase, Medline, and United States Criminal Justice Reference Service Abstracts.
Review methods: We included replication studies from 1 January 1995 to 1 January 2011 if they provided contingency data for the offending outcome that the tools were designed to predict. We calculated the diagnostic odds ratio, sensitivity, specificity, area under the curve, positive predictive value, negative predictive value, and the number needed to detain to prevent one offence, as well as a novel performance indicator, the number safely discharged. We investigated potential sources of heterogeneity using metaregression and subgroup analyses.
Results: Risk assessments were conducted on 73 samples comprising 24 847 participants from 13 countries, of whom 5879 (23.7%) offended over an average of 49.6 months. When used to predict violent offending, risk assessment tools produced low to moderate positive predictive values (median 41%, interquartile range 27-60%) and higher negative predictive values (91%, 81-95%), with a corresponding median number needed to detain of 2 (2-4) and number safely discharged of 10 (4-18). Instruments designed to predict violent offending performed better than those aimed at predicting sexual or general crime.
Conclusions: Although risk assessment tools are widely used in clinical and criminal justice settings, their predictive accuracy varies depending on how they are used. They seem to identify low risk individuals with high levels of accuracy, but their use as sole determinants of detention, sentencing, and release is not supported by the current evidence. Further research is needed to examine their contribution to treatment and management.
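The accuracy measures named in this abstract all derive from a 2x2 contingency table of predicted risk against observed offending. Below is a minimal sketch using hypothetical counts; the formulas for sensitivity, specificity, PPV, NPV, and the diagnostic odds ratio are standard, while expressing number needed to detain as 1/PPV and number safely discharged as 1/(1 - NPV) is an assumed operationalization, broadly consistent with the medians reported above.

```python
def predictive_metrics(tp, fp, fn, tn):
    """Accuracy measures from a 2x2 table of predicted risk vs observed offending."""
    sens = tp / (tp + fn)        # sensitivity: offenders flagged as high risk
    spec = tn / (tn + fp)        # specificity: non-offenders flagged as low risk
    ppv = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)         # negative predictive value
    dor = (tp * tn) / (fp * fn)  # diagnostic odds ratio
    nnd = 1 / ppv                # number needed to detain (assumed 1/PPV)
    nsd = 1 / (1 - npv)          # number safely discharged (assumed 1/(1 - NPV))
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "dor": dor, "nnd": nnd, "nsd": nsd}

# Hypothetical sample: 1,000 people assessed, 150 offended during follow-up
m = predictive_metrics(tp=120, fp=180, fn=30, tn=670)
```

With these invented counts the PPV is 0.40 and the NPV about 0.96, which mirrors the paper's pattern: the tools are far better at clearing low-risk individuals than at pinpointing future offenders.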
Standardizing ADOS Scores for a Measure of Severity in Autism Spectrum Disorders
The aim of this study is to standardize Autism Diagnostic Observation Schedule (ADOS) scores within a large sample to approximate an autism severity metric. Using a dataset of 1,415 individuals aged 2–16 years with autism spectrum disorders (ASD) or nonspectrum diagnoses, a subset of 1,807 assessments from 1,118 individuals with ASD were divided into narrow age and language cells. Within each cell, severity scores were based on percentiles of raw totals corresponding to each ADOS diagnostic classification. Calibrated severity scores had more uniform distributions across developmental groups and were less influenced by participant demographics than raw totals. This metric should be useful in comparing assessments across modules and time, and identifying trajectories of autism severity for clinical, genetic, and neurobiological research.
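The calibration idea, mapping a raw ADOS total onto a common severity metric via its position in a reference distribution for the examinee's age/language cell, can be sketched as below. This is a simplified, hypothetical illustration: the published metric anchors percentiles to ADOS diagnostic classifications within each cell, whereas this sketch uses plain percentile ranks on a 1-10 scale, and the data and function name are invented.

```python
from bisect import bisect_right

def calibrated_severity(raw_total, cell_totals, scale_max=10):
    """Map a raw total to a 1..scale_max severity score via its percentile
    rank within a reference distribution for the same age/language cell."""
    ref = sorted(cell_totals)
    pct = bisect_right(ref, raw_total) / len(ref)  # share of cell at or below raw_total
    return max(1, min(scale_max, round(pct * scale_max)))

# Hypothetical raw totals for one narrow age/language cell
cell = [2, 4, 5, 6, 6, 7, 8, 9, 11, 14]
```

For example, a raw total of 6 sits at the median of this cell and maps to 5, while the cell's maximum maps to 10; because the mapping is relative to the cell, the same calibrated score means the same relative severity across modules and ages.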
Validity evidence based on internal structure
Validity evidence based on the internal structure of an assessment is one of the five forms of validity evidence stipulated in the Standards for Educational and Psychological Testing of the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. In this paper, we describe the concepts underlying internal structure and the statistical methods for gathering and analyzing internal structure, providing an in-depth description of the traditional and modern techniques for evaluating the internal structure of an assessment. Validity evidence based on the internal structure of an assessment is necessary for building a validity argument to support the use of a test for a particular purpose. The methods described in this paper provide practitioners with a variety of tools for assessing dimensionality, measurement invariance, and reliability for an educational test or other types of assessment.
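Of the internal-structure tools mentioned, reliability is the most compact to illustrate. Below is a minimal sketch of Cronbach's alpha, a common internal-consistency index, computed from hypothetical Likert-scale responses; the data and function name are illustrative, not drawn from the paper.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).

    item_scores: one inner list per item, respondents in the same order.
    """
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent totals
    item_var = sum(pvariance(scores) for scores in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 5-point Likert responses from five respondents to three items
items = [
    [3, 4, 5, 2, 4],
    [2, 4, 5, 1, 3],
    [3, 5, 4, 2, 4],
]
a = cronbach_alpha(items)
```

A high alpha here reflects that the three invented item columns rise and fall together across respondents; in practice alpha is reported alongside dimensionality and invariance evidence, not in place of it.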