Catalogue Search | MBRL
Explore the vast range of titles available.
184 result(s) for "Psychometrics Computer programs."
SPSS explained
This title provides the student with all that they need to undertake statistical analysis using SPSS. It combines a step-by-step approach to each procedure with easy-to-follow screenshots at each stage of the process. A number of other helpful features are provided: regular advice boxes with tips specific to each test; explanations divided into 'essential' and 'advanced' sections to suit readers at different levels; and frequently asked questions at the end of each chapter.
Excel 2010 – Business Basics & Beyond
2012, 2013
Microsoft Excel is one of the most powerful tools a business owner, manager, or new employee has at their disposal, and this guide teaches how to harness business data and put it to use. Using real-world examples of a small business in operation, the book covers topics such as preparing financial statements, how to best display data for maximum impact with formatting tools, data tables, charts and pivot tables, using customer information to create customized letters with mail merge, importing data from programs such as QuickBooks, calculating the costs of doing business with financial formulas, and much more. Helpful screenshots are spread throughout the text, and the book explains how to find ready-made templates online for free.
Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research
2017
The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), Kaiser-Guttman's eigenvalue-greater-than-one rule, the multiple average partial procedure (MAP), maximum-likelihood approaches that use fit indexes such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimating the number of dimensions is introduced and compared via simulation to the traditional techniques listed above. The proposed approach is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is determined using walktrap, a random-walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1000 and 5000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, EBIC, eBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors was two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study.
Journal Article
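For orientation, the core EGA procedure described in this abstract can be sketched in a few lines of Python: estimate a sparse partial-correlation network with the graphical lasso, then count the communities found by the walktrap algorithm. This is an illustrative sketch, not the authors' implementation: it uses scikit-learn's cross-validated GraphicalLassoCV as a stand-in for the EBIC-tuned lasso described in the paper, python-igraph for walktrap, and a hypothetical helper name estimate_dimensions.

```python
# Illustrative sketch of the EGA idea: sparse partial-correlation network
# via graphical lasso, dimensions = number of walktrap communities.
# GraphicalLassoCV (cross-validated) stands in for the EBIC-tuned lasso.
import numpy as np
import igraph as ig
from sklearn.covariance import GraphicalLassoCV

def estimate_dimensions(data: np.ndarray) -> int:
    """data: (n_observations, n_items) matrix of item responses."""
    precision = GraphicalLassoCV().fit(data).precision_
    # Convert the precision matrix to partial correlations.
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)
    np.fill_diagonal(partial_corr, 0.0)
    # Build a weighted network and detect communities with walktrap.
    graph = ig.Graph.Weighted_Adjacency(np.abs(partial_corr).tolist(),
                                        mode="undirected", attr="weight")
    clusters = graph.community_walktrap(weights="weight").as_clustering()
    return len(clusters)
```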
The European KIDSCREEN approach to measure quality of life and well-being in children: development, current application, and future advances
by Bullinger, Monika; Herdman, Michael; Ravens-Sieberer, Ulrike
in Adolescent, Adolescents, Child
2014
Purpose: The KIDSCREEN questionnaires were developed by a collaborative effort of European pediatric researchers for use in epidemiologic public health surveys, clinical intervention studies, and research projects. The article gives an overview of the development of the tool, summarizes its extensive applications in Europe, and describes the development of a new computerized adaptive test (KIDS-CAT) based on KIDSCREEN experiences. Methods: The KIDSCREEN versions (self-report and proxy versions with 52, 27, and 10 items) were simultaneously developed in 13 different European countries to warrant cross-cultural applicability, using methods based on classical test theory (CTT: descriptive statistics, CFA and MAP, internal consistency, retest reliability measures) and item response theory (IRT: Rasch modeling, DIF analyses, etc.). The KIDS-CAT was developed (in cooperation with the US pediatric PROMIS project) based on archival data of European KIDSCREEN health surveys, using IRT more extensively (IRC). Results: Research has shown that the KIDSCREEN is a reliable, valid, sensitive, and conceptually/linguistically appropriate QoL measure in 38 countries/languages to date. European and national norm data are available. New insights from KIDSCREEN studies stimulate pediatric health care. Based on KIDSCREEN, the Kids-CAT promises to facilitate an efficient, precise, reliable, and valid assessment of QoL. Conclusions: The KIDSCREEN has standardized QoL measurement of children in Europe as a valid and cross-culturally comparable tool. The Kids-CAT has the potential to further advance pediatric health measurement and care via Internet application.
Journal Article
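The Methods above mention Rasch modeling among the IRT techniques used in the KIDSCREEN development. As a rough illustration only (the item difficulties below are hypothetical, not KIDSCREEN parameters), the Rasch model gives the probability of endorsing an item as a logistic function of the difference between a person's trait level and the item's difficulty:

```python
# Minimal sketch of the Rasch (one-parameter IRT) model: the probability of
# endorsing item i given trait level theta and item difficulty b_i.
import numpy as np

def rasch_probability(theta: float, difficulty: np.ndarray) -> np.ndarray:
    """P(X=1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))

# Expected endorsement rates for an average respondent (theta = 0) on three
# hypothetical items of increasing difficulty.
difficulties = np.array([-1.0, 0.0, 1.5])
print(rasch_probability(0.0, difficulties))  # roughly [0.73, 0.50, 0.18]
```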
Efficiency of Static and Computer Adaptive Short Forms Compared to Full-Length Measures of Depressive Symptoms
by Pilkonis, Paul A.; Hays, Ron D.; Reise, Steven P.
in Applied psychology, Banking industry, Banks
2010
Purpose: Short-form patient-reported outcome measures are popular because they minimize patient burden. We assessed the efficiency of static short forms and computer adaptive testing (CAT) using data from the Patient-Reported Outcomes Measurement Information System (PROMIS) project. Methods: We evaluated the 28-item PROMIS depressive symptoms bank. We used post hoc simulations based on the PROMIS calibration sample to compare several short-form selection strategies and the PROMIS CAT to the total item bank score. Results: Compared with full-bank scores, all short forms and CAT produced highly correlated scores, but CAT outperformed each static short form on almost all criteria. However, short-form selection strategies performed only marginally worse than CAT. The performance gap observed in static forms was reduced by using a two-stage branching test format. Conclusions: Using several polytomous items in a calibrated unidimensional bank to measure depressive symptoms yielded a CAT that provided marginally superior efficiency compared to static short forms. The efficiency of a two-stage semi-adaptive testing strategy was so close to that of CAT that it warrants further consideration and study.
Journal Article
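To make the CAT procedure compared in this study concrete, the sketch below shows the generic adaptive loop: re-estimate the latent trait after each response and administer the remaining item with the highest Fisher information. The 2PL model, the randomly drawn item parameters, and the fixed-length stopping rule are illustrative assumptions and do not reproduce the actual PROMIS depressive symptoms bank or its polytomous calibration.

```python
# Illustrative CAT loop: maximum-information item selection under a 2PL model
# with a grid-based EAP estimate of the latent trait.
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.5, size=28)     # hypothetical discrimination parameters
b = rng.normal(0.0, 1.0, size=28)      # hypothetical difficulty parameters
grid = np.linspace(-4, 4, 161)         # quadrature grid for trait estimation
prior = np.exp(-0.5 * grid ** 2)       # standard-normal prior (unnormalized)

def p_correct(theta, a_i, b_i):
    """2PL probability of endorsing an item."""
    return 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))

def run_cat(true_theta, max_items=8):
    asked, posterior = [], prior.copy()
    for _ in range(max_items):
        theta_hat = np.sum(grid * posterior) / np.sum(posterior)  # EAP estimate
        p = p_correct(theta_hat, a, b)
        info = a ** 2 * p * (1 - p)          # Fisher information per item
        info[asked] = -np.inf                # never re-administer an item
        item = int(np.argmax(info))
        # Simulate the examinee's response from the (unknown) true trait level.
        x = int(rng.random() < p_correct(true_theta, a[item], b[item]))
        asked.append(item)
        # Update the posterior over the grid with this item's likelihood.
        lik = p_correct(grid, a[item], b[item])
        posterior *= lik if x == 1 else 1 - lik
    return np.sum(grid * posterior) / np.sum(posterior), asked

print(run_cat(true_theta=1.0))
```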
Statistically Controlling for Confounding Constructs Is Harder than You Think
2016
Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest--in some cases approaching 100%--when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity.
Journal Article
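The mechanism described in this abstract is easy to reproduce in a small Monte Carlo simulation. The sketch below is not the authors' code; the sample size, reliability of .7, and number of replications are illustrative. It generates an outcome driven by a single latent construct, measures that construct twice with error, and counts how often the second measure appears to add "incremental" predictive value at p < .05.

```python
# Illustrative Monte Carlo: spurious incremental validity when the covariate
# measures the true confounding construct with only moderate reliability.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, reliability, n_sims = 1000, 0.7, 500
error_sd = np.sqrt((1 - reliability) / reliability)  # latent variance fixed at 1

false_positives = 0
for _ in range(n_sims):
    latent = rng.normal(size=n)                        # the single true construct
    x1 = latent + rng.normal(scale=error_sd, size=n)   # "control" measure
    x2 = latent + rng.normal(scale=error_sd, size=n)   # focal measure, same construct
    y = latent + rng.normal(size=n)                    # outcome driven only by the latent
    fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
    false_positives += fit.pvalues[2] < 0.05           # p-value for x2's coefficient
print(f"Spurious incremental-validity rate: {false_positives / n_sims:.2f}")
```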
Regular gaming behavior and internet gaming disorder in European adolescents: results from a cross-national representative survey of prevalence, predictors, and psychopathological correlates
2015
Excessive use of online computer games which leads to functional impairment and distress has recently been included as Internet Gaming Disorder (IGD) in Section III of the DSM-5. Although the nosological classification of this phenomenon is still a matter of debate, it is argued that IGD might be best described as a non-substance-related addiction. Epidemiological surveys reveal that it affects up to 3% of adolescents and seems to be related to heightened psychosocial symptoms. However, there has been no study of the prevalence of IGD on a multi-national level relying on a representative sample and standardized psychometric measures. The research project EU NET ADB was conducted to assess the prevalence and psychopathological correlates of IGD in seven European countries based on a representative sample of 12,938 adolescents between 14 and 17 years of age. Of the adolescents, 1.6% met full criteria for IGD, with a further 5.1% at risk for IGD by fulfilling up to four criteria. Prevalence rates varied slightly across the participating countries. IGD is closely associated with psychopathological symptoms, especially aggressive and rule-breaking behavior and social problems. This survey demonstrated that IGD is a frequently occurring phenomenon among European adolescents and is related to psychosocial problems. The need for youth-specific prevention and treatment programs is evident.
Journal Article
Use of the TELE-ASD-PEDS for Autism Evaluations in Response to COVID-19: Preliminary Outcomes and Clinician Acceptability
by Wagner, Liliana; Francis, Sara; Stone, Caitlin
in Acceptability, Autism, Autism Spectrum Disorder - diagnosis
2021
The COVID-19 pandemic has caused unprecedented disruptions to healthcare, including direct impacts on service delivery related to autism spectrum disorder (ASD). Caregiver-mediated tele-assessment offers an opportunity to continue services while adhering to social distancing guidelines. The present study describes a model of tele-assessment for ASD in young children, implemented in direct response to disruptions in care caused by the COVID-19 pandemic. We present preliminary data on the outcomes and provider perceptions of tele-assessments, together with several lessons learned during the period of initial implementation.
Journal Article
Automated Item Generation with Recurrent Neural Networks
2018
Utilizing technology for automated item generation is not a new idea. However, test items used in commercial testing programs or in research are still predominantly written by humans, in most cases by content experts or professional item writers. Human experts are a limited resource, and testing agencies incur high costs in the process of continuously renewing item banks to sustain testing programs. Using algorithms instead holds the promise of providing unlimited resources for this crucial part of assessment development. The approach presented here deviates in several ways from previous attempts to solve this problem. In the past, automatic item generation relied either on generating clones of narrowly defined item types such as those found in language-free intelligence tests (e.g., Raven’s progressive matrices) or on an extensive analysis of task components and derivation of schemata to produce items with pre-specified variability that are hoped to have predictable levels of difficulty. It is somewhat unlikely that researchers utilizing these previous approaches would look at the proposed approach with favor; however, recent applications of machine learning show success in solving tasks that seemed impossible for machines not too long ago. The proposed approach uses deep learning to implement probabilistic language models, not unlike what Google Brain and Amazon Alexa use for language processing and generation.
Journal Article
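As a minimal illustration of the kind of recurrent language model the article refers to, the PyTorch sketch below trains a character-level LSTM on a toy corpus of item-like sentences and then samples new text from it. The corpus, network sizes, and training schedule are placeholders, not the author's setup.

```python
# Toy character-level LSTM language model: train on item-like text, then
# sample new text one character at a time.
import torch
import torch.nn as nn

corpus = "I often feel nervous before tests. I enjoy solving difficult problems. "
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in corpus]).unsqueeze(0)   # shape (1, seq_len)

class CharLSTM(nn.Module):
    def __init__(self, vocab, embed=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharLSTM(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)

for _ in range(200):                    # next-character prediction training
    logits, _ = model(data[:, :-1])
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)),
                                       data[:, 1:].reshape(-1))
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Sample a draft "item" one character at a time from the trained model.
with torch.no_grad():
    idx, state, text = data[:, :1], None, []
    for _ in range(80):
        logits, state = model(idx, state)
        idx = torch.multinomial(torch.softmax(logits[:, -1], dim=-1), 1)
        text.append(chars[idx.item()])
print("".join(text))
```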
Development and Internal Validation of the Digital Health Readiness Questionnaire: Prospective Single-Center Survey Study
2023
While questionnaires for assessing digital literacy exist, there is still a need for an easy-to-use and implementable questionnaire for assessing digital readiness in a broader sense. Additionally, learnability should be assessed to identify those patients who need additional training to use digital tools in a health care setting.
The aim of the development of the Digital Health Readiness Questionnaire (DHRQ) was to create a short, usable, and freely accessible questionnaire that was designed from a clinical practice perspective.
It was a prospective single-center survey study conducted at Jessa Hospital, Hasselt, Belgium. The questionnaire was developed with a panel of field experts, with questions in the following 5 categories: digital usage, digital skills, digital literacy, digital health literacy, and digital learnability. All participants who were visiting the cardiology department as patients between February 1, 2022, and June 1, 2022, were eligible for participation. Cronbach α and confirmatory factor analysis were performed.
A total of 315 participants were included in this survey study, of whom 118 (37.5%) were female. The mean age of the participants was 62.6 (SD 15.1) years. Cronbach α analysis yielded a score of >.7 in all domains of the DHRQ, which indicates acceptable internal consistency. The fit indices of the confirmatory factor analysis showed a reasonably good fit: standardized root-mean-square residual=0.065, root-mean-square error of approximation=0.098 (95% CI 0.09-0.106), Tucker-Lewis fit index=0.895, and comparative fit index=0.912.
The DHRQ was developed as an easy-to-use, short questionnaire to assess the digital readiness of patients in a routine clinical setting. Initial validation demonstrates good internal consistency, and future research will be needed to externally validate the questionnaire. The DHRQ has the potential to be implemented as a useful tool to gain insight into the patients who are treated in a care pathway, tailor digital care pathways to different patient populations, and offer those with low digital readiness but high learnability appropriate education programs in order to let them take part in the digital pathways.
Journal Article
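For reference, the Cronbach α reported in the DHRQ study above is computed from the item variances and the variance of the total score: α = k/(k-1) * (1 - Σ item variances / total-score variance). The sketch below uses hypothetical Likert responses, not DHRQ data.

```python
# Minimal sketch of Cronbach's alpha for one questionnaire domain.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example with hypothetical 5-point Likert responses for a single domain.
responses = np.array([[4, 5, 4, 4],
                      [2, 2, 3, 2],
                      [5, 5, 4, 5],
                      [3, 3, 3, 4],
                      [1, 2, 2, 1]])
print(round(cronbach_alpha(responses), 2))
```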