MBRL Search Results
133 results for "Tools and Questionnaires in Human Factors Evaluation"
The Adult Inpatient eHealth Literacy Scale (AIPeHLS): Development and Validation Study
The rapid evolution of digital health technologies, particularly within the Web 3.0 framework, has underscored eHealth literacy (eHL) as a critical competency for patients engaging with digital health care platforms. Patients in sustained hospital stays, often in vulnerable conditions, face unique challenges in using eHealth tools effectively. However, existing eHL assessment tools are insufficient to address the intricate and dynamic demands of contemporary health care systems, especially for individuals under continuous hospital care. This study aimed to develop the Adult Inpatient eHealth Literacy Scale (AIPeHLS), a comprehensive, multidimensional tool grounded in the Lily Model, to evaluate eHL among adult inpatients within the context of digital health care innovations. The development of the AIPeHLS followed a systematic, multiphase process. Initial item pool generation was informed by a literature review and then refined using the Delphi method, resulting in a preliminary set of 53 items spanning 6 dimensions of the Lily Model. The scale was refined through a pilot survey among 100 individuals requiring inpatient care, followed by item analysis and exploratory factor analysis (EFA). Validation was achieved via a cross-sectional study with 532 participants, using confirmatory factor analysis (CFA) to verify the scale structure, alongside evaluations of convergent, discriminant, criterion-related, and content validity. Reliability was assessed using Cronbach α, Omega, and split-half reliability. The finalized AIPeHLS comprised 44 items across 6 dimensions: traditional literacy, information literacy, media literacy, health literacy, computer literacy, and scientific literacy, reflecting the skills necessary in the Web 3.0 context. Both EFA and CFA confirmed the 6-factor structure, demonstrating acceptable model fit indices (χ²=1974.654 (df=887), root mean square error of approximation=0.048, comparative fit index=0.957, normed fit index=0.925, and incremental fit index=0.957). The scale exhibited robust content validity, convergent and discriminant validity, criterion-related validity, and high internal consistency, with a Cronbach α of .965, Omega coefficient of 0.962, and a split-half reliability of 0.791 for the entire scale. The 44-item AIPeHLS was found to be a reliable and valid instrument for assessing eHL in adult inpatients in the evolving Web 3.0 context. Its comprehensive framework and strong psychometric properties make it an effective tool for health care providers to understand patients' digital health competencies and tailor interventions accordingly. For researchers, our findings provided opportunities to explore the relationship between eHL and health outcomes, while offering valuable insights into the development of more effective eHealth interventions and policies.
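For readers less familiar with the reliability statistics reported above, the sketch below shows how Cronbach α and a Spearman-Brown-corrected split-half coefficient are conventionally computed from a respondents-by-items matrix. It is a minimal NumPy illustration on simulated Likert data, not the authors' analysis code, and the simulated sample sizes merely echo the figures in the abstract.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach alpha for a respondents-by-items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half_reliability(items: np.ndarray) -> float:
    """Odd-even split-half reliability with Spearman-Brown correction."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Simulated data: 532 respondents x 44 Likert items (1-5) driven by one common trait
rng = np.random.default_rng(0)
trait = rng.normal(size=(532, 1))
responses = np.clip(np.rint(3 + trait + rng.normal(scale=0.8, size=(532, 44))), 1, 5)

print(f"alpha = {cronbach_alpha(responses):.3f}")
print(f"split-half = {split_half_reliability(responses):.3f}")
```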
Designing Electronic Problem-Solving Training for Individuals With Traumatic Brain Injury: Mixed Methods, Community-Based, Participatory Research Case Study
Traditional rehabilitation research often excludes the voices of individuals with lived experience of traumatic brain injury (TBI), resulting in interventions that lack relevance, accessibility, and effectiveness. Community-based participatory research (CBPR) offers an alternative framework that emphasizes collaboration, power sharing, and sustained engagement with patients, caregivers, and clinicians. This study aimed to apply CBPR to guide front-end design (empathy interviews, empathy mapping, personas) and to evaluate the sociotechnical-pedagogical usability of the Electronic Problem-Solving Training (ePST) mobile health (mHealth) intervention with TBI partners. A multistep, mixed methods design case methodology was adopted, guided by CBPR principles and learning experience design. Participatory mechanisms included a 33-member Community Advisory Board and 10 Community Engagement Studios that engaged TBI survivors, caregivers, clinicians, and researchers throughout the Discover, Define, Develop, and Deliver phases of the Double Diamond model. Iterative activities included empathy interviews (n=14), persona development (n=10), rapid prototyping, and usability testing with 5 participants with TBI using think-aloud protocols and the Comprehensive Assessment of Usability for Learning Technologies instrument. The co-design process successfully translated community feedback into an empathy-informed, user-centered prototype and systematically identified design considerations that single-partner approaches overlook. TBI-specific design requirements emerged, including the need for linear content progression over branching navigation, higher technical performance standards, and explicit content signaling with clarity prioritized over novel interface design. Think-aloud protocols revealed that participants struggled with mobile navigation and branching structures but excelled with sequential content progression. In addition, the input from individuals with TBI, caregivers, clinicians, and researchers led to practical refinements such as shorter microlearning lessons (5-12 min), clearer voiceover tone, and simplified navigation, directly addressing the study's objective of improving accessibility and emotional resonance. Overall usability was high, measured using the Comprehensive Assessment of Usability for Learning Technologies (CAUSLT), with an average score of 4.25 out of 5 (SD 0.72; 95% CI 3.36-5.15; n=5). Knowledge accuracy was 80% (8/10 items; 95% CI 49%-94%; n=5 participants; 2 items each), indicating that the system effectively supported learning and comprehension. Module completion was 100% (5/5; 95% CI 56.6%-100%). Average time-on-task for 10 lesson completions was 11.47 (SD 5.28; range 4.6-21.42) minutes per lesson, demonstrating strong task efficiency and engagement. Highest ratings were observed in the pedagogical usability domain, reflecting that the interface was clear, intuitive, and conducive to learning. Collectively, these findings suggest that applying CBPR across all design stages produced a technically sound, easy-to-use, and pedagogically meaningful mHealth tool specifically tailored for individuals with TBI. Sustained CBPR across full design and development cycles resulted in high usability for ePST for individuals with TBI. 
Ultimately, this study operationalized a full-cycle pipeline that links sustained community partnership to measured usability outcomes, producing community-informed design principles and a reproducible mixed methods approach for formative mHealth development for TBI.
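As a quick check on how confidence intervals of this kind arise from only 5 usability testers, the snippet below computes a two-sided t-based 95% CI for a sample mean. Plugging in the reported mean 4.25, SD 0.72, and n=5 gives roughly the 3.36-5.15 interval quoted above; this is an illustrative calculation, not the authors' analysis script.

```python
from math import sqrt
from scipy import stats

def t_ci(mean: float, sd: float, n: int, level: float = 0.95):
    """Two-sided t-based confidence interval for a sample mean."""
    t_crit = stats.t.ppf((1 + level) / 2, df=n - 1)
    half_width = t_crit * sd / sqrt(n)
    return mean - half_width, mean + half_width

print(t_ci(4.25, 0.72, 5))   # ~ (3.36, 5.14), close to the reported 3.36-5.15
```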
Validity and Reliability of the Psychodynamic Organizational Diagnostic Instrument SyMOA: Protocol for a Mixed Methods Study
A comprehensive understanding of organizations is fundamental for implementing successful change measures. However, to date, there is no empirically testable, operationalized systems-psychodynamic organizational diagnostic method that can capture the deeper, more complex dynamics that are crucial for sustainable transformation. To address this gap, we developed the Systematic Multidimensional Organizational Assessment (SyMOA), a qualitative instrument based on an evidence-based clinical diagnostic framework, the Operationalized Psychodynamic Diagnostics III. SyMOA integrates clinical, organizational, and systemic psychodynamic theory and analyzes an organization's challenges based on invisible and unconscious aspects, that is, those lurking beneath the surface. It hypothesizes 3 organizational dimensions: (1) current challenges based on the sociotechnical integration and organizational internal functioning level, (2) internal relationship dynamics, and (3) unconscious organizational conflicts. The SyMOA dimensions are operationalized into a semistructured interview guide and coding protocols for the analysis of the content. By capturing the underlying dynamics, SyMOA aims to provide a deeper understanding of an organization's challenges and establish a solid foundation for targeted interventions. This study aims to evaluate the validity and intercoder reliability of the SyMOA instrument. For this purpose, semistructured interviews will be conducted with employees of at least 3 different companies in Germany. The evaluation will be carried out by calculating Krippendorff α to determine intercoder reliability. In addition, construct validity, content validity, and external validity will also be analyzed. Recruitment and training commenced in May 2025. Data collection is planned for the second half of 2025, with analysis to follow thereafter. As this is a study protocol, no results are available yet. At the time of submission, 46 participants have been recruited. This study will give methodological insights into the validity, reliability, feasibility, and acceptability of the SyMOA instrument. The findings are expected to help further instrument refinement and inform the application of systems-psychodynamic approaches in organizational diagnostics.
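Because the protocol's primary reliability statistic is Krippendorff α for intercoder agreement, the sketch below shows one common way to compute it in Python. It assumes the third-party krippendorff package and a small, entirely hypothetical coder-by-segment matrix with missing codes as NaN; it is not part of the SyMOA protocol itself.

```python
import numpy as np
import krippendorff  # third-party package: pip install krippendorff

# Hypothetical nominal codes from 3 coders on 8 interview segments
# (np.nan marks segments a coder did not rate).
reliability_data = np.array([
    [1, 2, 3, 3, 2, 1, 4, 1],
    [1, 2, 3, 3, 2, 2, 4, 1],
    [np.nan, 3, 3, 3, 2, 2, 4, 1],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff alpha = {alpha:.3f}")
```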
Development and Validation of the Kazakhstan Version of the Questionnaire Based on the Telehealth Usability Questionnaire and Model for Assessment of Telemedicine Models for Evaluating the Usability and Effectiveness of Telemedicine Services Among Physicians: Multiphase Cross-Sectional Study
Kazakhstan has lacked validated tools to comprehensively assess physicians' perceptions, usability, and perceived effectiveness of telemedicine services. International frameworks such as the Telehealth Usability Questionnaire (TUQ) and the Model for Assessment of Telemedicine (MAST) have not previously been adapted to the national clinical and organizational context. This study aims to develop and validate TUQ-MAST-KZ, a Kazakhstan-adapted questionnaire integrating components of the TUQ and MAST models to assess physicians' perceptions, usability, and effectiveness of telemedicine services. A multiphase study was conducted, including literature review, questionnaire development, linguistic and cultural adaptation, expert content validity assessment, and pilot testing. An online survey (Google Forms) was administered to 156 physicians representing different regions and levels of health care delivery in Kazakhstan. Internal consistency (Cronbach α) and content validity indices were calculated. Additional evaluations covered clarity, structure, and practical applicability. The final TUQ-MAST-KZ instrument contains 27 items capturing technological, clinical, organizational, and behavioral dimensions of telemedicine use. The scale demonstrated high content validity (scale-level content validity index=0.94). Internal consistency was excellent, with an overall Cronbach α of 0.924. Respondents reported that the questionnaire was clearly structured, easy to complete, and relevant to clinical practice. Organizational items identified key barriers to telemedicine adoption, including limited infrastructure, insufficient managerial support, and the need for additional training. TUQ-MAST-KZ is a valid, reliable, and practice-oriented instrument for assessing physicians' perceptions of telemedicine services in Kazakhstan. It can support digital health monitoring, implementation analysis, educational planning, and policy development. Future studies should evaluate its applicability across broader samples and diverse clinical specialties.
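For context on the content validity figure reported above (scale-level CVI=0.94), the snippet below illustrates the usual item- and scale-level content validity index calculation: each expert rates an item's relevance on a 4-point scale, I-CVI is the share of experts giving a 3 or 4, and S-CVI/Ave is the mean of the I-CVIs. The ratings matrix is hypothetical, not the study's expert panel data.

```python
import numpy as np

def content_validity_indices(ratings: np.ndarray):
    """ratings: experts x items, relevance rated 1-4."""
    relevant = (ratings >= 3).astype(float)
    i_cvi = relevant.mean(axis=0)   # item-level CVI
    s_cvi_ave = i_cvi.mean()        # scale-level CVI (averaging method)
    return i_cvi, s_cvi_ave

# Hypothetical panel: 6 experts rating 5 of the 27 items
ratings = np.array([
    [4, 4, 3, 4, 2],
    [4, 3, 4, 4, 3],
    [3, 4, 4, 3, 4],
    [4, 4, 3, 4, 4],
    [4, 4, 4, 4, 3],
    [3, 4, 4, 4, 4],
])
i_cvi, s_cvi = content_validity_indices(ratings)
print("I-CVI per item:", i_cvi, " S-CVI/Ave:", round(s_cvi, 2))
```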
Swedish Version of the System Usability Scale: Translation, Adaption, and Psychometric Evaluation
The Swedish health care system is undergoing a transformation. eHealth technologies are increasingly being used. The System Usability Scale is a widely used tool, offering a standardized and reliable measure for assessing the usability of digital health solutions. However, despite the existence of several translations of the System Usability Scale into Swedish, none have undergone psychometric validation. This highlights the urgent need for a validated and standardized Swedish version of the System Usability Scale to ensure accurate and reliable usability evaluations. The aim of the study was to translate and psychometrically evaluate a Swedish version of the System Usability Scale. The study utilized a 2-phase design. The first phase translated the System Usability Scale into Swedish and the second phase tested the scale's psychometric properties. A total of 62 participants generated a total of 82 measurements. Descriptive statistics were used to visualize participants' characteristics. The psychometric evaluation consisted of data quality, scaling assumptions, and acceptability. Construct validity was evaluated by convergent validity, and reliability was evaluated by internal consistency. The Swedish version of the System Usability Scale demonstrated high conformity with the original version. The scale showed high internal consistency with a Cronbach α of .852 and corrected item-total correlations ranging from 0.454 to 0.731. The construct validity was supported by a significant positive correlation between the System Usability Scale and domain 5 of the eHealth Literacy Questionnaire (P=.001). The Swedish version of the System Usability Scale demonstrated satisfactory psychometric properties. It can be recommended for use in a Swedish context. The positive correlation with domain 5 of the eHealth Literacy Questionnaire further supports the construct validity of the Swedish version of the System Usability Scale, affirming its suitability for evaluating digital health solutions. Additional tests of the Swedish version of the System Usability Scale, for example, in the evaluation of more complex eHealth technology, would further validate the scale.
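For readers who have not scored the System Usability Scale before, the sketch below implements the standard 0-100 scoring rule: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5. The example responses are hypothetical; the scoring rule is the published SUS convention rather than anything specific to this Swedish adaptation.

```python
def sus_score(responses):
    """Standard SUS scoring for 10 responses on a 1-5 scale (items in order 1-10)."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # items 1,3,5,7,9 vs items 2,4,6,8,10
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5          # maps the raw sum onto a 0-100 scale

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # hypothetical respondent -> 85.0
```

Alternating the direction of the items is deliberate in the original SUS design; any translation has to keep that polarity intact for this scoring rule to remain valid.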
Examining the Factor Structure of Objective Health Literacy and Numeracy Scales: Large-Scale Cross-Sectional Study
Scales for measuring health literacy and numeracy have been broadly classified into performance-based (objective) and self-reported (subjective) scales. Both types of scales have been widely used in research and practice; however, they are not always consistent and may assess different latent constructs. Furthermore, an increasing number of objective measures have been developed, and it is unclear how many latent factors should be assumed. This study aimed to examine the psychometric properties and factor structure of items assessing objective health literacy across multiple scales and to clarify which aspects of objective health literacy would be correlated with subjective measures, as well as health behaviors and lifestyles. A total of 5 objective scales (72 items in total) were administered to Japanese-speaking adults (N=16,097; women: 7722/16,097, 48%; mean age 54.89, SD 16.46 years). The analyzed scales included items assessing the numeracy, comprehension, and application of health information, some of which were contextualized for specific diseases, such as diabetes and cancer. Participants' responses were submitted to exploratory factor analysis, and individual factor scores were calculated to test correlations with subjective health literacy, health behavior, and lifestyle. Exploratory factor analysis identified 3 factors, which were interpreted as conceptual knowledge, numeracy, and synthesis. The conceptual knowledge factor consisted of items about medical word comprehension. All numeracy items loaded onto the same factor, even when contextualized for different diseases. The synthesis factor was characterized by items assessing the ability to read and understand health-related information and make judgments on it using one's own knowledge. The identified factors showed high interfactor correlations (r values 0.53-0.64) and small-to-moderate correlations with subjective health literacy (r values 0.14-0.45). Additionally, each factor indicated small positive correlations with healthy diet and nutrition and lower substance use (r values 0.17-0.26). Our findings suggest that scales of objective health literacy have at least three latent constructs (ie, conceptual knowledge, numeracy, and synthesis) and that disease specificity is not psychometrically prominent. Each factor has some overlap with subjective health literacy, but overall, subjective and objective health literacy should be interpreted as independent constructs, given the small-to-modest correlations.
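The exploratory factor analysis and factor-score correlations described above follow a workflow that can be sketched in Python roughly as follows. The sketch assumes the third-party factor_analyzer package and simulated dichotomously scored items with a built-in three-factor structure; it reproduces the general steps (extract three obliquely rotated factors, compute factor scores, correlate them with an external measure), not the study's actual analysis.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Simulated data: 500 respondents, 72 right/wrong items driven by 3 latent abilities,
# plus a subjective health literacy score per respondent.
rng = np.random.default_rng(1)
n = 500
latent = rng.normal(size=(n, 3))
assign = np.repeat([0, 1, 2], 24)                 # 24 items per latent factor
raw = latent[:, assign] + rng.normal(size=(n, 72))
items = pd.DataFrame((raw > 0).astype(int),
                     columns=[f"item{i+1}" for i in range(72)])
subjective = latent.mean(axis=1) + rng.normal(scale=0.5, size=n)

fa = FactorAnalyzer(n_factors=3, rotation="promax")
fa.fit(items)
scores = fa.transform(items)                      # respondent-level factor scores

for k in range(3):
    r = np.corrcoef(scores[:, k], subjective)[0, 1]
    print(f"factor {k + 1} vs subjective literacy: r = {r:.2f}")
```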
Italian Version of the mHealth App Usability Questionnaire (Ita-MAUQ): Translation and Validation Study in People With Multiple Sclerosis
Telemedicine and mobile health (mHealth) apps have emerged as powerful tools in health care, offering convenient access to services and empowering participants in managing their health. Among populations with chronic and progressive diseases such as multiple sclerosis (MS), mHealth apps hold promise for enhancing self-management and care. To be used in clinical practice, the validity and usability of mHealth tools should be tested. Questionnaires are the most commonly used method for assessing the usability of electronic technologies. This study aimed to translate the mHealth App Usability Questionnaire from English into Italian (ita-MAUQ) and validate it in a sample of people with MS. The 18-item mHealth App Usability Questionnaire was forward- and back-translated from English into Italian by an expert panel, following scientific guidelines for translation and cross-cultural adaptation. The ita-MAUQ (patient version for stand-alone apps) comprises 3 subscales, which are ease of use, interface and satisfaction, and usefulness. After interacting with DIGICOG-MS (Digital Assessment of Cognitive Impairment in Multiple Sclerosis), a novel mHealth app for cognitive self-assessment in MS, participants completed the ita-MAUQ and the System Usability Scale, the latter included to test the construct validity of the translated questionnaire. Confirmatory factor analysis, internal consistency, test-retest reliability, and construct validity were assessed. Known-groups validity was examined based on disability levels as indicated by the Expanded Disability Status Scale (EDSS) score and gender. In total, 116 people with MS (female n=74; mean age 47.2, SD 14 years; mean EDSS 3.32, SD 1.72) were enrolled. The ita-MAUQ demonstrated acceptable model fit, good internal consistency (Cronbach α=0.92), and moderate test-retest reliability (intraclass correlation coefficient 0.84). Spearman coefficients revealed significant correlations between the ita-MAUQ total score; the ease of use (5 items), interface and satisfaction (7 items), and usefulness subscales; and the System Usability Scale (all P values <.05). Known-groups analysis found no difference between people with MS with mild and moderate EDSS (all P values >.05), suggesting that ambulation ability, mainly detected by the EDSS, did not affect the ita-MAUQ scores. Interestingly, a statistically significant difference between female and male participants was found for the ease of use ita-MAUQ subscale (P=.02). The ita-MAUQ demonstrated high reliability and validity and may be used to evaluate the usability, utility, and acceptability of mHealth apps in people with MS.
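The construct validity and known-groups checks reported above follow a common pattern that can be sketched as below: a Spearman correlation between ita-MAUQ and System Usability Scale totals, and a Mann-Whitney U test comparing disability groups. The arrays are hypothetical stand-ins for the study data, generated only to make the snippet runnable.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(2)

# Hypothetical totals for 116 participants
mauq_total = rng.uniform(1, 7, size=116)
sus_total = 0.6 * (mauq_total - 1) / 6 * 100 + rng.normal(0, 10, size=116)
edss_group = rng.integers(0, 2, size=116)     # 0 = mild, 1 = moderate disability

rho, p = spearmanr(mauq_total, sus_total)     # convergent/construct validity
print(f"Spearman rho = {rho:.2f}, P = {p:.3f}")

u, p_group = mannwhitneyu(mauq_total[edss_group == 0],
                          mauq_total[edss_group == 1])
print(f"known-groups (mild vs moderate EDSS): U = {u:.0f}, P = {p_group:.3f}")
```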
Evaluating the Construct Validity of the Charité Alarm Fatigue Questionnaire using Confirmatory Factor Analysis
The Charité Alarm Fatigue Questionnaire (CAFQa) is a 9-item questionnaire that aims to standardize how alarm fatigue in nurses and physicians is measured. We previously hypothesized that it has 2 correlated scales, one on the psychosomatic effects of alarm fatigue and the other on staff's coping strategies in working with alarms. We aimed to validate the hypothesized structure of the CAFQa and thus underpin the instrument's construct validity. We conducted 2 independent studies with nurses and physicians from intensive care units in Germany (study 1: n=265; study 2: n=1212). Responses to the questionnaire were analyzed using confirmatory factor analysis with the unweighted least-squares algorithm based on polychoric covariances. Convergent validity was assessed by participants' estimation of their own alarm fatigue and exposure to false alarms as a percentage. In both studies, the χ² test reached statistical significance (study 1: χ²=44.9, df=26; P=.01; study 2: χ²=92.4, df=26; P<.001). Other fit indices suggested a good model fit (in both studies: root mean square error of approximation <0.05, standardized root mean squared residual <0.08, relative noncentrality index >0.95, Tucker-Lewis index >0.95, and comparative fit index >0.995). Participants' mean scores correlated moderately with self-reported alarm fatigue (study 1: r=0.45; study 2: r=0.53) and weakly with self-perceived exposure to false alarms (study 1: r=0.3; study 2: r=0.33). The questionnaire measures the construct of alarm fatigue as proposed in our previous study. Researchers and clinicians can rely on the CAFQa to measure the alarm fatigue of nurses and physicians.
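A two-factor confirmatory model of the kind tested here can be specified in Python along the following lines. The sketch assumes the third-party semopy package with its default maximum-likelihood estimator (the study used unweighted least squares on polychoric covariances), uses simulated data, and assigns the 9 items to the two scales purely for illustration; it shows the general CFA workflow rather than the CAFQa analysis itself.

```python
import numpy as np
import pandas as pd
from semopy import Model, calc_stats  # pip install semopy

# Simulated data for 9 items with an intentional correlated two-factor structure
rng = np.random.default_rng(3)
n = 300
f1 = rng.normal(size=n)
f2 = 0.5 * f1 + rng.normal(scale=0.86, size=n)        # factors correlate ~0.5
items = {f"q{i+1}": 0.7 * f1 + rng.normal(scale=0.7, size=n) for i in range(5)}
items.update({f"q{i+6}": 0.7 * f2 + rng.normal(scale=0.7, size=n) for i in range(4)})
df = pd.DataFrame(items)

# Illustrative item assignment, not the published CAFQa scale structure
desc = """
Psychosomatic =~ q1 + q2 + q3 + q4 + q5
Coping        =~ q6 + q7 + q8 + q9
Psychosomatic ~~ Coping
"""
model = Model(desc)
model.fit(df)
print(calc_stats(model).T)   # chi-square, df, RMSEA, CFI, TLI, ...
```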
A Computerized Adaptive Test for the Knowledge of Effective Parenting Test–Internalizing Module: Instrument Validation Study
The development of efficient, scalable, and precise tools to assess knowledge of evidence-based parenting strategies is critical, particularly as increased parenting knowledge is a core target of many intervention programs. This study aimed to develop and evaluate a computerized adaptive testing version of the Knowledge of Effective Parenting Test-Internalizing module (KEPT-I CAT). Using computerized adaptive testing simulations from a large (n=1000) national dataset, we compared the performance of the KEPT-I CAT to both the full-length Knowledge of Effective Parenting Test-Internalizing module and a 10-item static short form (KEPT-I Brief). Results indicated that the KEPT-I CAT achieved comparable efficiency to the KEPT-I Brief (10 items), while demonstrating superior psychometric properties and modestly reducing the potential for practice effects. Given these advantages, the KEPT-I CAT is well-suited for post-intervention assessment and may facilitate research examining how increases in parenting knowledge relate to changes in behavior and reductions in child internalizing symptoms.
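The core mechanics of a computerized adaptive test like the KEPT-I CAT can be illustrated with a short simulation: under a 2-parameter logistic IRT model, the next item administered is the unused item with maximum Fisher information at the current ability estimate, and the estimate is updated after each response. The item bank, examinee, and stopping rule below are simulated; this is a generic CAT sketch, not the published KEPT-I item bank or scoring algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
GRID = np.linspace(-4, 4, 161)

def p_correct(theta, a, b):
    """2PL probability of a keyed (correct) response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_theta(responses, a, b):
    """Expected-a-posteriori ability estimate with a standard normal prior."""
    posterior = np.exp(-0.5 * GRID**2)
    for r, ai, bi in zip(responses, a, b):
        p = p_correct(GRID, ai, bi)
        posterior *= p if r else 1.0 - p
    return float((GRID * posterior).sum() / posterior.sum())

# Simulated bank of 50 items and one examinee with true ability 0.8
a_bank = rng.uniform(0.8, 2.0, size=50)      # discriminations
b_bank = rng.uniform(-2.5, 2.5, size=50)     # difficulties
true_theta = 0.8

administered, responses = [], []
theta_hat = 0.0
for _ in range(10):                          # fixed-length 10-item CAT
    p = p_correct(theta_hat, a_bank, b_bank)
    info = a_bank**2 * p * (1 - p)           # Fisher information at current estimate
    info[administered] = -np.inf             # never reuse an item
    item = int(np.argmax(info))
    administered.append(item)
    responses.append(rng.random() < p_correct(true_theta, a_bank[item], b_bank[item]))
    theta_hat = eap_theta(responses, a_bank[administered], b_bank[administered])

print(f"estimated ability after 10 items: {theta_hat:.2f} (true {true_theta})")
```

Selecting items by maximum information is what lets an adaptive test match a longer static form's precision with fewer items, which is the efficiency advantage the abstract describes.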
Design and Performance of an Email-Based Patient Recruitment Campaign in Primary Care Research: Formative Secondary Analysis
Recruiting patients in primary care research remains challenging due to clinical workload, staffing constraints, and the need to limit disruption to routine care. Traditional recruitment methods often place a substantial burden on clinics, prompting research teams to adopt low-burden and scalable approaches such as email-based recruitment. Despite its growing use, limited empirical evidence describes how email recruitment campaigns are designed and how they perform when targeting primary care patients in real-world settings. This article aims to descriptively examine engagement metrics from an email recruitment campaign targeting primary care patients. We conducted a formative, descriptive secondary analysis of engagement metrics generated during a large-scale email recruitment campaign conducted as part of the Quebec component of the PaRIS-OECD survey. Between June 2023 and January 2024, 12 primary care clinics invited eligible adult patients (≥45 years) to complete an online survey using a standardized email template distributed via an email marketing platform. Collected engagement metrics included delivery rates, open rates, click-through rates, conversion rates and device type. Analyses were descriptive and conducted at the clinic level. Invitations were successfully delivered to 14,758 patients (97%). The mean open rate for the initial invitation was 73% (range: 57%-88%), decreasing with reminders. Most emails were opened on computers (85%). A total of 445 emails were undelivered due to technical issues (n = 42) or incorrect email addresses (n = 403). The overall conversion rate was 10%. Click-through rates varied by content, with the highest engagement observed for the survey link and lower engagement for supplementary video materials. Reminder emails substantially increased survey participation across clinics (200%). Participants who completed the questionnaire were predominantly aged. This formative analysis suggests that email-based recruitment is a feasible and low-burden approach for engaging primary care patients in research. Engagement metrics offer valuable insights at the implementation level to inform the design, adaptation, and monitoring of digital recruitment strategies in real-world primary care settings. These findings provide practical, implementation-oriented insights to inform the design, refinement, and evaluation of email recruitment campaigns in primary care research.