45 results for "Elsworth, Gerald"
Applying the Electronic Health Literacy Lens: Systematic Review of Electronic Health Interventions Targeted at Socially Disadvantaged Groups
Electronic health (eHealth) has the potential to improve health outcomes. However, eHealth systems need to match the eHealth literacy needs of users to be equitably adopted. Socially disadvantaged groups have lower access and skills to use technologies and are at risk of being digitally marginalized, leading to the potential widening of health disparities. This systematic review aims to explore the role of eHealth literacy and user involvement in developing eHealth interventions targeted at socially disadvantaged groups. A systematic search was conducted across 10 databases for eHealth interventions targeted at older adults, ethnic minority groups, low-income groups, low-literacy groups, and rural communities. The eHealth Literacy Framework was used to examine the eHealth literacy components of reviewed interventions. The results were analyzed using narrative synthesis. A total of 51 studies reporting on the results of 48 interventions were evaluated. Most studies were targeted at older adults and ethnic minorities, with only 2 studies focusing on low-literacy groups. eHealth literacy was not considered in the development of any of the studies, and no eHealth literacy assessment was conducted. User involvement in designing interventions was limited, and eHealth intervention developmental frameworks were rarely used. Strategies to assist users in engaging with technical systems were seldom included in the interventions, and accessibility features were limited. The results of the included studies also provided inconclusive evidence on the effectiveness of eHealth interventions. The findings highlight that eHealth literacy is generally overlooked in developing eHealth interventions targeted at socially disadvantaged groups, whereas evidence about the effectiveness of such interventions is limited. To ensure equal access and inclusiveness in the age of eHealth, eHealth literacy of disadvantaged groups needs to be addressed to help avoid a digital divide. 
This will assist the realization of recent technological advancements and, importantly, improve health equity.
Cross-cultural adaptation of the Health Education Impact Questionnaire: experimental study showed expert committee, not back-translation, added value
To assess the contribution of back-translation and expert committee to the content and psychometric properties of a translated multidimensional questionnaire. Recommendations for questionnaire translation include back-translation and expert committee, but their contribution to measurement properties is unknown. Four English to French translations of the Health Education Impact Questionnaire were generated with and without committee or back-translation. Face validity, acceptability, and structural properties were compared after random assignment to people with rheumatoid arthritis (N = 1,168), chronic renal failure (N = 2,368), and diabetes (N = 538). For face validity, 15 bilingual people compared the quality of the translations with the original. Psychometric properties were examined using confirmatory factor analysis (metric and scalar invariance) and item response theory. Qualitatively, there were five types of translation errors: style, intensity, frequency/time frame, breadth, and meaning. Bilingual assessors ranked the translations produced with an expert committee best (P = 0.0026). All translations had good structural properties (root mean square error of approximation <0.05; comparative fit index [CFI] ≥0.899; and Tucker–Lewis index ≥0.889). Full measurement invariance was observed between translations (ΔCFI ≤ 0.01), with metric invariance between translations and the original (lowest ΔCFI = 0.022 between fully constrained models and models with free intercepts). Item characteristic curve analyses revealed no significant differences. This is the first experimental evidence that back-translation has moderate impact, whereas an expert committee helps to ensure accurate content.
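The ΔCFI ≤ 0.01 criterion used in the abstract above compares a constrained (invariant) model against a freer one. A minimal sketch of that comparison, with hypothetical fit statistics (none of the numbers below come from the study):

```python
# Illustrative sketch of the delta-CFI criterion for measurement invariance.
# All chi-square values and degrees of freedom here are hypothetical.

def cfi(chi2_model: float, df_model: float,
        chi2_baseline: float, df_baseline: float) -> float:
    """Comparative Fit Index: 1 minus the ratio of model to baseline
    noncentrality (chi-square minus degrees of freedom, floored at 0)."""
    num = max(chi2_model - df_model, 0.0)
    den = max(chi2_baseline - df_baseline, chi2_model - df_model, 0.0)
    return 1.0 - num / den

# Configural (free) model vs. a model with constrained intercepts (scalar).
cfi_configural = cfi(chi2_model=310.0, df_model=120,
                     chi2_baseline=4200.0, df_baseline=153)
cfi_scalar = cfi(chi2_model=355.0, df_model=140,
                 chi2_baseline=4200.0, df_baseline=153)

delta_cfi = cfi_configural - cfi_scalar
invariant = delta_cfi <= 0.01  # common cut-off (Cheung & Rensvold)
```

If the drop in CFI from the freer to the more constrained model stays within 0.01, the constraints are considered tenable and invariance is supported.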
German translation, cultural adaptation, and validation of the Health Literacy Questionnaire (HLQ)
The Health Literacy Questionnaire (HLQ), developed in Australia in 2012 using a 'validity-driven' approach, has been rapidly adopted and is being applied in many countries and languages. It is a multidimensional measure comprising nine distinct domains that may be used for surveys, needs assessment, evaluation and outcomes assessment as well as for informing service improvement and the development of interventions. The aim of this paper is to describe the German translation of the HLQ and to present the results of the validation of the culturally adapted version. The HLQ comprises 44 items, which were translated and culturally adapted to the German context. This study uses data collected from a sample of 1,058 persons with chronic conditions. Statistical analyses include descriptive and confirmatory factor analyses. In one-factor congeneric models, all scales demonstrated good fit after few model adjustments. In a single, highly restrictive nine-factor model (no cross-loadings, no correlated errors) replication of the original English-language version was achieved with fit indices and psychometric properties similar to the original HLQ. Reliability for all scales was excellent, with a Cronbach's Alpha of at least 0.77. High to very high correlations between some HLQ factors were observed, suggesting that higher order factors may be present. Our rigorous development and validation protocol, as well as strict adaptation processes, have generated a remarkable reproduction of the HLQ in German. The results of this validation provide evidence that the HLQ is robust and can be recommended for use in German-speaking populations. German Clinical Trial Registration (DRKS): DRKS00000584. Registered 23 March 2011.
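The abstract above reports Cronbach's alpha of at least 0.77 for all scales. A self-contained sketch of how alpha is computed from an item-score matrix, using simulated (hypothetical) data rather than the study's:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated responses: 4 items driven by a single latent trait plus noise
# (hypothetical data, only to exercise the formula).
rng = np.random.default_rng(42)
trait = rng.normal(size=(200, 1))
items = trait + 0.5 * rng.normal(size=(200, 4))
alpha = cronbach_alpha(items)  # high internal consistency expected
```

Items that share a strong common factor, as here, yield alpha well above the 0.7 rule-of-thumb threshold; unrelated items would pull it toward zero.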
The grounded psychometric development and initial validation of the Health Literacy Questionnaire (HLQ)
Background Health literacy has become an increasingly important concept in public health. We sought to develop a comprehensive measure of health literacy capable of diagnosing health literacy needs across individuals and organisations by utilizing perspectives from the general population, patients, practitioners and policymakers. Methods Using a validity-driven approach we undertook grounded consultations (workshops and interviews) to identify broad conceptually distinct domains. Questionnaire items were developed directly from the consultation data following a strict process aiming to capture the full range of experiences of people currently engaged in healthcare through to people in the general population. Psychometric analyses included confirmatory factor analysis (CFA) and item response theory. Cognitive interviews were used to ensure questions were understood as intended. Items were initially tested in a calibration sample from community health, home care and hospital settings (N=634) and then in a replication sample (N=405) comprising recent emergency department attendees. Results Initially 91 items were generated across 6 scales with agree/disagree response options and 5 scales with difficulty-in-undertaking-tasks response options. Cognitive testing revealed that most items were well understood and only some minor re-wording was required. Psychometric testing of the calibration sample identified 34 poorly performing or conceptually redundant items, which were removed, resulting in 10 scales. These were then tested in the replication sample and refined to yield 9 final scales comprising 44 items. A 9-factor CFA model was fitted to these items with no cross-loadings or correlated residuals allowed. Given the very restricted nature of the model, the fit was quite satisfactory: χ²(WLSMV; df = 866) = 2927, p < 0.001; CFI = 0.936; TLI = 0.930; RMSEA = 0.076; WRMR = 1.698. 
Final scales included: Feeling understood and supported by healthcare providers; Having sufficient information to manage my health; Actively managing my health; Social support for health; Appraisal of health information; Ability to actively engage with healthcare providers; Navigating the healthcare system; Ability to find good health information; and Understand health information well enough to know what to do. Conclusions The HLQ covers 9 conceptually distinct areas of health literacy to assess the needs and challenges of a wide range of people and organisations. Given the validity-driven approach, the HLQ is likely to be useful in surveys, intervention evaluation, and studies of the needs and capabilities of individuals.
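The fit indices reported in the abstract above (CFI, TLI, RMSEA) are all functions of the model and baseline chi-square statistics. The sketch below uses the standard ML-based formulas; the paper's WLSMV estimator applies estimator-specific corrections, and the sample size and baseline model values here are hypothetical:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation (standard ML-based form):
    sqrt of per-degree-of-freedom noncentrality, scaled by sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def tli(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
    """Tucker-Lewis Index from model and baseline (null) chi-square fits."""
    ratio_b = chi2_b / df_b
    ratio_m = chi2_m / df_m
    return (ratio_b - ratio_m) / (ratio_b - 1.0)

# Reported model chi-square and df, paired with a hypothetical sample size
# and a hypothetical baseline model (not taken from the study):
r = rmsea(chi2=2927, df=866, n=405)
t = tli(chi2_m=2927, df_m=866, chi2_b=33000, df_b=946)
```

Values near 0.08 for RMSEA and above 0.90 for TLI, as the study reports, are conventionally read as acceptable fit for a highly restricted model.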
Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity
Background Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. Objective This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. Illustrative example The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Discussion Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.
Systematic development and implementation of interventions to OPtimise Health Literacy and Access (Ophelia)
Background The need for healthcare strengthening to enhance equity is critical, requiring systematic approaches that focus on those experiencing lesser access and outcomes. This project developed and tested the Ophelia (OPtimising HEalth LIteracy and Access) approach for co-design of interventions to improve health literacy and equity of access. Eight principles guided this development: Outcomes focused; Equity driven, Needs diagnosis, Co-design, Driven by local wisdom, Sustainable, Responsive and Systematically applied. We report the application of the Ophelia process where proof-of-concept was defined as successful application of the principles. Methods Nine sites were briefed on the aims of the project around health literacy, co-design and quality improvement. The sites were rural/metropolitan, small/large hospitals, community health centres or municipalities. Each site identified their own priorities for improvement; collected health literacy data using the Health Literacy Questionnaire (HLQ) within the identified priority groups; engaged staff in co-design workshops to generate ideas for improvement; developed program-logic models; and implemented their projects using Plan-Do-Study-Act (PDSA) cycles. Evaluation included assessment of impacts on organisations, practitioners and service users, and whether the principles were applied. Results Sites undertook co-design workshops involving discussion of service user needs informed by HLQ ( n  = 813) and interview data. Sites generated between 21 and 78 intervention ideas and then planned their selected interventions through program-logic models. Sites successfully implemented interventions and refined them progressively with PDSA cycles. Interventions generally involved one of four pathways: development of clinician skills and resources for health literacy, engagement of community volunteers to disseminate health promotion messages, direct impact on consumers’ health literacy, and redesign of existing services. 
Evidence of application of the principles was found in all sites. Conclusions The Ophelia approach guided identification of health literacy issues at each participating site and the development and implementation of locally appropriate solutions. The eight principles provided a framework that allowed flexible application of the Ophelia approach and generation of a diverse set of interventions. Changes were observed at organisational, staff, and community member levels. The Ophelia approach can be used to generate health service improvements that enhance health outcomes and address inequity of access to healthcare.
Distribution of health literacy strengths and weaknesses across socio-demographic groups: a cross-sectional survey using the Health Literacy Questionnaire (HLQ)
Background Recent advances in the measurement of health literacy allow description of a broad range of personal and social dimensions of the concept. Identifying differences in patterns of health literacy between population sub-groups will increase understanding of how health literacy contributes to health inequities and inform intervention development. The aim of this study was to use a multi-dimensional measurement tool to describe the health literacy of adults in urban and rural Victoria, Australia. Methods Data were collected from clients (n = 813) of 8 health and community care organisations, using the Health Literacy Questionnaire (HLQ). Demographic and health service data were also collected. Data were analysed using descriptive statistics. Effect sizes (ES) for standardised differences in means were used to describe the magnitude of difference between demographic sub-groups. Results Mean age of respondents was 72.1 (range 19–99) years. Females comprised 63 % of the sample, 48 % had not completed secondary education, and 96 % reported at least one existing health condition. Small to large ES were seen for mean differences in HLQ scales between most demographic groups. Compared with participants who spoke English at home, those not speaking English at home had much lower scores for most HLQ scales including the scales ‘Understanding health information well enough to know what to do’ (ES −1.09 [95 % confidence interval (CI) -1.33 to −0.84]), ‘Ability to actively engage with healthcare providers’ (ES −1.00 [95 % CI −1.24, −0.75]), and ‘Navigating the healthcare system’ (ES −0.72 [95 % CI −0.97, −0.48]). Similar patterns and ES were seen for participants born overseas compared with those born in Australia. Smaller ES were seen for sex, age group, private health insurance status, number of chronic conditions, and living alone. 
Conclusions This study has revealed some large health literacy differences across nine domains of health literacy in adults using health services in Victoria. These findings provide insights into the relationship between health literacy and socioeconomic position in vulnerable groups and, given the focus of the HLQ, provide guidance for the development of equitable interventions.
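The effect sizes with confidence intervals reported in the abstract above are standardized mean differences. A minimal sketch of Cohen's d with a pooled SD and the usual large-sample approximate CI; the group means, SDs, and sizes below are hypothetical, not the study's data:

```python
import math

def cohens_d(m1: float, s1: float, n1: int,
             m2: float, s2: float, n2: int):
    """Standardized mean difference (pooled SD) with an approximate 95% CI."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Large-sample standard error of d, then a normal-theory 95% interval.
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical HLQ scale scores: a smaller group with lower mean vs. a
# larger comparison group (illustrative values only).
d, ci = cohens_d(m1=2.8, s1=0.6, n1=120, m2=3.4, s2=0.6, n2=600)
```

A d around −1.0, as in this illustration, is a large effect in the conventional reading, comparable in magnitude to the language-background differences the study reports.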
Translation method is validity evidence for construct equivalence: analysis of secondary data routinely collected during translations of the Health Literacy Questionnaire (HLQ)
Background Cross-cultural research with patient-reported outcomes measures (PROMs) assumes that the PROM in the target language will measure the same construct in the same way as the PROM in the source language. Yet translation methods are rarely used to qualitatively maximise construct equivalence or to describe the intents of each item to support common understanding within translation teams. This study aimed to systematically investigate the utility of the Translation Integrity Procedure (TIP), in particular the use of item intent descriptions, to maximise construct equivalence during the translation process, and to demonstrate how documented data from the TIP contributes evidence to a validity argument for construct equivalence between translated and source language PROMs. Methods Analysis of secondary data was conducted on routinely collected data in TIP Management Grids of translations ( n  = 9) of the Health Literacy Questionnaire (HLQ) that took place between August 2014 and August 2015: Arabic, Czech, French (Canada), French (France), Hindi, Indonesian, Slovak, Somali and Spanish (Argentina). Two researchers initially independently deductively coded the data to nine common types of translation errors. Round two of coding included an identified 10th code. Coded data were compared for discrepancies, and checked when needed with a third researcher for final code allocation. Results Across the nine translations, 259 changes were made to provisional forward translations and were coded into 10 types of errors. Most frequently coded errors were Complex word or phrase ( n  = 99), Semantic ( n  = 54) and Grammar ( n  = 27). Errors coded least frequently were Cultural errors ( n  = 7) and Printed errors ( n  = 5). Conclusions To advance PROM validation practice, this study investigated a documented translation method that includes the careful specification of descriptions of item intents. 
Assumptions that translated PROMs have construct equivalence between linguistic contexts can be incorrect due to errors in translation. Of particular concern was the use of high level complex words by translators, which, if undetected, could cause flawed interpretation of data from people with low literacy. Item intent descriptions can support translations to maximise construct equivalence, and documented translation data can contribute evidence to justify score interpretation and use of translated PROMS in new linguistic contexts.
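The study above had two researchers code translation changes independently and then reconcile discrepancies. One common way to quantify such inter-coder agreement (not stated in the abstract, offered here as an illustration) is Cohen's kappa; the code labels below echo the study's error types but the assignments are hypothetical:

```python
from collections import Counter

def cohens_kappa(codes_a: list, codes_b: list) -> float:
    """Cohen's kappa: observed agreement between two coders, corrected
    for the agreement expected by chance from each coder's label frequencies."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical error-code assignments by two independent coders:
coder_1 = ["Semantic", "Grammar", "Semantic", "Complex", "Semantic", "Grammar"]
coder_2 = ["Semantic", "Grammar", "Complex", "Complex", "Semantic", "Semantic"]
kappa = cohens_kappa(coder_1, coder_2)
```

Kappa of 1 means perfect agreement; 0 means no better than chance. Disagreements like the two above would then go to a third researcher for final allocation, as the study describes.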
Validity Evidence of the eHealth Literacy Questionnaire (eHLQ) Part 2: Mixed Methods Approach to Evaluate Test Content, Response Process, and Internal Structure in the Australian Community Health Setting
Digital technologies have changed how we manage our health, and eHealth literacy is needed to engage with health technologies. Any eHealth strategy would be ineffective if users' eHealth literacy needs are not addressed. A robust measure of eHealth literacy is essential for understanding these needs. On the basis of the eHealth Literacy Framework, which identified 7 dimensions of eHealth literacy, the eHealth Literacy Questionnaire (eHLQ) was developed. The tool has demonstrated robust psychometric properties in the Danish setting, but validity testing should be an ongoing and accumulative process. This study aims to evaluate validity evidence based on test content, response process, and internal structure of the eHLQ in the Australian community health setting. A mixed methods approach was used with cognitive interviewing conducted to examine evidence on test content and response process, whereas a cross-sectional survey was undertaken for evidence on internal structure. Data were collected at 3 diverse community health sites in Victoria, Australia. Psychometric testing included both the classical test theory and item response theory approaches. Methods included Bayesian structural equation modeling for confirmatory factor analysis, internal consistency and test-retest for reliability, and the Bayesian multiple-indicators, multiple-causes model for testing of differential item functioning. Cognitive interviewing identified only 1 confusing term, which was clarified. All items were easy to read and understood as intended. A total of 525 questionnaires were included for psychometric analysis. All scales were homogenous with composite scale reliability ranging from 0.73 to 0.90. The intraclass correlation coefficient for test-retest reliability for the 7 scales ranged from 0.72 to 0.95. 
A 7-factor Bayesian structural equation model using small-variance priors for cross-loadings and residual covariances was fitted to the data, and the model of interest produced a satisfactory fit (posterior predictive p = .49; 95% CI for the difference between observed and replicated chi-square values −101.40 to 108.83; prior-posterior predictive p = .92). All items loaded on the relevant factor, with loadings ranging from 0.36 to 0.94. No significant cross-loading was found. There was no evidence of differential item functioning for administration format, site area, and health setting. However, discriminant validity was not well established for scales 1, 3, 5, 6, and 7. Item response theory analysis found that all items provided precise information at different trait levels, except for 1 item. All items demonstrated different sensitivity to different trait levels and represented a range of difficulty levels. The evidence suggests that the eHLQ is a tool with robust psychometric properties, although further investigation of discriminant validity is recommended. It is ready to be used to identify eHealth literacy strengths and challenges and to assist the development of digital health interventions to ensure that people with limited digital access and skills are not left behind.
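The composite scale reliability of 0.73 to 0.90 reported above is computed from standardized factor loadings. A minimal sketch of the standard congeneric-model formula, with hypothetical loadings (not the eHLQ's published values):

```python
def composite_reliability(loadings: list) -> float:
    """Composite (congeneric) scale reliability from standardized loadings,
    assuming uncorrelated errors:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2."""
    s = sum(loadings)
    error_var = sum(1 - l**2 for l in loadings)
    return s**2 / (s**2 + error_var)

# Hypothetical standardized loadings for one scale:
cr = composite_reliability([0.70, 0.75, 0.80, 0.85])
```

Unlike Cronbach's alpha, this estimate does not assume equal loadings across items, which suits scales whose items load between roughly 0.4 and 0.9 as reported here.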
Measuring health literacy in community agencies: a Bayesian study of the factor structure and measurement invariance of the health literacy questionnaire (HLQ)
Background The development of the Health Literacy Questionnaire (HLQ), reported in 2013, attracted widespread international interest. While the original study samples were drawn from clinical and home-based aged-care settings, the HLQ was designed for the full range of healthcare contexts including community-based health promotion and support services. We report a follow-up study of the psychometric properties of the HLQ with respondents from a diverse range of community-based organisations with the principal goal of contributing to the development of a soundly validated evidence base for its use in community health settings. Methods Data were provided by 813 clients of 8 community agencies in Victoria, Australia who were administered the HLQ during the needs assessment stage of the Ophelia project, a health literacy-based intervention. Most analyses were conducted using Bayesian structural equation modelling that enables rigorous analysis of data but with some relaxation of the restrictive requirements for zero cross-loadings and residual correlations of ‘classical’ confirmatory factor analysis. Scale homogeneity was investigated with one-factor models that allowed for the presence of small item residual correlations while discriminant validity was studied using the inter-factor correlations and factor loadings from a full 9-factor model with similar allowance for small residual correlations and cross-loadings. Measurement invariance was investigated scale-by-scale using a model that required strict invariance of item factor loadings, thresholds, residual variances and co-variances. Results All HLQ scales were found to be homogenous with composite reliability ranging from 0.80 to 0.89. The factor structure of the HLQ was replicated and 6 of the 9 scales were found to exhibit clear-cut discriminant validity. 
With a small number of exceptions involving non-invariance of factor loadings, strict measurement invariance was established across the participating organisations and the gender, language background, age and educational level of respondents. Conclusions The HLQ is highly reliable, even with only 4 to 6 items per scale. It provides unbiased mean estimates of group differences across key demographic indicators. While measuring relatively narrow constructs, the 9 dimensions are clearly separate and therefore provide fine-grained data on the multidimensional area of health literacy. These analyses provide researchers, program managers and policymakers with a range of robust evidence by which they can make judgements about the appropriate use of the HLQ for their community-based setting.