191 results for "Research Instruments, Questionnaires, and Tools"
Digital Technologies and Biomarkers for Locomotor Capacity Assessment in Older Adults: Systematic Review
Locomotor capacity, encompassing endurance, balance, muscle strength, muscle function, muscle power, and joint function of the body, is a key determinant of functional ability in older adults. Assessment tools based on digital technologies for objectively assessing locomotor capacity are increasingly being developed, but their reliability, validity, and clinical potential remain underexplored. This systematic review aims to evaluate the current state of digital technologies, assess their validity and reliability for assessing locomotor capacity, and facilitate their effective implementation in clinical settings. Systematic literature searches were performed in 6 electronic databases from inception to March 7, 2025. Citation lists from the included studies and gray literature from Google Scholar were additionally searched. Studies focusing on the reliability and validity of digital technologies for assessing locomotor capacity in general older adults were included. Standardized forms were used to extract information on study characteristics, participant demographics, digital technology details, and validity and reliability results. Methodological quality assessment and rating of measurement properties were conducted in accordance with the COSMIN (Consensus-Based Standards for the Selection of Health Measurement Instruments) guidelines. A total of 14 studies were included, of which 13 assessed balance using inertial measurement units, smartphones, balance boards, and force plates, and 1 assessed muscle power using smartphones. Fifty-one digital biomarkers were identified, including 47 for balance and 4 for muscle power assessment. Test-retest reliability coefficients ranged from 0.016 to 0.97, and validity was context specific. Overall, 13 studies demonstrated sufficient test-retest reliability and validity, whereas 1 study was rated as insufficient for convergent validity. Methodological quality was rated as "doubtful" or "inadequate" in 11 studies. This review provides a comprehensive summary of digital technologies for assessing locomotor capacity in older adults and identifies 51 digital biomarkers with generally acceptable reliability and validity. Unlike previous studies that focused on specific sensor types or disease-specific populations, this review integrates evidence across technologies within general older populations, providing insights into the clinical application potential of digital biomarkers as well as the key translational barriers limiting their real-world implementation. Specifically, existing digital technologies show considerable promise for early detection of functional decline, longitudinal monitoring, and informing personalized interventions. However, their clinical applicability remains constrained by limited assessment of certain locomotor components and by methodological shortcomings across current studies. Future research should prioritize rigorous, high-quality investigations that expand evaluation to a broader range of locomotor components in real-world settings while developing age-friendly tools with enhanced clinical interpretability.
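The review's central psychometric is test-retest reliability, which COSMIN conventionally quantifies with an intraclass correlation coefficient. As a reference point, here is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure) in Python; the balance data, sample size, and measurement scale are hypothetical, not taken from the included studies.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    scores: (n_subjects, k_sessions) matrix of repeated measurements.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)          # per-subject means
    col_means = scores.mean(axis=0)          # per-session means

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols

    msr = ss_rows / (n - 1)                  # between-subjects mean square
    msc = ss_cols / (k - 1)                  # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: a sway-based balance biomarker measured twice in 8 adults.
rng = np.random.default_rng(0)
true_score = rng.normal(50, 10, size=8)
sessions = np.column_stack([true_score + rng.normal(0, 3, 8),
                            true_score + rng.normal(0, 3, 8)])
print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")
```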
Validation of the Updated Digital Health Literacy Instrument and Development of a Short Form: Online Survey Study of the General Population
The digital health literacy instrument (DHLI) was developed in 2017 to measure individuals' ability to access, understand, evaluate, and apply online health information. Since that time, digital health has shifted from desktop-based internet use to mobile devices, and there has been a rapidly expanding range of health apps. Additionally, heightened privacy and data security requirements have increased the complexity of user competencies needed to engage with digital health tools. These developments underscore the need to update the original DHLI. This study aimed to create an updated version of the DHLI (DHLI 2.0) that reflects current digital health practices and to examine its reliability and validity by exploring associations with user characteristics. Additionally, we aimed to develop a short-form version to facilitate broader use in research and practice. The instrument was iteratively updated and pilot-tested to retain the original theoretical framework while reflecting current digital health practices, devices, and emerging challenges such as mobile use and data security. Several items were reworded and a new 2-item subscale on digital safety was added. The full DHLI 2.0 comprises 24 items across 8 skill domains. A 16-item short form was developed by iteratively removing 1 or 2 items per subscale based on the "α if item deleted" criterion, while retaining the same subscale structure as the full form. Data to validate the new version of the instrument were collected in June 2024 through an online survey among members of a representative citizen panel in Friesland, a province in the Netherlands (N=2728). Sociodemographics, internet and health-related internet use, general health literacy (measured with the Single Item Literacy Screener), self-reported health, and health care use were assessed. Internal consistency was evaluated using Cronbach α, and construct validity was assessed via Spearman ρ correlations with related constructs. Internal consistency was high for both the full (α=0.94) and short-form (α=0.90) scales. Most subscales showed satisfactory to excellent reliability (α=0.71-0.93), while "Securing privacy" and "Using security measures" demonstrated moderate reliability (α=0.65-0.66). The DHLI 2.0 total scores were approximately normally distributed (skewness -0.5; kurtosis 0.4). As expected, digital health literacy was negatively correlated with age (ρ=-0.39, P<.001) and positively correlated with education (ρ=0.22, P<.001), income (ρ=0.27, P<.001), time spent online (ρ=0.32, P<.001), and general health literacy (ρ=-0.42, P<.001; the Single Item Literacy Screener is scored such that higher values indicate lower literacy, so the negative coefficient reflects a positive association). The DHLI 2.0 provides an updated, reliable, and valid measure of digital health literacy covering 8 key domains, including data security. The 16-item short form offers a concise alternative suitable for research and possibly practical applications in health and eHealth contexts.
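The 16-item short form was built by iteratively dropping items with the "α if item deleted" criterion. A minimal sketch of that computation on hypothetical Likert data (the item matrix below is simulated; it is not the DHLI 2.0 data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach α for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def alpha_if_deleted(items: np.ndarray) -> list[float]:
    """α recomputed with each item removed in turn."""
    return [cronbach_alpha(np.delete(items, j, axis=1))
            for j in range(items.shape[1])]

# Hypothetical 4-item subscale, 200 respondents, 1-4 Likert responses.
rng = np.random.default_rng(1)
latent = rng.normal(0, 1, 200)
items = np.clip(np.round(2.5 + latent[:, None] + rng.normal(0, 0.8, (200, 4))), 1, 4)

print(f"alpha = {cronbach_alpha(items):.2f}")
for j, a in enumerate(alpha_if_deleted(items)):
    # for shortening, drop the item whose removal lowers alpha the least
    print(f"without item {j}: alpha = {a:.2f}")
```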
The Adult Inpatient eHealth Literacy Scale (AIPeHLS): Development and Validation Study
The rapid evolution of digital health technologies, particularly within the Web 3.0 framework, has underscored eHealth literacy (eHL) as a critical competency for patients engaging with digital health care platforms. Patients in sustained hospital stays, often in vulnerable conditions, face unique challenges in using eHealth tools effectively. However, existing eHL assessment tools are insufficient to address the intricate and dynamic demands of contemporary health care systems, especially for individuals under continuous hospital care. This study aimed to develop the Adult Inpatient eHealth Literacy Scale (AIPeHLS), a comprehensive, multidimensional tool grounded in the Lily Model, to evaluate eHL among adult inpatients within the context of digital health care innovations. The development of the AIPeHLS followed a systematic, multiphase process. Initial item pool generation was informed by a literature review and then refined using the Delphi method, resulting in a preliminary set of 53 items spanning 6 dimensions of the Lily Model. The scale was refined through a pilot survey among 100 individuals requiring inpatient care, followed by item analysis and exploratory factor analysis (EFA). Validation was achieved via a cross-sectional study with 532 participants, using confirmatory factor analysis (CFA) to verify the scale structure, alongside evaluations of convergent, discriminant, criterion-related, and content validity. Reliability was assessed using Cronbach α, ω, and split-half reliability. The finalized AIPeHLS comprised 44 items across 6 dimensions: traditional literacy, information literacy, media literacy, health literacy, computer literacy, and scientific literacy, reflecting the skills necessary in the Web 3.0 context. Both EFA and CFA confirmed the 6-factor structure, demonstrating acceptable model fit indices (χ²=1974.654, df=887, root mean square error of approximation=0.048, comparative fit index=0.957, normed fit index=0.925, and incremental fit index=0.957). The scale exhibited robust content validity, convergent and discriminant validity, criterion-related validity, and high internal consistency, with a Cronbach α of 0.965, an ω coefficient of 0.962, and a split-half reliability of 0.791 for the entire scale. The 44-item AIPeHLS was found to be a reliable and valid instrument for assessing eHL in adult inpatients in the evolving Web 3.0 context. Its comprehensive framework and strong psychometric properties make it an effective tool for health care providers to understand patients' digital health competencies and tailor interventions accordingly. For researchers, our findings provided opportunities to explore the relationship between eHL and health outcomes, while offering valuable insights into the development of more effective eHealth interventions and policies.
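Of the three reliability estimates reported, split-half reliability has the most variants; a common choice is the odd-even split with the Spearman-Brown step-up. A minimal sketch under that assumption (the abstract does not specify which split the authors used, and the data below are simulated):

```python
import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    """Odd-even split-half reliability with the Spearman-Brown correction."""
    odd = items[:, 0::2].sum(axis=1)    # total score on odd-numbered items
    even = items[:, 1::2].sum(axis=1)   # total score on even-numbered items
    r = np.corrcoef(odd, even)[0, 1]    # correlation between half scores
    return 2 * r / (1 + r)              # step up to the full test length

# Hypothetical 44-item response matrix for 300 respondents.
rng = np.random.default_rng(2)
latent = rng.normal(0, 1, 300)
items = latent[:, None] + rng.normal(0, 1.5, (300, 44))
print(f"split-half reliability = {split_half_reliability(items):.2f}")
```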
The Impact of Individual Factors on Careless Responding Across Different Mental Disorder Screenings: Cross-Sectional Study
Online questionnaires are widely used for large-scale screening. However, careless responding (CR) from participants can compromise the reliability of screening outcomes. Prior studies have focused on the effects of individual and environmental factors on CR, but the effect of questionnaire type remains underexplored. This study investigates the individual factors influencing CR in online mental health screening and assesses how the effect of these factors varies across different psychological questionnaires. This study analyzed data from 24,367 participants across 4 questionnaires (PHQ-9 [Patient Health Questionnaire-9], PSS [Perceived Stress Scale], ISI [Insomnia Severity Index], and GAD-7 [Generalized Anxiety Disorder-7 Scale]). CR was defined as the proportion of items completed in less than 2 seconds per item. We used a multiple linear regression model to examine the effect of individual factors (sex, age, education, smoking, and drinking) on CR across the 4 questionnaires. In addition, response times were visualized to identify patterns between careless and careful responders. Females demonstrated lower levels of CR than males when completing the PHQ-9 (β=-.172, 95% CI -0.104 to -0.089; P<.001), PSS (β=-.234, 95% CI -0.162 to -0.14; P<.001), ISI (β=-.207, 95% CI -0.13 to -0.114; P<.001), and GAD-7 (β=-.177, 95% CI -0.108 to -0.093; P<.001). Older participants demonstrated lower levels of CR on the PHQ-9 (β=-.036, 95% CI -0.007 to -0.003; P<.001), ISI (β=-.036, 95% CI -0.007 to -0.003; P<.001), and GAD-7 (β=-.053, 95% CI -0.009 to -0.005; P<.001), but age was unrelated to CR on the PSS. Interestingly, compared with participants with an associate-level education, those with higher education (bachelor's, master's, or doctoral degree) demonstrated higher levels of CR, especially those with a master's degree (PHQ-9: β=.098, 95% CI 0.136 to 0.188; P<.001 and GAD-7: β=.091, 95% CI 0.125 to 0.178; P<.001). Smokers exhibited varied patterns, with current smokers demonstrating lower levels of CR on the PHQ-9 (β=-.022, 95% CI -0.064 to -0.016; P=.001) and GAD-7 (β=-.014, 95% CI -0.051 to -0.002; P=.03), whereas occasional smokers demonstrated higher levels of CR on the PSS (β=.019, 95% CI 0.010 to 0.050; P=.003) than nonsmokers. Drinkers demonstrated lower levels of CR than nondrinkers, with the strongest effect among occasional drinkers on the PHQ-9 (β=-.163, 95% CI -0.103 to -0.087; P<.001). Analysis of response times revealed that participants tended to spend less time on the PHQ-9 and GAD-7 surveys, and CR on the PSS and ISI surveys was characterized by skipping questions. The effect of individual factors on CR varies across questionnaire types. These findings offer valuable insights for questionnaire designers and administrators, highlighting the need for targeted interventions.
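The study's CR index is simply the proportion of a respondent's items answered in under 2 seconds. One plausible operationalization, given a per-item response-time matrix (the timings below are simulated, not the study's data):

```python
import numpy as np

def careless_proportion(item_times: np.ndarray, threshold: float = 2.0) -> np.ndarray:
    """Per-respondent share of items answered faster than `threshold` seconds.

    item_times: (n_respondents, n_items) response times in seconds.
    """
    return (item_times < threshold).mean(axis=1)

# Hypothetical PHQ-9-style timing matrix: 5 respondents x 9 items.
rng = np.random.default_rng(3)
times = rng.gamma(shape=2.0, scale=2.0, size=(5, 9))  # mostly 2-8 s per item
times[0] = 0.8                                        # one respondent speeding through
print(np.round(careless_proportion(times), 2))        # respondent 0 scores 1.0
```

The resulting CR scores can then serve as the dependent variable in a regression on sex, age, education, smoking, and drinking, as the study describes.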
Perspective Mapping: Tutorial for Collecting Quantifiable Qualitative Interview Data
Mixed methods research is essential to the development of patient-reported outcome measures, digital technology, and endpoint selection for clinical drug trials, and to advancing clinical care when complex health-related experiences cannot be fully understood by quantitative or qualitative approaches alone. New technology and opportunities for remote data collection have changed the ways in which qualitative and quantitative data can be collected, enabling researchers to capture human experiences in ways not previously possible. This paper describes Perspective Mapping, a new online interviewing technique that uses mind-mapping software to capture in-depth qualitative data inside a quantitative measurement framework to understand and measure individual experiences. The objective of this tutorial is to review the theoretical underpinnings, present instructions for study design and implementation, and address strengths, limitations, and potential applications of this technique in health and behavioral sciences. During videoconferencing interviews, mind-mapping software is used to visually depict experiences. Structured concept maps are cocreated in real time with participants, focusing on building detailed narrative descriptions about experiences and categorizing these within a predefined quantitative framework, such as the relative importance of different experiences relevant to a phenomenon. The approach combines semistructured interviewing with technology-enhanced card-sorting techniques, allowing participants to define and prioritize what matters most. This method ensures narrative richness alongside structured data collection, facilitating deeper understanding of phenomena. Perspective Mapping emphasizes participant engagement in data generation and analysis and enables the simultaneous collection of qualitative narratives and quantitative assessment of key concepts. Variations of the technique have been successfully applied in research on chronic illness, symptom burden, and digital health technology. Advantages of the approach include systematic collection of qualitative data, transparent and structured data outputs, real-time data validation, and the ability to return maps to participants as a form of reciprocity. Feasibility factors, such as interviewer capabilities, participant literacy, interview duration, and technology resources, must be considered. Perspective Mapping offers an innovative and engaging way to gather complementary qualitative and quantitative data remotely. By blending qualitative depth with quantitative structure, the technique supports richer, more actionable insights for health research, policy, and beyond. This technique holds promise for applications in health, psychology, education, and other social sciences where comprehensive understanding of experiences is essential.
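Because Perspective Mapping attaches each narrative node to a predefined quantitative framework, the output lends itself to a simple record structure. A hypothetical sketch (the field names and categories are illustrative, not the authors' schema) of how co-created map nodes might be stored and summarized:

```python
from dataclasses import dataclass, field

@dataclass
class MapNode:
    """One experience captured on a participant's map."""
    label: str
    narrative: str      # in-depth qualitative description (hypothetical field)
    category: str       # bucket from the predefined quantitative framework
    importance: int     # participant-assigned rating, e.g., 1-5

@dataclass
class PerspectiveMap:
    participant_id: str
    nodes: list[MapNode] = field(default_factory=list)

    def importance_by_category(self) -> dict[str, float]:
        """Mean importance per framework category, for cross-case comparison."""
        buckets: dict[str, list[int]] = {}
        for n in self.nodes:
            buckets.setdefault(n.category, []).append(n.importance)
        return {c: sum(v) / len(v) for c, v in buckets.items()}

pm = PerspectiveMap("P01", [
    MapNode("fatigue", "Worst mid-afternoon; limits errands.", "symptom burden", 5),
    MapNode("stairs", "Avoids stairs at work since spring.", "daily function", 3),
])
print(pm.importance_by_category())  # {'symptom burden': 5.0, 'daily function': 3.0}
```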
Integrating Food Preference Profiling, Behavior Change Strategies, and Machine Learning for Cardiovascular Disease Prevention in a Personalized Nutrition Digital Health Intervention: Conceptual Pipeline Development and Proof-of-Principle Study
Personalized dietary advice needs to consider the individual's health risks as well as specific food preferences, offering healthier options aligned with personal tastes. This study aimed to develop a digital health intervention (DHI) that provides personalized nutrition recommendations based on individual food preference profiles (FPPs), using data from the UK Biobank. Data from 61,229 UK Biobank participants were used to develop a conceptual pipeline for a DHI. The pipeline included 3 steps: (1) developing a simplified food preference profiling tool, (2) creating a cardiovascular disease (CVD) prediction model using the subsequent profiles, and (3) selecting intervention features. The CVD prediction model was created using 3 different predictor sets (Framingham set, diet set, and FPP set) across 4 machine learning models: logistic regression, linear discriminant analysis, random forest, and support vector machine. Intervention functions were designed using the Behavior Change Wheel, and behavior change techniques were selected for the DHI features. The feature selection process identified 14 food items out of 140 that effectively classify FPPs. The FPP set, which did not include blood measurements or detailed nutrient intake, demonstrated comparable accuracy (across the 4 models: 0.721-0.725) to the Framingham set (0.724-0.727) and diet set (0.722-0.725). Linear discriminant analysis was chosen as the best-performing model. Four key features of the DHI were identified: food source and portion information, recipes, a dietary recommendation system, and community exchange platforms. The FPP and CVD risk prediction model serve as inputs for the dietary recommendation system. Two levels of personalized nutrition advice were proposed: level 1, based on food portion intake and FPP; and level 2, based on nutrient intake, FPP, and CVD risk probability. This study presents proof of principle for a conceptual pipeline for a DHI that empowers users to make informed dietary choices and reduce CVD risk by catering to person-specific needs and preferences. By making healthy eating more accessible and sustainable, the DHI has the potential to significantly impact public health outcomes.
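The model-comparison step (4 classifiers, each scored on accuracy) maps onto a standard scikit-learn loop. A minimal sketch with synthetic features standing in for the 14-item FPP predictor set; the data, labels, and cross-validation settings are assumptions, not the authors' exact configuration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 14))                           # stand-in for 14 FPP items
y = (X[:, :3].sum(axis=1) + rng.normal(size=500)) > 0    # synthetic CVD label

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)        # scale, then classify
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: accuracy = {acc:.3f}")
```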
eHealth Literacy Assessment Instruments: Scoping Review
eHealth literacy is a necessary competency for individuals to achieve health self-management in the digital age, and the evaluation of eHealth literacy is an important foundation for clarifying individual eHealth literacy levels and implementing eHealth behavior interventions. This study reviews the research progress of eHealth literacy assessment instruments to offer suggestions for further development and improvement as well as to provide a reference for eHealth interventions. We reviewed papers on Web of Science, Scopus, PubMed, and EBSCO in English between 2006 and 2024 and included studies involving the development of eHealth literacy assessment instruments, which must be published in peer-reviewed journals. An analysis in terms of the development process, instrument characteristics, and assessment themes was conducted to reveal the content, features, and application of currently available eHealth literacy assessment instruments. Searches yielded 2972 studies, of which 13 were included in the final analysis. The analysis of the 13 studies indicated that the development of instruments is improving constantly as the concept of eHealth literacy evolves. In total, 9 of the 13 tools are subjective assessments, with the eHealth Literacy Scale being the most widely used. In contrast, the remaining 4 comprehensive assessment tools incorporate objective evaluation criteria. The 13 instruments' reliability ranged from 0.52 to 0.976. Validity was reported for 12 tools (excluding the eHealth Literacy Scale), covering 5 types: content validity, structural validity, discriminant validity, external validity, and convergent validity. Regarding assessment themes, skill factors are covered by many instruments, whereas psychological and informational factors receive less attention. The evaluation of the characteristics of existing eHealth literacy assessment tools in this paper can provide a reference for the selection of assessment tools. Overall, subjective and comprehensive assessment tools for eHealth literacy have their own advantages and disadvantages. Subjective assessment tools offer a user-friendly evaluation method, but their test validity is relatively low. Comprehensive evaluation tools risk being time-consuming and achieving low recognition. Future research should build on the deepening concept of eHealth literacy, further verifying the effectiveness of existing eHealth literacy assessment tools and adding objective evaluation dimensions.
Deep Research Agents: Major Breakthrough or Incremental Progress for Medical AI?
Deep research agents are autonomous large language model–based systems capable of iterative web search, retrieval, and synthesis. They are increasingly positioned as the next major leap in medical artificial intelligence. In this viewpoint, we argue that while these agents mark progress in information access and workflow automation, they represent an incremental evolution rather than a paradigm shift. We review current applications of deep research agents in biomedical scenarios, including literature review generation, clinical evidence synthesis, guideline comparison, and patient education. Across these early use cases, the tools demonstrate the ability to rapidly gather and structure up-to-date information, often producing outputs that appear comprehensive and well-referenced. However, these strengths coexist with unresolved and clinically significant limitations. Citation fidelity remains inconsistent across models, with subtle misinterpretations or unreliable references still common. Their retrieval processes and evidence-ranking mechanisms remain opaque, raising concerns about reproducibility and hidden biases. Moreover, overreliance on artificial intelligence–generated syntheses risks eroding clinicians’ critical appraisal skills and may introduce automation bias at a time when medicine increasingly requires deeper scrutiny of information sources. Safety constraints are also less predictable within multistep research pipelines, increasing the risk of harmful or inappropriate outputs. Finally, current evidence is largely limited to proof-of-concept evaluations, with little evidence from real-life clinical deployment. We contend that deep research agents should be embraced as assistive research tools rather than pseudoexperts. Their value lies in accelerating information gathering, not replacing rigorous human judgment. Realizing their potential will require transparent retrieval architectures, robust benchmarking, and explicit educational integration to preserve clinicians’ evaluative reasoning. Used judiciously, these systems could enrich medical research and practice; used uncritically, they risk amplifying errors at scale.
Artificial Intelligence Tools for Automating Evidence Synthesis: Scoping Review
Rapidly and accurately synthesizing large volumes of evidence is a time- and resource-intensive process. Once published, reviews often risk becoming outdated, limiting their usefulness for decision makers. Recent advancements in artificial intelligence (AI) have enabled researchers to automate stages of the evidence synthesis process, from literature searching and screening to data extraction and analysis. Since previous reviews on this topic were published, a significant number of tools have been further developed and evaluated. Furthermore, as generative AI increasingly automates evidence synthesis, understanding how it is studied and applied is crucial, given both its benefits and risks. This review aimed to map the current landscape of evaluated AI tools used to automate evidence synthesis. Following the Joanna Briggs Institute methodology for scoping reviews, we searched Ovid MEDLINE, Ovid Embase, Scopus, and Web of Science in February 2025 and conducted a gray literature search in April 2025. We included articles published in any language from January 2021 onward. Two reviewers independently screened citations using Rayyan, and data were extracted based on study design and key AI-related technical features. We identified 7841 unique citations through database searches and 19 records through gray literature searching. A total of 222 articles were included in the review. We identified 65 AI tools and 25 open-source models or machine learning (ML) algorithms that automate parts of or the whole evidence synthesis pathway. A total of 54.1% (n=120) of the studies were published in 2024, reflecting a trend toward researching general-purpose large language models (LLMs) for evidence synthesis automation. The most popular tools studied were generative pretrained transformer models, including the conversational interface ChatGPT (n=70, 31.5%). Moreover, 31.1% (n=69) studied tools automated by traditional ML algorithms. No studies compared traditional ML tools to LLM-based tools. In addition, 61.7% (n=137) and 26.1% (n=58) studied AI-assisted automation of title and abstract screening and data extraction, respectively, the 2 most labor-intensive stages and, therefore, those most amenable to automation. Technical performance outcomes were the most frequently reported, with only 4.1% (n=9) of studies reporting time- or workload-specific outcomes. Few studies pragmatically evaluated AI tools in real-world evidence synthesis settings. This review comprehensively captures the broad, evolving suite of AI automation tools available to support evidence synthesis, leveraged by increasingly complex AI approaches that range from traditional ML to LLMs. The notable shift toward studying general-purpose generative AI tools reflects how these technologies are actively transforming evidence synthesis practice. The lack of studies in our review comparing different AI approaches for specific automation stages or evaluating their effectiveness pragmatically represents a significant research gap. Optimal tool selection will likely depend on the review topic, methodology, and researcher priorities. While these tools offer potential for reducing workload, ongoing evaluation to mitigate AI bias and to ensure the integrity of reviews is essential for safeguarding evidence-based decision-making.
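Dual independent screening, as performed here in Rayyan, is commonly followed by a chance-corrected agreement check before conflicts are reconciled. A minimal sketch of Cohen κ on hypothetical include/exclude votes (the review itself does not report κ; this is illustrative only):

```python
def cohens_kappa(r1: list[int], r2: list[int]) -> float:
    """Chance-corrected agreement between two reviewers' 0/1 screening votes."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    p1, p2 = sum(r1) / n, sum(r2) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)   # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical title/abstract votes (1 = include) for 10 citations.
reviewer_a = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
reviewer_b = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
print(f"kappa = {cohens_kappa(reviewer_a, reviewer_b):.2f}")  # 0.58
```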
Increasing Rigor in Online Health Surveys Through the Reduction of Fraudulent Data
Online surveys have become a key tool of modern health research, offering a fast, cost-effective, and convenient means of data collection. They enable researchers to access diverse populations, such as those underrepresented in traditional studies, and facilitate the collection of data on stigmatized or sensitive behaviors through greater anonymity. However, the ease of participation also introduces significant challenges, particularly around data integrity and rigor. As fraudulent responses—whether from bots, repeat responders, or individuals misrepresenting themselves—become more sophisticated and pervasive, ensuring the rigor of online surveys has never been more crucial. This article provides a comprehensive synthesis of practical strategies that help to increase the rigor of online surveys through the detection and removal of fraudulent data. Drawing on recent literature and case studies, we outline several options that address the full research cycle, from strategies before data collection to validation afterward. We emphasize the integration of automated screening techniques (eg, CAPTCHAs and honeypot questions) and attention checks (eg, trap questions) in purposeful survey design. Robust recruitment procedures (eg, concealed eligibility criteria and 2-stage screening) and a proper incentive or compensation structure can also help to deter fraudulent participation. We examine the merits and limitations of different sampling methodologies, including river sampling, online panels, and crowdsourcing platforms, offering guidance on how to select samples based on specific research objectives. After data collection, we discuss metadata-based techniques to detect fraudulent data (eg, duplicate email or IP addresses and response time analysis), alongside methods to better screen for low-quality responses (eg, inconsistent response patterns and improbable qualitative responses). The escalating sophistication of fraud tactics, particularly with the growth of artificial intelligence (AI), demands that researchers continuously adapt and stay vigilant. We propose the use of dynamic protocols, combining multiple strategies into a multipronged approach that can better filter out fraudulent data and evolve depending on the types of responses received across the data collection process. However, there is still significant room for these strategies to develop, and this should be a key focus of upcoming research. As online surveys become increasingly integral to health research, investing in robust strategies to screen for fraudulent data and increase the rigor of studies is key to upholding scientific integrity.
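Several of the metadata checks described (duplicate IP addresses or emails, implausibly fast completions) reduce to a few lines of pandas. A minimal sketch; the column names, the 2-minute threshold, and the rows are hypothetical, not a real platform export:

```python
import pandas as pd

def flag_suspect_rows(df: pd.DataFrame, min_seconds: float = 120.0) -> pd.DataFrame:
    """Flag duplicate IPs/emails and implausibly fast completions."""
    out = df.copy()
    out["dup_ip"] = out["ip"].duplicated(keep=False)
    out["dup_email"] = out["email"].str.lower().duplicated(keep=False)
    out["too_fast"] = out["duration_s"] < min_seconds
    out["suspect"] = out[["dup_ip", "dup_email", "too_fast"]].any(axis=1)
    return out

# Hypothetical submissions table.
rows = pd.DataFrame({
    "ip": ["1.2.3.4", "1.2.3.4", "5.6.7.8"],
    "email": ["a@x.org", "A@x.org", "b@y.org"],
    "duration_s": [95, 640, 510],
})
print(flag_suspect_rows(rows)[["ip", "suspect"]])  # first two rows are flagged
```

Flagged rows are best reviewed rather than dropped automatically, since shared IP addresses (eg, households or institutions) can produce legitimate duplicates.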