25 results for "Data reporting quality framework"
Disparity in the quality of COVID-19 data reporting across India
Background Transparent and accessible reporting of COVID-19 data is critical for public health efforts. Each Indian state has its own mechanism for reporting COVID-19 data, and the quality of their reporting has not been systematically evaluated. We present a comprehensive assessment of the quality of COVID-19 data reporting done by the Indian state governments between 19 May and 1 June, 2020. Methods We designed a semi-quantitative framework with 45 indicators to assess the quality of COVID-19 data reporting. The framework captures four key aspects of public health data reporting – availability, accessibility, granularity, and privacy. We used this framework to calculate a COVID-19 Data Reporting Score (CDRS, ranging from 0–1) for each state. Results Our results indicate a large disparity in the quality of COVID-19 data reporting across India. CDRS varies from 0.61 (good) in Karnataka to 0.0 (poor) in Bihar and Uttar Pradesh, with a median value of 0.26. Ten states do not report data stratified by age, gender, comorbidities or districts. Only ten states provide trend graphics for COVID-19 data. In addition, we identify that Punjab and Chandigarh compromised the privacy of individuals under quarantine by publicly releasing their personally identifiable information. The CDRS is positively associated with the state’s sustainable development index for good health and well-being (Pearson correlation: r = 0.630, p = 0.0003). Conclusions Our assessment informs the public health efforts in India and serves as a guideline for pandemic data reporting. The disparity in CDRS highlights three important findings at the national, state, and individual levels. At the national level, it shows the lack of a unified framework for reporting COVID-19 data in India, and highlights the need for a central agency to monitor or audit the quality of data reporting done by the states. Without a unified framework, it is difficult to aggregate the data from different states, gain insights, and coordinate an effective nationwide response to the pandemic. Moreover, it reflects the inadequacy in coordination or sharing of resources among the states. The disparate reporting score also reflects inequality in individual access to public health information and privacy protection based on the state of residence.
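The scoring described here is easy to make concrete. Below is a minimal Python sketch assuming equal indicator weights: each of the four aspects holds semi-quantitative indicator scores in [0, 1], the CDRS is their mean, and statistics.correlation (Python 3.10+) gives the Pearson r against a development index. All indicator and state values are invented for illustration, not the paper's data.

```python
# CDRS-style score: each of the four reporting aspects holds
# semi-quantitative indicator scores in [0, 1]; the state's score is
# the mean over all indicators (equal weights assumed).
from statistics import correlation  # Python 3.10+

def cdrs(indicators: dict[str, list[float]]) -> float:
    scores = [v for aspect in indicators.values() for v in aspect]
    return sum(scores) / len(scores)

state = {
    "availability":  [1.0, 0.5, 0.0],
    "accessibility": [1.0, 1.0],
    "granularity":   [0.5, 0.0, 0.0],
    "privacy":       [1.0],
}
print(f"CDRS = {cdrs(state):.2f}")  # 0.56 for these toy values

# Pearson correlation between per-state CDRS and a development index
# (illustrative numbers only).
cdrs_by_state = [0.61, 0.46, 0.26, 0.10, 0.00]
dev_index = [72, 69, 60, 55, 50]
print(f"r = {correlation(cdrs_by_state, dev_index):.3f}")
```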
“Best fit” framework synthesis: refining the method
Background Following publication of the first worked example of the “best fit” method of evidence synthesis for the systematic review of qualitative evidence in this journal, the originators of the method identified a need to specify more fully some aspects of this particular derivative of framework synthesis. Methods and Results We therefore present a second such worked example in which all techniques are defined and explained, and their appropriateness is assessed. Specified features of the method include the development of new techniques to identify theories in a systematic manner; the creation of an a priori framework for the synthesis; and the “testing” of the synthesis. An innovative combination of existing methods of quality assessment, analysis and synthesis is used to complete the process. This second worked example was a qualitative evidence synthesis of employees’ views of workplace smoking cessation interventions, in which the “best fit” method was found to be practical and fit for purpose. Conclusions The method is suited to producing context-specific conceptual models for describing or explaining the decision-making and health behaviours of patients and other groups. It offers a pragmatic means of conducting rapid qualitative evidence synthesis and generating programme theories relating to intervention effectiveness, which might be of relevance both to researchers and policy-makers.
The Fishery Performance Indicators: A Management Tool for Triple Bottom Line Outcomes
Pursuit of the triple bottom line of economic, community and ecological sustainability has increased the complexity of fishery management; fisheries assessments require new types of data and analysis to guide science-based policy in addition to traditional biological information and modeling. We introduce the Fishery Performance Indicators (FPIs), a broadly applicable and flexible tool for assessing performance in individual fisheries, and for establishing cross-sectional links between enabling conditions, management strategies and triple bottom line outcomes. Conceptually separating measures of performance, the FPIs use 68 individual outcome metrics, coded on a 1-to-5 scale based on expert assessment to facilitate application to data-poor fisheries and sectors, that can be partitioned into sector-based or triple-bottom-line sustainability-based interpretative indicators. Variation among outcomes is explained with 54 similarly structured metrics of inputs, management approaches and enabling conditions. Using 61 initial fishery case studies drawn from industrial and developing countries around the world, we demonstrate the inferential importance of tracking economic and community outcomes, in addition to resource status.
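As a rough illustration of the aggregation the FPIs describe, the sketch below groups 1-to-5 expert-coded outcome metrics into triple-bottom-line dimensions and averages each; the metric names and groupings are invented, not the 68 published metrics.

```python
# Toy FPI-style aggregation: expert-coded outcome metrics (1-5) are
# grouped into triple-bottom-line dimensions and averaged, keeping each
# dimension on the same 1-5 scale.
metrics = {
    "ecology":   {"stock_status": 4, "bycatch": 3},
    "economics": {"asset_value": 2, "profitability": 3, "wages": 4},
    "community": {"local_ownership": 5, "labor_conditions": 3},
}

dimension_scores = {
    dim: sum(vals.values()) / len(vals) for dim, vals in metrics.items()
}
for dim, score in dimension_scores.items():
    print(f"{dim:10s} {score:.2f}")
```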
Systematic mapping of existing tools to appraise methodological strengths and limitations of qualitative research: first stage in the development of the CAMELOT tool
Background Qualitative evidence synthesis is increasingly used alongside reviews of effectiveness to inform guidelines and other decisions. To support this use, the GRADE-CERQual approach was developed to assess and communicate the confidence we have in findings from reviews of qualitative research. One component of this approach requires an appraisal of the methodological limitations of studies contributing data to a review finding. Diverse critical appraisal tools for qualitative research are currently being used. However, it is unclear which tool is most appropriate for informing a GRADE-CERQual assessment of confidence. Methodology We searched for tools that were explicitly intended for critically appraising the methodological quality of qualitative research. We searched the reference lists of existing methodological reviews for critical appraisal tools, and also conducted a systematic search in June 2016 for tools published in health science and social science databases. Two reviewers screened identified titles and abstracts, and then screened the full text of potentially relevant articles. One reviewer extracted data from each article and a second reviewer checked the extraction. We used a best-fit framework synthesis approach to code checklist criteria from each identified tool and to organise these into themes. Results We identified 102 critical appraisal tools: 71 tools had previously been included in methodological reviews, and 31 tools were identified from our systematic search. Almost half of the tools were published after 2010. Few authors described how their tool was developed, or why a new tool was needed. After coding all criteria, we developed a framework that included 22 themes. None of the tools included all 22 themes. Some themes were included in up to 95 of the tools. Conclusion It is problematic that researchers continue to develop new tools without adequately examining the many tools that already exist. Furthermore, the plethora of tools, old and new, indicates a lack of consensus regarding the best tool to use, and an absence of empirical evidence about the most important criteria for assessing the methodological limitations of qualitative research, including in the context of use with GRADE-CERQual.
A review and synthesis of frameworks for engagement in health research to identify concepts of knowledge user engagement
Background Engaging those who influence, administer, and/or are active users (“knowledge users”) of health care systems as co-producers of health research can help to ensure that research products better address real-world needs. Our aim was to identify and review frameworks of knowledge user engagement in health research in a systematic manner, and to describe the concepts comprising these frameworks. Methods An international team sharing a common interest in knowledge user engagement in health research used a consensus-building process to: 1) agree upon criteria to identify articles, 2) screen articles to identify existing frameworks, 3) extract and analyze data, and 4) synthesize and report the concepts of knowledge user engagement described in health research frameworks. We utilized the Patient Centered Outcomes Research Institute Engagement in Health Research Literature Explorer (PCORI Explorer) as a source of articles related to engagement in health research. The search included articles from May 1995 to December 2017. Results We identified 54 articles about frameworks for knowledge user engagement in health research and report on 15 concepts. The average number of concepts reported in the 54 articles is 7, ranging from 1 to 13. The most commonly reported concepts are knowledge user – prepare, support (n = 44), relational process (n = 39), and research agenda (n = 38). The least commonly reported concepts are methodology (n = 8), methods (n = 10), and analysis (n = 18). In a comparison of articles that report how research was done (n = 26) versus how research should be done (n = 28), articles about how research was done report concepts more often and have a higher average number of concepts (8 of 15) than articles about how research should be done (6 of 15). The exception is the concept “evaluate”, which is more often reported in articles that describe how research should be done. Conclusions We propose that research teams 1) consider engagement with the 15 concepts as fluid, and 2) consider a form of partnered negotiation that takes place through all phases of research to identify and use concepts appropriate to their team needs. There is a need for further work to understand concepts for knowledge user engagement.
AIMD - a validated, simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies
Background Proliferation of terms describing the science of effectively promoting and supporting the use of research evidence in healthcare policy and practice has hampered understanding and development of the field. To address this, an international Terminology Working Group developed and published a simplified framework of interventions to promote and integrate evidence into health practices, systems, and policies. This paper presents results of validation work and a second international workgroup meeting, culminating in the updated AIMD framework [Aims, Ingredients, Mechanism, Delivery]. Methods Framework validity was evaluated against terminology schemas (n = 51), primary studies (n = 37), and reporting guidelines (n = 10). Framework components were independently categorized as fully represented, partly represented, or absent by two researchers. Opportunities to refine the framework were systematically recorded. A meeting of the expanded international Terminology Working Group updated the framework by reviewing and deliberating upon validation findings and refinement proposals. Results There was variation in representativeness of the components across the three types of literature, in particular for the component ‘causal mechanisms’. Analysis of primary studies revealed that representativeness of this concept dropped from 92% to 68% if only explicit, rather than explicit and non-explicit, references to causal mechanisms were included. All components were very well represented in reporting guidelines; however, the level of description was lower than in other types of literature. Twelve opportunities were identified to improve the framework, nine of which were operationalized at the meeting. The updated AIMD framework comprises four components: (1) Aims: what do you want your intervention to achieve and for whom? (2) Ingredients: what comprises the intervention? (3) Mechanisms: how do you propose the intervention will work? and (4) Delivery: how will you deliver the intervention? Conclusions The draft simplified framework was validated with reference to a wide range of relevant literature and improvements have enhanced usability. The AIMD framework could aid in the promotion of evidence into practice, remove barriers to understanding how interventions work, enhance communication of interventions and support knowledge synthesis. Future work needs to focus on developing and testing resources and educational initiatives to optimize use of the AIMD framework in collaboration with relevant end-user groups.
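To make the four components concrete, here is a minimal sketch of an intervention description structured by AIMD; the example intervention and the field layout are hypothetical, not drawn from the paper.

```python
# Minimal data structure mirroring the four AIMD components named in
# the abstract. The example intervention below is invented.
from dataclasses import dataclass

@dataclass
class AIMDIntervention:
    aims: str         # what the intervention should achieve, and for whom
    ingredients: str  # what comprises the intervention
    mechanisms: str   # how the intervention is proposed to work
    delivery: str     # how the intervention will be delivered

reminder = AIMDIntervention(
    aims="Increase guideline-concordant prescribing among GPs",
    ingredients="Point-of-care alert summarizing the guideline",
    mechanisms="Prompts recall of the recommendation at decision time",
    delivery="Integrated into the electronic health record",
)
print(reminder)
```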
An Ontology to Standardize Research Output of Nutritional Epidemiology: From Paper-Based Standards to Linked Content
Background: The use of linked data in the Semantic Web is a promising approach to add value to nutrition research. An ontology, which defines the logical relationships between well-defined taxonomic terms, enables linking and harmonizing research output. To enable the description of domain-specific output in nutritional epidemiology, we propose the Ontology for Nutritional Epidemiology (ONE) according to authoritative guidance for nutritional epidemiology. Methods: Firstly, a scoping review was conducted to identify existing ontology terms for reuse in ONE. Secondly, existing data standards and reporting guidelines for nutritional epidemiology were converted into an ontology. The terms used in the standards were summarized and listed separately in a taxonomic hierarchy. Thirdly, the ontologies of the nutritional epidemiologic standards, reporting guidelines, and the core concepts were gathered in ONE. Three case studies were included to illustrate potential applications: (i) annotation of existing manuscripts and data, (ii) ontology-based inference, and (iii) estimation of reporting completeness in a sample of nine manuscripts. Results: Ontologies for “food and nutrition” (n = 37), “disease and specific population” (n = 100), “data description” (n = 21), “research description” (n = 35), and “supplementary (meta) data description” (n = 44) were reviewed and listed. ONE consists of 339 classes: 79 new classes to describe data and 24 new classes to describe the content of manuscripts. Conclusion: ONE is a resource to automate data integration, searching, and browsing, and can be used to assess reporting completeness in nutritional epidemiology.
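A small sketch of the linked-data idea using rdflib: a two-class taxonomic hierarchy plus one annotated manuscript. The namespace, class names, and the reportsMethod property are placeholders, not terms from the published ONE ontology.

```python
# Hedged sketch of ontology-based annotation: define a tiny taxonomy
# and link a (hypothetical) manuscript to the method it reports.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

ONE = Namespace("http://example.org/one#")  # placeholder namespace
g = Graph()
g.bind("one", ONE)

# Taxonomic hierarchy: an FFQ is a kind of dietary assessment method.
g.add((ONE.DietaryAssessmentMethod, RDF.type, RDFS.Class))
g.add((ONE.FoodFrequencyQuestionnaire, RDFS.subClassOf,
       ONE.DietaryAssessmentMethod))

# Annotate a manuscript with the method it reports.
g.add((ONE.manuscript_42, ONE.reportsMethod,
       ONE.FoodFrequencyQuestionnaire))
g.add((ONE.manuscript_42, RDFS.label, Literal("Cohort diet study")))

print(g.serialize(format="turtle"))
```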
Leveraging AI to Optimize Maintenance of Health Evidence and Offer a One-Stop Shop for Quality-Appraised Evidence Syntheses on the Effectiveness of Public Health Interventions: Quality Improvement Project
Health Evidence provides access to quality appraisals for >10,000 evidence syntheses on the effectiveness and cost-effectiveness of public health and health promotion interventions. Maintaining Health Evidence has become increasingly resource-intensive due to the exponential growth of published literature. Innovative screening methods using artificial intelligence (AI) can potentially improve efficiency. The objectives of this project are to: (1) assess the ability of AI-assisted screening to correctly predict nonrelevant references at the title and abstract level and investigate the consistency of this performance over time, and (2) evaluate the impact of AI-assisted screening on the overall monthly manual screening set. Training and testing were conducted using the DistillerSR AI Preview & Rank feature. A set of manually screened references (n=43,273) was uploaded and used to train the AI feature and assign probability scores to each reference to predict relevance. A minimum threshold was established at which the AI feature correctly identified all manually screened relevant references. The AI feature was tested on a separate set of references (n=72,686) from the May 2019 to April 2020 monthly searches. The testing set was used to determine an optimal threshold that ensured >99% of relevant references would continue to be added to Health Evidence. The performance of AI-assisted screening at the title and abstract level was evaluated using recall, specificity, precision, negative predictive value, and the number of references removed by AI. The number and percentage of references removed by AI-assisted screening, and the change in monthly manual screening time, were estimated using an implementation reference set (n=272,253) from November 2020 to 2023. The minimum threshold in the training set was 0.068, which correctly removed 37% (n=16,122) of nonrelevant references. Analysis of the testing set identified an optimal threshold of 0.17, which removed 51,706 (71.14%) references using AI-assisted screening. A slight decrease in recall between the 0.068 minimum threshold (99.68%) and the 0.17 optimal threshold (94.84%) was noted, resulting in four missed references that were nonetheless included via manual screening at the full-text level. This was accompanied by an increase in specificity from 35.95% to 71.70%, doubling the proportion of references AI-assisted screening correctly predicted as not relevant. Over 3 years of implementation, the number of references requiring manual screening was reduced by 70%, saving an estimated 382 hours of manual screening time. Given the volume of newly published peer-reviewed evidence, curation supports decision makers in making informed decisions. AI-assisted screening can be an important tool to supplement manual screening and reduce the number of references requiring manual review, helping ensure the continued availability of curated, high-quality synthesis evidence in public health.
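The threshold logic can be illustrated in a few lines: choose the highest cutoff that retains every known-relevant reference, then evaluate recall and specificity at any candidate threshold. The sketch below uses synthetic scores and labels; it is not DistillerSR's implementation.

```python
# Illustrative threshold selection for AI-assisted screening.
def min_safe_threshold(scores, labels):
    """Highest cutoff that still retains all relevant references."""
    return min(s for s, rel in zip(scores, labels) if rel)

def screen_metrics(scores, labels, threshold):
    tp = sum(1 for s, rel in zip(scores, labels) if rel and s >= threshold)
    fn = sum(1 for s, rel in zip(scores, labels) if rel and s < threshold)
    tn = sum(1 for s, rel in zip(scores, labels) if not rel and s < threshold)
    fp = sum(1 for s, rel in zip(scores, labels) if not rel and s >= threshold)
    return {"recall": tp / (tp + fn), "specificity": tn / (tn + fp),
            "removed": tn + fn}  # references screened out before manual review

# Synthetic AI probability scores and human relevance calls.
scores = [0.91, 0.40, 0.17, 0.068, 0.05, 0.02]
labels = [True, True, False, True, False, False]
t = min_safe_threshold(scores, labels)  # 0.068 for these toy values
print(t, screen_metrics(scores, labels, t))
```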
Assessing the credibility of how climate adaptation aid projects are categorised
This article presents the findings of a re-evaluation of all 5,200 aid projects that OECD donors reported for 2012 as "climate change adaptation"-related, based on the "Rio marker" classification system. The findings confirm those from the academic and grey literature that the absence of independent quality control makes the adaptation Rio marker data almost entirely unreliable. This lack of credibility impedes meaningful assessments of progress toward the mainstreaming of adaptation in development cooperation activities. It also erodes trust in international climate negotiations, given that these data are frequently used in the financial reporting of developed countries to the UNFCCC.
EU MRV Data-Based Review of the Ship Energy Efficiency Framework
The International Maritime Organization (IMO) has set a goal to reach net-zero greenhouse gas emissions from international shipping by or around 2050. The ship energy efficiency framework has played a positive role over the past decade in improving carbon intensity and reducing greenhouse gas emissions by employing technical and operational energy efficiency metrics as effective appraisal tools. To quantify ship energy efficiency performance and review the existing framework, this paper analyzed data for the 2023 reporting year extracted from the European Union (EU) monitoring, reporting, and verification (MRV) system, and investigated the operational profiles and energy efficiency of ships calling at EU ports. The results show that the data accumulated in the EU MRV system can provide powerful support for ship energy efficiency appraisals, which could inform decarbonization policies for global shipping and management decisions for stakeholders. However, data quality, ship operational energy efficiency metrics, and co-existence with the IMO data collection system (DCS) remain issues to be addressed. As the IMO DCS improves and the IMO Net-Zero Framework is implemented, it is urgent to harmonize the two systems and avoid duplicated regulation of shipping emissions at the EU and global levels.
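As a sketch of the kind of operational metric such an appraisal uses, the snippet below computes an attained carbon intensity in the CII style (annual CO2 mass over capacity times distance, in gCO2/dwt-nm) from MRV-type inputs; the ship values are invented for illustration.

```python
# Attained operational carbon intensity, CII-style: annual CO2 emissions
# divided by transport work (deadweight capacity x distance sailed).
def attained_cii(co2_tonnes: float, dwt: float, distance_nm: float) -> float:
    return co2_tonnes * 1e6 / (dwt * distance_nm)  # g CO2 / (dwt * nm)

# Example: a bulk carrier emitting 25,000 t CO2 over 60,000 nm.
print(f"{attained_cii(25_000, 80_000, 60_000):.2f} gCO2/dwt-nm")  # ~5.21
```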