Catalogue Search | MBRL
Explore the vast range of titles available.
33,318 result(s) for "Review committees"
Blockchain Applications Presentation Commentary
by
Dr. Anjum Khurshid, Director of Data Integration and Assistant Professor, Department of Population Health, The University of Texas at Austin Dell Medical School, USA
,
Dr. Vijayakumar Varadarajan, Adjunct Professor, School of Computer Science and Engineering, The University of New South Wales, Australia
,
Dr. Kayo Fujimoto, Distinguished Professor in Social Determinants of Health and Professor in the School of Public Health at The University of Texas Health Science Center at Houston, USA
in
blockchain academic scientific review committee
,
blockchain academic track
,
blockchain applications in healthcare
2022
The 2021 ConV2X Annual Symposium featured a scientific program of academic/research presentations in addition to business and industry talks. The research track focused on exploring and sharing developments in blockchain and emerging technologies in health and clinical medicine. Submissions were based on original research, conceptual frameworks, proposed applications, position papers, case studies, and real-world implementation. Selection was based on a peer-review process. Faculty, students, and industry researchers were encouraged to submit abstracts to present ideas before an informed and knowledgeable audience of industry leaders, policy makers, funders, and researchers. All presentations were reviewed by a sub-group of the scientific review committee. This video presentation is an example of the discussions that transpired for each category of submissions, specifically, blockchain applications. Submission Review Committee • Dave Kochalko, CEO of ARTiFACTS • Anjum Khurshid, UT Austin • Carlos Caldas, UT Engineering • Gil Alterovitz, Harvard Medical School • Kayo Fujimoto, UT Health Houston • Lei Zhang, University of Glasgow • Sean Manion, CSciO of ConsenSys Health • Vijayakumar Varadarajan, University of New South Wales • Vikram Dhillon, Wayne State University • Yuichi Ikeda, Kyoto University
Journal Article
Healthy Voices, Unhealthy Silence
by
Gusmano, Michael K
,
Grogan, Colleen M
in
Advisory Committees
,
Advisory Committees -- Connecticut
,
Connecticut
2007
Public silence in policymaking can be deafening. When advocates for a disadvantaged group decline to speak up, not only are their concerns not recorded or acted upon, but also the collective strength of the unspoken argument is lessened, a situation that undermines the workings of deliberative democracy by reflecting only the concerns of more powerful interests. But why do so many advocates remain silent on key issues they care about, and how does that silence contribute to narrowly defined policies? What can individuals and organizations do to amplify their privately expressed concerns for policy change? In Healthy Voices, Unhealthy Silence, Colleen M. Grogan and Michael K. Gusmano address these questions through the lens of state-level health care advocacy for the poor. They examine how representatives for the poor participate in an advisory board process by tying together existing studies; extensive interviews with key players; and an in-depth, first-hand look at the Connecticut Medicaid advisory board's deliberations during the managed care debate. Drawing on the concepts of deliberative democracy, agenda setting, and nonprofit advocacy, Grogan and Gusmano reveal the reasons behind advocates' often unexpected silence on major issues, assess how capable nonprofits are at affecting policy debates, and provide prescriptive advice for creating a participatory process that adequately addresses the health care concerns of the poor and dispossessed. Though exploring specifically state-level health care advocacy for the poor, the lessons Grogan and Gusmano offer here are transferable across issue areas and levels of government. Public policy scholars, advocacy organizations, government workers, and students of government administration will be well-served by this significant study.
A randomized controlled trial on anonymizing reviewers to each other in peer review discussions
by
Song, Xiangchen
,
Jin, Zhijing
,
Rastogi, Charvi
in
Agreements
,
Artificial Intelligence
,
Clinical trials
2024
Many peer-review processes involve reviewers submitting their independent reviews, followed by a discussion between the reviewers of each paper. A common question among policymakers is whether the reviewers of a paper should be anonymous to each other during the discussion. We shed light on this question by conducting a randomized controlled trial at the Conference on Uncertainty in Artificial Intelligence (UAI) 2022 conference where reviewer discussions were conducted over a typed forum. We randomly split the reviewers and papers into two conditions: one with anonymous discussions and the other with non-anonymous discussions. We also conduct an anonymous survey of all reviewers to understand their experience and opinions. We compare the two conditions in terms of the amount of discussion, influence of seniority on the final decisions, politeness, reviewers' self-reported experiences and preferences. Overall, this experiment finds small, significant differences favoring the anonymous discussion setup based on the evaluation criteria considered in this work.
Journal Article
A scoping review on biomedical journal peer review guides for reviewers
by
Park, Sunju
,
Kim, Kyeong Han
,
Jun, Jihee
in
Alternative medicine
,
Biology and Life Sciences
,
Biomedical Research
2021
Peer review is widely used in academic fields to assess a manuscript's significance and to improve its quality for publication. This scoping review will assess existing peer review guidelines and/or checklists intended for reviewers of biomedical journals and provide an overview on the review guidelines.
PubMed, Embase, and Allied and Complementary Medicine (AMED) databases were searched for review guidelines from the date of inception until February 19, 2021. There was no date restriction nor article type restriction. In addition to the database search, websites of journal publishers and non-publishers were additionally hand-searched.
Of 14,633 database publication records and 24 website records, 65 publications and 14 websites met inclusion criteria for the review (78 records in total). From the included records, a total of 1,811 checklist items were identified. Items related to the Methods, Results, and Discussion sections were the most frequently addressed in reviewer guidelines.
This review identified existing literature on peer review guidelines and provided an overview of the current state of peer review guides. Review guidelines varied across journals and publishers. This calls for more research to determine the need for uniform review standards for transparent and standardized peer review.
The protocol for this study has been registered at Research Registry (www.researchregistry.com): reviewregistry881.
Journal Article
Expertise versus Bias in Evaluation: Evidence from the NIH
2017
Evaluators with expertise in a particular field may have an informational advantage in separating good projects from bad. At the same time, they may also have personal preferences that impact their objectivity. This paper examines these issues in the context of peer review at the US National Institutes of Health. I show that evaluators are both better informed and more biased about the quality of projects in their own area. On net, the benefits of expertise weakly dominate the costs of bias. As such, policies designed to limit bias by seeking impartial evaluators may reduce the quality of funding decisions.
Journal Article
How to assess a survey report: a guide for readers and peer reviewers
2015
Although designing and conducting surveys may appear straightforward, there are important factors to consider when reading and reviewing survey research. Several guides exist on how to design and report surveys, but few guides exist to assist readers and peer reviewers in appraising survey methods.1-9 We have developed a guide to aid readers and reviewers to discern whether the information gathered from a survey is reliable, unbiased and from a representative sample of the population. In our guide, we pose seven broad questions and specific subquestions to assist in assessing the quality of articles reporting on self-administered surveys (Box 1). We explain the rationale for each question posed and cite literature addressing its relevance in appraising the methodologic and reporting quality of survey research. Throughout the guide, we use the term "questionnaire" to refer to the instrument administered to respondents and "survey" to define the process of administering the questionnaire. We use "readers" to encompass both readers and peer reviewers. Several types of questionnaire testing can be performed, including pilot, clinical sensibility, reliability and validity testing. Readers should assess whether the investigators conducted formal testing to identify problems that may affect how respondents interpret and respond to individual questions and to the questionnaire as a whole. At a minimum, each questionnaire should have undergone pilot testing. Readers should evaluate what process was used for pilot testing the questionnaire (e.g., investigators sought feedback in a semi-structured format), the number and type of people involved (e.g., individuals similar to those in the sampling frame) and what features (e.g., the flow, salience and acceptability of the questionnaire) were assessed. Both pretesting and pilot testing minimize the chance that respondents will misinterpret questions.
Whereas pretesting focuses on the wording of the questionnaire, pilot testing assesses the flow and relevance of the entire questionnaire, as well as individual questions, to identify unusual, irrelevant, poorly worded or redundant questions and responses.18 Through testing, the authors identify problems with questions and response formats so that modifications can be made to enhance questionnaire reliability, validity and responsiveness. Types of validity assessments include face, content, construct and criterion validity. Readers should assess whether any validity testing was conducted. Although the number of validity assessments depends on current or future use of the questionnaire, investigators should have assessed at a minimum the face validity of their questionnaire during clinical sensibility testing.2 In face validity, experts in the field or a sample of respondents similar to the target population determine whether the questionnaire measures what it aims to measure.20 In content validity, experts assess whether the content of the questionnaire includes all aspects considered essential to the construct or topic. Investigators evaluate construct validity when specific criteria to define the concept of interest are unknown; they verify whether key constructs were included using content validity assessments made by experts in the field or using statistical methods (e.g., factor analysis).2 In criterion validity, investigators compare responses to items with a gold standard.2
Journal Article
A Qualitative Study of the Ethics of Community Scientists’ Role in Environmental Health Research from the Perspective of Community Scientists and Institutional Review Board Staff
by
Gonzalez, Ana
,
Flores, Deysi
,
Harari, Homero
in
Beliefs, opinions and attitudes
,
Bioethics
,
Citizen scientists
2025
Community engagement in research, including community scientists' (CSs) participation in environmental exposure assessments, promotes the bidirectional flow of information between communities and researchers and improves the development of interventions to reduce environmental health inequities. Nonetheless, institutional review boards (IRBs) with limited experience with CS research tend to struggle when reviewing protocols given CS participants' dual role as research participants and co-creators of data.
We collected focus group data from 35 Latina housecleaners eliciting their bioethical reflections on their experience as CSs before and after participation in the collection of data about their exposures to chemical compounds in cleaning products. We shared findings from CS participants and collected impressions and challenges from IRB staff from five New York City biomedical research institutions. We used a modified approach to conventional content analysis to guide data analysis and combined deductive and inductive approaches to generate codes.
The CS participants emphasized their shared responsibility in the research process and bidirectional learning with the research team, which they saw as educating and empowering themselves and their broader community to create safer cleaning practices to improve the community's health and wellbeing. CS participants embraced the importance of sound science by their recognition that their community relied on the quality and accuracy of their work as CSs. Perspectives from IRB staff similarly recognized the value of participant engagement but emphasized the importance of disentangling CS activities as research participants from activities as research team members to better determine the appropriate mechanisms and authorities for assuring ethical protections.
Findings suggest that existing bioethical principles of beneficence, respect for persons, and justice, when interpreted by participants as inclusive of protections and benefits for both the CSs and their community's collective good, reflect the bioethical values of our CS participants. However, better guidance and training is needed for researchers, IRBs, and community collaborators to apply these values and respect and protect the full range of roles for community members participating in research. https://doi.org/10.1289/EHP15824.
Journal Article
The International Health Regulations: The Governing Framework for Global Health Security
2016
Context: The International Health Regulations (IHR) have been the governing framework for global health security for the past decade and are a nearly universally recognized World Health Organization (WHO) treaty, with 196 States Parties. In the wake of the Ebola epidemic, major global commissions have cast doubt on the future effectiveness of the IHR and the leadership of the WHO. Methods: We conducted a review of the historical origins of the IHR and their performance over the past 10 years and analyzed all of the ongoing reform panel efforts to provide a series of politically feasible recommendations for fundamental reform. Findings: We propose a series of recommendations with realistic pathways for change. These recommendations focus on the development and strengthening of IHR core capacities; independently assessed metrics; new financing mechanisms; harmonization with the Global Health Security Agenda, Performance of Veterinary Services (PVS) Pathways, the Pandemic Influenza Preparedness Framework, and One Health strategies; public health and clinical workforce development; Emergency Committee transparency and governance; tiered public health emergency of international concern (PHEIC) processes; enhanced compliance mechanisms; and an enhanced role for civil society. Conclusions: Empowering the WHO and realizing the IHR's potential will shore up global health security—a vital investment in human and animal health—while reducing the vast economic consequences of the next global health emergency.
Journal Article
Sample size determinations in original research protocols for randomised clinical trials submitted to UK research ethics committees: review
by
Clark, Timothy
,
Berger, Ursula
,
Mansmann, Ulrich
in
Bias
,
Clinical trials
,
Diplomatic protocol
2013
Objectives: To assess the completeness of reporting of sample size determinations in unpublished research protocols and to develop guidance for research ethics committees and for statisticians advising these committees.
Design: Review of original research protocols.
Study selection: Unpublished research protocols for phase IIb, III, and IV randomised clinical trials of investigational medicinal products submitted to research ethics committees in the United Kingdom during 1 January to 31 December 2009.
Main outcome measures: Completeness of reporting of the sample size determination, including the justification of design assumptions, and disagreement between reported and recalculated sample size.
Results: 446 study protocols were reviewed. Of these, 190 (43%) justified the treatment effect and 213 (48%) justified the population variability or survival experience. Only 55 (12%) discussed the clinical importance of the treatment effect sought. Few protocols provided a reasoned explanation as to why the design assumptions were plausible for the planned study. Sensitivity analyses investigating how the sample size changed under different design assumptions were lacking; six (1%) protocols included a re-estimation of the sample size in the study design. Overall, 188 (42%) protocols reported all of the information needed to accurately recalculate the sample size; the assumed withdrawal or dropout rate was not given in 177 (40%) studies. Only 134 of the 446 (30%) sample size calculations could be accurately reproduced. Study size tended to be over-estimated rather than under-estimated. Studies with non-commercial sponsors justified the design assumptions used in the calculation more often than studies with commercial sponsors, but less often reported all the components needed to reproduce the sample size calculation; accordingly, sample sizes for studies with non-commercial sponsors were less often reproduced.
Conclusions: Most research protocols did not contain sufficient information to allow the sample size to be reproduced or the plausibility of the design assumptions to be assessed. Greater transparency in the reporting of the sample size determination and more focus on study design during the ethical review process would allow deficiencies to be resolved early, before the trial begins. Guidance for research ethics committees and for the statisticians advising them is needed.
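To illustrate the kind of recalculation the reviewers attempted, the standard two-sample normal approximation gives a per-arm sample size from exactly the design assumptions the abstract says protocols should report: treatment effect, population variability, significance level, power, and dropout rate. This is a minimal sketch, not the authors' actual method; the function name and default values are illustrative assumptions.

```python
import math
from statistics import NormalDist

def per_arm_sample_size(delta, sd, alpha=0.05, power=0.80, dropout=0.0):
    """Per-arm n for a two-arm trial comparing means with a two-sided
    z-test (normal approximation), inflated for anticipated dropout."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = NormalDist().inv_cdf(power)            # critical value for target power
    n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2  # per-arm n before attrition
    return math.ceil(n / (1 - dropout))             # inflate so completers still give n

# e.g. detect a 5-point difference, SD 10, 80% power, 10% dropout
print(per_arm_sample_size(delta=5, sd=10, dropout=0.10))  # prints 70
```

A calculation is only reproducible when every one of these inputs is stated in the protocol; as the review notes, the dropout rate alone was missing in 40% of studies, which is enough to make the reported n unverifiable.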
Journal Article