114 result(s) for "scientific rigor"
Open Science: My Insights Into Data Sharing, Preregistration, and Replication
After a decade of implementing open science practices as a principal investigator, mentor, data repository founder, and editor-in-chief, I have learned that the question is not whether researchers should adopt these practices but how to adapt them meaningfully. This commentary, based on a talk given at the 2024 Canadian Society for Brain, Behaviour, and Cognitive Science conference, argues for two key principles: first, open science implementation must be context-dependent rather than one-size-fits-all, and second, practical research realities require flexible approaches to idealized policies. Through personal examples, from my evolution with preregistration from "recipe" to "guide" during COVID-19 research to challenges with Registered Reports using existing data sets, I show how open science practices work best when researchers approach them as evolving tools rather than rigid rules. I also discuss field-specific differences in open science uptake between psychology and education and the importance of equity considerations in implementation. The commentary concludes with concrete recommendations for researchers and journals, emphasizing that sustainable open science requires meeting researchers where they are while maintaining transparency and scientific rigour.
Public Significance Statement: Open science practices like sharing research data and preplanning studies can make research more trustworthy and useful, but researchers often struggle with how to implement these practices in real-world situations. This commentary shows that rather than having rigid, one-size-fits-all rules for open science, we need flexible approaches that work for different types of research while still maintaining transparency and scientific quality. These insights can help Canadian researchers, funding agencies, and journals develop more practical and equitable policies that actually support better science rather than creating barriers for researchers with different resources or research contexts.
Eleven strategies for making reproducible research and open science training the norm at research institutions
Reproducible research and open science practices have the potential to accelerate scientific progress by allowing others to reuse research outputs, and by promoting rigorous research that is more likely to yield trustworthy results. However, these practices are uncommon in many fields, so there is a clear need for training that helps and encourages researchers to integrate reproducible research and open science practices into their daily work. Here, we outline eleven strategies for making training in these practices the norm at research institutions. The strategies, which emerged from a virtual brainstorming event organized in collaboration with the German Reproducibility Network, are concentrated in three areas: (i) adapting research assessment criteria and program requirements; (ii) training; (iii) building communities. We provide a brief overview of each strategy, offer tips for implementation, and provide links to resources. We also highlight the importance of allocating resources and monitoring impact. Our goal is to encourage researchers – in their roles as scientists, supervisors, mentors, instructors, and members of curriculum, hiring or evaluation committees – to think creatively about the many ways they can promote reproducible research and open science practices in their institutions.
Systematic review of guidelines for internal validity in the design, conduct and analysis of preclinical biomedical experiments involving laboratory animals
Over the last two decades, awareness of the negative repercussions of flaws in the planning, conduct and reporting of preclinical research involving experimental animals has been growing. Several initiatives have set out to increase the transparency and internal validity of preclinical studies, mostly by publishing expert consensus and experience. While many of the points raised in these various guidelines are identical or similar, they differ in detail and rigour. Most of them focus on reporting; only a few cover the planning and conduct of studies. The aim of this systematic review is to identify existing guidelines on the experimental design, conduct, analysis and reporting of preclinical animal research. A systematic search in PubMed, Embase and Web of Science retrieved 13 863 unique results. After screening these on title and abstract, 613 papers entered the full-text assessment stage, from which 60 papers were retained. From these, we extracted 58 unique recommendations on the planning, conduct and reporting of preclinical animal studies. Sample size calculations, adequate statistical methods, concealed and randomised allocation of animals to treatment, blinded outcome assessment and recording of animal flow through the experiment were recommended in more than half of the publications. While we consider these recommendations to be valuable, there is a striking lack of experimental evidence on their importance and relative effect on experiments and effect sizes.
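Sample size calculation is the recommendation the review found most often. As a minimal illustration (not taken from the paper, and simpler than what a statistician would use in practice), here is the standard normal-approximation formula for the per-group sample size of a two-sided, two-sample comparison of means; the function name and defaults are assumptions:

```python
import math
from statistics import NormalDist

def sample_size_two_groups(effect_size: float, alpha: float = 0.05,
                           power: float = 0.8) -> int:
    """Approximate per-group n for a two-sided two-sample t-style design:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)                 # round up: sample sizes are whole animals

print(sample_size_two_groups(0.5))  # medium effect → 63 per group
```

Larger effects need fewer animals, which is why the review's recommended practices (randomisation, blinding) matter: they protect the effect estimate the calculation depends on.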
The effectiveness of transcranial magnetic stimulation for dysphagia in stroke patients: an umbrella review of systematic reviews and meta-analyses
Numerous studies have explored the use of repetitive Transcranial Magnetic Stimulation (rTMS) intervention in post-stroke dysphagia. The primary aim of this umbrella review was to appraise the methodological quality of systematic reviews (SRs), with and without meta-analyses (MAs), that synthesized the findings of randomized controlled trials (RCTs) exploring the effectiveness of rTMS in the management of dysphagia post-stroke. A secondary aim was to evaluate the consistency and reliability of translational implications of rTMS for swallowing recovery after stroke across these SRs and MAs. We searched several databases from inception to the 14th of May 2023 to identify SRs and MAs that examined the effectiveness of rTMS in the management of dysphagia post-stroke. The methodological quality of the included studies was evaluated utilizing the AMSTAR 2 (A Measurement Tool to Assess Systematic Reviews) instrument. To investigate the extent of literature overlap among the primary studies included in the SRs, the Graphical Overview of Evidence (GROOVE) was utilized. Of the 19 SRs that were identified, two studies received low quality ratings, while the remaining 17 were rated as critically low quality based on the AMSTAR 2 rating. A high literature overlap across the SRs was observed. In all SRs and MAs reviewed, there was a consistent presence of at least some significant evidence supporting the effectiveness of rTMS in enhancing swallowing outcomes for individuals with dysphagia post-stroke; that is, all MAs reported at least a moderate overall effect in favor of rTMS (SMD range = [0.59, 6.23]). While rTMS shows promise for improving dysphagia post-stroke, the current evidence remains limited and inconclusive due to the methodological flaws observed in the published SRs and their respective MAs on the topic so far. Concerning the limitations of our study, language restrictions and methodological shortcomings may affect the generalizability of our findings.
A primer for the practice of reflexivity in conservation science
Rigorous scientific practice relies on the tenet of transparency. However, despite regular transparency in areas such as data availability and methodological practice, the influence of personal and professional values in research design and dissemination is often not disclosed or discussed in conservation science. Conservation scientists are increasingly driven to work in collaboration with communities where their work takes place, which raises important questions about the research process, especially as the field remains largely represented by a Western scientific worldview. The process of reflexivity, and the creation of positionality statements as one form of a reflexive practice, is an important component of transparency, rigor, and best practice in contemporary conservation science. In our own professional practices, however, we have found that guidance on how to produce positionality statements and maintain reflexivity throughout the lifecycle of research is too often lacking. In response, we build on existing literature and our own experience to offer a primer as a starting point to the practice of reflexivity. Rather than being prescriptive, we seek to demonstrate flexible approaches that researchers may consider when communicating reflexive practice to enhance research transparency. We explore the challenges and potential pitfalls in a reflexive practice and offer considerations and advice based on our collective professional experience.
Bubble effect: including internet search engines in systematic reviews introduces selection bias and impedes scientific reproducibility
Background Using internet search engines (such as Google search) in systematic literature reviews is increasingly becoming a ubiquitous part of search methodology. In order to integrate the vast quantity of available knowledge, literature mostly focuses on systematic reviews, considered to be principal sources of scientific evidence at all practical levels. Any individual methodological flaws present in these systematic reviews have the potential to become systemic. Main text This particular bias, which could be referred to as the (re)search bubble effect, is introduced because of the inherent, personalized nature of internet search engines, which tailor results according to derived user preferences based on unreproducible criteria. In other words, internet search engines shape their users' beliefs and attitudes, leading to the creation of a personalized (re)search bubble, including entries that have not been subjected to a rigorous peer-review process. Internet search engine algorithms are in a state of constant flux, producing differing results at any given moment, even if the query remains identical. Search engines also introduce more subtle unwanted variations and synonyms of search queries autonomously, detached from the user's insight and intent. Even the most well-known and respected systematic literature reviews do not seem immune to the negative implications of the search bubble effect, affecting reproducibility. Conclusion Although immensely useful and justified by the need to encompass the entirety of knowledge, the practice of including internet search engines in systematic literature reviews is fundamentally irreconcilable with the recent emphasis on scientific reproducibility and rigor, having a profound impact on the discussion of the limits of scientific epistemology. Scientific research that is not reproducible may still be called science, but it is science that should be avoided.
Our recommendation is to use internet search engines as an additional literature source, primarily in order to validate initial search strategies centered on bibliographic databases.
Methodological Aspects in the Construction of the Protocol in Semi-Structured Interviews
The purpose of this study is to identify the methodological aspects involved in constructing a semi-structured interview protocol. Developing the protocol requires methodological articulation (i.e., coherence among the research question, objective, research object, design, scenario, participants, and the technique to be used). The construction of the semi-structured interview protocol comprises four specific phases: (1) identification of the research topics, (2) construction of the interview script, (3) external evaluation of the protocol, and (4) piloting and fine-tuning of the protocol. Each of these phases guarantees greater rigor in the qualitative research under development.
Framework for advancing rigorous research
There is a pressing need to increase the rigor of research in the life and biomedical sciences. To address this issue, we propose that communities of 'rigor champions' be established to campaign for reforms of the research culture that has led to shortcomings in rigor. These communities of rigor champions would also assist in the development and adoption of a comprehensive educational platform that would teach the principles of rigorous science to researchers at all career stages.
Amputation for Complex Regional Pain Syndrome: Meta-Analysis and Validation of a Histopathology Scoring System
Objective Pathology can provide crucial insights into the etiology of disease. The goal of this review is to evaluate the rigor of histopathology reports of Complex Regional Pain Syndrome (CRPS). Methods A systematic search of multiple databases identified papers that described amputation for CRPS with pathology findings. Control pathology articles were randomly chosen from the same journals. Landmark articles in Surgical Pathology were previously identified. Papers were categorized by the use of histology: Anatomic (microscopic description), Diagnostic (binary result), and Substrate (special studies only). A novel Histopathology Score assigned 1 point for each of 10 History elements and 15 Pathology elements. All articles were scored and analyzed with appropriate statistics. Results The search identified 22 CRPS, 50 Control and 50 Landmark articles. Multivariable analysis of the Pathology Score showed a significantly higher score for Anatomic vs Non-Anatomic papers (Incidence Rate Ratio (IRR) 1.54, P < .001) and Landmark vs CRPS articles (IRR 1.39, P = .003). CRPS papers reported some elements infrequently: diagnostic criteria (31.8%), routine stain (50%), any clinicopathologic correlation (40.9%), and sample size >2 (27.3%). Conclusions The Pathology Score is a useful quality assessment tool for evaluating studies. As expected, Anatomic papers scored significantly higher than Non-Anatomic papers. CRPS papers had small sample sizes (median 1) and infrequent reporting of diagnostic criteria, routine stains, and clinicopathologic correlation. These particular elements are crucial for analyzing and reviewing pathologic features. The analysis explains why it is quite difficult to write a meaningful systematic review of CRPS histology at this time.
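The abstract specifies the scoring rule (1 point per reported element, 10 History plus 15 Pathology elements, maximum 25) but does not enumerate the elements themselves. A hypothetical sketch of that rule; the element names below are placeholders, not the paper's actual checklist:

```python
# Placeholder element names -- the paper's real checklist items are not
# given in the abstract, only their counts (10 History, 15 Pathology).
HISTORY_ELEMENTS = {f"history_item_{i}" for i in range(1, 11)}      # 10 items
PATHOLOGY_ELEMENTS = {f"pathology_item_{i}" for i in range(1, 16)}  # 15 items

def histopathology_score(reported: set) -> int:
    """One point for each checklist element the article reports (max 25).
    Items outside the checklist contribute nothing."""
    return (len(reported & HISTORY_ELEMENTS)
            + len(reported & PATHOLOGY_ELEMENTS))

# An article reporting every element earns the full 25 points.
print(histopathology_score(HISTORY_ELEMENTS | PATHOLOGY_ELEMENTS))  # 25
```

Set intersection makes the rule tolerant of extra, non-checklist items in an article's report while keeping the count bounded by the checklist size.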
The Rationale for Saturation in Qualitative Research: When Practice Informs Theory
This paper acknowledges the significance of data saturation as a methodological instrument governing the non-negotiable yet highly questioned scientific rigor of qualitative research. It employs a reflective, research-practice-based approach to evaluate the importance of data saturation in qualitative research, drawing on context- and time-bound first-hand research practices of sampling and collecting quality data through face-to-face, semi-structured interviews with migrant entrepreneurs in London between 2017 and 2021. This paper shows that data saturation is a complex phenomenon extending beyond its theoretical rationale: it is experienced before, during, and after an iterative and reflective process of engaging with the research participants and data (i.e., triangulation of sources, disciplinary traditions, the researcher's experiences, and participants' willingness and readiness to share), which anchors the researcher's decision on when to stop data collection. The paper employs reflective fieldwork-based practices to demonstrate how saturation is reached in an interpretative phenomenological study. It thus contributes to qualitative research scholarship by addressing the misalignment between theory and practice and bringing a practice-based, triangulated perspective to data saturation.