Catalogue Search | MBRL
Explore the vast range of titles available.
37,200 result(s) for "Research validity"
Validating psychological constructs : historical, philosophical, and practical dimensions
"This book critically examines the historical and philosophical foundations of construct validity theory (CVT), and how these have informed, and continue to inform and constrain, the conceptualization of validity and its application in research. CVT has had an immense impact on how researchers in the behavioural sciences conceptualize and approach their subject matter. Yet, there is equivocation regarding the foundations of the CVT framework, as well as ambiguities concerning the nature of the 'constructs' that are its raison d'être. The book is organized in terms of three major parts that speak, respectively, to the historical, philosophical, and pragmatic dimensions of CVT. The primary objective is to provide researchers and students with a critical lens through which a deeper understanding may be gained of both the utility and limitations of CVT and the validation practices to which it has given rise." -- Back cover.
Is it possible to overcome issues of external validity in preclinical animal research? Why most animal models are bound to fail
2018
Background
The pharmaceutical industry is in the midst of a productivity crisis, and rates of translation from bench to bedside are dismal. Patients are being let down by the current system of drug discovery; of the several thousand diseases that affect humans, only a minority have any approved treatments, and many of these cause adverse reactions in humans. A predominant reason for the poor rate of translation from bench to bedside is generally held to be the failure of preclinical animal models to predict clinical efficacy and safety. Attempts to explain this failure have focused on problems of internal validity in preclinical animal studies (e.g. poor study design, lack of measures to control bias). However, there has been less discussion of another key factor that influences translation, namely the external validity of preclinical animal models.
Review of problems of external validity
External validity is the extent to which research findings derived in one setting, population or species can be reliably applied to other settings, populations and species. This paper argues that the reliable translation of findings from animals to humans will only occur if preclinical animal studies are both internally and externally valid. We review several key aspects that impact external validity in preclinical animal research, including unrepresentative animal samples, the inability of animal models to mimic the complexity of human conditions, the poor applicability of animal models to clinical settings and animal–human species differences. We suggest that while some problems of external validity can be overcome by improving animal models, the problem of species differences can never be overcome and will always undermine external validity and the reliable translation of preclinical findings to humans.
Conclusion
We conclude that preclinical animal models can never be fully valid due to the uncertainties introduced by species differences. We suggest that even if the next several decades were spent improving the internal and external validity of animal models, the clinical relevance of those models would, in the end, only improve to some extent. This is because species differences would continue to make extrapolation from animals to humans unreliable. We suggest that to improve clinical translation and ultimately benefit patients, research should focus instead on human-relevant research methods and technologies.
Journal Article
The Hong Kong Principles for assessing researchers: Fostering research integrity
by Moher, David; Sham, Mai Har; Foeger, Nicole
in Computer and Information Sciences; Conferences, meetings and seminars; Epidemiology
2020
For knowledge to benefit research and society, it must be trustworthy. Trustworthy research is robust, rigorous, and transparent at all stages of design, execution, and reporting. Assessment of researchers still rarely includes considerations related to trustworthiness, rigor, and transparency. We have developed the Hong Kong Principles (HKPs) as part of the 6th World Conference on Research Integrity with a specific focus on the need to drive research improvement through ensuring that researchers are explicitly recognized and rewarded for behaviors that strengthen research integrity. We present five principles: responsible research practices; transparent reporting; open science (open research); valuing a diversity of types of research; and recognizing all contributions to research and scholarly activity. For each principle, we provide a rationale for its inclusion and provide examples where these principles are already being adopted.
Journal Article
On the External Validity of Social Preference Games: A Systematic Lab-Field Study
2019
We present a lab-field experiment designed to systematically assess the external validity of social preferences elicited in a variety of experimental games. We do this by comparing behavior in the different games with several behaviors elicited in the field and with self-reported behaviors exhibited in the past, using the same sample of participants. Our results show that the experimental social preference games do a poor job explaining both social behaviors in the field and social behaviors from the past. We also include a systematic review and meta-analysis of previous literature on the external validity of social preference games.
Data are available at https://doi.org/10.1287/mnsc.2017.2908.
This paper was accepted by John List, behavioral economics.
Journal Article
What is replication?
by Nosek, Brian A.; Errington, Timothy M.
in Biology and Life Sciences; Data Interpretation, Statistical; Diagnostic systems
2020
Credibility of scientific claims is established with evidence for their replicability using new data. According to common understanding, replication is repeating a study's procedure and observing whether the prior finding recurs. This definition is intuitive, easy to apply, and incorrect. We propose that replication is a study for which any outcome would be considered diagnostic evidence about a claim from prior research. This definition reduces emphasis on operational characteristics of the study and increases emphasis on the interpretation of possible outcomes. The purpose of replication is to advance theory by confronting existing understanding with new evidence. Ironically, the value of replication may be strongest when existing understanding is weakest. Successful replication provides evidence of generalizability across the conditions that inevitably differ from the original study; unsuccessful replication indicates that the reliability of the finding may be more constrained than recognized previously. Defining replication as a confrontation of current theoretical expectations clarifies its important, exciting, and generative role in scientific progress.
Journal Article
Scientific method: Statistical errors
2014
P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume.
Journal Article
Research integrity: nine ways to move from talk to walk
2020
Counselling, coaches and collegiality — how institutions can share resources to promote best practice in science.
Journal Article
Methods and Meanings: Credibility and Trustworthiness of Qualitative Research
2014
Historically, qualitative research has been viewed as "soft" science and criticized for lacking scientific rigor compared to quantitative research, which uses experimental, objective methods (Mays & Pope, 1995). Common criticisms are that qualitative research is subjective, anecdotal, subject to researcher bias, and lacking generalizability by producing large quantities of detailed information about a single, unique phenomenon or setting (Koch & Harrington, 1998). However, qualitative research is not inferior research but a different approach to studying humans. Qualitative research emphasizes exploring individual experiences, describing phenomena, and developing theory (Vishnevsky & Beanlands, 2004).
Journal Article
Reproducibility standards for machine learning in the life sciences
by Lee, Su-In; Hicks, Stephanie C; Hoffman, Michael M
in Automation; Best practice; Learning algorithms
2021
To make machine-learning analyses in the life sciences more computationally reproducible, we propose standards based on data, model and code publication, programming best practices and workflow automation. By meeting these standards, the community of researchers applying machine-learning methods in the life sciences can ensure that their analyses are worthy of trust.
Journal Article
A call for transparent reporting to optimize the predictive value of preclinical research
by Blumenstein, Robi; Bradley, Eileen W.; Macleod, Malcolm R.
in Animal experimentation; Animals
2012
Deficiencies in methods reporting in animal experimentation lead to difficulties in reproducing experiments; the authors propose a set of reporting standards to improve scientific communication and study design.
Making the most of animal studies
Animal studies have contributed immensely to our understanding of diseases and assist the development of new therapies, but inadequate experimental reporting can sometimes render such studies difficult to reproduce and to translate into the clinic. This year, a US National Institute of Neurological Disorders and Stroke workshop addressed this issue, and its conclusions are discussed in a Perspective piece in this issue of Nature. The main workshop recommendation is that, at a minimum, studies should report on randomization, blinding, sample-size estimation and how the data were handled.
The US National Institute of Neurological Disorders and Stroke convened major stakeholders in June 2012 to discuss how to improve the methodological reporting of animal studies in grant applications and publications. The main workshop recommendation is that, at a minimum, studies should report on sample-size estimation, whether and how animals were randomized, whether investigators were blind to the treatment, and the handling of data. We recognize that achieving a meaningful improvement in the quality of reporting will require a concerted effort by investigators, reviewers, funding agencies and journal editors. Requiring better reporting of animal studies will raise awareness of the importance of rigorous study design to accelerate scientific progress.
Journal Article