Search Results

236 results for "Animal experimentation Statistical methods."
The design and statistical analysis of animal experiments
\"Written for animal researchers, this book provides a comprehensive guide to the design and statistical analysis of animal experiments. It has long been recognised that the proper implementation of these techniques helps reduce the number of animals needed. By using real-life examples to make them more accessible, this book explains the statistical tools employed by practitioners. A wide range of design types are considered, including block, factorial, nested, cross-over, dose-escalation and repeated measures and techniques are introduced to analyse the experimental data generated. Each analysis technique is described in non-mathematical terms, helping readers without a statistical background to understand key techniques such as t-tests, ANOVA, repeated measures, analysis of covariance, multiple comparison tests, non-parametric and survival analysis. This is also the first text to describe technical aspects of InVivoStat, a powerful open-source software package developed by the authors to enable animal researchers to analyse their data and obtain informative results\"-- Provided by publisher.
ARRIVE has not ARRIVEd: Support for the ARRIVE (Animal Research: Reporting of in vivo Experiments) guidelines does not improve the reporting quality of papers in animal welfare, analgesia or anesthesia
Poor research reporting is a major contributor to low study reproducibility and to financial and animal waste. The ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines were developed to improve reporting quality, and many journals support these guidelines. The influence of this support is unknown. We hypothesized that papers published in journals supporting the ARRIVE guidelines would show improved reporting compared with those in non-supporting journals. In a retrospective, observational cohort study, papers from 5 ARRIVE-supporting (SUPP) and 2 non-supporting (nonSUPP) journals, published before (2009) and 5 years after (2015) the ARRIVE guidelines, were selected. Adherence to the ARRIVE checklist of 20 items was independently evaluated by two reviewers, with items assessed as fully, partially or not reported. Mean percentages of items reported were compared between journal types and years with an unequal-variance t-test. Individual items and sub-items were compared with a chi-square test. From an initial cohort of 956, 236 papers were included: 120 from 2009 (SUPP, n = 52; nonSUPP, n = 68) and 116 from 2015 (SUPP, n = 61; nonSUPP, n = 55). The percentage of fully reported items was similar between journal types in 2009 (SUPP: 55.3 ± 11.5% [SD]; nonSUPP: 51.8 ± 9.0%; p = 0.07; 95% CI of mean difference, -0.3 to 7.3%) and in 2015 (SUPP: 60.5 ± 11.2%; nonSUPP: 60.2 ± 10.0%; p = 0.89; 95% CI, -3.6 to 4.2%). The small increase in fully reported items between years was similar for both journal types (p = 0.09; 95% CI, -0.5 to 4.3%). No paper fully reported 100% of items on the ARRIVE checklist, and measures associated with bias were poorly reported. These results suggest that journal support for the ARRIVE guidelines has not resulted in a meaningful improvement in reporting quality, contributing to ongoing waste in animal research.
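The two tests named in the abstract are available in SciPy for readers who want to run this style of analysis on their own reporting-quality data; the numbers below are invented for illustration and are not the paper's data:

```python
# Hedged sketch: Welch t-test on per-paper percentages of fully reported
# items, and a chi-square test on how one checklist item was reported.
from scipy import stats

# Hypothetical per-paper percentages of fully reported ARRIVE items
supp_2015    = [62.5, 55.0, 70.0, 60.0, 57.5, 65.0]
nonsupp_2015 = [60.0, 52.5, 67.5, 57.5, 60.0, 62.5]
t, p = stats.ttest_ind(supp_2015, nonsupp_2015, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.3f}")

# Hypothetical 2x3 table for one item: fully / partially / not reported
table = [[30, 20, 11],   # SUPP journals
         [25, 22,  8]]   # nonSUPP journals
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```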
Challenges and opportunities of translating animal research into human trials in Ethiopia
Background and objectives: Although the goal of translational research is to bring biomedical knowledge from the laboratory to clinical trials and therapeutic products for improving health, this goal has not been achieved as often as desired because of many barriers documented in different countries. The aim of this study was therefore to investigate the challenges and opportunities of translating animal research into human trials in Ethiopia. Methods: A descriptive qualitative study, using in-depth interviews, was conducted with preclinical and clinical trial researchers who had been involved in animal research or clinical trials as principal investigators. Data were analyzed using an inductive thematic process. Results: Six themes emerged for challenges: lack of financial and human capacity, inadequate infrastructure, operational obstacles and poor research governance, lack of collaboration, lack of reproducibility of results, and prolonged ethical and regulatory approval processes. Three themes were synthesized for opportunities: growing infrastructure and resources, improved human capacity and better administrative processes, and initiatives for collaboration. Conclusion and recommendations: The study found that the identified features are of high importance in either hindering or enabling the translation of animal research into human trials. It suggests that adequate infrastructure and finance, human capacity building, good research governance, an improved ethical and regulatory approval process, multidisciplinary collaboration, and incentives and recognition for researchers are needed to overcome the identified challenges and allow translation of animal research into human trials to proceed more efficiently.
Increasing the statistical power of animal experiments with historical control data
Low statistical power reduces the reliability of animal research, yet increasing sample sizes to increase statistical power is problematic for both ethical and practical reasons. We present an alternative solution using Bayesian priors based on historical control data, which capitalizes on the observation that control groups are in general expected to be similar to each other. In a simulation study, we show that including data from control groups of previous studies could halve the minimum sample size required to reach the canonical 80% power, or increase power when using the same number of animals. We validated the approach on a dataset based on seven independent rodent studies on the cognitive effects of early-life adversity. We present an open-source tool, RePAIR, that can be widely used to apply this approach and increase statistical power, thereby improving the reliability of animal experiments.
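RePAIR itself is the authors' tool and implements a formal Bayesian prior; as a simplified sketch of the underlying intuition only, the simulation below simply pools hypothetical historical control animals into the control arm and shows the resulting gain in power. All means, SDs and sample sizes are made up:

```python
# Simplified illustration (not RePAIR): augmenting the control arm with
# historical controls raises power at a fixed number of new animals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_new, n_hist, d = 8, 40, 1.0   # new animals/group, historical controls, true effect

def power(extra_controls, n_sim=5000, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        ctrl = rng.normal(0, 1, n_new + extra_controls)
        trt  = rng.normal(d, 1, n_new)
        if stats.ttest_ind(ctrl, trt, equal_var=False).pvalue < alpha:
            hits += 1
    return hits / n_sim

print("power without historical controls:", power(0))
print("power with historical controls:   ", power(n_hist))
```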
Refining animal research: The Animal Study Registry
The Animal Study Registry (ASR; www.animalstudyregistry.org) was launched in January 2019 for the preregistration of animal studies, in order to increase the transparency and reproducibility of bioscience research and to promote animal welfare. The registry is free of charge and is designed for exploratory and confirmatory studies within applied science as well as basic and preclinical research. The registration form helps scientists plan their study thoroughly by asking detailed questions concerning study design, methods, and statistics. With registration, the study automatically receives a digital object identifier (DOI) that marks it as the intellectual property of the researcher. To accommodate researchers' concerns about theft of ideas, users can restrict the visibility of their registered studies for up to 5 years. The full content of the study becomes publicly accessible at the end of the embargo period. Because the platform is embedded in the infrastructure of the German Federal Government, continuity and data security are provided. By registering a study in the ASR, researchers can demonstrate their commitment to transparency and data quality to reviewers and editors, to third-party donors, and to the general public.
Reporting and analysis of repeated measurements in preclinical animal experiments
A common feature of preclinical animal experiments is repeated measurement of the outcome, e.g., body weight measured in mouse pups weekly for 20 weeks. Such data can be analyzed with separate time-point analyses or with repeated-measures approaches. Each approach requires assumptions about the underlying data, and violations of these assumptions have implications for the precision of estimates and for type I and type II error rates. Given the ethical responsibility to maximize the valid results obtained from animals used in research, our objective was to evaluate how investigators report repeated-measures designs and to assess how assumptions about variation in the outcome over time affect type I and type II error rates and the precision of estimates. We assessed the reporting of repeated-measures designs in 58 preclinical animal studies. We used simulation modelling to evaluate three approaches to the statistical analysis of repeated measurement data: (a) repeated-measures analysis assuming non-constant variation in the outcome across time points (heterogeneous variance), (b) repeated-measures analysis assuming constant variation in the outcome (homogeneous variance), and (c) separate ANOVAs at individual time points. The three model fits were evaluated by comparing p-value distributions and type I and type II error rates, and by implication the shrinkage or inflation of standard error estimates, across 1000 simulated datasets. Of the 58 studies with repeated-measures designs, three provided a rationale for repeated measurement and 23 reported using a repeated-measures analysis approach. Of the 35 studies that did not use repeated-measures analysis, fourteen used only two time points to calculate weight change, which potentially means the collected data were not fully utilized. Other studies reported only selected time points (n = 12), raising the issue of selective reporting. The simulation studies showed that an incorrect assumption about the variance structure altered error rates and precision estimates. Overall, the reporting of the validity of assumptions for repeated measurement data is very poor. The homogeneous-variance assumption, which is often invalid for body weight measurements, should be checked before conducting a repeated-measures analysis with a homogeneous covariance structure, and the analysis should be adjusted using corrections or alternative model specifications if the assumption is not met.
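One common way to implement the repeated-measures analysis discussed here is a mixed model with a random intercept per animal. The sketch below uses statsmodels on simulated data (all parameters hypothetical, not the paper's code); note that its default residual structure embodies exactly the homogeneous-variance assumption the paper says should be checked first:

```python
# Hedged sketch: random-intercept mixed model for weekly pup body weight.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
animals, weeks = 12, 10
df = pd.DataFrame({
    "animal": np.repeat(np.arange(animals), weeks),
    "week":   np.tile(np.arange(weeks), animals),
    "group":  np.repeat(["control", "treated"], animals // 2 * weeks),
})
# Simulated growth curves: group effect on slope, per-animal intercept, noise
df["weight"] = (5 + 1.5 * df["week"]
                + 0.4 * df["week"] * (df["group"] == "treated")
                + np.repeat(rng.normal(0, 0.5, animals), weeks)
                + rng.normal(0, 1, len(df)))

# Random intercept per animal; fixed effects for week, group, interaction
model = smf.mixedlm("weight ~ week * group", df, groups=df["animal"]).fit()
print(model.summary())
```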
SYRCLE’s risk of bias tool for animal studies
Background: Systematic reviews (SRs) of experimental animal studies are not yet common practice, but awareness of the merits of conducting such SRs is steadily increasing. As animal intervention studies differ from randomized clinical trials (RCTs) in many aspects, the methodology for SRs of clinical trials needs to be adapted and optimized for animal intervention studies. The Cochrane Collaboration developed a Risk of Bias (RoB) tool to establish consistency and avoid discrepancies in assessing the methodological quality of RCTs. A similar initiative is warranted in the field of animal experimentation. Methods: We provide an RoB tool for animal intervention studies (SYRCLE's RoB tool). This tool is based on the Cochrane RoB tool and has been adjusted for aspects of bias that play a specific role in animal intervention studies. To enhance transparency and applicability, we formulated signalling questions to facilitate judgment. Results: The resulting RoB tool for animal studies contains 10 entries related to selection bias, performance bias, detection bias, attrition bias, reporting bias and other biases. Half of these items are in agreement with the items in the Cochrane RoB tool. Most of the variations between the two tools are due to differences in design between RCTs and animal studies. Shortcomings in, or unfamiliarity with, specific aspects of the experimental design of animal studies compared with clinical studies also play a role. Conclusions: SYRCLE's RoB tool is an adapted version of the Cochrane RoB tool. Widespread adoption and implementation of this tool will facilitate and improve the critical appraisal of evidence from animal studies. This may subsequently enhance the efficiency of translating animal research into clinical practice and increase awareness of the necessity of improving the methodological quality of animal studies.
Effect size, sample size and power of forced swim test assays in mice: Guidelines for investigators to optimize reproducibility
A recent flood of publications has documented serious problems in the reproducibility, power, and reporting of biomedical articles, yet scientists persist in their usual practices. Why? We examined a popular and important preclinical assay, the Forced Swim Test (FST) in mice, used to test putative antidepressants. Whether the mice were assayed in a naïve state or in a model of depression or stress, and whether the mice were given test agents or known antidepressants regarded as positive controls, the mean effect sizes seen in the experiments were indeed extremely large (1.5–2.5 in Cohen's d units); most of the experiments used 7–10 animals per group, which did have adequate power to reliably detect effects of this magnitude. We propose that this may at least partially explain why investigators using the FST do not perceive intuitively that their experimental designs fall short, even though proper prospective design would require ~21–26 animals per group to detect, at a minimum, large effects (0.8 in Cohen's d units) when the true effect of a test agent is unknown. Our data provide explicit parameters and guidance for investigators seeking to carry out prospective power estimation for the FST. More generally, altering the real-life behavior of scientists in planning their experiments may require developing educational tools that allow them to actively visualize the inter-relationships among effect size, sample size, statistical power, and replicability in a direct and intuitive manner.
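The abstract's sample-size figures are consistent with a standard prospective power calculation, which can be checked with statsmodels (a generic calculation, not the paper's code):

```python
# Prospective power calculation for a two-sample design.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Animals per group needed to detect a "large" effect (d = 0.8)
n = analysis.solve_power(effect_size=0.8, power=0.8, alpha=0.05,
                         alternative="two-sided")
print(f"per group for d=0.8, power=0.8: {n:.1f}")  # ~25.5

# Conversely, 7-10 per group suffices only for very large effects
for d in (1.5, 2.0, 2.5):
    n = analysis.solve_power(effect_size=d, power=0.8, alpha=0.05)
    print(f"d={d}: {n:.1f} per group")
```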
CRIME-Q—a unifying tool for critical appraisal of methodological (technical) quality, quality of reporting and risk of bias in animal research
Background: Systematic reviews within the field of animal research are becoming more common. However, in animal translational research, issues related to methodological quality and quality of reporting continue to arise, potentially leading to underestimation or overestimation of the effects of interventions or preventing studies from being replicated. The various tools and checklists available to ensure good-quality studies and proper reporting contain unique and/or overlapping items, lack necessary elements, or are too specific to certain conditions or diseases. No tool currently covers all aspects of animal models, from bench-top activities to animal facilities; hence a new tool is needed. Such a tool should be able to assess all kinds of animal studies, whether old or new, of low or high quality, interventional or noninterventional. It should do this on multiple levels, through items on quality of reporting, methodological (technical) quality, and risk of bias, for use in assessing the overall quality of studies involving animal research. Methods: During a systematic review of meningioma models in animals, we developed a novel unifying tool that can assess all types of animal studies from multiple perspectives. The tool was inspired by the Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies (CAMARADES) checklist, the ARRIVE 2.0 guidelines, and SYRCLE's risk of bias tool, while also incorporating unique items. We used the interrater agreement percentage and Cohen's kappa index to test the interrater agreement between two independent reviewers for the items in the tool. Results: There was high interrater agreement across all items (92.9%, 95% CI 91.0–94.8%). By Cohen's kappa index, quality of reporting had the best mean index, 0.86 (95% CI 0.78–0.94); methodological quality had a mean index of 0.83 (95% CI 0.78–0.94); and the items from SYRCLE's risk of bias tool had a mean kappa index of 0.68 (95% CI 0.57–0.79). Conclusions: The Critical Appraisal of Methodological (technical) Quality, Quality of Reporting and Risk of Bias in Animal Research (CRIME-Q) tool unifies a broad spectrum of information (both unique items and items inspired by other methods) about quality of reporting and methodological (technical) quality, and contains items from SYRCLE's risk of bias tool. The tool is intended for assessing overall study quality across multiple domains and items and, unlike other tools, is not restricted to any particular model or study design (whether interventional or noninterventional). It is also easy to apply when designing and conducting animal experiments to ensure proper reporting and design in terms of replicability, transparency, and validity.
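Percent agreement and Cohen's kappa, the two interrater statistics used to validate CRIME-Q, can be computed as follows; the ratings here are hypothetical, not the study's data:

```python
# Interrater agreement sketch: raw percent agreement plus Cohen's kappa,
# which corrects for agreement expected by chance.
from sklearn.metrics import cohen_kappa_score

# Two reviewers' judgements ("yes"/"partial"/"no") on 12 items
rater_a = ["yes", "yes", "no", "partial", "yes", "no",
           "yes", "partial", "yes", "no", "yes", "yes"]
rater_b = ["yes", "yes", "no", "partial", "no", "no",
           "yes", "yes", "yes", "no", "yes", "yes"]

agree = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"percent agreement: {agree:.1%}")
print(f"Cohen's kappa:     {cohen_kappa_score(rater_a, rater_b):.2f}")
```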
Survey of the Quality of Experimental Design, Statistical Analysis and Reporting of Research Using Animals
For scientific, ethical and economic reasons, experiments involving animals should be appropriately designed, correctly analysed and transparently reported. This increases the scientific validity of the results, and maximises the knowledge gained from each experiment. A minimum amount of relevant information must be included in scientific publications to ensure that the methods and results of a study can be reviewed, analysed and repeated. Omitting essential information can raise scientific and ethical concerns. We report the findings of a systematic survey of reporting, experimental design and statistical analysis in published biomedical research using laboratory animals. Medline and EMBASE were searched for studies reporting research on live rats, mice and non-human primates carried out in UK and US publicly funded research establishments. Detailed information was collected from 271 publications, about the objective or hypothesis of the study, the number, sex, age and/or weight of animals used, and experimental and statistical methods. Only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used. Appropriate and efficient experimental design is a critical component of high-quality science. Most of the papers surveyed did not use randomisation (87%) or blinding (86%), to reduce bias in animal selection and outcome assessment. Only 70% of the publications that used statistical methods described their methods and presented the results with a measure of error or variability. This survey has identified a number of issues that need to be addressed in order to improve experimental design and reporting in publications describing research using animals. Scientific publication is a powerful and important source of information; the authors of scientific publications therefore have a responsibility to describe their methods and results comprehensively, accurately and transparently, and peer reviewers and journal editors share the responsibility to ensure that published studies fulfil these criteria.