15,277 results for "Biomedical Research - statistics"
Survey of laboratory-acquired infections around the world in biosafety level 3 and 4 laboratories
Laboratory-acquired infections due to a variety of bacteria, viruses, parasites, and fungi have been described over the last century, and laboratory workers are at risk of exposure to these infectious agents. However, reporting of laboratory-associated infections has been largely voluntary, and there is no way to determine the real number of people involved or to know the precise risks for workers. In this study, a voluntary international survey was conducted in biosafety level 3 and 4 laboratories to determine the number of laboratory-acquired infections and the possible underlying causes of these infections. The analysis of the survey reveals that laboratory-acquired infections have been infrequent and even rare in recent years, and that human error accounts for a very high percentage of cases. Today, most risks from biological hazards can be reduced through the use of appropriate procedures and techniques, containment devices and facilities, and the training of personnel.
A reinvestigation of recruitment to randomised, controlled, multicenter trials: a review of trials funded by two UK funding agencies
Background Randomised controlled trials (RCTs) are the gold standard assessment for health technologies. A key aspect of the design of any clinical trial is the target sample size. However, many publicly funded trials fail to reach their target sample size. This study assesses the current state of recruitment success and grant extensions in trials funded by the Health Technology Assessment (HTA) programme and the UK Medical Research Council (MRC). Methods Data were gathered from two sources: the National Institute for Health Research (NIHR) HTA Journal Archive and the MRC subset of the International Standard Randomised Controlled Trial Number (ISRCTN) register. A total of 440 trials recruiting between 2002 and 2008 were assessed for eligibility, of which 73 met the inclusion criteria. Where data were unavailable from the reports, members of the trial team were contacted to ensure completeness. Results Over half (55%) of trials recruited their originally specified target sample size, with over three-quarters (78%) recruiting at least 80% of their target. There was no evidence of improvement over the period assessed. Nearly half (45%) of trials received an extension of some kind; those that did were no more likely to recruit successfully. Trials powered at 80% were less likely to recruit successfully than trials powered at 90%. Conclusions While recruitment appears to have improved relative to the 1994-2002 period, publicly funded trials in the UK still struggle to recruit to their target sample size, and both time and financial extensions are often requested. Strategies to cope with such problems should be more widely applied, and it is recommended that, where possible, studies be planned with 90% power.
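The recommendation to plan at 90% power has a concrete cost in participants, which the standard normal-approximation sample-size formula for a two-arm comparison makes visible. A minimal sketch; the effect size, SD, and two-sided alpha below are illustrative assumptions, not values taken from the trials reviewed:

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-arm trial
    detecting a mean difference `delta` when the outcome SD is `sd`."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = z(power)            # quantile corresponding to the power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2)

# Hypothetical standardised effect of 0.5: planning at 90% rather than
# 80% power raises the per-arm requirement from 63 to 85 participants.
n_per_arm(delta=0.5, sd=1.0, power=0.80)  # 63
n_per_arm(delta=0.5, sd=1.0, power=0.90)  # 85
```

The roughly one-third increase in sample size is the price of the extra buffer against under-recruitment that the authors' recommendation implies.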
Increasing disparities between resource inputs and outcomes, as measured by certain health deliverables, in biomedical research
Society makes substantial investments in biomedical research, searching for ways to better human health. The product of this research is principally information published in scientific journals. Continued investment in science relies on society’s confidence in the accuracy, honesty, and utility of research results. A recent focus on productivity has dominated the competitive evaluation of scientists, creating incentives to maximize publication numbers, citation counts, and publications in high-impact journals. Some studies have also suggested a decreasing quality in the published literature. The efficiency of society’s investments in biomedical research, in terms of improved health outcomes, has not been studied. We show that biomedical research outcomes over the last five decades, as estimated by both life expectancy and New Molecular Entities approved by the Food and Drug Administration, have remained relatively constant despite rising resource inputs and scientific knowledge. Research investments by the National Institutes of Health over this time correlate with publication and author numbers but not with the numerical development of novel therapeutics. We consider several possibilities for the growing input-outcome disparity including the prior elimination of easier research questions, increasing specialization, overreliance on reductionism, a disproportionate emphasis on scientific outputs, and other negative pressures on the scientific enterprise. Monitoring the efficiency of research investments in producing positive societal outcomes may be a useful mechanism for weighing the efficacy of reforms to the scientific enterprise. Understanding the causes of the increasing input-outcome disparity in biomedical research may improve society’s confidence in science and provide support for growing future research investments.
Three behavior change theory–informed randomized studies within a trial to improve response rates to trial postal questionnaires
Our aim was to design and evaluate a novel behavior change approach to increase response rates to an annual postal questionnaire in three randomized studies within a trial (SWAT), and to replicate the most promising SWAT. SWAT1 tested a trial logo sticker on questionnaire envelopes vs. no sticker; SWAT2 tested a theoretically informed letter sent with the questionnaire vs. a standard letter; SWAT3 tested a theoretically informed newsletter sent before the questionnaire vs. no newsletter. The SWATs were conducted within a large dental trial (N = 1,877 adults), and SWAT2 was replicated in a different trial in a similar setting (N = 2,372). SWAT1 improved response rates by 1.4%, 95% confidence interval (CI) (−7.2%, 10.0%). SWAT2 improved response rates by 7.0%, 95% CI (1.7%, 12.3%). SWAT3 improved response rates by 0.8%, 95% CI (−5.1%, 6.7%). Replication of SWAT2, the most promising SWAT, showed an improvement in response rates of 1.0%, 95% CI (−3.2%, 5.3%). Pooled results from SWAT2 showed an overall improvement in response rates of 3.4%, 95% CI (0.1%, 6.7%). A theory-based behavioral approach to designing interventions to improve trial response rates showed small but meaningful improvements. The approach presented here can be easily implemented and adapted to address other identified barriers to trial retention.
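The risk differences and 95% CIs quoted above are of the standard two-proportion (Wald) form. A minimal sketch of that calculation, using hypothetical response counts rather than the trial's actual data:

```python
import math
from statistics import NormalDist

def risk_difference_ci(x1, n1, x2, n2, conf=0.95):
    """Wald confidence interval for the difference in response
    proportions between intervention (x1/n1) and control (x2/n2)."""
    p1, p2 = x1 / n1, x2 / n2
    # standard error of the difference of two independent proportions
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = NormalDist().inv_cdf((1 + conf) / 2)
    d = p1 - p2
    return d, (d - z * se, d + z * se)

# Hypothetical counts: 80/100 responders vs. 70/100.
d, (lo, hi) = risk_difference_ci(80, 100, 70, 100)
```

An interval that straddles zero, as for SWAT1 and SWAT3 above, means the data are compatible with no effect of the intervention on response rates.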
Misreporting of Results of Research in Psychiatry
Abstract Few studies address publication and outcome reporting biases of randomized controlled trials (RCTs) in psychiatry. The objective of this study was to determine publication and outcome reporting bias in RCTs funded by the Stanley Medical Research Institute (SMRI), a U.S.-based non-profit organization funding RCTs in schizophrenia and bipolar disorder. We identified all RCTs (n = 280) funded by SMRI between 2000 and 2011, and using non-public final study reports and published manuscripts, we classified the results as positive or negative in terms of the drug compared to placebo. Design, outcome measures, and statistical methods specified in the original protocol were compared to the published manuscript. Of the 280 RCTs funded by SMRI between 2000 and 2011, three were ongoing at the time of this writing and 39 were not performed. Among the 238 completed RCTs, 86 (36.1%) reported positive and 152 (63.9%) reported negative results: 86% (74/86) of those with positive findings were published, in contrast to 53% (80/152) of those with negative findings (P < .001). In 70% of the manuscripts published, there were major discrepancies between the published manuscript and the original RCT protocol (change in the primary outcome measure or statistics, change in the number of patient groups, or a 25% or greater reduction in sample size). We conclude that publication bias and outcome reporting bias are common in papers reporting RCTs in schizophrenia and bipolar disorder. These data have major implications regarding the validity of the reports of clinical trials published in the literature.
Analysing the attributes of Comprehensive Cancer Centres and Cancer Centres across Europe to identify key hallmarks
There is persistent variation in cancer outcomes among and within European countries, suggesting (among other causes) inequalities in access to or delivery of high‐quality cancer care. European policy (the EU Cancer Mission and Europe's Beating Cancer Plan) is currently moving towards a mission‐oriented approach to addressing these inequalities. In this study, we used the quantitative and qualitative data of the Organisation of European Cancer Institutes' Accreditation and Designation Programme, relating to 40 large European cancer centres, to describe their current compliance with quality standards, to identify the hallmarks common to all centres, and to show the distinctive features of Comprehensive Cancer Centres. All Comprehensive Cancer Centres and Cancer Centres accredited by the Organisation of European Cancer Institutes show good compliance with quality standards related to care, multidisciplinarity and patient centredness. However, Comprehensive Cancer Centres on average showed significantly better scores on indicators related to the volume, quality and integration of translational research, such as high‐impact publications, clinical trial activity (especially in phase I and phase IIa trials) and patent filings as early indicators of innovation. At the same time, irrespective of their size, centres show significant variability regarding effective governance when functioning as entities within larger hospitals. This study reveals the attributes of cancer centres based on data from 40 large European cancer centres, showing that Comprehensive Cancer Centres have significantly greater output of peer‐reviewed publications and clinical trials than other centres, and that the quality of multidisciplinarity is well established in all accredited cancer centres.
Community Needs, Concerns, and Perceptions About Health Research: Findings From the Clinical and Translational Science Award Sentinel Network
Objectives. We used results generated from the first study of the National Institutes of Health Sentinel Network to understand health concerns and perceptions of research among underrepresented groups such as women, the elderly, racial/ethnic groups, and rural populations. Methods. Investigators at 5 Sentinel Network sites and 2 community-focused national organizations developed a common assessment tool used by community health workers to assess research perceptions, health concerns, and conditions. Results. Among 5979 individuals assessed, the top 5 health concerns were hypertension, diabetes, cancer, weight, and heart problems; hypertension was the most common self-reported condition. Levels of interest in research participation ranged from 70.1% among those in the “other” racial/ethnic category to 91.0% among African Americans. Overall, African Americans were more likely than members of other racial/ethnic groups to be interested in studies requiring blood samples (82.6%), genetic samples (76.9%), or medical records (77.2%); staying overnight in a hospital (70.5%); and use of medical equipment (75.4%). Conclusions. Top health concerns were consistent across geographic areas. African Americans reported more willingness to participate in research even if it required blood samples or genetic testing.
Plagiarism in research: a survey of African medical journals
Objectives: To examine whether regional biomedical journals in Africa had policies on plagiarism and procedures to detect it, and to measure the extent of plagiarism in their original research articles and reviews. Design: Cross-sectional survey. Setting and participants: We selected journals with an editor-in-chief in Africa, a publisher based in a low- or middle-income country, and author guidelines in English, systematically searching the African Journals Online database. From each of the 100 journals identified, we randomly selected five original research articles or reviews published in 2016. Outcomes: For included journals, we examined the presence of plagiarism policies and whether they referred to text matching software. We submitted articles to Turnitin and measured the extent of plagiarism (copying of someone else's work) or redundancy (copying of one's own work) against a set of criteria we had developed and piloted. Results: Of the 100 journals, 26 had a policy on plagiarism and 16 referred to text matching software. Of 495 articles, 313 (63%; 95% CI 58 to 68) had evidence of plagiarism: 17% (83) had at least four linked copied sentences or more than six individually copied sentences; 19% (96) had three to six copied sentences; and the remainder had one or two copied sentences. Plagiarism was more common in the introduction and discussion, and uncommon in the results. Conclusion: Plagiarism is common in biomedical research articles and reviews published in Africa. While wholesale plagiarism was uncommon, moderate text plagiarism was extensive. This could rapidly be eliminated if journal editors implemented screening strategies, including text matching software.
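Text matching software of the kind the authors recommend typically rests on comparing overlapping word windows ("shingles") between documents. A deliberately toy version of that core idea; this is not Turnitin's (proprietary) algorithm, and the window length is an arbitrary choice:

```python
def shared_shingles(text_a, text_b, k=8):
    """Fraction of text_a's k-word shingles (overlapping word windows)
    that also appear verbatim in text_b. A high fraction flags likely
    copied passages for human review."""
    def shingles(text):
        words = text.lower().split()
        return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}
    a, b = shingles(text_a), shingles(text_b)
    return len(a & b) / len(a) if a else 0.0
```

Real tools add normalisation, indexing over very large corpora, and robustness to small edits, but the flagging principle is the same: verbatim overlap of word windows, not judgement about intent.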
Prevalence of Multiplicity and Appropriate Adjustments Among Cardiovascular Randomized Clinical Trials Published in Major Medical Journals
Multiple analyses in a clinical trial can increase the probability of inaccurately concluding that there is a statistically significant treatment effect. However, to date, it is unknown how many randomized clinical trials (RCTs) perform adjustments for multiple comparisons, the lack of which could lead to erroneous findings. To assess the prevalence of multiplicity and whether appropriate multiplicity adjustments were performed among cardiovascular RCTs published in 6 medical journals with a high impact factor. In this cross-sectional study, cardiovascular RCTs were selected from all over the world, characterized as North America, Western Europe, multiregional, and rest of the world. Data were collected from past issues of 3 cardiovascular journals (Circulation, European Heart Journal, and Journal of the American College of Cardiology) and 3 general medicine journals (JAMA, The Lancet, and The New England Journal of Medicine) with high impact factors published between August 1, 2015, and July 31, 2018. Supplements and trial protocols of each of the included RCTs were also searched for multiplicity. Data were analyzed December 20 to 27, 2018. Data from the selected RCTs were extracted and verified independently by 2 researchers using a structured data instrument. In case of disagreement, a third reviewer helped to achieve consensus. An RCT was considered to have multiple treatment groups if it had more than 2 arms; multiple outcomes were defined as having more than 1 primary outcome, and multiple analyses were defined as analysis of the same outcome variable in multiple ways. Multiplicity was examined only for the analysis of the primary end point. Outcomes of interest were percentages of primary analyses that performed multiplicity adjustment of primary end points. Of 511 cardiovascular RCTs included in this analysis, 300 (58.7%) had some form of multiplicity; of these 300, only 85 (28.3%) adjusted for multiplicity. Intervention type and funding source had no statistically significant association with the reporting of multiplicity risk adjustment. Trials that assessed mortality vs nonmortality outcomes were more likely to contain a multiplicity risk in their primary analysis (66.3% [177 of 267] vs 50.4% [123 of 244]; P < .001), and larger trials vs smaller trials were less likely to make any adjustments for multiplicity (35.6% [52 of 146] vs 21.4% [33 of 154]; P = .001). Findings from this study suggest that cardiovascular RCTs published in medical journals with high impact factors demonstrate infrequent adjustments to correct for multiple comparisons in the primary end point. These parameters may be improved by more standardized reporting.
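The abstract does not say which adjustment methods the adjusting trials used; the Holm step-down procedure is one common choice for a small family of primary analyses, because it controls the family-wise error rate without assuming the tests are independent. A minimal sketch:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values for a family of tests.
    Compare each adjusted value to the nominal alpha (e.g. 0.05)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p
    adjusted = [0.0] * m
    running_max = 0.0  # enforce monotonicity of adjusted values
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Hypothetical trial with three co-primary analyses: only the first
# survives adjustment at the 0.05 level.
holm_adjust([0.012, 0.030, 0.041])  # [0.036, 0.06, 0.06]
```

Note how the second and third analyses, nominally significant at 0.05, are no longer so after adjustment; this is precisely the inflation of false-positive conclusions that the unadjusted trials in the study risk.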
Sensitivity analysis after multiple imputation under missing at random: a weighting approach
Multiple imputation (MI) is now well established as a flexible, general method for the analysis of data sets with missing values. Most implementations assume the missing data are 'missing at random' (MAR), that is, given the observed data, the reason for the missingness does not depend on the unseen data. However, although this is a helpful and simplifying working assumption, it is unlikely to be true in practice. Assessing the sensitivity of the analysis to the MAR assumption is therefore important, yet there is very limited MI software for doing so. Further, analysis of a data set with values that are not missing at random (NMAR) is complicated by the need to extend the MAR imputation model with a model for the reason for dropout. Here, we propose a simple alternative. We first impute under MAR and obtain parameter estimates for each imputed data set. The overall NMAR parameter estimate is a weighted average of these parameter estimates, where the weights depend on the assumed degree of departure from MAR. In some settings, this approach gives results that closely agree with joint modelling as the number of imputations increases. In others, it provides ball-park estimates of the results of full NMAR modelling, indicating the extent to which full modelling is necessary and providing a check on its results. We illustrate our approach with a small simulation study and the analysis of data from a trial of interventions to improve the quality of peer review.
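The proposal can be caricatured in a few lines: impute under MAR, estimate on each imputed data set, then reweight. The sketch below is a toy, not the authors' implementation; the resampling "imputer" and the exponential-tilt weight are illustrative assumptions, with the sensitivity parameter delta standing in for the assumed degree of departure from MAR (delta = 0 recovers the plain MAR estimate):

```python
import math
import random
from statistics import mean

def nmar_weighted_estimate(observed, n_missing, delta, m=200, seed=1):
    """Weighted-average sensitivity analysis for the mean of a variable
    with n_missing values absent. Imputations whose imputed values are
    higher are up-weighted when delta > 0 (down-weighted when delta < 0),
    tilting the MAR estimate towards an assumed NMAR mechanism."""
    rng = random.Random(seed)
    estimates, weights = [], []
    for _ in range(m):
        # toy MAR imputation: resample from the observed values
        imputed = [rng.choice(observed) for _ in range(n_missing)]
        estimates.append(mean(observed + imputed))
        # weight depends only on the imputed values, via delta
        weights.append(math.exp(delta * sum(imputed)))
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total
```

Sweeping delta over a plausible range and watching how far the estimate drifts from the MAR value is the sensitivity analysis: if conclusions survive the whole range, the MAR assumption is not doing critical work.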