Search Results
1,507 result(s) for "Publications - classification"
Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach
Background Scoping reviews are a relatively new approach to evidence synthesis, and currently there is little guidance on choosing between a systematic review and a scoping review when synthesising evidence. The purpose of this article is to clearly describe the differences in indications between scoping reviews and systematic reviews and to provide guidance on when a scoping review is (and is not) appropriate. Results Researchers may conduct scoping reviews instead of systematic reviews where the purpose of the review is to identify knowledge gaps, scope a body of literature, clarify concepts, or investigate research conduct. While useful in their own right, scoping reviews may also be helpful precursors to systematic reviews and can be used to confirm the relevance of inclusion criteria and potential questions. Conclusions Scoping reviews are a useful tool in the ever-increasing arsenal of evidence synthesis approaches. Although conducted for different purposes than systematic reviews, scoping reviews still require rigorous and transparent methods to ensure that their results are trustworthy. Our hope is that, with clear guidance available on whether to conduct a scoping review or a systematic review, fewer scoping reviews will be performed for indications better served by a systematic review, and vice versa.
Big Science vs. Little Science: How Scientific Impact Scales with Funding
Is it more effective to give large grants to a few elite researchers, or small grants to many researchers? Large grants would be more effective only if scientific impact increases as an accelerating function of grant size. Here, we examine the scientific impact of individual university-based researchers in three disciplines funded by the Natural Sciences and Engineering Research Council of Canada (NSERC). We considered four indices of scientific impact: the number of articles published, the number of citations to those articles, the most-cited article, and the number of highly cited articles, each measured over a four-year period. We related these to the amount of NSERC funding received. Impact is positively, but only weakly, related to funding. Researchers who received additional funds from a second federal granting council, the Canadian Institutes of Health Research, were not more productive than those who received only NSERC funding. Impact was generally a decelerating function of funding, so impact per dollar was lower for large grant-holders. This is inconsistent with the hypothesis that larger grants lead to larger discoveries. Further, the impact of researchers who received increases in funding did not predictably increase. We conclude that scientific impact (as reflected by publications) is only weakly limited by funding, and we suggest that funding strategies that target diversity, rather than "excellence", are likely to prove more productive.
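Concretely, "decelerating" here means impact grows roughly like funding raised to a power below one. A minimal sketch with synthetic data (not the NSERC records) shows how such an exponent could be estimated and why it implies falling impact per dollar:

```python
import numpy as np

# Illustrative sketch on synthetic data (not the NSERC dataset): fit a
# power law impact ~ a * funding**beta. beta < 1 means impact decelerates
# with grant size, so impact per dollar falls for large grant-holders.
rng = np.random.default_rng(0)
funding = rng.uniform(2e4, 5e5, size=200)                  # grant size ($)
impact = 3.0 * funding**0.4 * rng.lognormal(0, 0.3, 200)   # decelerating by construction

# Linear fit in log-log space recovers the exponent.
beta, log_a = np.polyfit(np.log(funding), np.log(impact), 1)
print(f"estimated exponent beta = {beta:.2f}")  # ~0.4, i.e. < 1: decelerating
print("impact per dollar at $50k vs $500k:",
      np.exp(log_a) * 5e4**beta / 5e4, "vs",
      np.exp(log_a) * 5e5**beta / 5e5)
```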
Examining influential factors for acknowledgements classification using supervised learning
Acknowledgements have been examined as important markers of the contributions to, and intellectual debts of, a scientific publication. Unlike previous studies, which were limited in analytical scope and relied on manual examination, the present study conducted automatic classification of acknowledgements on a large scale. To this end, we first created a training dataset for acknowledgements classification by sampling acknowledgements sections from the entire PubMed Central database. Second, we adopted various supervised learning algorithms to examine which algorithm performed best under which conditions, and we observed the factors affecting classification performance. We investigated the effects of three main aspects: classification algorithms, categories, and text representations. The CNN+Doc2Vec algorithm achieved the highest performance, with 93.58% accuracy on the original dataset and 87.93% on the converted dataset. The experimental results indicated that the characteristics of categories and sentence patterns influenced classification performance. Most of the classifiers performed better on the categories of financial, peer interactive communication, and technical support than on other classes.
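The abstract does not spell out the pipeline, so the sketch below uses a generic TF-IDF plus linear-model baseline on hypothetical acknowledgement sentences; the paper's best-performing system (a CNN over Doc2Vec representations) would take the place of the final estimator:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy sentences; the study's categories include financial,
# peer interactive communication, and technical support.
sentences = [
    "This work was supported by NIH grant R01-XXXX.",
    "We thank Dr. Smith for helpful discussions on the manuscript.",
    "We thank the core facility for assistance with sequencing.",
]
labels = ["financial", "peer interactive communication", "technical support"]

# A simple supervised baseline, not the authors' exact system.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
]).fit(sentences, labels)

print(clf.predict(["Funding was provided by the Wellcome Trust."]))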
Quantifying and contextualizing the impact of bioRxiv preprints through automated social media audience segmentation
Engagement with scientific manuscripts is frequently facilitated by Twitter and other social media platforms. As such, the demographics of a paper's social media audience provide a wealth of information about how scholarly research is transmitted, consumed, and interpreted by online communities. By paying attention to public perceptions of their publications, scientists can learn whether their research is stimulating positive scholarly and public thought. They can also become aware of potentially negative patterns of interest from groups that misinterpret their work in harmful ways, either willfully or unintentionally, and devise strategies for altering their messaging to mitigate these impacts. In this study, we collected 331,696 Twitter posts referencing 1,800 highly tweeted bioRxiv preprints and leveraged topic modeling to infer the characteristics of various communities engaging with each preprint on Twitter. We agnostically learned the characteristics of these audience sectors from keywords each user's followers provide in their Twitter biographies. We estimate that 96% of the preprints analyzed are dominated by academic audiences on Twitter, suggesting that social media attention does not always correspond to greater public exposure. We further demonstrate how our audience segmentation method can quantify the level of interest from nonspecialist audience sectors such as mental health advocates, dog lovers, video game developers, vegans, bitcoin investors, conspiracy theorists, journalists, religious groups, and political constituencies. Surprisingly, we also found that 10% of the preprints analyzed have sizable (>5%) audience sectors that are associated with right-wing white nationalist communities. Although none of these preprints appear to intentionally espouse any right-wing extremist messages, cases exist in which extremist appropriation comprises more than 50% of the tweets referencing a given preprint. These results present unique opportunities for improving and contextualizing the public discourse surrounding scientific research.
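As a rough illustration of the audience-segmentation idea, one could topic-model follower biographies and read each preprint's audience as a mixture over inferred sectors. The bios below are invented, and plain LDA stands in for whatever topic model the authors used:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical follower biographies (the paper mined real Twitter bios).
bios = [
    "phd student computational biology genomics",
    "professor of immunology science communication",
    "dog lover coffee addict proud mom",
    "bitcoin investor crypto markets freedom",
    "postdoc neuroscience open science advocate",
    "crypto trader blockchain defi hodl",
]
X = CountVectorizer().fit_transform(bios)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# A preprint's audience can then be summarized by the topic mixture
# of its tweeters' followers.
print(lda.transform(X).round(2))  # per-bio topic proportions
```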
Classifying publications from the clinical and translational science award program along the translational research spectrum: a machine learning approach
Background Translational research is a key area of focus of the National Institutes of Health (NIH), as demonstrated by the substantial investment in the Clinical and Translational Science Award (CTSA) program. The goal of the CTSA program is to accelerate the translation of discoveries from the bench to the bedside and into communities. Different classification systems have been used to capture the spectrum of basic to clinical to population health research, with substantial differences in the number of categories and their definitions. Evaluation of the effectiveness of the CTSA program, and of translational research in general, is hampered by the lack of rigor in these definitions and their application. This study adds rigor to the classification process by creating a checklist to evaluate publications across the translational spectrum and operationalizes these classifications by building machine learning-based text classifiers to categorize these publications. Methods Based on collaboratively developed definitions, we created a detailed checklist for categories along the translational spectrum from T0 to T4. We applied the checklist to CTSA-linked publications to construct a set of coded publications for use in training machine learning-based text classifiers to classify publications within these categories. The training sets combined the T1/T2 and T3/T4 categories because of the low frequency of these publication types relative to T0 publications. We then compared classifier performance across different algorithms and feature sets and applied the classifiers to all publications in PubMed indexed to CTSA grants. To validate the algorithm, we manually classified the articles with the top 100 scores from each classifier. Results The definitions and checklist facilitated classification and resulted in good inter-rater reliability for coding publications for the training set. Very good performance was achieved for the classifiers as measured by the area under the receiver operating characteristic curve (AUC), with an AUC of 0.94 for the T0 classifier, 0.84 for T1/T2, and 0.92 for T3/T4. Conclusions The combination of definitions agreed upon by five CTSA hubs, a checklist that facilitates more uniform definition interpretation, and algorithms that perform well in classifying publications along the translational spectrum provides a basis for establishing and applying uniform definitions of translational research categories. The classification algorithms allow publication analyses that would not be feasible with manual classification, such as assessing the distribution and trends of publications across the CTSA network and comparing the categories of publications and their citations to assess knowledge transfer across the translational research spectrum.
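A minimal sketch of one such binary text classifier (T0 vs. not-T0) and its AUC, using hypothetical titles and a naive Bayes model as a stand-in for the study's algorithms and feature sets:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical titles and labels, repeated to give a usable sample size;
# the real study trained on manually coded CTSA-linked publications.
texts = ["mouse model of receptor signaling", "phase II trial in patients",
         "county-level vaccination coverage", "protein crystal structure",
         "community health outreach program", "in vitro enzyme kinetics"] * 20
is_t0 = np.array([1, 0, 0, 1, 0, 1] * 20)  # 1 = basic (T0) research

X = TfidfVectorizer().fit_transform(texts)
X_tr, X_te, y_tr, y_te = train_test_split(X, is_t0, random_state=0)
probs = MultinomialNB().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, probs))  # area under the ROC curve
```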
Reducing Workload in Systematic Review Preparation Using Automated Citation Classification
To determine whether automated classification of document citations can be useful in reducing the time spent by experts reviewing journal articles for inclusion in updating systematic reviews of drug class efficacy for treatment of disease. A test collection was built using the annotated reference files from 15 systematic drug class reviews. A voting perceptron-based automated citation classification system was constructed to classify each article as containing high-quality, drug class–specific evidence or not. Cross-validation experiments were performed to evaluate performance. Precision, recall, and F-measure were evaluated at a range of sample weightings. Work saved over sampling at 95% recall was used as the measure of value to the review process. A reduction in the number of articles needing manual review was found for 11 of the 15 drug review topics studied. For three of the topics, the reduction was 50% or greater. Automated document citation classification could be a useful tool in maintaining systematic reviews of the efficacy of drug therapy. Further work is needed to refine the classification system and determine the best manner to integrate the system into the production of systematic reviews.
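"Work saved over sampling" at 95% recall (WSS@95) has a standard form: the fraction of citations that need not be manually screened, minus the 5% that a random sample would be allowed to miss. A small helper makes the arithmetic concrete; the counts are illustrative, not the paper's:

```python
def work_saved_over_sampling(tn, fn, n_total, recall_target=0.95):
    """WSS@R: fraction of citations reviewers are spared from screening,
    relative to random sampling, at a fixed recall level R."""
    return (tn + fn) / n_total - (1.0 - recall_target)

# Illustrative numbers: the classifier lets reviewers skip 600 of 1000
# citations while still retrieving 95% of the relevant ones.
print(work_saved_over_sampling(tn=570, fn=30, n_total=1000))  # 0.55
```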
Feature Engineering and a Proposed Decision-Support System for Systematic Reviewers of Medical Evidence
Evidence-based medicine depends on the timely synthesis of research findings. An important source of synthesized evidence resides in systematic reviews. However, a bottleneck in review production involves dual screening of citations with titles and abstracts to find eligible studies. For this research, we tested the effect of various kinds of textual information (features) on the performance of a machine learning classifier. Based on our findings, we propose an automated system to reduce screening burden, as well as offer quality assurance. We built a database of citations from 5 systematic reviews that varied with respect to domain, topic, and sponsor. Consensus judgments regarding eligibility were inferred from published reports. We extracted 5 feature sets from citations: alphabetic, alphanumeric(+), indexing, features mapped to concepts in systematic reviews, and topic models. To simulate a two-person team, we divided the data into random halves. We optimized the parameters of a Bayesian classifier, then trained and tested models on alternate data halves. Overall, we conducted 50 independent tests. All tests of summary performance (mean F3) surpassed the corresponding baseline, P<0.0001. The ranks for mean F3, precision, and classification error were statistically different across feature sets averaged over reviews; P-values for Friedman's test were .045, .002, and .002, respectively. Differences in ranks for mean recall were not statistically significant. Alphanumeric(+) features were associated with the best performance; the mean reduction in screening burden for this feature type ranged from 88% to 98% for the second pass through citations and from 38% to 48% overall. A computer-assisted decision support system based on our methods could substantially reduce the burden of screening citations for systematic review teams and solo reviewers. Additionally, such a system could deliver quality assurance both by confirming concordant decisions and by naming studies associated with discordant decisions for further consideration.
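F3 is the F-measure with beta = 3, which weights recall nine times as heavily as precision; this suits screening tasks, where missing an eligible study costs more than reviewing an extra citation. A short sketch with hypothetical labels:

```python
from sklearn.metrics import fbeta_score

# Hypothetical screening decisions (1 = eligible). With beta = 3, recall
# counts beta**2 = 9 times as much as precision in the harmonic mean.
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
print("F3:", fbeta_score(y_true, y_pred, beta=3))  # 0.8 here
```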
Dissemination of novel biostatistics methods: Impact of programming code availability and other characteristics on article citations
As statisticians develop new methodological approaches, there are many factors that influence whether others will utilize their work. This paper is a bibliometric study that identifies and quantifies associations between characteristics of new biostatistics methods and their citation counts. Of primary interest was the association between numbers of citations and whether software code was available to the reader. Statistics journal articles published in 2010 from 35 statistical journals were reviewed by two biostatisticians. Generalized linear mixed models were used to determine which characteristics (author, article, and journal) were independently associated with citation counts (as of April 1, 2017) in other peer-reviewed articles. Of 722 articles reviewed, 428 were classified as new biostatistics methods. In a multivariable model, for articles that were not freely accessible on the journal's website, having code available appeared to offer no boost to the number of citations (adjusted rate ratio = 0.96, 95% CI = 0.74 to 1.24, p = 0.74); however, for articles that were freely accessible on the journal's website, having code available was associated with a 2-fold increase in the number of citations (adjusted rate ratio = 2.01, 95% CI = 1.30 to 3.10, p = 0.002). Higher citation rates were also associated with higher numbers of references, longer articles, SCImago Journal Rank indicator (SJR), and total numbers of publications among authors, with the strongest impact on citation rates coming from SJR (rate ratio = 1.21 for a 1-unit increase in SJR; 95% CI = 1.11 to 1.32). These analyses shed new insight into factors associated with citation rates of articles on new biostatistical methods. Making computer code available to readers is a goal worth striving for that may enhance biostatistics knowledge translation.
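The reported effects are rate ratios, i.e. exponentiated coefficients from a count-data regression. The sketch below fits a plain Poisson GLM to synthetic data as a simplified stand-in for the paper's generalized linear mixed models; variable names and effect sizes are invented to echo the abstract:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: citation counts driven by code availability and journal
# rank (SJR). A Poisson GLM stands in for the paper's mixed models.
rng = np.random.default_rng(1)
n = 400
code_available = rng.integers(0, 2, n)   # 1 = code provided to readers
sjr = rng.uniform(0.5, 4.0, n)           # SCImago Journal Rank indicator
mu = np.exp(0.5 + 0.7 * code_available + 0.19 * sjr)
citations = rng.poisson(mu)

X = sm.add_constant(np.column_stack([code_available, sjr]))
res = sm.GLM(citations, X, family=sm.families.Poisson()).fit()

# Exponentiated coefficients are rate ratios: ~2.0 for code availability,
# ~1.2 per one-unit increase in SJR, by construction above.
print("rate ratios:", np.exp(res.params[1:]))
```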
Trends of Augmented Reality Applications and Research throughout the World: Meta-Analysis of Theses, Articles and Papers between 2001-2019 Years
Our aim in this research was to analyze studies of Augmented Reality applications and research throughout the world using meta-analysis methods in order to determine trends in the area. For the purpose of the study, a total of 1008 pieces of research, published between 2001 and 2019 and selected by purposeful sampling, were analyzed. Trends of Augmented Reality applications and research throughout the world were examined under 13 criteria. These criteria were: index, year of publication, number of authors, country of research, area of research, method, education grade, sample group, sample number, data collection method, bibliography number, analysis techniques, purpose of research, and research trends. These data were interpreted based on percentage and frequency. Augmented Reality technologies are integrated into many fields, such as education technology, engineering, arts, visual arts education, and special education.
Does Bradford's Law of Scattering predict the size of the literature in Cochrane Reviews?
[...] this study examined whether Bradford's law was valid for the Cochrane Review-identified literature on acute otitis media and pneumonia, conditions that are reported in a wide variety of clinical and health journals [13]. [...] the articles were not divided into 3 equal parts. For pneumonia, 30 of 69 articles in 6 journals (43%) from Zone 1 were included in Cochrane Reviews.
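For context, Bradford's law of scattering ranks journals by the number of articles they contribute to a literature and splits them into three zones holding roughly equal numbers of articles; the number of journals needed per zone then grows roughly as 1 : n : n². A toy illustration with invented counts:

```python
# Worked illustration of Bradford's law (hypothetical counts, not the
# otitis media or pneumonia data). Rank journals by articles contributed,
# then split into three zones each holding about one third of the articles.
journal_counts = sorted([40, 25, 15, 8, 7, 6, 5, 4, 3, 3, 2, 2] + [1] * 40,
                        reverse=True)
total = sum(journal_counts)   # 160 articles across 52 journals
target = total / 3            # ~53 articles per zone

zones, zone, acc = [], [], 0
for c in journal_counts:
    zone.append(c)
    acc += c
    if acc >= target and len(zones) < 2:
        zones.append(zone)
        zone, acc = [], 0
zones.append(zone)

# Journals per zone: [2, 10, 40] here, so each successive zone needs
# roughly n times more journals (n ~ 4-5), as Bradford predicts.
print([len(z) for z in zones])
```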