Search Results

12,303 result(s) for "bibliographic databases"
Defining the process to literature searching in systematic reviews: a literature review of guidance and supporting studies
Background Systematic literature searching is recognised as a critical component of the systematic review process. It involves a systematic search for studies and aims for a transparent report of study identification, leaving readers clear about what was done to identify studies, and how the findings of the review are situated in the relevant evidence. Information specialists and review teams appear to work from a shared and tacit model of the literature search process. How this tacit model has developed and evolved is unclear, and it has not been explicitly examined before. The purpose of this review is to determine if a shared model of the literature searching process can be detected across systematic review guidance documents and, if so, how this process is reported in the guidance and supported by published studies. Method A literature review. Two types of literature were reviewed: guidance and published studies. Nine guidance documents were identified, including: The Cochrane and Campbell Handbooks. Published studies were identified through ‘pearl growing’, citation chasing, a search of PubMed using the systematic review methods filter, and the authors’ topic knowledge. The relevant sections within each guidance document were then read and re-read, with the aim of determining key methodological stages. Methodological stages were identified and defined. This data was reviewed to identify agreements and areas of unique guidance between guidance documents. Consensus across multiple guidance documents was used to inform selection of ‘key stages’ in the process of literature searching. Results Eight key stages were determined relating specifically to literature searching in systematic reviews. They were: who should literature search, aims and purpose of literature searching, preparation, the search strategy, searching databases, supplementary searching, managing references and reporting the search process. Conclusions Eight key stages to the process of literature searching in systematic reviews were identified. These key stages are consistently reported in the nine guidance documents, suggesting consensus on the key stages of literature searching, and therefore the process of literature searching as a whole, in systematic reviews. Further research to determine the suitability of using the same process of literature searching for all types of systematic review is indicated.
Safety and efficacy of pitolisant on cataplexy in patients with narcolepsy: a randomised, double-blind, placebo-controlled trial
Histaminergic neurons are crucial to maintain wakefulness, but their role in cataplexy is unknown. We assessed the safety and efficacy of pitolisant, a histamine H3 receptor inverse agonist, for treatment of cataplexy in patients with narcolepsy. For this randomised, double-blind, placebo-controlled trial we recruited patients with narcolepsy from 16 sleep centres in nine countries (Bulgaria, Czech Republic, Hungary, Macedonia, Poland, Russia, Serbia, Turkey, and Ukraine). Patients were eligible if they were aged 18 years or older, diagnosed with narcolepsy with cataplexy according to version two of the International Classification of Sleep Disorders criteria, experienced at least three cataplexies per week, and had excessive daytime sleepiness (defined as an Epworth Sleepiness Scale score ≥12). We used a computer-generated sequence via an interactive web response system to randomly assign patients to receive either pitolisant or placebo once per day (1:1 ratio). Randomisation was done in blocks of four. Participants and investigators were masked to treatment allocation. Treatment lasted for 7 weeks: 3 weeks of flexible dosing decided by investigators according to efficacy and tolerance (5 mg, 10 mg, or 20 mg oral pitolisant), followed by 4 weeks of stable dosing (5 mg, 10 mg, 20 mg, or 40 mg). The primary endpoint was the change in the average number of cataplexy attacks per week as recorded in patient diaries (weekly cataplexy rate [WCR]) between the 2 weeks of baseline and the 4 weeks of stable dosing period. Analysis was by intention to treat. This trial is registered with ClinicalTrials.gov, number NCT01800045. The trial was done between April 19, 2013, and Jan 28, 2015. We screened 117 patients, 106 of whom were randomly assigned to treatment (54 to pitolisant and 52 to placebo) and, after dropout, 54 patients from the pitolisant group and 51 from the placebo group were included in the intention-to-treat analysis. The WCR during the stable dosing period compared with baseline was decreased by 75% (WCR_final=2·27; WCR_baseline=9·15; WCR_final/WCR_baseline=0·25) in patients who received pitolisant and 38% (WCR_final=4·52; WCR_baseline=7·31; WCR_final/WCR_baseline=0·62) in patients who received placebo (rate ratio 0·512; 95% CI 0·43–0·60, p<0·0001). Treatment-related adverse events were significantly more common in the pitolisant group than in the placebo group (15 [28%] of 54 vs 6 [12%] of 51; p=0·048). There were no serious adverse events, but one case of severe nausea in the pitolisant group. The most frequent adverse events in the pitolisant group (headache, irritability, anxiety, and nausea) were mild or moderate except one case of severe nausea. No withdrawal syndrome was detected following pitolisant treatment; one case was detected in the placebo group. Pitolisant was well tolerated and efficacious in reducing cataplexy. If confirmed in long-term studies, pitolisant might constitute a useful first-line therapy for cataplexy in patients with narcolepsy, for whom there are currently few therapeutic options. Bioprojet, France.
Considerations for conducting systematic reviews: evaluating the performance of different methods for de-duplicating references
Background Systematic reviews involve searching multiple bibliographic databases to identify eligible studies. As this type of evidence synthesis is increasingly pursued, the use of various electronic platforms can help researchers improve the efficiency and quality of their research. We examined the accuracy and efficiency of commonly used electronic methods for flagging and removing duplicate references during this process. Methods A heterogeneous sample of references was obtained by conducting a similar topical search in MEDLINE, Embase, Cochrane Central Register of Controlled Trials, and PsycINFO databases. References were de-duplicated via manual abstraction to create a benchmark set. The default settings were then used in Ovid multifile search, EndNote desktop, Mendeley, Zotero, Covidence, and Rayyan to de-duplicate the sample of references independently. Using the benchmark set as reference, the number of false-negative and false-positive duplicate references for each method was identified, and accuracy, sensitivity, and specificity were determined. Results We found that the most accurate methods for identifying duplicate references were Ovid, Covidence, and Rayyan. Ovid and Covidence possessed the highest specificity for identifying duplicate references, while Rayyan demonstrated the highest sensitivity. Conclusion This study reveals the strengths and weaknesses of commonly used de-duplication methods and provides strategies for improving their performance to avoid unintentionally removing eligible studies and introducing bias into systematic reviews. Along with availability, ease-of-use, functionality, and capability, these findings are important to consider when researchers are selecting database platforms and supporting software programs for conducting systematic reviews.
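The accuracy, sensitivity, and specificity figures described in this abstract come from comparing each tool's flagged duplicates against a manually de-duplicated benchmark. A minimal sketch of that scoring, using invented record IDs rather than the study's data, might look like this:

```python
# Illustrative sketch: scoring an automatic de-duplication run against a
# manually curated benchmark set. Record IDs and counts are hypothetical.

def deduplication_metrics(benchmark_duplicates, flagged_duplicates, all_records):
    """Compute accuracy, sensitivity, and specificity of duplicate flagging."""
    benchmark = set(benchmark_duplicates)
    flagged = set(flagged_duplicates)
    records = set(all_records)

    tp = len(benchmark & flagged)            # true duplicates correctly flagged
    fp = len(flagged - benchmark)            # unique records wrongly flagged (risk of losing eligible studies)
    fn = len(benchmark - flagged)            # duplicates the tool missed
    tn = len(records - benchmark - flagged)  # unique records correctly kept

    return {
        "accuracy": (tp + tn) / len(records),
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
    }

# Toy example: 10 records, 3 true duplicates; the tool flags 2 of them plus 1 false positive.
records = [f"ref{i}" for i in range(10)]
print(deduplication_metrics({"ref1", "ref2", "ref3"}, {"ref1", "ref2", "ref9"}, records))
```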
A scoping review on the conduct and reporting of scoping reviews
Background Scoping reviews are used to identify knowledge gaps, set research agendas, and identify implications for decision-making. The conduct and reporting of scoping reviews is inconsistent in the literature. We conducted a scoping review to identify: papers that utilized and/or described scoping review methods; guidelines for reporting scoping reviews; and studies that assessed the quality of reporting of scoping reviews. Methods We searched nine electronic databases for published and unpublished literature: scoping review papers, scoping review methodology, and reporting guidance for scoping reviews. Two independent reviewers screened citations for inclusion. Data abstraction was performed by one reviewer and verified by a second reviewer. Quantitative (e.g. frequencies of methods) and qualitative (i.e. content analysis of the methods) syntheses were conducted. Results After screening 1525 citations and 874 full-text papers, 516 articles were included, of which 494 were scoping reviews. The 494 scoping reviews were disseminated between 1999 and 2014, with 45 % published after 2012. Most of the scoping reviews were conducted in North America (53 %) or Europe (38 %), and reported a public source of funding (64 %). The number of studies included in the scoping reviews ranged from 1 to 2600 (mean of 118). Using the Joanna Briggs Institute methodology guidance for scoping reviews, only 13 % of the scoping reviews reported the use of a protocol, 36 % used two reviewers for selecting citations for inclusion, 29 % used two reviewers for full-text screening, 30 % used two reviewers for data charting, and 43 % used a pre-defined charting form. In most cases, the results of the scoping review were used to identify evidence gaps (85 %), provide recommendations for future research (84 %), or identify strengths and limitations (69 %). We did not identify any guidelines for reporting scoping reviews or studies that assessed the quality of scoping review reporting. Conclusion The number of scoping reviews conducted per year has steadily increased since 2012. Scoping reviews are used to inform research agendas and identify implications for policy or practice. As such, improvements in reporting and conduct are imperative. Further research on scoping review methodology is warranted, and in particular, there is a need for a guideline to standardize reporting.
Optimal database combinations for literature searches in systematic reviews: a prospective exploratory study
Background Within systematic reviews, when searching for relevant references, it is advisable to use multiple databases. However, searching databases is laborious and time-consuming, as the syntax of search strategies is database-specific. We aimed to determine the optimal combination of databases needed to conduct efficient searches in systematic reviews and whether the current practice in published reviews is appropriate. While previous studies determined the coverage of databases, we analyzed the actual retrieval from the original searches for systematic reviews. Methods Since May 2013, the first author prospectively recorded results from systematic review searches that he performed at his institution. PubMed was used to identify published systematic reviews that had used our search strategy results. For each published systematic review, we extracted the references of the included studies. Using the prospectively recorded results and the studies included in the publications, we calculated recall, precision, and number needed to read for single databases and databases in combination. We assessed the frequency at which databases and combinations would achieve varying levels of recall (i.e., 95%). For a sample of 200 recently published systematic reviews, we calculated how many had used enough databases to ensure 95% recall. Results A total of 58 published systematic reviews were included, totaling 1746 relevant references identified by our database searches, while 84 included references had been retrieved by other search methods. Sixteen percent of the included references (291 articles) were found in only a single database; Embase produced the most unique references (n = 132). The combination of Embase, MEDLINE, Web of Science Core Collection, and Google Scholar performed best, achieving an overall recall of 98.3% and 100% recall in 72% of systematic reviews. We estimate that 60% of published systematic reviews do not retrieve 95% of all available relevant references, as many fail to search important databases. Other specialized databases, such as CINAHL or PsycINFO, add unique references to some reviews where the topic of the review is related to the focus of the database. Conclusions Optimal searches in systematic reviews should search at least Embase, MEDLINE, Web of Science, and Google Scholar as a minimum requirement to guarantee adequate and efficient coverage.
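Recall, precision, and number needed to read for a database combination are set calculations over the pooled search results and the references actually included in the published review. The sketch below illustrates the arithmetic with hypothetical reference IDs and database names; it is not the authors' code.

```python
# Illustrative sketch of the recall / precision / number-needed-to-read
# calculation for a combination of database searches. Data are hypothetical.

def combination_performance(retrieved_per_db, included_references, databases):
    """Pool the results of the chosen databases and score them against the
    references actually included in the published review."""
    retrieved = set().union(*(retrieved_per_db[db] for db in databases))
    included = set(included_references)

    found = retrieved & included
    recall = len(found) / len(included)
    precision = len(found) / len(retrieved)
    # Number needed to read: records screened per relevant record found.
    nnr = len(retrieved) / len(found) if found else float("inf")
    return recall, precision, nnr

retrieved_per_db = {
    "Embase":  {"r1", "r2", "r3", "r4", "r7"},
    "MEDLINE": {"r1", "r2", "r5"},
    "WoS":     {"r2", "r6"},
}
included = {"r1", "r2", "r3", "r5", "r6"}
print(combination_performance(retrieved_per_db, included, ["Embase", "MEDLINE", "WoS"]))
```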
Web of Science (WoS) and Scopus: The Titans of Bibliographic Information in Today’s Academic World
Nowadays, the importance of bibliographic databases (DBs) has increased enormously, as they are the main providers of publication metadata and bibliometric indicators universally used both for research assessment practices and for performing daily tasks. Because the reliability of these tasks depends primarily on the data source, all users of the DBs should be able to choose the most suitable one. Web of Science (WoS) and Scopus are the two main bibliographic DBs. A comprehensive evaluation of the DBs’ coverage is practically impossible without extensive bibliometric analyses or literature reviews, but most DB users do not have bibliometric competence and/or are not willing to invest additional time in such evaluations. Apart from that, the convenience of the DB’s interface, its performance, the impact indicators it provides, and its additional tools may also influence the users’ choice. The main goal of this work is to provide all potential users with an all-inclusive description of the two main bibliographic DBs by gathering in one place the findings presented in the most recent literature and the information provided by the owners of the DBs. This overview should aid all stakeholders employing publication and citation data in selecting the most suitable DB.
PubMed coverage varied across specialties and over time: a large-scale study of included studies in Cochrane reviews
PubMed is one of the most commonly used search tools in biomedical and life sciences. Existing studies of database coverage generally conclude that searching PubMed may not be sufficient, although some find that the contributions from other databases are modest at best. However, the generalizability of the studies of the coverage of PubMed is typically restricted. The objective of this study is to analyze the coverage of PubMed across specialties and over time. We use the more than 50,000 included studies in all Cochrane reviews published from 2012 to 2016 as our population and examine whether the studies and resulting publications can be identified in PubMed. The results show that PubMed has a coverage of 70.9%, 95% confidence interval (CI) (68.4, 73.3), of all the included publications and 82.8%, 95% CI (80.9, 84.7), of the included studies. There are huge differences in coverage across and within specialties. In addition, coverage varies within groups over time. Databases used for searching topics within the groups with highly varying or low coverage should be chosen with care, as PubMed may have a relatively low coverage.
• This study presents the results of an analysis of more than 85,000 publications from the 53 Cochrane groups.
• PubMed covers more than 80 percent of all studies and 71 percent of publications included in Cochrane reviews.
• Coverage varies across specialties and within specialties over time.
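A coverage estimate of this kind is simply the proportion of included studies or publications that can be found in PubMed, reported with a 95% confidence interval. The sketch below computes a plain normal-approximation interval for hypothetical counts; the study's own interval may be computed differently (for example, accounting for clustering of studies within reviews), so this is only illustrative.

```python
# Illustrative sketch: 95% confidence interval for a coverage proportion
# using the normal approximation. The counts below are hypothetical.
import math

def coverage_ci(covered, total, z=1.96):
    """Return the coverage proportion and its Wald 95% confidence interval."""
    p = covered / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, (p - half_width, p + half_width)

p, (lower, upper) = coverage_ci(covered=709, total=1000)  # hypothetical counts
print(f"coverage = {p:.1%}, 95% CI ({lower:.1%}, {upper:.1%})")
```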
Increasing number of databases searched in systematic reviews and meta-analyses between 1994 and 2014
The purpose of this study was to determine whether the number of bibliographic databases used to search the health sciences literature in individual systematic reviews (SRs) and meta-analyses (MAs) changed over a twenty-year period related to the official 1995 launch of the Cochrane Database of Systematic Reviews (CDSR). Ovid MEDLINE was searched using a modified version of a strategy developed by the Scottish Intercollegiate Guidelines Network to identify SRs and MAs. Records from 3 milestone years were searched: the year immediately preceding (1994) and 1 (2004) and 2 (2014) decades following the CDSR launch. Records were sorted with randomization software. Abstracts or full texts of the records were examined to identify database usage until 100 relevant records were identified from each of the 3 years. The mean and median number of bibliographic databases searched in 1994, 2004, and 2014 were 1.62 and 1, 3.34 and 3, and 3.73 and 4, respectively. Studies that searched only 1 database decreased over the 3 milestone years (60% in 1994, 28% in 2004, and 10% in 2014). The number of bibliographic databases searched in individual SRs and MAs increased from 1994 to 2014.
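The summary statistics in this study (mean and median number of databases searched per review, and the share of single-database reviews per milestone year) are straightforward to compute once database usage has been abstracted from each record. A toy sketch with invented counts:

```python
# Illustrative sketch: summarising the number of databases searched per review
# for each milestone year. The counts below are invented, not the study's data.
from statistics import mean, median

databases_searched = {
    1994: [1, 1, 1, 2, 1, 3, 1, 2, 1, 4],  # hypothetical sample of reviews
    2004: [3, 2, 4, 3, 1, 5, 3, 2, 4, 3],
    2014: [4, 3, 5, 4, 2, 4, 3, 5, 4, 6],
}

for year, counts in databases_searched.items():
    single_db_share = sum(1 for c in counts if c == 1) / len(counts)
    print(f"{year}: mean={mean(counts):.2f}, median={median(counts)}, "
          f"single-database reviews={single_db_share:.0%}")
```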
The use of purposeful sampling in a qualitative evidence synthesis: A worked example on sexual adjustment to a cancer trajectory
Background An increasing number of qualitative evidence synthesis papers are found in the health care literature. Many of these syntheses use a strictly exhaustive search strategy to collect articles, mirroring the standard template developed by major review organizations such as the Cochrane and Campbell Collaboration. The hegemonic idea behind it is that non-comprehensive samples in systematic reviews may introduce selection bias. However, exhaustive sampling in a qualitative evidence synthesis has been questioned, and a more purposeful way of sampling papers has been proposed as an alternative, although there is a lack of transparency on how these purposeful sampling strategies might be applied to a qualitative evidence synthesis. We discuss in our paper why and how we used purposeful sampling in a qualitative evidence synthesis about ‘sexual adjustment to a cancer trajectory’, by giving a worked example. Methods We chose a mixed purposeful sampling approach, combining three different strategies that we considered the most consistent with our research purpose: intensity sampling, maximum variation sampling and confirming/disconfirming case sampling. Results The concept of purposeful sampling on the meta-level could not readily be borrowed from the logic applied in basic research projects. It also demands a considerable amount of flexibility and is labour-intensive, which goes against the argument of many authors that purposeful sampling provides a pragmatic solution or a shortcut for researchers compared with exhaustive sampling. Opportunities of purposeful sampling were the possible inclusion of new perspectives in the line-of-argument and the enhancement of the theoretical diversity of the papers being included, which could make the results more conceptually aligned with the synthesis purpose. Conclusions This paper helps researchers to make decisions related to purposeful sampling in a more systematic and transparent way. Future research could confirm or disconfirm the hypothesis of conceptual enhancement by comparing the findings of a purposefully sampled qualitative evidence synthesis with those drawing on an exhaustive sample of the literature.
Unreported links between trial registrations and published articles were identified using document similarity measures in a cross-sectional analysis of ClinicalTrials.gov
Trial registries can be used to measure reporting biases and support systematic reviews, but 45% of registrations do not provide a link to the article reporting on the trial. We evaluated the use of document similarity methods to identify unreported links between ClinicalTrials.gov and PubMed. We extracted terms and concepts from a data set of 72,469 ClinicalTrials.gov registrations and 276,307 PubMed articles and tested methods for ranking articles across 16,005 reported links and 90 manually identified unreported links. Performance was measured by the median rank of matching articles and the proportion of unreported links that could be found by screening ranked candidate articles in order. The best-performing concept-based representation produced a median rank of 3 (interquartile range [IQR] 1–21) for reported links and 3 (IQR 1–19) for the manually identified unreported links, and term-based representations produced a median rank of 2 (1–20) for reported links and 2 (IQR 1–12) in unreported links. The matching article was ranked first for 40% of registrations, and screening 50 candidate articles per registration identified 86% of the unreported links. Leveraging the growth in the corpus of reported links between ClinicalTrials.gov and PubMed, we found that document similarity methods can assist in the identification of unreported links between trial registrations and corresponding articles.
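Ranking candidate PubMed articles for a trial registration by document similarity can be approximated with a TF-IDF vector-space model and cosine similarity, as sketched below using scikit-learn. The texts are invented and this is not the study's exact term/concept representation; the evaluation metric reported above is the rank at which the true matching article appears, summarised as a median across registrations.

```python
# Illustrative sketch: rank candidate articles for a trial registration by
# TF-IDF cosine similarity. Texts are invented; this is not the study's exact
# term or concept representation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

registration = "randomised placebo controlled trial of pitolisant for cataplexy in narcolepsy"
candidate_articles = [
    "Safety and efficacy of pitolisant on cataplexy in patients with narcolepsy",
    "Cognitive behavioural therapy for insomnia in adults",
    "Histamine H3 receptor inverse agonists for excessive daytime sleepiness",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([registration] + candidate_articles)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Rank candidates from most to least similar; the rank of the true matching
# article is the quantity evaluated (median rank across registrations).
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, f"{scores[idx]:.3f}", candidate_articles[idx])
```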