Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
1,608 result(s) for "Search Engine - standards"
Did online publishers “get it right”? Using a naturalistic search strategy to review cognitive health promotion content on internet webpages
2017
Background
One of the most common uses of the Internet is to search for health-related information. Although scientific evidence pertaining to cognitive health promotion has expanded rapidly in recent years, it is unclear how much of this information has been made available to Internet users. Thus, the purpose of our study was to assess the reliability and quality of information about cognitive health promotion encountered by typical Internet users.
Methods
To generate a list of relevant search terms employed by Internet users, we entered seed search terms in Google Trends and recorded any terms consistently used in the prior 2 years. To further approximate the behaviour of typical Internet users, we entered each term in Google and sampled the first two relevant results. This search, completed in October 2014, resulted in a sample of 86 webpages, 48 of which had content related to cognitive health promotion. An interdisciplinary team rated the information reliability and quality of these webpages using a standardized measure.
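The seed-term expansion step can be approximated programmatically. The sketch below uses the third-party pytrends client for the unofficial Google Trends API; the seed terms, date window, and the client itself are our assumptions for illustration, not part of the study's protocol.

```python
# Sketch of the seed-term expansion step using pytrends (third-party,
# unofficial Google Trends client). Seed terms and dates are hypothetical.
from pytrends.request import TrendReq

SEED_TERMS = ["memory loss", "brain health"]   # assumed seeds

pytrends = TrendReq(hl="en-US", tz=0)
candidates = {}
for seed in SEED_TERMS:
    # Restrict to the two years preceding the October 2014 search.
    pytrends.build_payload([seed], timeframe="2012-10-01 2014-10-01")
    top = pytrends.related_queries()[seed]["top"]  # DataFrame or None
    if top is not None:
        candidates[seed] = list(top["query"])      # related user queries

print(candidates)  # terms real users consistently searched alongside each seed
```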
Results
We found that information reliability and quality were moderate, on average. Just one retrieved page mentioned best practice, national recommendations, or consensus guidelines by name. Commercial status (i.e., product promotion, advertising content, or non-commercial) was associated with differences in reliability and quality, with product-promoting webpages having the lowest mean ratings.
Conclusions
As efforts to communicate the association between lifestyle and cognitive health continue to expand, we offer these results as a baseline assessment of the reliability and quality of cognitive health promotion on the Internet.
Journal Article
Users choose to engage with more partisan news than they are exposed to on Google Search
by Wilson, Christo; Robertson, Ronald E.; Ruck, Damian J.
in 706/689/112, 706/689/454, 706/689/477/2811
2023
If popular online platforms systematically expose their users to partisan and unreliable news, they could potentially contribute to societal issues such as rising political polarization [1,2]. This concern is central to the ‘echo chamber’ [3–5] and ‘filter bubble’ [6,7] debates, which critique the roles that user choice and algorithmic curation play in guiding users to different online information sources [8–10]. These roles can be measured as exposure, defined as the URLs shown to users by online platforms, and engagement, defined as the URLs selected by users. However, owing to the challenges of obtaining ecologically valid exposure data—what real users were shown during their typical platform use—research in this vein typically relies on engagement data [4,8,11–16] or estimates of hypothetical exposure [17–23]. Studies involving ecological exposure have therefore been rare, and largely limited to social media platforms [7,24], leaving open questions about web search engines. To address these gaps, we conducted a two-wave study pairing surveys with ecologically valid measures of both exposure and engagement on Google Search during the 2018 and 2020 US elections. In both waves, we found more identity-congruent and unreliable news sources in participants’ engagement choices, both within Google Search and overall, than they were exposed to in their Google Search results. These results indicate that exposure to and engagement with partisan or unreliable news on Google Search are driven not primarily by algorithmic curation but by users’ own choices.
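The exposure/engagement contrast at the heart of the study can be expressed in a few lines. The sketch below is a minimal illustration, not the authors' pipeline; it assumes each URL has already been assigned a partisanship score from an external source list, and the example records are hypothetical.

```python
# Minimal sketch: compare the mean partisanship of what a user was shown
# (exposure) with what they clicked (engagement). Scores in [-1, 1] are
# assumed to come from an external news-source rating list.
from statistics import mean

exposure = [                      # (url, partisanship) pairs shown in results
    ("news-a.example/story", -0.2),
    ("news-b.example/story", 0.1),
    ("news-c.example/story", 0.4),
]
engagement = [                    # the subset the user actually selected
    ("news-c.example/story", 0.4),
]

exposure_mean = mean(score for _, score in exposure)      # 0.1
engagement_mean = mean(score for _, score in engagement)  # 0.4

# A positive gap means engagement skewed more partisan than exposure,
# pointing to user choice rather than algorithmic ranking.
print(engagement_mean - exposure_mean)
```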
Journal Article
Artificial intelligence-powered chatbots in search engines: a cross-sectional study on the quality and risks of drug information for patients
by Nicolaus, Hagen F; Sametinger, Sophie Marie; Jung-Poppe, Lea
in Artificial intelligence, Chatbots, Clinical pharmacology
2025
Background
Search engines often serve as a primary resource for patients to obtain drug information. However, the search engine market is rapidly changing due to the introduction of artificial intelligence (AI)-powered chatbots. The consequences for medication safety when patients interact with chatbots remain largely unexplored.
Objective
To explore the quality and potential safety concerns of answers provided by an AI-powered chatbot integrated within a search engine.
Methodology
Bing copilot was queried on 10 frequently asked patient questions regarding the 50 most prescribed drugs in the US outpatient market. Patient questions covered drug indications, mechanisms of action, instructions for use, adverse drug reactions and contraindications. Readability of chatbot answers was assessed using the Flesch Reading Ease Score. Completeness and accuracy were evaluated based on corresponding patient drug information in the pharmaceutical encyclopaedia drugs.com. On a preselected subset of inaccurate chatbot answers, healthcare professionals evaluated the likelihood and extent of possible harm if patients follow the chatbot’s recommendations.
Results
Across the 500 generated chatbot answers, overall readability implied that responses were difficult to read according to the Flesch Reading Ease Score. Overall median completeness and accuracy of chatbot answers were 100.0% (IQR 50.0–100.0%) and 100.0% (IQR 88.1–100.0%), respectively. Of the subset of 20 chatbot answers, experts found 66% (95% CI 50% to 85%) to be potentially harmful; 42% (95% CI 25% to 60%) were found to potentially cause moderate or mild harm, and 22% (95% CI 10% to 40%) to cause severe harm or even death if patients follow the chatbot’s advice.
Conclusions
AI-powered chatbots are capable of providing overall complete and accurate patient drug information. Yet experts deemed a considerable number of answers incorrect or potentially harmful. Furthermore, the complexity of chatbot answers may limit patient understanding. Hence, healthcare professionals should be cautious in recommending AI-powered search engines until more precise and reliable alternatives are available.
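For reference, the Flesch Reading Ease Score used in the methodology above is the standard readability formula (higher scores mean easier text; scores below about 50 are generally rated difficult, college-level material):

\[
\mathrm{FRES} = 206.835 \;-\; 1.015\,\frac{\text{total words}}{\text{total sentences}} \;-\; 84.6\,\frac{\text{total syllables}}{\text{total words}}
\]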
Journal Article
PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews
by Cheraghi-Sohi, Sudeh; Campbell, Stephen; Methley, Abigail M
in Analysis, Animals, Care and treatment
2014
Background
Qualitative systematic reviews are increasing in popularity in evidence based health care. Difficulties have been reported in conducting literature searches of qualitative research using the PICO search tool. An alternative search tool, entitled SPIDER, was recently developed for more effective searching of qualitative research, but remained untested beyond its development team.
Methods
In this article we tested the ‘SPIDER’ search tool in a systematic narrative review of qualitative literature investigating the health care experiences of people with Multiple Sclerosis. Identical search terms were combined into the PICO or SPIDER search tool and compared across Ovid MEDLINE, Ovid EMBASE and EBSCO CINAHL Plus databases. In addition, we added to this method by comparing initial SPIDER and PICO tools to a modified version of PICO with added qualitative search terms (PICOS).
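To make the comparison concrete, the sketch below spells out the term categories each tool combines with Boolean AND. It illustrates the tools' structure only; the study's actual search strings and qualitative filter terms differ, and the Python representation is ours.

```python
# Term categories combined (AND) by each search tool. Illustrative only:
# the study's actual search strings and qualitative filter terms differ.
TOOLS = {
    "PICO":   ["Population", "Intervention", "Comparison", "Outcome"],
    # PICOS here = PICO plus added qualitative study-design terms
    "PICOS":  ["Population", "Intervention", "Comparison", "Outcome",
               "Study design (qualitative terms)"],
    "SPIDER": ["Sample", "Phenomenon of Interest", "Design",
               "Evaluation", "Research type"],
}

# Build a skeleton Boolean query for one tool.
query = " AND ".join(f"({category})" for category in TOOLS["SPIDER"])
print(query)
# (Sample) AND (Phenomenon of Interest) AND (Design) AND (Evaluation) AND (Research type)
```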
Results
The PICO searches returned a greater number of hits, and had greater sensitivity, than the SPIDER searches. The SPIDER searches showed the greatest specificity for every database. The modified PICO (PICOS) demonstrated equal or higher sensitivity and equal or lower specificity than the SPIDER searches, and lower sensitivity and greater specificity than the PICO searches.
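For orientation, sensitivity and specificity are conventionally defined in the search-filter literature as below (the paper's own operational definitions may differ in detail); a highly sensitive search misses few relevant papers, while a highly specific one retrieves fewer irrelevant records to screen.

\[
\text{sensitivity} = \frac{\text{relevant records retrieved}}{\text{all relevant records}},
\qquad
\text{specificity} = \frac{\text{irrelevant records not retrieved}}{\text{all irrelevant records}}
\]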
Conclusions
The recommendation for practice is therefore to use the PICO tool for a fully comprehensive search and the PICOS tool where time and resources are limited. Based on these limited findings, the SPIDER tool is not recommended because of the risk of missing relevant papers, although its greater specificity gives it potential.
Journal Article
Development and validation of a search filter to identify equity-focused studies: reducing the number needed to screen
by Prady, Stephanie L; Uphoff, Eleonora P; Power, Madeleine
in Databases, Bibliographic - standards; Databases, Bibliographic - statistics & numerical data; Diabetes
2018
Background
Health inequalities (worse health associated with social and economic disadvantage) are reported by only a minority of research articles. Locating these studies when conducting an equity-focused systematic review is challenging because of non-standardised terminology, inconsistent indexing, and a lack of validated search filters. Current reporting guidelines recommend not applying filters, meaning that increased resources are needed at the screening stage.
Methods
We aimed to design and test search filters to locate studies that reported outcomes by a social determinant of health. We developed and expanded a ‘specific terms strategy’ using keywords and subject headings compiled from recent systematic reviews that applied an equity filter. A ‘non-specific strategy’ was compiled from phrases used to describe equity analyses that were reported in titles and abstracts, and related subject headings. Gold standard evaluation and validation sets were compiled. The filters were developed in MEDLINE, adapted for Embase and tested in both. We set a target of 0.90 sensitivity (95% CI; 0.84, 0.94) in retrieving 150 gold standard validation papers. We noted the reduction in the number needed to screen in a proposed equity-focused systematic review and the proportion of equity-focused reviews we assessed in the project that applied an equity filter to their search strategy.
Results
The specific terms strategy filtered out 93-95% of all records, and retrieved a validation set of articles with a sensitivity of 0.84 in MEDLINE (0.77, 0.89), and 0.87 (0.81, 0.92) in Embase. When combined (Boolean ‘OR’) with the non-specific strategy sensitivity was 0.92 (0.86, 0.96) in MEDLINE (Embase 0.94; 0.89, 0.97). The number needed to screen was reduced by 77% by applying the specific terms strategy, and by 59.7% (MEDLINE) and 63.5% (Embase) by applying the combined strategy. Eighty-one per cent of systematic reviews filtered studies by equity.
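As a worked illustration of the validation arithmetic reported above (the record counts below are hypothetical stand-ins, chosen only to reproduce the abstract's 0.84 sensitivity and 77% reduction figures):

```python
# Hypothetical counts illustrating the validation arithmetic; only the
# 150-paper validation set and the resulting ratios come from the abstract.
GOLD_STANDARD = 150              # validation papers the filter should find
retrieved_gold = 126             # assumed: gold papers actually retrieved
sensitivity = retrieved_gold / GOLD_STANDARD          # = 0.84 (MEDLINE)

records_without_filter = 10_000  # assumed screening workload, unfiltered
records_with_filter = 2_300      # assumed records remaining after the filter
nns_reduction = 1 - records_with_filter / records_without_filter  # = 0.77

print(f"sensitivity = {sensitivity:.2f}, NNS reduced by {nns_reduction:.0%}")
```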
Conclusions
A combined approach using both specific and non-specific terms is recommended if systematic reviewers wish to filter studies that report outcomes by social determinants. Future research should concentrate on standardising the indexing of equity studies and on further developing and testing both specific and non-specific terms for accurate study retrieval.
Journal Article
Audit AI search tools now, before they skew research
2023
Generative AI could be a boon for literature search, but only if independent groups scrutinize its biases and limitations.
Journal Article
AI science search engines are exploding in number — are they any good?
2023
Several search tools claim to help researchers do science.
Journal Article
Comparison of New Era’s Education Platforms, YouTube® and WebSurg®, in Sleeve Gastrectomy
2019
Introduction
The Internet is a widely used resource for obtaining medical information. However, the quality of information on online platforms is still debated. The goal of this quality-controlled WebSurg®- and YouTube®-based study was to compare the two online video platforms in terms of the accuracy and quality of information in sleeve gastrectomy videos.
Methods
The most viewed (popular) videos returned by the YouTube® search engine for the keyword “sleeve gastrectomy” were included in the study. The educational accuracy and quality of the videos were evaluated according to established scoring systems, and a novel scoring system measured technical quality. The ten most viewed (popular) WebSurg® videos for the keyword “sleeve gastrectomy” were compared with the ten YouTube® videos with the highest educational/technical scores.
Results
Educational accuracy and quality scores of the WebSurg® videos were significantly higher than those of the most viewed YouTube® videos (p < 0.05), whereas no significant difference was found between the ten YouTube® videos with the highest technical ratings and the WebSurg® videos (p = 0.481).
Conclusions
WebSurg® videos, which pass through a review process and are mostly prepared by academics, remained below the expected quality. The main limitation of both WebSurg® and YouTube® is the lack of information on preoperative and postoperative processes.
Journal Article