Catalogue Search | MBRL
Explore the vast range of titles available.
125,315 result(s) for "checks"
Blind recognition of sparse parity‐check matrices of low‐density parity‐check codes in the presence of noise
2023
This paper studies the blind recognition of the sparse parity‐check matrices of low‐density parity‐check (LDPC) codes in noncooperative communication, which is critical to the reverse analysis of communication protocols that use LDPC codes. Two improvements are made to the algorithm of Liu Qian et al. (2021) for this problem. First, a Gaussian elimination method based on random column exchange and soft information is proposed to improve the fault tolerance of the elimination process. Second, exploiting the sparsity of the parity‐check matrices of LDPC codes, a random extraction method is proposed to further improve the fault tolerance of the algorithm, and this is verified theoretically. Finally, simulations confirm the superior performance of the proposed algorithm.
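The elimination step at the heart of this approach can be sketched in a simplified, noiseless setting (an assumption for illustration; the paper's algorithm additionally uses soft information and random column exchange to tolerate noise): stack received codewords as rows of a binary matrix and compute a basis of its GF(2) null space, whose vectors are candidate parity-check rows.

```python
# Noiseless sketch: Gaussian elimination over GF(2) to recover candidate
# parity-check rows as the null space of a matrix of received codewords.
def gf2_null_space(rows, n):
    """Return a basis of the GF(2) null space of the row space of `rows`."""
    mat = [r[:] for r in rows]
    pivots = []
    r = 0
    for c in range(n):
        # Find a pivot in column c at or below row r.
        pr = next((i for i in range(r, len(mat)) if mat[i][c]), None)
        if pr is None:
            continue
        mat[r], mat[pr] = mat[pr], mat[r]
        # Eliminate column c from every other row (XOR = addition mod 2).
        for i in range(len(mat)):
            if i != r and mat[i][c]:
                mat[i] = [a ^ b for a, b in zip(mat[i], mat[r])]
        pivots.append(c)
        r += 1
    # Each free column yields one null-space basis vector.
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:
        v = [0] * n
        v[f] = 1
        for row, p in zip(mat, pivots):
            if row[f]:
                v[p] = 1
        basis.append(v)
    return basis

# Example: codewords of the (3,1) repetition code span {000, 111};
# every recovered vector h satisfies h . c = 0 (mod 2) for all codewords.
codewords = [[1, 1, 1], [0, 0, 0], [1, 1, 1]]
for h in gf2_null_space(codewords, 3):
    assert all(sum(a * b for a, b in zip(h, c)) % 2 == 0 for c in codewords)
```

In the noisy setting studied by the paper, received bits may be flipped, which is why the hard XOR elimination above is replaced by soft-information variants with randomised column exchange.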
Journal Article
Are fewer people living in poverty in the UK than 10 years ago?
by
Limb, Matthew
in
FACT CHECK
2022
Journal Article
CONSORT 2025 statement: updated guideline for reporting randomised trials
2025
Well designed and properly executed randomised trials are considered the most reliable evidence on the benefits of healthcare interventions. However, there is overwhelming evidence that the quality of reporting is not optimal. The CONSORT (Consolidated Standards of Reporting Trials) statement was designed to improve the quality of reporting and provides a minimum set of items to be included in a report of a randomised trial. CONSORT was first published in 1996, then updated in 2001 and 2010. Here, we present the updated CONSORT 2025 statement, which aims to account for recent methodological advancements and feedback from end users. We conducted a scoping review of the literature and developed a project-specific database of empirical and theoretical evidence related to CONSORT, to generate a list of potential changes to the checklist. The list was enriched with recommendations provided by the lead authors of existing CONSORT extensions (Harms, Outcomes, Non-pharmacological Treatment), other related reporting guidelines (TIDieR) and recommendations from other sources (eg, personal communications). The list of potential changes to the checklist was assessed in a large, international, online, three-round Delphi survey involving 317 participants and discussed at a two-day online expert consensus meeting of 30 invited international experts. We have made substantive changes to the CONSORT checklist. We added seven new checklist items, revised three items, deleted one item, and integrated several items from key CONSORT extensions. We also restructured the CONSORT checklist, with a new section on open science. The CONSORT 2025 statement consists of a 30-item checklist of essential items that should be included when reporting the results of a randomised trial and a diagram for documenting the flow of participants through the trial. 
To facilitate implementation of CONSORT 2025, we have also developed an expanded version of the CONSORT 2025 checklist, with bullet points eliciting critical elements of each item. Authors, editors, reviewers, and other potential users should use CONSORT 2025 when writing and evaluating manuscripts of randomised trials to ensure that trial reports are clear and transparent.
Journal Article
TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods
by
Boulesteix, Anne-Laure
,
Denniston, Alastair K
,
Lam, Emily
in
Artificial intelligence
,
Calibration
,
Check lists
2024
The TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) statement was published in 2015 to provide the minimum reporting recommendations for studies developing or evaluating the performance of a prediction model. Methodological advances in the field of prediction have since included the widespread use of artificial intelligence (AI) powered by machine learning methods to develop prediction models. An update to the TRIPOD statement is thus needed. TRIPOD+AI provides harmonised guidance for reporting prediction model studies, irrespective of whether regression modelling or machine learning methods have been used. The new checklist supersedes the TRIPOD 2015 checklist, which should no longer be used. This article describes the development of TRIPOD+AI and presents the expanded 27-item checklist with more detailed explanation of each reporting recommendation, and the TRIPOD+AI for Abstracts checklist. TRIPOD+AI aims to promote the complete, accurate, and transparent reporting of studies that develop a prediction model or evaluate its performance. Complete reporting will facilitate study appraisal, model evaluation, and model implementation.
Journal Article
CONSORT 2025 explanation and elaboration: updated guideline for reporting randomised trials
2025
Critical appraisal of the quality of randomised trials is possible only if their design, conduct, analysis, and results are completely and accurately reported. Without transparent reporting of the methods and results, readers will not be able to fully evaluate the reliability and validity of trial findings. The CONSORT (Consolidated Standards of Reporting Trials) statement aims to improve the quality of reporting and provides a minimum set of items to be included in a report of a randomised trial. CONSORT was first published in 1996 and was updated in 2001 and 2010. CONSORT comprises a checklist of essential items that should be included in reports of randomised trials and a diagram for documenting the flow of participants through a trial. The CONSORT statement has been updated (CONSORT 2025) to reflect recent methodological advancements and feedback from end users, ensuring that it remains fit for purpose. Here, we present the updated CONSORT explanation and elaboration document, which has been extensively revised and describes the rationale and scientific background for each CONSORT 2025 checklist item and provides published examples of good reporting. The objective is to enhance the use, understanding, and dissemination of CONSORT 2025 and provide guidance to authors about how to improve the reporting of their trials and ensure trial reports are complete and transparent.
Journal Article
Assumption-checking rather than (just) testing: The importance of visualization and effect size in statistical diagnostics
2024
Statistical methods generally have assumptions (e.g., normality in linear regression models). Violations of these assumptions can cause various issues, like statistical errors and biased estimates, whose impact can range from inconsequential to critical. Accordingly, it is important to check these assumptions, but this is often done in a flawed way. Here, I first present a prevalent but problematic approach to diagnostics: testing assumptions using null hypothesis significance tests (e.g., the Shapiro–Wilk test of normality). Then, I consolidate and illustrate the issues with this approach, primarily using simulations. These issues include statistical errors (i.e., false positives, especially with large samples, and false negatives, especially with small samples), false binarity, limited descriptiveness, misinterpretation (e.g., of the p-value as an effect size), and potential testing failure due to unmet test assumptions. Finally, I synthesize the implications of these issues for statistical diagnostics, and provide practical recommendations for improving such diagnostics. Key recommendations include maintaining awareness of the issues with assumption tests (while recognizing they can be useful), using appropriate combinations of diagnostic methods (including visualization and effect sizes) while recognizing their limitations, and distinguishing between testing and checking assumptions. Additional recommendations include judging assumption violations as a complex spectrum (rather than a simplistic binary), using programmatic tools that increase replicability and decrease researcher degrees of freedom, and sharing the material and rationale involved in the diagnostics.
Journal Article
CONSORT 2025 statement: updated guideline for reporting randomised trials
by
Aggarwal, Rakesh
,
Ioannidis, John P A
,
Lamb, Sarah E
in
Check lists
,
Checklist - standards
,
Delphi Technique
2025
Background: Well designed and properly executed randomised trials are considered the most reliable evidence on the benefits of healthcare interventions. However, there is overwhelming evidence that the quality of reporting is not optimal. The CONSORT (Consolidated Standards of Reporting Trials) statement was designed to improve the quality of reporting and provides a minimum set of items to be included in a report of a randomised trial. CONSORT was first published in 1996, then updated in 2001 and 2010. Here, we present the updated CONSORT 2025 statement, which aims to account for recent methodological advancements and feedback from end users.
Methods: We conducted a scoping review of the literature and developed a project-specific database of empirical and theoretical evidence related to CONSORT, to generate a list of potential changes to the checklist. The list was enriched with recommendations provided by the lead authors of existing CONSORT extensions (Harms, Outcomes, Non-pharmacological Treatment), other related reporting guidelines (TIDieR) and recommendations from other sources (eg, personal communications). The list of potential changes to the checklist was assessed in a large, international, online, three-round Delphi survey involving 317 participants and discussed at a two-day online expert consensus meeting of 30 invited international experts.
Results: We have made substantive changes to the CONSORT checklist. We added seven new checklist items, revised three items, deleted one item, and integrated several items from key CONSORT extensions. We also restructured the CONSORT checklist, with a new section on open science. The CONSORT 2025 statement consists of a 30-item checklist of essential items that should be included when reporting the results of a randomised trial and a diagram for documenting the flow of participants through the trial. To facilitate implementation of CONSORT 2025, we have also developed an expanded version of the CONSORT 2025 checklist, with bullet points eliciting critical elements of each item.
Conclusion: Authors, editors, reviewers, and other potential users should use CONSORT 2025 when writing and evaluating manuscripts of randomised trials to ensure that trial reports are clear and transparent.
Journal Article
ERA-20C
by
Berrisford, Paul
,
Bonavita, Massimo
,
Hólm, Elías V.
in
20th century
,
Air temperature
,
Archives & records
2016
The ECMWF twentieth century reanalysis (ERA-20C; 1900–2010) assimilates surface pressure and marine wind observations. The reanalysis is single-member, and the background errors are spatiotemporally varying, derived from an ensemble. The atmospheric general circulation model uses the same configuration as the control member of the ERA-20CM ensemble, forced by observationally based analyses of sea surface temperature, sea ice cover, atmospheric composition changes, and solar forcing. The resulting climate trend estimations resemble ERA-20CM for temperature and the water cycle. The ERA-20C water cycle features stable precipitation minus evaporation global averages and no spurious jumps or trends. The assimilation of observations adds realism on synoptic time scales as compared to ERA-20CM in regions that are sufficiently well observed. Compared with nighttime ship observations, ERA-20C air temperatures are 1 K colder. Generally, the synoptic quality of the product and the agreement in terms of climate indices with other products improve with the availability of observations. The MJO mean amplitude in ERA-20C is larger than in 20CR version 2c throughout the century, and in agreement with other reanalyses such as JRA-55. A novelty in ERA-20C is the availability of observation feedback information. As shown, this information can help assess the product's quality on selected time scales and regions.
Journal Article
Research on flow force of check valve for space station based on the theory of flow around motion
2025
The check valve is an important component of the fluid drive unit used in the Chinese space station (CSS). At the minimum working flow rate, the valve core of the check valve must overcome the spring force to open fully and maintain the fully open state, ensuring high reliability during operation of the check valve. A force and parametric model of the valve core and spring of the check valve is established, a mathematical model is derived, and a flow rate test is carried out. Test results match theoretical results well, with a low error of 5.50%. The effectiveness of the theory is thus confirmed, which can provide design references for valve cores and springs.
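The design condition described in the abstract can be sketched with a generic force balance (an assumed simplification; the paper's own parametric model is not reproduced here): the valve stays fully open when the steady flow force on the valve core at the minimum working flow rate is at least the spring force at full travel.

```python
# Hypothetical force balance for a spring-loaded check valve core.
# All parameter names and numbers below are illustrative assumptions,
# not values from the paper.
def fully_open(q, rho, area, k, x_preload, x_travel):
    """True if the flow force holds the valve core fully open at flow rate q."""
    flow_force = rho * q ** 2 / area            # momentum-flux estimate, N
    spring_force = k * (x_preload + x_travel)   # Hooke's law at full travel, N
    return flow_force >= spring_force

# Illustrative numbers: water, a small seat area, a stiff return spring.
rho = 1000.0       # kg/m^3, fluid density
area = 5.0e-5      # m^2, effective flow area at the seat
k = 2000.0         # N/m, spring stiffness
x_preload = 0.002  # m, spring pre-compression
x_travel = 0.003   # m, full valve-core travel

print(fully_open(1.0e-3, rho, area, k, x_preload, x_travel))  # 1 L/s
print(fully_open(5.0e-4, rho, area, k, x_preload, x_travel))  # 0.5 L/s
```

With these made-up numbers the spring force at full travel is 10 N, so a 1 L/s flow (about 20 N of flow force) holds the valve fully open while 0.5 L/s (about 5 N) does not; the paper's contribution is deriving and experimentally validating a much more detailed version of this balance.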
Journal Article
Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI
by
Wheatstone, Peter
,
Kader, Rawen
,
Higham, Janet
in
Accuracy
,
Artificial Intelligence
,
Check lists
2022
A growing number of artificial intelligence (AI)-based clinical decision support systems are showing promising performance in preclinical, in silico evaluation, but few have yet demonstrated real benefit to patient care. Early stage clinical evaluation is important to assess an AI system's actual clinical performance at small scale, ensure its safety, evaluate the human factors surrounding its use, and pave the way to further large scale trials. However, the reporting of these early studies remains inadequate. The present statement provides a multistakeholder, consensus-based reporting guideline for the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI). We conducted a two round, modified Delphi process to collect and analyse expert opinion on the reporting of early clinical evaluation of AI systems. Experts were recruited from 20 predefined stakeholder categories. The final composition and wording of the guideline was determined at a virtual consensus meeting. The checklist and the Explanation & Elaboration (E&E) sections were refined based on feedback from a qualitative evaluation process. 123 experts participated in the first round of Delphi, 138 in the second, 16 in the consensus meeting, and 16 in the qualitative evaluation. The DECIDE-AI reporting guideline comprises 17 AI specific reporting items (made of 28 subitems) and 10 generic reporting items, with an E&E paragraph provided for each. Through consultation and consensus with a range of stakeholders, we have developed a guideline comprising key items that should be reported in early stage clinical studies of AI-based decision support systems in healthcare. By providing an actionable checklist of minimal reporting items, the DECIDE-AI guideline will facilitate the appraisal of these studies and replicability of their findings.
Journal Article