Catalogue Search | MBRL
Explore the vast range of titles available.
42 result(s) for "Frequentist approach"
Bayesian versus Frequentist approaches in Psychometrics: a bibliometric analysis
by Zagaria, Andrea; Lombardi, Luigi
in Bayesian analysis, Bayesian method, Behavioral Science and Psychology
2024
The increasing popularity of the Bayesian approach in Psychology has prompted metascientific efforts to quantify its prevalence. However, despite enduring debates between proponents of the Frequentist and Bayesian schools of thought, no systematic comparison of their prominence has been conducted in the existing literature. This brief report fills this gap, examining Bayesian and Frequentist trends from 1964 to 2023 through a meticulous search in PsycINFO. The findings reveal that the Frequentist approach has consistently been more popular than the Bayesian approach in the realm of Psychometrics and Statistical Psychology. However, Bayesian contributions increased steadily from the 1980s onward and appear to be almost as important as, or even to surpass, their Frequentist counterparts in the latest years investigated (2019–2023). Although this observation applies primarily to specialized literature rather than the entire domain of Psychology, it underscores the growing prevalence of the Bayesian approach, signaling increased attention among specialists in the field.
Journal Article
Impact of Model Choice on LR Assessment in Case of Rare Haplotype Match (Frequentist Approach)
2017
The likelihood ratio (LR) measures the relative weight of forensic data regarding two hypotheses. Several levels of uncertainty arise if frequentist methods are chosen for its assessment: the assumed population model only approximates the true one, and its parameters are estimated from a limited database. Moreover, it may be wise to discard part of the data, especially data only indirectly related to the hypotheses. Different reductions define different LRs. Therefore, it is more sensible to talk about 'a' LR instead of 'the' LR, and the error involved in the estimation should be quantified. Two frequentist methods are proposed in the light of these points for the 'rare type match problem', that is, when a match between the perpetrator's and the suspect's DNA profiles, never observed before in the reference database, is to be evaluated.
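Purely as an illustration (not the authors' proposal), the sketch below shows the core difficulty of the rare-type-match problem: with zero prior observations of the matching haplotype, the plug-in frequency estimate is zero, and one conservative frequentist device is to replace it with an exact upper confidence bound. The database size and error level are assumed values.

```python
# Illustrative sketch only: conservative frequentist LR for a haplotype
# never observed in the reference database (assumed numbers throughout).
n = 250          # assumed reference database size
k = 0            # observations of the matching haplotype in the database
alpha = 0.05

# Naive LR = 1 / estimated frequency breaks down when k = 0:
# the plug-in frequency is 0 and the LR is unbounded.
p_hat = k / n

# Conservative device: use the exact one-sided upper (1 - alpha) binomial
# confidence bound for a proportion with 0 successes in n trials,
# which solves (1 - p)**n = alpha.
p_upper = 1 - alpha ** (1 / n)
lr_conservative = 1 / p_upper

print(f"plug-in frequency: {p_hat}")
print(f"upper 95% bound on frequency: {p_upper:.5f}")
print(f"conservative LR: {lr_conservative:.1f}")
```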
Journal Article
Statistical hypothesis testing and common misinterpretations: Should we abandon p-value in forensic science applications?
2016
• The inferential problem: frequentist and Bayesian schools of thought.
• Opposing views on hypothesis testing.
• An editorial decision to reject any paper containing null hypothesis testing procedures.
• Relevant limitations of the p-value in the forensic context.
• Should we abandon the p-value in forensic science applications?
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses.
Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debate in the literature. More recently, a controversial discussion was initiated by the editorial decision of a scientific journal [1] to refuse any paper submitted for publication that contains null hypothesis testing procedures. Since the large majority of papers published in forensic journals evaluate statistical evidence based on so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices.
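For readers who want a concrete anchor for the frequentist/Bayesian contrast sketched above, the following minimal example (not taken from the paper) computes a two-sided binomial p-value and a simple Bayes factor for the same assumed data, with a uniform prior under the alternative.

```python
import numpy as np
from scipy.stats import binomtest
from scipy.special import betaln

# Assumed toy data: k successes out of n trials, point null p0 = 0.5.
k, n, p0 = 61, 100, 0.5

# Frequentist: exact two-sided binomial test p-value.
p_value = binomtest(k, n, p0).pvalue

# Bayesian: Bayes factor of H0 (p = p0) against H1 (p ~ Uniform(0, 1)).
# The binomial coefficient cancels, so only the likelihood kernels are needed.
log_m0 = k * np.log(p0) + (n - k) * np.log(1 - p0)   # likelihood under H0
log_m1 = betaln(k + 1, n - k + 1)                    # marginal likelihood under H1
bf01 = np.exp(log_m0 - log_m1)

print(f"two-sided p-value: {p_value:.4f}")
print(f"Bayes factor BF01: {bf01:.3f}")
```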
Journal Article
Local Management of Anogenital Warts in Non-Immunocompromised Adults: A Network Meta-Analysis of Randomized Controlled Trials
by Huiart, Laetitia; Derancourt, Christian; Bertolotti, Antoine
in Adults, Anogenital warts, Care and treatment
2020
Introduction
No hierarchy of first-line treatments for anogenital warts (AGWs) is provided in international guidelines. This study aimed to determine the efficacy of topical treatments and ablative procedures for the management of AGWs.
Methods
Twelve electronic databases were systematically searched from inception to August 2018. All randomized controlled trials (RCTs) comparing immunocompetent adults with AGWs who received at least 1 provider-administered or patient-administered treatment in at least 1 parallel group were included. Risk of bias assessment followed the Cochrane Handbook. The study endpoint was complete lesion response after clearance and recurrence assessment. A network meta-analysis was performed.
Results
A network geometry was constructed based on 49 of the 70 RCTs included in our systematic review. All but 4 RCTs had a high risk of bias. The most efficacious treatments compared to placebo were surgery (RR 10.54; 95% CI 4.53–24.52), ablative therapy + imiquimod (RR 7.52; 95% CI 4.53–24.52), and electrosurgery (RR 7.10; 95% CI 3.47–14.53). SUCRA values confirmed the superiority of surgery (90.9%), ablative therapy + imiquimod (79.8%), and electrosurgery (77.1%). The most efficacious patient-administered treatments were podophyllotoxin 0.5% solution (63.5%) and podophyllotoxin 0.5% cream (62.2%).
Conclusions
Although most included RCTs provided only low-level evidence, surgery and electrosurgery were superior to other treatments after clearance and recurrence assessment. Podophyllotoxin 0.5% was the most efficacious patient-administered treatment. Combined therapies should be evaluated in future RCTs in view of their identified effectiveness. Future RCTs should systematically report the clinical type, number and location of AGWs, and the sex of the patient, to refine therapeutic indications.
Protocol Registration
PROSPERO-CRD42015025827
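As a generic, hedged reminder of how risk ratios and 95% confidence intervals like those in the Results above are obtained from a single two-arm trial (the counts below are assumed for illustration, not data from this review):

```python
import numpy as np
from scipy.stats import norm

# Assumed illustrative counts: events / total in treatment and control arms.
e_t, n_t = 45, 60     # treatment arm
e_c, n_c = 12, 58     # control arm

rr = (e_t / n_t) / (e_c / n_c)

# Large-sample 95% CI on the log risk ratio (standard log method).
se_log_rr = np.sqrt(1/e_t - 1/n_t + 1/e_c - 1/n_c)
z = norm.ppf(0.975)
ci = np.exp(np.log(rr) + np.array([-z, z]) * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```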
Journal Article
Causal inference and adjustment for reference-arm risk in indirect treatment comparison meta-analysis
2020
To illustrate that bias associated with indirect treatment comparison and network meta-analyses can be reduced by adjusting for outcomes on common reference arms.
Approaches to adjusting for reference-arm effects are presented within a causal inference framework. Bayesian and Frequentist approaches are applied to three real data examples.
Reference-arm adjustment can significantly impact estimated treatment differences, improve model fit, and align indirectly estimated treatment effects with those observed in randomized trials. It can even reverse the direction of estimated treatment effects.
Accumulating theoretical and empirical evidence underscores the importance of adjusting for reference-arm outcomes in indirect treatment comparison and network meta-analyses to make full use of the data and to reduce the risk of bias in estimated treatment effects.
Indirect treatment comparisons (ITCs) and network meta-analyses (NMAs) can help decision makers compare therapies that lack head-to-head randomized trials. However, these estimates are vulnerable to biases due to cross-trial differences in patient characteristics and other factors. In this study, we outline methods to reduce biases associated with ITC/NMA and apply them to three real-world examples (antiretroviral therapy for human immunodeficiency virus, treatments for Type 2 diabetes and biological treatments for psoriasis). Our results show that reference-arm adjustment can have a significant impact on indirectly estimated treatment effects and can improve consistency between indirect evidence and gold-standard evidence from randomized trials. ITC and NMA without reference-arm adjustment present an avoidable risk of misleading or biased treatment effects. We argue that reference-arm adjustment should always be considered and reported when feasible in ITC and NMA.
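The sketch below is not the authors' reference-arm adjustment; it shows the standard Bucher adjusted indirect comparison that such methods extend, using assumed log risk ratios against a common comparator B.

```python
import numpy as np
from scipy.stats import norm

# Assumed trial-level summaries: log risk ratios versus a common reference arm B.
log_rr_AB, se_AB = -0.40, 0.15    # treatment A vs B
log_rr_CB, se_CB = -0.10, 0.18    # treatment C vs B

# Bucher adjusted indirect comparison: A vs C through the common arm B.
log_rr_AC = log_rr_AB - log_rr_CB
se_AC = np.sqrt(se_AB**2 + se_CB**2)

z = norm.ppf(0.975)
ci = np.exp(log_rr_AC + np.array([-z, z]) * se_AC)
print(f"indirect RR (A vs C): {np.exp(log_rr_AC):.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```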
Journal Article
Bayesian estimation of the Modified Omori Law parameters for the Iranian Plateau
by Smirnov, V. B.; Ommi, S.; Zafarani, H.
in Bayesian analysis, Earth and Environmental Science, Earth Sciences
2016
The forecasting of large aftershocks is a preliminary and critical step in seismic hazard analysis and seismic risk management. From a statistical point of view, it relies entirely on the estimation of the properties of aftershock sequences using a set of laws with well-defined parameters. Since the frequentist and Bayesian approaches are common tools to assess these parameter values, we compare the two approaches for the Modified Omori Law and a selection of mainshock–aftershock sequences in the Iranian Plateau. There is a general agreement between the two methods, but the Bayesian approach appears to be more efficient as the number of recorded aftershocks decreases. Taking into account temporal variations of the b-value (the slope of the frequency-size distribution), the probability of occurrence of a strong aftershock or a larger main shock has been calculated in a finite time window using the parameters of the Modified Omori Law observed in the Iranian Plateau.
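The Modified Omori Law models the aftershock rate as lambda(t) = K / (t + c)^p. The sketch below is a minimal frequentist (maximum-likelihood) fit for assumed occurrence times; the Bayesian estimation discussed in the paper would instead place priors on K, c and p.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed aftershock occurrence times (days after the mainshock), illustrative only.
t = np.array([0.1, 0.2, 0.3, 0.5, 0.8, 1.2, 1.9, 2.5, 3.4, 4.8,
              6.1, 8.0, 10.5, 13.0, 17.2, 22.0, 28.5, 35.0])
T = 40.0   # observation window in days

def neg_log_lik(params):
    """Negative log-likelihood of an inhomogeneous Poisson process with
    Modified Omori rate lam(t) = K / (t + c)**p on [0, T]."""
    log_K, log_c, log_p = params               # optimize on the log scale for positivity
    K, c, p = np.exp([log_K, log_c, log_p])
    rate = K / (t + c) ** p
    if np.isclose(p, 1.0):
        integral = K * (np.log(T + c) - np.log(c))
    else:
        integral = K * ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)
    return -(np.sum(np.log(rate)) - integral)

res = minimize(neg_log_lik, x0=np.log([5.0, 0.1, 1.1]), method="Nelder-Mead")
K_hat, c_hat, p_hat = np.exp(res.x)
print(f"K = {K_hat:.2f}, c = {c_hat:.3f} days, p = {p_hat:.2f}")
```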
Journal Article
Multiple Co-primary Endpoints: Medical and Statistical Solutions: A Report from the Multiple Endpoints Expert Team of the Pharmaceutical Research and Manufacturers of America
by Muirhead, Robb; Givens, Sam; Jackson, Joseph D.
in Alzheimer's disease, Clinical trials, Disorders
2007
There are quite a few disorders for which regulatory agencies have required a treatment to demonstrate a statistically significant effect on multiple endpoints, each at the one-sided 2.5% level, before accepting the treatment's efficacy for those disorders. Depending on the correlation among the endpoints, this requirement could lead to a substantial reduction in the study's power to conclude the efficacy of a treatment. To investigate the prevalence of this requirement and propose possible solutions, a multidisciplinary Multiple Endpoints Expert Team sponsored by Pharmaceutical Research and Manufacturers of America was formed in November 2003. The team recognized early that many researchers were not fully aware of the implications of requiring multiple co-primary endpoints. The team proposes possible solutions from both the medical and the statistical perspectives. The optimal solution is to reduce the number of multiple co-primary endpoints. If, after careful consideration, multiple co-primary endpoints remain a scientific requirement, the team proposes statistical solutions and encourages regulatory agencies to be receptive to approaches that adopt modest upward adjustments of the nominal significance levels for testing individual endpoints. Finally, the team hopes that this report will draw more attention to the problem of multiple co-primary endpoints and stimulate further research.
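To make the power loss concrete, here is a hedged sketch (assumed effect sizes and correlation, not the team's own computations) of the joint power to succeed on two co-primary endpoints, each tested at the one-sided 2.5% level, under a bivariate normal model for the test statistics.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Assumed standardized effects (z-score units at the planned sample size)
# and correlation between the two endpoint test statistics.
delta = np.array([3.24, 3.24])   # each endpoint alone has roughly 90% power
rho = 0.5
z_crit = norm.ppf(0.975)         # one-sided 2.5% critical value

# Marginal power per endpoint: P(Z_i > z_crit) with Z_i ~ N(delta_i, 1).
marginal_power = 1 - norm.cdf(z_crit - delta)

# Joint power: P(Z_1 > z_crit and Z_2 > z_crit) for correlated statistics.
cov = np.array([[1.0, rho], [rho, 1.0]])
joint_power = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(delta - z_crit)

print(f"power per endpoint: {marginal_power.round(3)}")
print(f"power to win on both: {joint_power:.3f}")
```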
Journal Article
Conocimiento matemático de futuros profesores para la enseñanza de la probabilidad desde el enfoque frecuencial
2014
This study evaluates some components of prospective Primary Education teachers' mathematical knowledge for teaching probability from the frequentist approach. To evaluate common content knowledge, the individual solutions given by 157 Spanish prospective Primary Education teachers to an open-ended problem are analysed. After discussing their solutions with the participants, two components of didactic knowledge are analysed in 81 of these teachers, working in small groups: (a) to evaluate specialised content knowledge, participants are asked to identify the mathematical content in the task; (b) to evaluate knowledge of content and students, they are asked to identify and justify which responses, among a set given by Primary Education pupils, are correct and which are incorrect. While the initial assessment revealed various misconceptions, such as the equiprobability bias, the representativeness heuristic, and underestimation of sampling variability, in the second task the prospective teachers were able to identify the correct and incorrect responses and explain the reasons for the errors. Although their didactic knowledge is still insufficient, since few mathematical objects are identified in the task, the activity proves useful for developing prospective teachers' knowledge.
Journal Article
On the Definition of Objective Probabilities by Empirical Similarity
by Gilboa, Itzhak; Schmeidler, David; Lieberman, Offer
in Aerospace medicine, Aviation, Axiomatization
2010
We suggest defining objective probabilities by similarity-weighted empirical frequencies, where more similar cases receive a higher weight in the computation of frequencies. This formula is justified intuitively and axiomatically, but it raises the question of which similarity function should be used. We propose to estimate the similarity function from the data, and thus obtain objective probabilities. We compare this definition to others and attempt to delineate the scope of situations in which objective probabilities can be used.
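One common way to write the kind of similarity-weighted empirical frequency the abstract describes (the notation and the exponential similarity below are assumptions for illustration, not quoted from the paper) is:

```latex
P(A \mid x) \;=\; \frac{\sum_{i=1}^{n} s(x, x_i)\,\mathbf{1}\{y_i \in A\}}
                       {\sum_{j=1}^{n} s(x, x_j)},
\qquad
s(x, x_i) \;=\; \exp\!\bigl(-\lVert x - x_i \rVert^{2} / \sigma^{2}\bigr),
```

with the similarity function s (here, its bandwidth sigma) estimated from the data, as the authors propose.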
Journal Article
Repeated Looks at Accumulating Data: To Correct or Not to Correct?
2005
Sequential analysis is a statistical way of analysing accumulating data. Its goal is to reach a decision as soon as sufficient evidence has accumulated for one or the other hypothesis. In this article, three different statistical approaches, the frequentist, the Bayesian, and the likelihood approach, are discussed in relation to sequential analysis. In particular, the less well-known likelihood approach is elucidated.
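As one concrete instance of deciding as soon as the evidence suffices, here is a minimal sketch of Wald's sequential probability ratio test for assumed Bernoulli data and simple hypotheses (illustrative only, not taken from the article).

```python
import numpy as np

# Assumed simple hypotheses about a success probability.
p0, p1 = 0.5, 0.7
alpha, beta = 0.05, 0.10          # tolerated error rates

# Wald's approximate decision boundaries on the log likelihood ratio.
A = np.log(beta / (1 - alpha))    # accept H0 at or below this log-LR
B = np.log((1 - beta) / alpha)    # accept H1 at or above this log-LR

rng = np.random.default_rng(1)
data = rng.random(200) < 0.65     # assumed stream of Bernoulli observations

log_lr = 0.0
for i, x in enumerate(data, start=1):
    log_lr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
    if log_lr <= A:
        print(f"stop at n = {i}: accept H0 (p = {p0})")
        break
    if log_lr >= B:
        print(f"stop at n = {i}: accept H1 (p = {p1})")
        break
else:
    print("no decision within the available data")
```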
Journal Article