Catalogue Search | MBRL
733 result(s) for "Bayesian interpretation"
Assessment of high-quality counterfeit stamp impressions generated by inkjet printers via texture analysis and likelihood ratio
by Yang, Xu; Tao, Yi-Min; Chen, Xiao-Hong
in Analytical chemistry; Bayesian interpretation; Counterfeit
2023
High-quality counterfeit stamp impressions made by inkjet printers remain a challenge in questioned document examination and forensic analysis. A dataset comprising printed stamp impressions, produced under ten combinations of conditions and materials, together with hand-stamped impressions, was generated. In this paper, we report that printed impressions in pure-color, high-quality printing mode are very similar to hand-stamped impressions in their microscopic characteristics; these similarities may lead to incorrect conclusions under traditional identification methods. Here, we propose a method for identifying counterfeit stamp impressions via texture features and image-quality parameters extracted from the impressions. First, statistical analysis was used to verify a significant difference between printed and hand-stamped impressions. Principal component analysis (PCA) was used to show the variation between the impressions, and the differences between printed and hand-stamped impressions were obvious in the three-dimensional plot. After filtering the background of the stamp impressions, image-processing analysis was introduced to extract features based on the gray-level co-occurrence matrix (GLCM), segmentation-based fractal texture analysis (SFTA), local binary patterns (LBP), and image quality metrics (IQM), which were used to characterize the stamp impressions. Finally, specific cases were simulated by random selection from the dataset of stamp impressions, and an evaluation system for stamp evidence was established to calculate likelihood ratios (LRs) under two alternative hypotheses. The likelihood ratio provides a calibrated evaluation of the strength of stamp impressions as evidence. These LRs can also be balanced against the rates of misleading evidence, with reasonable performance (equal error rate = 0.048). This paper provides a system to differentiate high-quality printed from hand-stamped impressions with reasonable performance.
•This paper reports the potential challenge that high-quality stamp impressions pose to questioned document examination.
•A comprehensive image-processing analysis demonstrates the difference between genuine and counterfeit stamp impressions.
•Evidence evaluation helps to rigorously assess the strength of stamp impression evidence with a scientific interpretation.
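The likelihood-ratio evaluation described in this abstract can be illustrated with a minimal sketch. Everything below is hypothetical: the scores, the single-scalar Gaussian score model, and the class separation are invented for illustration; the paper's actual method operates on GLCM/SFTA/LBP/IQM feature vectors, not a one-dimensional score.

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(score, hp_params, hd_params):
    """LR = P(score | Hp) / P(score | Hd) under Gaussian score models,
    where Hp = 'hand stamped' and Hd = 'inkjet printed'."""
    return gaussian_pdf(score, *hp_params) / gaussian_pdf(score, *hd_params)

def fit(xs):
    """Fit (mean, std) of a score sample."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)
    return mu, math.sqrt(var)

# Hypothetical texture-similarity scores: higher = more like a hand stamp.
random.seed(0)
hp_scores = [random.gauss(2.0, 1.0) for _ in range(500)]   # genuine (Hp)
hd_scores = [random.gauss(-2.0, 1.0) for _ in range(500)]  # printed (Hd)
hp_params, hd_params = fit(hp_scores), fit(hd_scores)

# LR for one questioned impression: values well above 1 support Hp over Hd.
lr = likelihood_ratio(2.0, hp_params, hd_params)

# Rates of misleading evidence: LR points the wrong way for the true hypothesis.
false_support_hp = sum(likelihood_ratio(s, hp_params, hd_params) > 1
                       for s in hd_scores) / len(hd_scores)
false_support_hd = sum(likelihood_ratio(s, hp_params, hd_params) < 1
                       for s in hp_scores) / len(hp_scores)
```

With well-separated score distributions, both rates of misleading evidence are small, which is the kind of balance the abstract summarizes with its equal error rate of 0.048.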
Journal Article
Fermentation tube test statistics for direct water sampling and comments on the Thomas formula
by Nawalany, M.; Loga, M.
in Bacteriological Techniques; Bayes Theorem; Enterobacteriaceae - isolation & purification
2010
This article describes a new interpretation of the Fermentation Tube Test (FTT) performed on water samples drawn from natural waters polluted by faecal bacteria. A novel general procedure to calculate the Most Probable Number of bacteria (MPN) in natural waters has been derived for the FTT, for both direct and independent repetitive multiple water sampling. The generalization, based on solving the newly proposed equation, allows consideration of any a priori frequency distribution g(n) of bacterial concentration in the analysed water, as opposed to the unbounded uniform a priori distribution g(n) assumed in the standard procedures of Standard Methods for the Examination of Water and Wastewater and ISO 8199:1988. A statistical analysis of the Thomas formula is also presented, and it is demonstrated that the Thomas formula is highly inaccurate. The authors therefore propose to remove the Thomas formula from Standard Methods for the Examination of Water and Wastewater and ISO 8199:1988 altogether and to replace it with a solution of the proposed generalized equation.
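The Bayesian reading of the fermentation tube test can be sketched for the simplest case: a single dilution with equal-volume tubes. All numbers below are invented, and the single-dilution setup is a simplification of the multiple-sampling procedure the article generalizes; the point is only that an arbitrary prior g over concentration slots into the same posterior calculation.

```python
import math

def posterior_concentration(k_pos, n_tubes, volume_ml, grid, prior):
    """Posterior over bacterial concentration c (organisms/mL), given that
    k_pos of n_tubes inoculated with volume_ml each turned positive.
    Assuming Poisson-distributed organisms, a tube is positive with
    probability 1 - exp(-c * volume_ml)."""
    post = []
    for c, g in zip(grid, prior):
        p = 1.0 - math.exp(-c * volume_ml)
        like = math.comb(n_tubes, k_pos) * p ** k_pos * (1 - p) ** (n_tubes - k_pos)
        post.append(like * g)
    z = sum(post)
    return [w / z for w in post]

grid = [0.01 * i for i in range(1, 1001)]   # candidate concentrations, 0.01..10 /mL
uniform = [1.0] * len(grid)                 # the 'unbounded uniform' prior criticized above
post = posterior_concentration(k_pos=7, n_tubes=10, volume_ml=1.0,
                               grid=grid, prior=uniform)
mpn = grid[post.index(max(post))]           # posterior mode ~ MPN-style estimate
```

Under the uniform prior the posterior mode coincides with the maximum-likelihood value -ln(3/10) ≈ 1.20 organisms/mL; substituting an informative prior simply reweights the same likelihood.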
Journal Article
Bayesian framework for the evaluation of fiber evidence in a double murder—a case report
2004
Fiber evidence found on a suspect vehicle was the only useful trace for reconstructing how two corpses had been transported. Optical microscopy, UV-Vis microspectrophotometry, and infrared analysis were employed to compare fibers recovered in the trunk of a car with those of the blankets in which the victims had been wrapped. A "pseudo-1:1" taping permitted reconstruction of the spatial distribution of the traces and further strengthened the support for one of the hypotheses. The likelihood ratio (LR) was calculated in order to quantify the support given by the forensic evidence to the proposed explanations, and a generalization of the likelihood ratio equation to analogous cases was derived. Fibers were the only traces that helped corroborate the crime scenario, in the absence of any DNA, fingerprint, or ballistic evidence.
Journal Article
Chapter 4 - Fact-finding in information networks
by Dong Wang; Tarek Abdelzaher; Lance Kaplan
in Bayesian analysis; Bayesian interpretation; Fact-finder
2015
This chapter reviews the state-of-the-art fact-finding schemes developed for trust and credibility analysis in information networks, with a focus on a Bayesian interpretation of the basic fact-finding scheme. Fact-finding algorithms rank a list of claims and a list of sources by their credibility, which can be used toward solving the reliable social-sensing problem. In particular, we review in detail a Bayesian interpretation of the basic mechanism used in fact-finding from information networks. This interpretation leads to a direct quantification of the accuracy of conclusions obtained from information network analysis. Such a quantification scheme is of great value in deriving reliable information from unreliable sources in the context of social sensing.
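The "basic fact-finding scheme" the chapter interprets can be sketched as an iterative credibility computation in the style of Sums/TruthFinder-type algorithms: claim belief is accumulated from the trust of the sources asserting it, and source trust is accumulated from the belief of the claims it asserts. The toy network below is hypothetical, and this sketch omits the Bayesian quantification layer the chapter adds on top.

```python
def fact_finder(assertions, n_rounds=20):
    """Basic iterative fact-finder: assertions maps source -> set of claims.
    Returns normalized source-trust and claim-belief scores."""
    sources = list(assertions)
    claims = sorted({c for cs in assertions.values() for c in cs})
    trust = {s: 1.0 for s in sources}
    belief = {c: 1.0 for c in claims}
    for _ in range(n_rounds):
        # Claim credibility: sum of the trust of its supporting sources.
        belief = {c: sum(trust[s] for s in sources if c in assertions[s])
                  for c in claims}
        m = max(belief.values())
        belief = {c: b / m for c, b in belief.items()}   # normalize to [0, 1]
        # Source trust: sum of the credibility of the claims it asserts.
        trust = {s: sum(belief[c] for c in assertions[s]) for s in sources}
        m = max(trust.values())
        trust = {s: t / m for s, t in trust.items()}
    return trust, belief

# Hypothetical toy network: claim 'A' is corroborated by three sources.
net = {"s1": {"A", "B"}, "s2": {"A"}, "s3": {"A", "C"}, "s4": {"C"}}
trust, belief = fact_finder(net)
```

As expected, the widely corroborated claim ends up with the highest belief score, and sources asserting highly believed claims end up with the highest trust.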
Book Chapter
LINEAR BELIEF MODELS
by Powell, Warren B.; Ryzhov, Ilya O.
in Bayesian interpretation; dose response optimization; dynamic pricing
2013, 2012
This chapter contains sections titled:
Applications
A Brief Review of Linear Regression
The Knowledge Gradient for a Linear Model
Application to Drug Discovery
Application to Dynamic Pricing
Bibliographic Notes
Problems
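The linear belief models the chapter covers maintain a multivariate normal belief about regression coefficients and update it recursively as observations arrive. Below is a minimal sketch of one such update for a scalar observation, using the rank-one (Sherman-Morrison) form so no matrix inversion is needed; the two-feature example and all numbers are hypothetical.

```python
def bayes_linear_update(mu, cov, x, y, noise_var):
    """One recursive Bayesian update of a linear belief y = x . theta + noise,
    with prior theta ~ N(mu, cov) and scalar observation noise variance."""
    n = len(mu)
    cx = [sum(cov[i][j] * x[j] for j in range(n)) for i in range(n)]  # cov @ x
    gamma = noise_var + sum(x[i] * cx[i] for i in range(n))           # x'.cov.x + var
    resid = y - sum(x[i] * mu[i] for i in range(n))                   # prediction error
    mu_new = [mu[i] + resid * cx[i] / gamma for i in range(n)]
    cov_new = [[cov[i][j] - cx[i] * cx[j] / gamma for j in range(n)]
               for i in range(n)]
    return mu_new, cov_new

# Hypothetical 2-feature example: vague prior, one observation of the first feature.
mu, cov = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
mu, cov = bayes_linear_update(mu, cov, x=[1.0, 0.0], y=2.0, noise_var=1.0)
```

After observing y = 2 through x = (1, 0), the belief about the first coefficient moves halfway to the data (prior and noise variances are equal) and its variance halves, while the second coefficient's belief is untouched; value-of-information rules such as the knowledge gradient are built on exactly this kind of update.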
Book Chapter
The JASP guidelines for conducting and reporting a Bayesian analysis
by Kucharský, Šimon; Derks, Koen; Matzke, Dora
in Bayes Theorem; Bayesian analysis; Behavioral Science and Psychology
2021
Despite the increasing popularity of Bayesian inference in empirical research, few practical guidelines provide detailed recommendations for how to apply Bayesian procedures and interpret the results. Here we offer specific guidelines for four different stages of Bayesian statistical reasoning in a research setting: planning the analysis, executing the analysis, interpreting the results, and reporting the results. The guidelines for each stage are illustrated with a running example. Although the guidelines are geared towards analyses performed with the open-source statistical software JASP, most guidelines extend to Bayesian inference in general.
Journal Article
A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research
by Asendorpf, Jens B.; Kaplan, David; van de Schoot, Rens
in Bayes Theorem; Bayesian analysis; Bayesian method
2014
Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret the results properly. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate the Bayesian methods explained in this study, a second example considers a series of studies that examine the theoretical framework of dynamic interactionism. In the Discussion, the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report Bayesian statistics are provided.
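The "ingredients underlying Bayesian methods" that such gentle introductions usually start from can be shown in a few lines with the standard beta-binomial conjugate update; the prior and data below are hypothetical.

```python
def beta_binomial_update(a, b, successes, failures):
    """Conjugate Bayesian update: a Beta(a, b) prior on a proportion,
    combined with binomial data, yields a Beta(a+successes, b+failures) posterior."""
    return a + successes, b + failures

# Hypothetical example: a uniform Beta(1, 1) prior, then 7 successes and 3 failures.
a, b = beta_binomial_update(1, 1, 7, 3)
posterior_mean = a / (a + b)   # 8 / 12, pulled slightly toward the prior mean of 0.5
```

The posterior mean (2/3) sits between the prior mean (1/2) and the observed proportion (7/10), which is the basic prior-likelihood compromise the article's simplified example illustrates.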
Journal Article
The fallacy of placing confidence in confidence intervals
by Hoekstra, Rink; Morey, Richard D.; Lee, Michael D.
in Bayes Theorem; Behavioral Science and Psychology; Cognitive Psychology
2016
Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true parameter value in some known proportion of repeated samples, on average. The width of confidence intervals is thought to index the precision of an estimate; CIs are thought to be a guide to which parameter values are plausible or reasonable; and the confidence coefficient of the interval (e.g., 95%) is thought to index the plausibility that the true parameter is included in the interval. We show in a number of examples that CIs do not necessarily have any of these properties, and can lead to unjustified or arbitrary inferences. For this reason, we caution against relying upon confidence interval theory to justify interval estimates, and suggest that other theories of interval estimation should be used instead.
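The one property CIs do have by definition, long-run coverage of the procedure, is easy to check by simulation; it is this procedure-level guarantee, not a plausibility statement about any single computed interval, that the abstract says is so often misread. The true mean, sample size, and trial count below are arbitrary, and the known-variance z-interval is used for simplicity.

```python
import math
import random

def ci_mean(sample, z=1.96):
    """95% CI for a normal mean with known sigma = 1 (z-interval)."""
    n = len(sample)
    m = sum(sample) / n
    half = z / math.sqrt(n)
    return m - half, m + half

random.seed(1)
mu = 0.7          # hypothetical true mean, unknown to the 'analyst'
trials = 2000
hits = 0
for _ in range(trials):
    sample = [random.gauss(mu, 1.0) for _ in range(25)]
    lo, hi = ci_mean(sample)
    hits += (lo <= mu <= hi)
coverage = hits / trials   # long-run proportion of intervals containing mu
```

The long-run coverage comes out near the nominal 95%, yet nothing in this guarantee licenses the claim that any particular realized interval contains the true mean with 95% probability, which is the fallacy the article dissects.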
Journal Article
The Theory That Would Not Die
by Sharon Bertsch McGrayne
in Bayesian statistical decision theory; Bayesian statistical decision theory -- History; History
2011
Bayes' rule appears to be a straightforward, one-line theorem: by updating our initial beliefs with objective new information, we get a new and improved belief. To its adherents, it is an elegant statement about learning from experience. To its opponents, it is subjectivity run amok.
In the first-ever account of Bayes' rule for general readers, Sharon Bertsch McGrayne explores this controversial theorem and the human obsessions surrounding it. She traces its discovery by an amateur mathematician in the 1740s through its development into roughly its modern form by the French scientist Pierre Simon Laplace. She reveals why respected statisticians rendered it professionally taboo for 150 years, even as practitioners relied on it to solve crises involving great uncertainty and scanty information (such as Alan Turing's role in breaking Germany's Enigma code during World War II), and explains how the advent of off-the-shelf computer technology in the 1980s proved to be a game-changer. Today, Bayes' rule is used everywhere from DNA decoding to Homeland Security.
Drawing on primary source material and interviews with statisticians and other scientists, The Theory That Would Not Die is the riveting account of how a seemingly simple theorem ignited one of the greatest controversies of all time.
Optional stopping: No problem for Bayesians
Optional stopping refers to the practice of peeking at data and then, based on the results, deciding whether or not to continue an experiment. In the context of ordinary significance-testing analysis, optional stopping is discouraged, because it necessarily leads to increased Type I error rates over nominal values. This article addresses whether optional stopping is problematic for Bayesian inference with Bayes factors. Statisticians who developed Bayesian methods thought not, but this wisdom has been challenged by recent simulation results of Yu, Sprenger, Thomas, and Dougherty (2013) and Sanborn and Hills (2013). In this article, I show through simulation that the interpretation of Bayesian quantities does not depend on the stopping rule. Researchers using Bayesian methods may employ optional stopping in their own research and may provide Bayesian analysis of secondary data regardless of the employed stopping rule. I emphasize here the proper interpretation of Bayesian quantities as measures of subjective belief in theoretical positions, the difference between frequentist and Bayesian interpretations, and the difficulty of using frequentist intuition to conceptualize the Bayesian approach.
Journal Article