Catalogue Search | MBRL
Explore the vast range of titles available.
18 results for "α-divergence"
A Novel Approach to Canonical Divergences within Information Geometry
2015
A divergence function on a manifold M defines a Riemannian metric g and dually coupled affine connections ∇ and ∇* on M. When M is dually flat, that is, flat with respect to ∇ and ∇*, a canonical divergence is known, uniquely determined from (M, g, ∇, ∇*). We propose a natural definition of a canonical divergence for a general, not necessarily flat, M by using the geodesic integration of the inverse exponential map; a sketch of the resulting formula follows this entry. The new definition reduces to the known canonical divergence in the case of dual flatness. Finally, we show that the integrability of the inverse exponential map implies the geodesic projection property.
Journal Article
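For reference, the construction described in the abstract can be written compactly. This is a sketch in one common notation, assuming γ is the ∇-geodesic with γ(0) = q and γ(1) = p; the normalization follows one standard statement of the definition and may not match the paper's exact form.

```latex
% Candidate canonical divergence via geodesic integration of the
% inverse exponential map (assumed normalization):
D(p \,\|\, q) = \int_0^1 t \,\bigl\| \dot{\gamma}(t) \bigr\|^2_{g(\gamma(t))} \, dt,
\qquad \gamma(0) = q, \quad \gamma(1) = p.

% In the self-dual case (Levi-Civita connection) this reduces to half
% the squared Riemannian geodesic distance:
D(p \,\|\, q) = \tfrac{1}{2}\, d_g(p, q)^2.
```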
A unified approach to measuring unequal representation
2024
The concept of unequal representation is commonly understood through the lenses of disproportionality and malapportionment, pertaining to inter-party and inter-district aspects, respectively. Popular indices used to measure such features are analyzed separately despite being mathematically identical. District-level wasted votes are not measured in terms of unequal representation, even though they can be conceptualized as intra-district unequal representation. A new component, intra-party unequal representation, which measures unequal representation across districts for voters who support each party, has not been considered to contribute to unequal representation. We propose a unified approach for measuring these components—disproportionality, malapportionment, wasted votes, and intra-party unequal representation—by using α-divergence. We show mathematically that the total of disproportionality and intra-party unequal representation equals that of malapportionment and wasted votes. We apply this approach to the Japanese political system and demonstrate the role of intra-party unequal representation in sustaining disproportionality in favor of the Liberal Democratic Party.
Journal Article
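As a concrete illustration of the measuring device used in the entry above, here is a minimal sketch of the α-divergence between two discrete distributions, e.g. seat shares versus vote shares as a disproportionality-style score. The parameterization is one common convention, and the party numbers are hypothetical; neither is taken from the paper.

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """One common alpha-divergence parameterization:
    D_alpha(p || q) = (sum_i p_i^alpha * q_i^(1-alpha) - 1) / (alpha * (alpha - 1)),
    recovering KL(p || q) as alpha -> 1 and KL(q || p) as alpha -> 0."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()  # normalize to probability vectors
    if np.isclose(alpha, 1.0):       # KL(p || q) limit
        return float(np.sum(p * np.log(p / q)))
    if np.isclose(alpha, 0.0):       # KL(q || p) limit
        return float(np.sum(q * np.log(q / p)))
    return float((np.sum(p**alpha * q**(1 - alpha)) - 1) / (alpha * (alpha - 1)))

# Hypothetical vote and seat shares for four parties (made-up numbers):
votes = [0.45, 0.30, 0.15, 0.10]
seats = [0.55, 0.30, 0.10, 0.05]
print(alpha_divergence(seats, votes, alpha=2.0))  # one disproportionality-style score
```

Varying α changes how heavily large relative discrepancies are weighted; the paper's unified approach uses this single family to express disproportionality, malapportionment, wasted votes, and intra-party unequal representation.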
Criterion for the Resemblance Between the Mother and the Model Distribution
2025
If a probability distribution model aims to approximate a hidden mother distribution, it is imperative to establish a useful criterion for the resemblance between the mother and the model distributions. This study proposes a criterion that measures the Hellinger distance between discretized (quantized) samples from both distributions; a minimal computational sketch follows this entry. First, unlike information criteria such as AIC, this criterion does not require the probability density function of the model distribution, which cannot be obtained explicitly for a complicated model such as a deep learning machine. Second, it can draw a positive conclusion (i.e., that both distributions are sufficiently close) under a given threshold, whereas a statistical hypothesis test, such as the Kolmogorov–Smirnov test, cannot genuinely lead to a positive conclusion when the hypothesis is accepted. We establish a reasonable threshold for the criterion, deduced from the Bayes error rate, and also present the asymptotic bias of the estimator of the criterion. From these results, a reasonable and easy-to-use criterion is established that can be calculated directly from the two sets of samples.
Journal Article
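The computation described in the abstract can be sketched in a few lines: quantize both samples on a shared grid and take the Hellinger distance between the resulting empirical distributions. The bin count and grid choice below are arbitrary assumptions, and the paper's calibrated threshold and bias correction are omitted.

```python
import numpy as np

def hellinger_from_samples(x, y, bins=30):
    """Discretize two samples on a shared grid and return the Hellinger
    distance between the resulting empirical (quantized) distributions."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(x, bins=edges)
    q, _ = np.histogram(y, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)  # stand-in for "mother" samples
y = rng.normal(0.1, 1.0, 5000)  # stand-in for "model" samples
print(hellinger_from_samples(x, y))  # near 0 when the distributions agree
```

Because only samples are needed on each side, the model's density never has to be evaluated, which is the advantage the abstract emphasizes over AIC-style criteria.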
Blind Separation of Instantaneous Mixtures of Independent/Dependent Sources
by Laghrib Amine, Ghazdali Abdelghani, Abdelmoutalib Metrane
in Comparative studies, Cost function, Divergence
2021
Blind Source Separation (BSS) has long been an active research field within the signal processing community; it is used to reconstruct primary source signals from their observed mixtures. Independent Component Analysis has been, and still is, widely used to solve the BSS problem; however, it relies on the mutual independence of the original source signals. In this paper, we propose to use copulas to model the dependency structure between these signals, enabling the separation of dependent source components, and we adopt the α-divergence as the cost function to minimize, given its robustness to noisy data and its faster convergence. We test our approach for various values of α and give a comparative study between the proposed methodology and other existing methods; our approach exhibited higher quality and accuracy, especially for α equal to 1/2, at which the α-divergence reduces to the Hellinger divergence (a numerical check of this correspondence follows this entry).
Journal Article
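A quick numerical check of the closing claim, using the same common α-divergence parameterization as in the earlier sketch (the proportionality constant depends on the convention, which is an assumption here): at α = 1/2 the α-divergence equals four times the squared Hellinger distance.

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])

# alpha-divergence at alpha = 1/2, same parameterization as the sketch above
alpha = 0.5
d_half = (np.sum(p**alpha * q**(1 - alpha)) - 1) / (alpha * (alpha - 1))

# squared Hellinger distance H^2 = 1 - sum_i sqrt(p_i * q_i)
h2 = 1.0 - np.sum(np.sqrt(p * q))

print(d_half, 4 * h2)  # identical: D_{1/2}(p || q) = 4 * H^2(p, q)
```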
Operator valued inequalities based on Young’s inequality
by Tohyama, Hiroaki; Watanabe, Masayuki; Kamei, Eizaburo
in Mathematics, Mathematics and Statistics, Operator Theory
2023
Various refinements of Young's inequality have been obtained by many authors. Based on one of those refinements, we construct a new operator inequality, obtained by generalizing the operator-valued α-divergence (one common form of which is sketched after this entry).
Journal Article
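For context, the scalar Young inequality and the operator gap it induces. The α-divergence written below follows one form that appears in the operator-mean literature; it is an assumption that this matches the exact quantity generalized in the paper.

```latex
% Scalar Young inequality, for a, b > 0 and 0 < \alpha < 1:
(1-\alpha)\, a + \alpha\, b \;\ge\; a^{1-\alpha} b^{\alpha}.

% Operator analogue for positive invertible A, B, with the weighted
% arithmetic mean A \nabla_\alpha B = (1-\alpha) A + \alpha B and the
% weighted geometric mean
% A \,\sharp_\alpha\, B = A^{1/2} (A^{-1/2} B A^{-1/2})^{\alpha} A^{1/2}:
A \,\nabla_\alpha\, B \;\ge\; A \,\sharp_\alpha\, B.

% One common form of the operator-valued alpha-divergence is the
% normalized gap between these means (conventions vary by author):
D_\alpha(A \,\|\, B) = \frac{1}{\alpha(1-\alpha)}
  \bigl( A \,\nabla_\alpha\, B \;-\; A \,\sharp_\alpha\, B \bigr) \;\ge\; 0.
```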
On Clustering Histograms with k-Means by Using Mixed α-Divergences
2014
Clustering sets of histograms has become popular thanks to the success of the generic bag-of-X method used in text categorization and in visual categorization applications. In this paper, we investigate the use of a parametric family of distortion measures, called the α-divergences, for clustering histograms. Since it usually makes sense to deal with symmetric divergences in information retrieval systems, we symmetrize the α-divergences using the concept of mixed divergences. First, we present a novel extension of k-means clustering to mixed divergences. Second, we extend the k-means++ seeding to mixed α-divergences and report a guaranteed probabilistic bound. Finally, we describe a soft clustering technique for mixed α-divergences; a simplified clustering sketch follows this entry.
Journal Article
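A simplified sketch of the clustering loop the abstract describes, assuming the same α-divergence convention as in the earlier sketches. Two deliberate simplifications: the divergence is symmetrized by averaging the two orientations (the paper's mixed divergences are a more general weighted construction), and centroids are updated by the plain arithmetic mean rather than the exact α-dependent centroids the paper derives.

```python
import numpy as np

def alpha_div(p, q, a):
    """alpha-divergence between positive histograms (assumed convention)."""
    return (np.sum(p**a * q**(1 - a)) - 1) / (a * (a - 1))

def sym_alpha_div(p, q, a):
    """Symmetrize by averaging both orientations (a simplification)."""
    return 0.5 * (alpha_div(p, q, a) + alpha_div(q, p, a))

def kmeans_histograms(H, k, a=0.5, iters=20, seed=0):
    """Lloyd-style k-means on row-normalized histograms H (n x d) under
    the symmetrized alpha-divergence; plain-average centroid update."""
    H = np.asarray(H, dtype=float)
    rng = np.random.default_rng(seed)
    C = H[rng.choice(len(H), size=k, replace=False)]  # random seeding
    for _ in range(iters):
        # assignment step: nearest centroid under the symmetrized divergence
        labels = np.array([min(range(k), key=lambda j: sym_alpha_div(h, C[j], a))
                           for h in H])
        # update step: mean of members, renormalized to a histogram
        for j in range(k):
            members = H[labels == j]
            if len(members):
                C[j] = members.mean(axis=0)
                C[j] /= C[j].sum()
    return labels, C
```

The paper additionally extends k-means++ seeding to this setting; the random initialization above is only a placeholder for that step.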
A Generalized Bayes Rule for Prediction
1999
When prior knowledge about the unknown parameter is available, the Bayesian predictive density coincides with the Bayes estimator of the true density in the sense of the Kullback–Leibler divergence, but this is no longer true under other loss functions. In this paper we present a generalized Bayes rule for obtaining Bayes density estimators with respect to any α-divergence, a family that includes the Kullback–Leibler divergence and the Hellinger distance (the family is written out after this entry). For curved exponential models, we study the asymptotic behaviour of these predictive densities. We show that, whatever prior we use, the generalized Bayes rule improves (in a non-Bayesian sense) on the estimative density corresponding to a bias modification of the maximum likelihood estimator. This gives rise to a correspondence between choosing a prior density for the generalized Bayes rule and fixing a bias for the maximum likelihood estimator in the classical setting. A criterion for comparing and selecting prior densities is also given.
Journal Article
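For reference, the α-divergence family the abstract invokes, written in Amari's convention (an assumption; the paper's parameterization may differ by an affine change of α). The Kullback–Leibler divergence and the Hellinger distance appear as the limiting and central members, as the abstract states.

```latex
% alpha-divergence in Amari's convention, for alpha != +-1:
D_\alpha(p \,\|\, q) = \frac{4}{1-\alpha^2}
  \left( 1 - \int p(x)^{\frac{1-\alpha}{2}}\, q(x)^{\frac{1+\alpha}{2}}\, dx \right).

% Limiting cases recover the two Kullback–Leibler divergences:
\lim_{\alpha \to -1} D_\alpha(p \,\|\, q) = \mathrm{KL}(p \,\|\, q), \qquad
\lim_{\alpha \to +1} D_\alpha(p \,\|\, q) = \mathrm{KL}(q \,\|\, p).

% The central case alpha = 0 is proportional to the squared Hellinger
% distance:
D_0(p \,\|\, q) = 2 \int \bigl( \sqrt{p(x)} - \sqrt{q(x)} \bigr)^2 dx.
```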