39 results for "Probabilistic polynomial time"
A Blockchain-Based Multi-Factor Authentication Model for a Cloud-Enabled Internet of Vehicles
Continuous and emerging advances in Information and Communication Technology (ICT) have enabled Internet-of-Things (IoT)-to-Cloud applications to be induced by data pipelines and Edge Intelligence-based architectures. Advanced vehicular networks greatly benefit from these architectures due to the implicit functionalities that are focused on realizing the Internet of Vehicles (IoV) vision. However, IoV is susceptible to attacks, where adversaries can easily exploit existing vulnerabilities. Several attacks may succeed due to inadequate or ineffective authentication techniques. Hence, there is a timely need to harden the authentication process through cutting-edge access control mechanisms. This paper proposes a Blockchain-based Multi-Factor authentication model that uses an embedded Digital Signature (MFBC_eDS) for vehicular clouds and Cloud-enabled IoV. Our proposed MFBC_eDS model consists of a scheme that integrates the Security Assertion Markup Language (SAML) with the Single Sign-On (SSO) capabilities of a connected edge-to-cloud ecosystem. MFBC_eDS draws an essential comparison with the baseline authentication scheme suggested by Karla and Sood. Building on the foundations of Karla and Sood’s scheme, we propose and discuss an embedded Probabilistic Polynomial-Time Algorithm (ePPTA) and an additional hash function for the Pi generated during Karla and Sood’s authentication. A preliminary analysis shows that the approach is better suited to counter major adversarial attacks in an IoV-centered environment under the Dolev–Yao adversarial model while satisfying aspects of the Confidentiality, Integrity, and Availability (CIA) triad.
A Meeting Point of Probability, Graphs, and Algorithms: The Lovász Local Lemma and Related Results—A Survey
A classic and fundamental result, known as the Lovász Local Lemma, is a gem in the probabilistic method of combinatorics. At a high level, its core message can be described by the claim that weakly dependent events behave similarly to independent ones. A fascinating feature of this result is that even though it is a purely probabilistic statement, it provides a valuable and versatile tool for proving completely deterministic theorems. The Lovász Local Lemma has found many applications; despite being originally published in 1973, it still attracts active novel research. In this survey paper, we review various forms of the Lemma, as well as some related results and applications.
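As a concrete illustration of the Lemma's constructive side, here is a minimal sketch of the Moser–Tardos resampling algorithm (the algorithmic counterpart of the Local Lemma) applied to SAT: repeatedly pick a violated clause and resample all of its variables. The 3-SAT instance and the function name below are hypothetical, chosen only for illustration; they are not drawn from the survey itself.

```python
import random

def moser_tardos_sat(clauses, n_vars, seed=0):
    """Find a satisfying assignment by resampling violated clauses.

    Each clause is a tuple of nonzero ints: +i means variable i must be
    True, -i means variable i must be False (DIMACS-style literals).
    """
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused

    def violated(clause):
        # A clause is violated when every one of its literals is false.
        return all(assign[abs(l)] != (l > 0) for l in clause)

    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign[1:]
        # Resample every variable of one violated clause uniformly at random.
        for l in rng.choice(bad):
            assign[abs(l)] = rng.random() < 0.5

clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
sol = moser_tardos_sat(clauses, 3)
```

When the Local Lemma's condition holds (each clause shares variables with few others), Moser and Tardos showed this loop terminates after an expected linear number of resamplings; the toy instance above terminates almost immediately.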
Computational aspects of modular forms and Galois representations
Modular forms are tremendously important in various areas of mathematics, from number theory and algebraic geometry to combinatorics and lattices. Their Fourier coefficients, with Ramanujan's tau-function as a typical example, have deep arithmetic significance. Prior to this book, the fastest known algorithms for computing these Fourier coefficients took exponential time, except in some special cases. The case of elliptic curves (Schoof's algorithm) was at the birth of elliptic curve cryptography around 1985. This book gives an algorithm for computing coefficients of modular forms of level one in polynomial time. For example, Ramanujan's tau of a prime number p can be computed in time bounded by a fixed power of the logarithm of p. Such fast computation of Fourier coefficients is itself based on the main result of the book: the computation, in polynomial time, of Galois representations over finite fields attached to modular forms by the Langlands program. Because these Galois representations typically have a nonsolvable image, this result is a major step forward from explicit class field theory, and it could be described as the start of the explicit Langlands program. The computation of the Galois representations uses their realization, following Shimura and Deligne, in the torsion subgroup of Jacobian varieties of modular curves. The main challenge is then to perform the necessary computations in time polynomial in the dimension of these highly nonlinear algebraic varieties. Exact computations involving systems of polynomial equations in many variables take exponential time. This is avoided by numerical approximations with a precision that suffices to derive exact results from them. Bounds for the required precision--in other words, bounds for the height of the rational numbers that describe the Galois representation to be computed--are obtained from Arakelov theory. 
Two types of approximations are treated: one using complex uniformization and another using geometry over finite fields. The book begins with a concise and concrete introduction that makes it accessible to readers without an extensive background in arithmetic geometry, and it includes a chapter that describes actual computations.
Computation with multiple CTCs of fixed length and width
We examine some variants of computation with closed timelike curves (CTCs), where various restrictions are imposed on the memory of the computer, and on the information-carrying capacity and range of the CTC. We give full characterizations of the classes of languages decided by polynomial-time probabilistic and quantum computers that can send a single classical bit to their own past. We show that, given a time machine with constant negative delay, one can implement CTC-based computations without the need to know the runtime beforehand. By chaining multiple instances of such fixed-length CTCs, deterministic computers can be endowed with the power of postselection, all languages in can be decided with no error in worst-case polynomial time, and all Turing-decidable languages can be decided in constant expected time. We provide proofs of the following facts for weaker models: Augmenting probabilistic computers with a single CTC leads to an improvement in language recognition power. Quantum computers under these restrictions are more powerful than their classical counterparts. Some deterministic models assisted with multiple CTCs are more powerful than those with a single CTC.
Ensemble Postprocessing Using Quantile Function Regression Based on Neural Networks and Bernstein Polynomials
The value of ensemble forecasts is well documented. However, postprocessing by statistical methods is usually required to make forecasts reliable in a probabilistic sense. In this work a flexible statistical method for making probabilistic forecasts in terms of quantile functions is proposed. The quantile functions are specified by linear combinations of Bernstein basis polynomials, and their coefficients are assumed to be related to ensemble forecasts by means of a highly adaptable neural network. This leads to many parameters to estimate, but a recent learning algorithm often applied to deep-learning problems makes this feasible and provides robust estimates. The method is applied to ~2 yr of ensemble wind speed forecasting data at 125 Norwegian stations for lead time +60 h. An intercomparison with two quantile regression methods shows improvements in quantile skill score of nearly 1%. The most appealing feature of the method is arguably its versatility.
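The parameterization the abstract describes, a quantile function expressed as a linear combination of Bernstein basis polynomials, can be sketched in a few lines. The coefficients below are hypothetical placeholders, not values fitted by the neural network the paper uses; the key property is that nondecreasing coefficients yield a valid (nondecreasing) quantile function.

```python
import math

def bernstein_quantile(coeffs, tau):
    """Evaluate Q(tau) = sum_j c_j * B_{j,d}(tau) for tau in [0, 1],
    where B_{j,d} are Bernstein basis polynomials of degree
    d = len(coeffs) - 1. Nondecreasing coefficients guarantee that
    Q is nondecreasing, i.e. a valid quantile function."""
    d = len(coeffs) - 1
    return sum(
        c * math.comb(d, j) * tau**j * (1 - tau) ** (d - j)
        for j, c in enumerate(coeffs)
    )

# Hypothetical coefficients for illustration (not fitted to data).
coeffs = [0.0, 1.0, 2.5, 4.0]
```

Note that Q(0) equals the first coefficient and Q(1) the last, so the endpoints of the predictive distribution are controlled directly.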
Fast Approximations of the Jeffreys Divergence between Univariate Gaussian Mixtures via Mixture Conversions to Exponential-Polynomial Distributions
The Jeffreys divergence is a renowned arithmetic symmetrization of the oriented Kullback–Leibler divergence broadly used in information sciences. Since the Jeffreys divergence between Gaussian mixture models is not available in closed form, various techniques with advantages and disadvantages have been proposed in the literature to either estimate, approximate, or lower and upper bound this divergence. In this paper, we propose a simple yet fast heuristic to approximate the Jeffreys divergence between two univariate Gaussian mixtures with arbitrary numbers of components. Our heuristic relies on converting the mixtures into pairs of dually parameterized probability densities belonging to an exponential-polynomial family. To measure with a closed-form formula the goodness of fit between a Gaussian mixture and an exponential-polynomial density approximating it, we generalize the Hyvärinen divergence to α-Hyvärinen divergences. In particular, the 2-Hyvärinen divergence allows us to perform model selection by choosing the order of the exponential-polynomial densities used to approximate the mixtures. We experimentally demonstrate that our heuristic to approximate the Jeffreys divergence between mixtures improves over the computational time of stochastic Monte Carlo estimations by several orders of magnitude while approximating the Jeffreys divergence reasonably well, especially when the mixtures have a very small number of modes.
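The stochastic Monte Carlo baseline that the heuristic is compared against can be sketched as follows: estimate each oriented Kullback–Leibler term by sampling from one mixture and averaging the log-density ratio. The two mixtures and all function names below are hypothetical examples, not the paper's experimental setup.

```python
import math
import random

def gmm_pdf(x, weights, mus, sigmas):
    """Density of a univariate Gaussian mixture at x."""
    return sum(
        w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
        for w, m, s in zip(weights, mus, sigmas)
    )

def gmm_sample(rng, weights, mus, sigmas):
    """Draw one sample: pick a component by weight, then sample it."""
    i = rng.choices(range(len(weights)), weights=weights)[0]
    return rng.gauss(mus[i], sigmas[i])

def kl_mc(rng, p, q, n):
    """Monte Carlo estimate of KL(p || q) from n samples drawn from p."""
    return sum(
        math.log(gmm_pdf(x, *p) / gmm_pdf(x, *q))
        for x in (gmm_sample(rng, *p) for _ in range(n))
    ) / n

def jeffreys_mc(p, q, n=20000, seed=0):
    """J(p, q) = KL(p || q) + KL(q || p), estimated by Monte Carlo."""
    rng = random.Random(seed)
    return kl_mc(rng, p, q, n) + kl_mc(rng, q, p, n)

# Hypothetical mixtures: (weights, means, standard deviations).
p = ([0.5, 0.5], [-2.0, 2.0], [1.0, 1.0])
q = ([1.0], [0.0], [2.0])
```

The per-sample density evaluations make this estimator slow for large n, which is the cost the paper's closed-form exponential-polynomial approximation avoids.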
CONVERGENCE AND QUALITATIVE PROPERTIES OF MODIFIED EXPLICIT SCHEMES FOR BSDES WITH POLYNOMIAL GROWTH
The theory of Forward–Backward Stochastic Differential Equations (FBSDEs) paves a way to probabilistic numerical methods for nonlinear parabolic PDEs. The majority of the results on the numerical methods for FBSDEs relies on the global Lipschitz assumption, which is not satisfied for a number of important cases such as the Fisher–KPP or the FitzHugh–Nagumo equations. Furthermore, it has been shown in [Ann. Appl. Probab. 25 (2015) 2563–2625] that for BSDEs with monotone drivers having polynomial growth in the primary variable y, only the (sufficiently) implicit schemes converge. But these require an additional computational effort compared to explicit schemes. This article develops a general framework that allows the analysis, in a systematic fashion, of the integrability properties, convergence and qualitative properties (e.g., comparison theorem) for whole families of modified explicit schemes. The framework yields the convergence of some modified explicit scheme with the same rate as implicit schemes and with the computational cost of the standard explicit scheme. To illustrate our theory, we present several classes of easily implementable modified explicit schemes that can computationally outperform the implicit one and preserve the qualitative properties of the solution to the BSDE. These classes fit into our developed framework and are tested in computational experiments.
Probabilistic Cellular Automata Monte Carlo for the Maximum Clique Problem
We consider the problem of finding the largest clique of a graph. This problem is NP-hard, and no algorithm that solves it exactly in polynomial time is known. Several heuristic approaches have been proposed to find approximate solutions; Markov Chain Monte Carlo is one of these. In the context of Markov Chain Monte Carlo, we present a class of “parallel dynamics”, known as Probabilistic Cellular Automata, which can be used in place of the more standard choice of sequential “single spin flip” dynamics to sample from a probability distribution concentrated on the largest cliques of the graph. We perform a numerical comparison between the two classes of chains, both in terms of the quality of the solution and in terms of computational time. We show that the parallel dynamics are considerably faster than the sequential ones while providing solutions of comparable quality.
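For orientation, a much simpler baseline than either chain in the paper is randomized greedy clique growth with restarts; the sketch below is our own illustration (not the paper's Probabilistic Cellular Automata dynamics), and the toy graph is hypothetical.

```python
import random

def greedy_clique(adj, rng):
    """Grow a clique by repeatedly adding a random vertex that is
    adjacent to every current member. A simple randomized baseline."""
    clique = set()
    candidates = set(adj)
    while candidates:
        v = rng.choice(sorted(candidates))
        clique.add(v)
        # Keep only vertices compatible with the whole clique so far.
        candidates = {u for u in candidates if u != v and u in adj[v]}
    return clique

def best_clique(adj, restarts=200, seed=0):
    """Return the largest clique found over several random restarts."""
    rng = random.Random(seed)
    best = set()
    for _ in range(restarts):
        c = greedy_clique(adj, rng)
        if len(c) > len(best):
            best = c
    return best

# Toy graph: a triangle {0, 1, 2} with a pendant path 0-3-4.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {3}}
found = best_clique(adj)
```

MCMC methods such as those compared in the paper improve on this kind of greedy search by allowing downhill moves, which helps escape locally maximal cliques that are far from the largest one.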
Propagation Computation for Mixed Bayesian Networks Using Minimal Strong Triangulation
In recent years, mixed Bayesian networks have received increasing attention across various fields for probabilistic reasoning. Though many studies have been devoted to propagation computation on strong junction trees for mixed Bayesian networks, few have addressed the construction of appropriate strong junction trees. In this work, we establish a connection between the minimal strong triangulation for marked graphs and the minimal triangulation for star graphs. We further propose a minimal strong triangulation method for the moral graph of mixed Bayesian networks and develop a polynomial-time algorithm to derive a strong junction tree from this minimal strong triangulation. We also address the propagation computation of all posteriors on the derived strong junction tree. We conducted multiple numerical experiments to evaluate the performance of our proposed method, demonstrating significant improvements in computational efficiency compared to existing approaches. Experimental results indicate that our minimal strong triangulation approach provides a robust framework for efficient probabilistic inference in mixed Bayesian networks.
Comparative Evaluation of Nonparametric Density Estimators for Gaussian Mixture Models with Clustering Support
The article investigates the accuracy of nonparametric univariate density estimation methods applied to various Gaussian mixture models. A comprehensive comparative analysis is performed for four popular estimation approaches: adaptive kernel density estimation, projection pursuit, log-spline estimation, and wavelet-based estimation. The study is extended with modified versions of these methods, where the sample is first clustered using the EM algorithm based on Gaussian mixture components prior to density estimation. Estimation accuracy is quantitatively evaluated using MAE and MAPE criteria, with simulation experiments conducted over 100,000 replications for various sample sizes. The results show that estimation accuracy strongly depends on the density structure, sample size, and degree of component overlap. Clustering before density estimation significantly improves accuracy for multimodal and asymmetric densities. Although no formal statistical tests are conducted, the performance improvement is validated through non-overlapping confidence intervals obtained from 100,000 simulation replications. In addition, several decision-making systems are compared for automatically selecting the most appropriate estimation method based on the sample’s statistical features. Among the tested systems, kernel discriminant analysis yielded the lowest error rates, while neural networks and hybrid methods showed competitive but more variable performance depending on the evaluation criterion. The findings highlight the importance of using structurally adaptive estimators and automation of method selection in nonparametric statistics. The article concludes with recommendations for method selection based on sample characteristics and outlines future research directions, including extensions to multivariate settings and real-time decision-making systems.
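The simplest of the estimators compared above, kernel density estimation, can be illustrated on a Gaussian mixture sample. The sketch below uses a fixed Silverman rule-of-thumb bandwidth rather than the adaptive variant studied in the article, and the mixture is a hypothetical example.

```python
import math
import random
import statistics

def gaussian_kde(sample, x):
    """Gaussian kernel density estimate at x, with Silverman's
    rule-of-thumb bandwidth h = 1.06 * sd * n**(-1/5)."""
    n = len(sample)
    h = 1.06 * statistics.stdev(sample) * n ** (-1 / 5)
    return sum(
        math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample
    ) / (n * h * math.sqrt(2 * math.pi))

# Sample from a two-component mixture 0.5*N(-2, 1) + 0.5*N(2, 1).
rng = random.Random(0)
sample = [rng.gauss(-2.0 if rng.random() < 0.5 else 2.0, 1.0)
          for _ in range(2000)]
```

A fixed global bandwidth tends to oversmooth well-separated modes, which is one motivation for the adaptive and clustering-assisted estimators the article evaluates.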