Search Results

41,029 result(s) for "Lower bounds"
α-VARIATIONAL INFERENCE WITH STATISTICAL GUARANTEES
We provide statistical guarantees for a family of variational approximations to Bayesian posterior distributions, called α-VB, which has close connections with variational approximations of tempered posteriors in the literature. The standard variational approximation is a special case of α-VB with α = 1. When α ∈ (0, 1], a novel class of variational inequalities is developed for linking the Bayes risk under the variational approximation to the objective function in the variational optimization problem, implying that maximizing the evidence lower bound in variational inference has the effect of minimizing the Bayes risk within the variational density family. Operating in a frequentist setup, the variational inequalities imply that point estimates constructed from the α-VB procedure converge at an optimal rate to the true parameter in a wide range of problems. We illustrate our general theory with a number of examples, including the mean-field variational approximation to (low) high-dimensional Bayesian linear regression with spike and slab priors, Gaussian mixture models and latent Dirichlet allocation.
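The key object in the abstract above is the evidence lower bound (ELBO), the α = 1 case of the α-VB objective. A minimal sketch of why it is a lower bound, on a made-up two-state discrete model (the numbers are purely illustrative, not from the paper): for any variational distribution q, E_q[log p(x, z) − log q(z)] ≤ log p(x), with equality exactly when q is the posterior.

```python
import math

# Toy discrete model: latent z in {0, 1}, one fixed observation x.
# Prior and likelihood values below are made up purely for illustration.
prior = {0: 0.6, 1: 0.4}   # p(z)
lik = {0: 0.2, 1: 0.7}     # p(x | z) for the observed x

joint = {z: prior[z] * lik[z] for z in prior}   # p(x, z)
log_evidence = math.log(sum(joint.values()))    # log p(x)

def elbo(q):
    """Evidence lower bound E_q[log p(x, z) - log q(z)]."""
    return sum(q[z] * (math.log(joint[z]) - math.log(q[z])) for z in q)

# Any valid q gives ELBO <= log p(x) (Jensen's inequality) ...
for q1 in (0.1, 0.5, 0.7, 0.9):
    q = {0: 1 - q1, 1: q1}
    assert elbo(q) <= log_evidence + 1e-12

# ... and the bound is tight at the exact posterior.
posterior = {z: joint[z] / sum(joint.values()) for z in joint}
print(abs(elbo(posterior) - log_evidence))  # ~0
```

Maximizing the ELBO over a restricted family of q therefore pushes q toward the posterior, which is the mechanism the paper's risk bounds build on.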
Explore First, Exploit Next: The True Shape of Regret in Bandit Problems
We revisit lower bounds on the regret in the case of multiarmed bandit problems. We obtain nonasymptotic, distribution-dependent bounds and provide simple proofs based only on well-known properties of Kullback–Leibler divergences. These bounds show in particular that in the initial phase the regret grows almost linearly, and that the well-known logarithmic growth of the regret only holds in a final phase. The proof techniques come to the essence of the information-theoretic arguments used and they involve no unnecessary complications.
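The distribution-dependent bounds discussed above are expressed through Kullback–Leibler divergences between arm distributions. A quick sketch of the Bernoulli case and the classical Lai–Robbins-style consequence (the arm means and horizon below are illustrative, not taken from the paper): a uniformly good policy must pull a suboptimal arm roughly log(n)/kl(μ, μ*) times.

```python
import math

def kl_bernoulli(p, q):
    """Kullback-Leibler divergence KL(Ber(p) || Ber(q))."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Asymptotic lower bound: a suboptimal arm with mean mu must be pulled
# at least ~ log(n) / kl(mu, mu*) times over n rounds.
mu_star, mu = 0.9, 0.8   # illustrative arm means
n = 10_000
pulls_lower_bound = math.log(n) / kl_bernoulli(mu, mu_star)
print(round(pulls_lower_bound, 1))
```

For small n the quantity log(n)/kl is of the same order as n itself, which is one way to see the near-linear initial regret phase the abstract describes before the logarithmic regime takes over.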
Kernelization Lower Bounds by Cross-Composition
We introduce the framework of cross-composition for proving kernelization lower bounds. A classical problem $L$ AND/OR-cross-composes into a parameterized problem $\mathcal{Q}$ if it is possible to efficiently construct an instance of $\mathcal{Q}$ with polynomially bounded parameter value that expresses the logical AND or OR of a sequence of instances of $L$. Building on work by Bodlaender et al. and using results of Fortnow and Santhanam, Dell and van Melkebeek, and Drucker, we show that if an NP-hard problem AND/OR-cross-composes into a parameterized problem $\mathcal{Q}$, then $\mathcal{Q}$ does not admit a polynomial kernel unless $\mathrm{NP} \subseteq \mathrm{coNP/poly}$ and the polynomial hierarchy collapses. Our technique generalizes and strengthens the techniques of using composition algorithms and of transferring lower bounds via polynomial parameter transformations. We show its applicability by proving kernelization lower bounds for a number of important graph problems with structural (nonstandard) parameterizations: e.g., Clique, Chromatic Number, Weighted Feedback Vertex Set, and Weighted Odd Cycle Transversal do not admit polynomial kernels with respect to the vertex cover number of the input graphs unless the polynomial hierarchy collapses, contrasting the fact that these problems are trivially fixed-parameter tractable for this parameter. We have similar lower bounds for Feedback Vertex Set and Odd Cycle Transversal under structural parameterizations. After learning of our results, several teams of authors have successfully applied the cross-composition framework to different parameterized problems. For completeness, our presentation of the framework includes several extensions based on this follow-up work. For example, we show how a relaxed version of OR-cross-compositions may be used to give lower bounds on the degree of the polynomial in the kernel size.
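A toy illustration of the OR-composition idea underlying the framework above (this is the textbook example for Clique, not the paper's construction): the disjoint union of t graphs has a k-clique iff at least one input graph does, and the parameter k is left unchanged, so it stays polynomially bounded regardless of t.

```python
from itertools import combinations

def has_clique(edges, vertices, k):
    """Brute-force check for a clique of size k (fine for tiny graphs)."""
    return any(all((u, v) in edges or (v, u) in edges
                   for u, v in combinations(group, 2))
               for group in combinations(vertices, k))

def or_compose(instances, k):
    """Disjoint union of graphs: the union has a k-clique iff some
    input graph does - an OR-composition with parameter k unchanged."""
    edges, vertices = set(), []
    for i, (es, vs) in enumerate(instances):
        vertices += [(i, v) for v in vs]
        edges |= {((i, u), (i, v)) for u, v in es}
    return edges, vertices

triangle = ({(0, 1), (1, 2), (0, 2)}, [0, 1, 2])   # has a 3-clique
path = ({(0, 1), (1, 2)}, [0, 1, 2])               # does not

e, v = or_compose([path, triangle], 3)
print(has_clique(e, v, 3))  # True: one input instance has a triangle
```

Cross-composition generalizes this pattern: arbitrarily many classical instances are squeezed into one parameterized instance with a small parameter, which is exactly what a polynomial kernel could not survive.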
An Iterative Algorithm for Maximal and Minimal Solutions of a Class of Matrix Equations
In the paper, the peak solutions of a class of matrix equations are studied, where the peak solutions are the maximal and minimal solutions. An iterative algorithm is given for computing them. First, the existence of the peak solutions of this class of equations is established. Second, when the peak solutions exist, an iterative algorithm is constructed that converges to them. Taking an upper bound and a lower bound of the solutions of the equation as the initial matrices, the iterative algorithm converges to the peak solutions. The convergence of the algorithm is proved by mathematical induction, and the results are verified by examples.
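The strategy in the abstract above — iterate from an upper-bound and a lower-bound starting point and converge to extremal solutions — can be sketched with a scalar stand-in (the paper treats a class of matrix equations; the equation x = sqrt(2 + x) below is only an analogue, and here both limits coincide because the positive solution is unique):

```python
import math

def iterate(f, x0, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration x_{k+1} = f(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Scalar stand-in: x = sqrt(2 + x), positive solution x = 2.
# f is monotone increasing, so a lower-bound start yields an increasing
# sequence and an upper-bound start a decreasing one.
f = lambda x: math.sqrt(2 + x)
from_below = iterate(f, 0.0)    # lower-bound start: increasing to 2
from_above = iterate(f, 10.0)   # upper-bound start: decreasing to 2
print(from_below, from_above)
```

In the matrix setting the two monotone sequences bracket the solution set, which is how the iteration picks out the minimal and maximal solutions respectively.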
Localising two sub-diffraction emitters in 3D using quantum correlation microscopy
The localisation of fluorophores is an important aspect of determining the biological function of cellular systems. Quantum correlation microscopy (QCM) is a promising technique for providing diffraction-unlimited emitter localisation that can be used with either confocal or widefield modalities. However, so far, QCM has not been applied to three-dimensional localisation problems. Here we show that QCM provides diffraction-unlimited three-dimensional localisation for two emitters within a single diffraction-limited spot. By introducing a two-stage maximum likelihood estimator, our modelling shows that the localisation precision scales as 1/t, where t is the total detection time. Diffraction-unlimited localisation is achieved using both intensity and photon correlation from Hanbury Brown and Twiss measurements at as few as four measurement locations. We also compare the results of Monte Carlo (MC) simulations with the Cramér–Rao lower bound.
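The photon-correlation signal from Hanbury Brown and Twiss measurements mentioned above has a standard textbook form: for N equal, independent single-photon emitters the zero-delay second-order correlation is g²(0) = 1 − 1/N. A minimal sketch of that relation (background only — this is not the paper's two-stage estimator):

```python
def g2_equal_emitters(n):
    """Zero-delay second-order correlation g2(0) for n equal,
    independent single-photon emitters (standard HBT result)."""
    return 1.0 - 1.0 / n

def estimate_emitters(g2):
    """Invert the relation to estimate emitter number from g2(0) < 1."""
    return 1.0 / (1.0 - g2)

print(g2_equal_emitters(1))   # 0.0: single emitter, full antibunching
print(g2_equal_emitters(2))   # 0.5: the two-emitter case above
print(estimate_emitters(0.5)) # 2.0
```

The g²(0) = 0.5 signature is what distinguishes two sub-diffraction emitters from one, which intensity measurements alone cannot do inside a single diffraction-limited spot.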
KRW Composition Theorems via Lifting
One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., P ⊈ NC¹). Karchmer et al. (Comput Complex 5(3/4):191–204, 1995) suggested approaching this problem by proving that depth complexity behaves “as expected” with respect to the composition of functions f ◊ g. They showed that the validity of this conjecture would imply that P ⊈ NC¹. Several works have made progress toward resolving this conjecture by proving special cases. In particular, these works proved the KRW conjecture for every outer function f, but only for a few inner functions g. Thus, it is an important challenge to prove the KRW conjecture for a wider range of inner functions. In this work, we significantly extend the range of inner functions that can be handled. First, we consider the monotone version of the KRW conjecture. We prove it for every monotone inner function g whose depth complexity can be lower-bounded via a query-to-communication lifting theorem. This allows us to handle several new and well-studied functions such as the s-t-connectivity, clique, and generation functions. In order to carry this progress back to the non-monotone setting, we introduce a new notion of semi-monotone composition, which combines the non-monotone complexity of the outer function f with the monotone complexity of the inner function g. In this setting, we prove the KRW conjecture for a similar selection of inner functions g, but only for a specific choice of the outer function f.
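The composition f ◊ g in the abstract above takes an m × n bit matrix, applies the inner function g to each row, and feeds the m results to the outer function f; the KRW conjecture says its depth complexity is roughly the sum of the two depths. A quick sketch of the operation itself (XOR and majority are hypothetical example choices, not the paper's functions):

```python
def compose(f, g, rows):
    """Block composition (f <> g): apply g to each row, then f."""
    return f([g(row) for row in rows])

xor = lambda bits: sum(bits) % 2                    # outer function f
maj = lambda bits: int(sum(bits) * 2 > len(bits))   # inner function g

# 3 x 3 input matrix: each row is one input to the inner function.
x = [[1, 1, 0],   # maj -> 1
     [0, 0, 1],   # maj -> 0
     [1, 1, 1]]   # maj -> 1
print(compose(xor, maj, x))  # 0  (xor of 1, 0, 1)
```

The hard part, of course, is not computing the composition but showing that no circuit can shortcut it — i.e., that depth(f ◊ g) ≈ depth(f) + depth(g).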
Improved spectral radius bounds for Schur product via trace and non-diagonal elements
For two nonnegative matrices, we derive an improved lower bound for the spectral radius of the Schur product. Our bound incorporates the trace of the Schur product as well as the non-diagonal elements, leading to a tighter estimate compared to existing results. To validate our theoretical findings, we present numerical experiments demonstrating the effectiveness of the proposed bound.
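A minimal numerical sketch of the objects in the abstract above: the Schur (entrywise) product and a crude trace-based lower bound ρ(M) ≥ tr(M)/n, which holds because the trace is the sum of eigenvalues, each at most ρ(M) in modulus. (This is the coarse classical bound, not the paper's sharper estimate, which also exploits the off-diagonal entries; the matrices are illustrative.)

```python
import math

def schur(A, B):
    """Entrywise (Schur/Hadamard) product of two matrices."""
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def spectral_radius_sym2(M):
    """Spectral radius of a symmetric 2x2 matrix, in closed form."""
    a, b, d = M[0][0], M[0][1], M[1][1]
    mean = (a + d) / 2
    disc = math.sqrt(((a - d) / 2) ** 2 + b * b)
    return max(abs(mean + disc), abs(mean - disc))

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, 2.0], [2.0, 1.0]]
M = schur(A, B)                        # [[2, 2], [2, 3]]
rho = spectral_radius_sym2(M)
trace_bound = (M[0][0] + M[1][1]) / 2  # rho(M) >= tr(M)/n
print(rho, trace_bound)
```

On this example the trace bound is quite loose, which is why bounds that also use the off-diagonal mass, as in the paper, can be much tighter.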
OPTIMAL ADAPTIVITY OF SIGNED-POLYGON STATISTICS FOR NETWORK TESTING
Given a symmetric social network, we are interested in testing whether it has only one community or multiple communities. The desired tests should (a) accommodate severe degree heterogeneity, (b) accommodate mixed memberships, (c) have a tractable null distribution and (d) adapt automatically to different levels of sparsity, and achieve the optimal phase diagram. How to find such a test is a challenging problem. We propose the Signed Polygon as a class of new tests. Fixing m ≥ 3, for each m-gon in the network, define a score using the centered adjacency matrix. The sum of such scores is then the mth order Signed Polygon statistic. The Signed Triangle (SgnT) and the Signed Quadrilateral (SgnQ) are special examples of the Signed Polygon. We show that both the SgnT and SgnQ tests satisfy (a)–(d), and especially, they work well for both very sparse and less sparse networks. Our proposed tests compare favorably with existing tests. For example, the EZ and GC tests behave unsatisfactorily in the less sparse case and do not achieve the optimal phase diagram. Also, many existing tests do not allow for severe heterogeneity or mixed memberships, and they behave unsatisfactorily in our settings. The analysis of the SgnT and SgnQ tests is delicate and extremely tedious, and the main reason is that we need a unified proof that covers a wide range of sparsity levels and a wide range of degree heterogeneity. For lower bound theory, we use a phase transition framework, which includes the standard minimax argument, but is more informative. The proof uses classical theorems on matrix scaling.
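The m = 3 member of the family above, the Signed Triangle, sums products of centered adjacency entries over distinct triples. A simplified sketch (the centering below subtracts the plain empirical edge density; the paper uses a degree-corrected centering suited to severe degree heterogeneity, so this is only the shape of the statistic):

```python
from itertools import permutations

def signed_triangle(A):
    """Signed Triangle statistic with naive centering: subtract the
    empirical edge density eta from each off-diagonal entry and sum
    A'[i][j] * A'[j][k] * A'[k][i] over distinct triples (i, j, k).
    The paper centers with a degree-corrected estimate instead."""
    n = len(A)
    eta = sum(A[i][j] for i in range(n)
              for j in range(n) if i != j) / (n * (n - 1))
    c = lambda i, j: A[i][j] - eta
    return sum(c(i, j) * c(j, k) * c(k, i)
               for i, j, k in permutations(range(n), 3))

# Complete graph on 4 vertices: density is 1, every centered entry is 0,
# so the statistic vanishes - consistent with a "one community" null.
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(signed_triangle(K4))  # 0.0
```

Under the null the centered entries are approximately mean-zero noise and the cycle sum stays small; community structure leaves coherent sign patterns around cycles and inflates the statistic.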
Understanding Hypothetical Bias
The presence of hypothetical bias (HB) associated with stated preference methods has garnered frequent attention in the broad literature trying to describe and understand human behavior, often seen in environmental valuation, marketing studies, transportation choices, medical research, and others. This study presents an updated meta-analysis to explore the source of HB and methods to mitigate it. While previous meta-analyses on this topic often involve a few dozen articles, this analysis includes 131 studies after reviewing over 500 published and unpublished articles. This enables the inclusion of several important factors that have not been investigated before. These include relatively recent willingness-to-pay elicitation methods such as choice experiments and the Turnbull lower bound estimator. Newly emerged HB reduction techniques such as consequentiality and certainty follow-up treatments are also included. For explanatory variables that have been examined in previous studies, this analysis does not always report consistent findings. In particular, holding everything else constant and contrary to commonly held beliefs, the method of auction does not offer much reduction in HB compared to more conventional methods such as a referendum vote. However, choice experiments, cheap talk, consequentiality and certainty follow-ups all significantly contributed to explaining and mitigating the magnitude of HB. These results help practitioners to understand HB’s presence and choose appropriate methods for amelioration. The framework established through this study also enables future analyses targeted at understanding variations built upon one or multiple HB mitigation techniques.
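The Turnbull lower bound estimator mentioned above is a nonparametric lower bound on mean willingness to pay from dichotomous-choice survey data: the probability mass falling in each bid interval is assigned to the interval's lower endpoint. A minimal sketch with illustrative numbers (not data from the study), assuming the empirical "no" shares are already monotone in the bid (otherwise adjacent categories must be pooled first):

```python
def turnbull_lower_bound(bids, cdf_no):
    """Turnbull lower-bound estimate of mean willingness to pay.
    bids: sorted bid levels t_1 < ... < t_J.
    cdf_no: F_j = share answering 'no' at bid t_j (assumed monotone).
    Mass in (t_j, t_{j+1}] is placed at the lower endpoint t_j."""
    ts = [0.0] + list(bids)
    fs = [0.0] + list(cdf_no) + [1.0]
    return sum(ts[j] * (fs[j + 1] - fs[j]) for j in range(len(ts)))

# Illustrative: bids in dollars, share rejecting each bid.
print(turnbull_lower_bound([5, 10, 20], [0.2, 0.5, 0.8]))
```

Because every respondent's WTP is rounded down to the nearest bid below it, the estimate is conservative by construction, which is one reason it features in HB comparisons.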
A CONVEX OPTIMIZATION APPROACH TO HIGH-DIMENSIONAL SPARSE QUADRATIC DISCRIMINANT ANALYSIS
In this paper, we study high-dimensional sparse Quadratic Discriminant Analysis (QDA) and aim to establish the optimal convergence rates for the classification error. Minimax lower bounds are established to demonstrate the necessity of structural assumptions such as sparsity conditions on the discriminating direction and differential graph for the possible construction of consistent high-dimensional QDA rules. We then propose a classification algorithm called SDAR using constrained convex optimization under the sparsity assumptions. Both minimax upper and lower bounds are obtained and this classification rule is shown to be simultaneously rate optimal over a collection of parameter spaces, up to a logarithmic factor. Simulation studies demonstrate that SDAR performs well numerically. The algorithm is also illustrated through an analysis of prostate cancer data and colon tissue data. The methodology and theory developed for high-dimensional QDA for two groups in the Gaussian setting are also extended to multi-group classification and to classification under the Gaussian copula model.
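The QDA rule underlying the abstract above is, at its simplest, a comparison of class-conditional Gaussian log-densities; with unequal covariances the decision boundary is quadratic rather than linear. A one-dimensional sketch with illustrative parameters (the paper's contribution is the sparse high-dimensional setting, which this does not attempt):

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log density of N(mu, sigma^2) at x."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

def qda_classify(x, params, priors):
    """QDA rule: pick the class maximizing log prior + log density.
    With unequal variances the boundary is quadratic in x."""
    scores = [math.log(p) + gaussian_logpdf(x, mu, s)
              for (mu, s), p in zip(params, priors)]
    return scores.index(max(scores))

# Illustrative 1-D parameters: class 0 ~ N(0, 1), class 1 ~ N(3, 4).
params = [(0.0, 1.0), (3.0, 2.0)]
priors = [0.5, 0.5]
print(qda_classify(0.0, params, priors))  # 0
print(qda_classify(3.0, params, priors))  # 1
```

In high dimensions the analogous rule involves the precision-matrix difference (the "differential graph"), and estimating it consistently is exactly where the sparsity assumptions and the minimax lower bounds of the paper enter.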