18 result(s) for "Heinecke, Andreas"
Bayesian splines versus fractional polynomials in network meta-analysis
Background Network meta-analysis (NMA) provides a powerful tool for the simultaneous evaluation of multiple treatments by combining evidence from different studies, allowing for direct and indirect comparisons between treatments. In recent years, NMA has become increasingly popular in the medical literature, and the underlying statistical methodologies are evolving in both the frequentist and Bayesian frameworks. Traditional NMA models are often based on the comparison of two treatment arms per study. These individual studies may measure outcomes at multiple time points that are not necessarily homogeneous across studies. Methods In this article we present a Bayesian model based on B-splines for the simultaneous analysis of outcomes across time points that allows for indirect comparison of treatments across different longitudinal studies. Results We illustrate the proposed approach in simulations as well as on real data examples available in the literature and compare it with a model based on P-splines and one based on fractional polynomials, showing that our approach is flexible and overcomes the limitations of the latter. Conclusions The proposed approach is computationally efficient and able to accommodate a large class of temporal treatment effect patterns, allowing for direct and indirect comparisons of widely varying shapes of longitudinal profiles.
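The B-spline machinery the abstract refers to can be illustrated with a minimal SciPy sketch (my illustration, not the authors' code): build a cubic B-spline design matrix over observation times; in such a longitudinal model, its columns would carry the coefficients of a treatment-effect curve.

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative sketch (SciPy, not the authors' implementation): a cubic
# B-spline design matrix over time points t. In a longitudinal model the
# treatment-effect curve would be f(t) = B @ beta for coefficients beta.
degree = 3
inner = np.linspace(0.0, 1.0, 7)                       # knots on [0, 1]
knots = np.r_[[0.0] * degree, inner, [1.0] * degree]   # clamped knot vector
n_basis = len(knots) - degree - 1                      # 9 basis functions here

t = np.linspace(0.01, 0.99, 50)                        # observation times
B = BSpline.design_matrix(t, knots, degree).toarray()  # shape (50, 9)

# On a clamped knot vector the basis functions form a partition of unity.
print(B.shape, bool(np.allclose(B.sum(axis=1), 1.0)))
```

Local support of the basis keeps the design matrix sparse, which is what makes such spline models computationally cheap relative to fractional polynomials with global terms.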
RIP Sensing Matrices Construction for Sparsifying Dictionaries with Application to MRI Imaging
Practical applications of compressed sensing often restrict the choice of its two main ingredients. They may (i) prescribe the use of particular redundant dictionaries for certain classes of signals to become sparsely represented or (ii) dictate specific measurement mechanisms which exploit certain physical principles. On the problem of RIP measurement matrix design in compressed sensing with redundant dictionaries, we give a simple construction to derive sensing matrices whose compositions with a prescribed dictionary have, with high probability, the RIP in the k log(n/k) regime. Our construction thus provides recovery guarantees usually only attainable for sensing matrices from random ensembles with sparsifying orthonormal bases. Moreover, we use the dictionary factorization idea that our construction rests on in the application of magnetic resonance imaging, in which the sensing matrix is also prescribed by quantum mechanical principles. We propose a recovery algorithm based on transforming the acquired measurements such that the compressed sensing theory for RIP embeddings can be utilized to recover wavelet coefficients of the target image, and show its performance on examples from the fastMRI dataset.
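The RIP in the k log(n/k) regime can be illustrated empirically for the classical Gaussian ensemble (a standard textbook fact, not the paper's dictionary construction): with a number of measurements on the order of k log(n/k), a Gaussian matrix approximately preserves the norm of every k-sparse vector.

```python
import numpy as np

# Empirical illustration (Gaussian ensemble, not the paper's construction):
# with m ~ k*log(n/k) rows, a Gaussian matrix approximately preserves the
# norm of k-sparse vectors -- the property the RIP makes precise.
rng = np.random.default_rng(0)
n, k = 256, 5
m = int(np.ceil(4 * k * np.log(n / k)))         # ~ k log(n/k) measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # scaled so E||Phi x||^2 = ||x||^2

ratios = []
for _ in range(500):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

print(round(min(ratios), 2), round(max(ratios), 2))  # both close to 1
```

The point of the paper's construction is to retain this kind of guarantee when Phi is composed with a redundant dictionary rather than an orthonormal basis.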
Finite Elements in Ordered Banach Spaces with Positive Bases
We characterize finite elements, an order-theoretic concept in Archimedean vector lattices, in the setting of ordered Banach spaces with positive unconditional basis as vectors having finite support with respect to their basis representations. Using algebraic vector space bases, we further describe a class of infinite dimensional vector lattices in which each element is finite and even self-majorizing.
RIP sensing matrices construction for sparsifying dictionaries with application to MRI imaging
Practical applications of compressed sensing often restrict the choice of its two main ingredients. They may (i) prescribe using particular redundant dictionaries for certain classes of signals to become sparsely represented, or (ii) dictate specific measurement mechanisms which exploit certain physical principles. On the problem of RIP measurement matrix design in compressed sensing with redundant dictionaries, we give a simple construction to derive sensing matrices whose compositions with a prescribed dictionary have, with high probability, the RIP in the k log(n/k) regime. Our construction thus provides recovery guarantees usually only attainable for sensing matrices from random ensembles with sparsifying orthonormal bases. Moreover, we use the dictionary factorization idea that our construction rests on in the application of magnetic resonance imaging, in which the sensing matrix is also prescribed by quantum mechanical principles. We propose a recovery algorithm based on transforming the acquired measurements such that the compressed sensing theory for RIP embeddings can be utilized to recover wavelet coefficients of the target image, and show its performance on examples from the fastMRI dataset.
Cohesion and Repulsion in Bayesian Distance Clustering
Clustering in high dimensions poses many statistical challenges. While traditional distance-based clustering methods are computationally feasible, they lack probabilistic interpretation and rely on heuristics for estimating the number of clusters. On the other hand, probabilistic model-based clustering techniques often fail to scale, and devising algorithms that are able to effectively explore the posterior space is an open problem. Based on recent developments in Bayesian distance-based clustering, we propose a hybrid solution that entails defining a likelihood on pairwise distances between observations. The novelty of the approach consists in including both cohesion and repulsion terms in the likelihood, which allows for cluster identifiability. This implies that clusters are composed of objects which have small "dissimilarities" among themselves (cohesion) and large dissimilarities to observations in other clusters (repulsion). We show how this modelling strategy has interesting connections with existing proposals in the literature as well as a decision-theoretic interpretation. The proposed method is computationally efficient and applicable to a wide variety of scenarios. We demonstrate the approach in a simulation study and an application in digital numismatics.
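A toy sketch of the cohesion-plus-repulsion idea (the gamma densities and their parameters are illustrative choices of mine, not the paper's specification): within-cluster distances are scored under a density concentrated near zero, between-cluster distances under one favoring larger values, so a correct partition attains a higher distance likelihood than a shuffled one.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import gamma

def log_distance_likelihood(X, labels, coh=(1.0, 0.5), rep=(4.0, 1.0)):
    """Toy distance likelihood: within-cluster distances scored under a
    gamma density favoring small values (cohesion), between-cluster
    distances under one favoring larger values (repulsion). The gamma
    shapes/scales are illustrative, not the paper's model."""
    D = squareform(pdist(X))
    n = len(labels)
    ll = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j]:
                ll += gamma.logpdf(D[i, j], a=coh[0], scale=coh[1])
            else:
                ll += gamma.logpdf(D[i, j], a=rep[0], scale=rep[1])
    return ll

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)), rng.normal(3.0, 0.3, (10, 2))])
true = np.repeat([0, 1], 10)
shuffled = rng.permutation(true)

# The true two-cluster partition should attain the higher likelihood.
print(log_distance_likelihood(X, true) > log_distance_likelihood(X, shuffled))
```

Because the likelihood depends on the data only through pairwise distances, it sidesteps modelling the observations themselves, which is what makes the approach feasible in high dimensions.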
Unsupervised Statistical Learning for Die Analysis in Ancient Numismatics
Die analysis is an essential numismatic method, and an important tool of ancient economic history. Yet, manual die studies are too labor-intensive to comprehensively study large coinages such as those of the Roman Empire. We address this problem by proposing a model for unsupervised computational die analysis, which can reduce the time investment necessary for large-scale die studies by several orders of magnitude, in many cases from years to weeks. From a computer vision viewpoint, die studies present a challenging unsupervised clustering problem, because they involve an unknown and large number of highly similar semantic classes of imbalanced sizes. We address these issues by determining dissimilarities between coin faces derived from specifically devised Gaussian process-based keypoint features in a Bayesian distance clustering framework. The efficacy of our method is demonstrated through an analysis of 1135 Roman silver coins struck between 64 and 66 C.E.
Spectral Tetris Fusion Frame Constructions
Spectral tetris is a flexible and elementary method to construct unit norm frames with a given frame operator having all of its eigenvalues greater than or equal to two. One important application of spectral tetris is the construction of fusion frames. We first show how the assumption on the spectrum of the frame operator can be dropped and extend the spectral tetris algorithm to construct unit norm frames with any given spectrum of the frame operator. We then provide a sufficient condition for using this generalization of spectral tetris to construct fusion frames with prescribed spectrum for the fusion frame operator and with prescribed dimensions for the subspaces. This condition is shown to be necessary in the tight case of redundancy greater than two.
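The basic tight-frame case of Spectral Tetris (all eigenvalues of the frame operator equal to n/m, with n >= 2m) can be sketched in a few lines. This is my reconstruction of the standard algorithm, not the authors' code: each row of the synthesis matrix is filled to squared norm n/m using standard basis columns and, when a fractional amount remains, a 2x2 block shared with the next row.

```python
import numpy as np

def spectral_tetris(m, n):
    """Sketch of basic Spectral Tetris (my reconstruction): an m x n
    synthesis matrix whose n columns form a sparse unit norm tight frame
    for R^m, assuming n >= 2m so every eigenvalue n/m of the frame
    operator is at least two."""
    if n < 2 * m:
        raise ValueError("basic Spectral Tetris needs n >= 2m")
    lam = n / m            # required squared norm of every row
    T = np.zeros((m, n))
    k = 0                  # next unused column
    carry = 0.0            # weight a 2x2 block pre-placed in this row
    tol = 1e-9
    for i in range(m):
        rem = lam - carry
        carry = 0.0
        while rem > tol:
            if rem >= 1.0 - tol:
                T[i, k] = 1.0          # drop in a standard basis column
                rem -= 1.0
                k += 1
            else:
                # Finish the row with a 2x2 block; its columns have unit
                # norm and its two rows are orthogonal by construction.
                a, b = np.sqrt(rem / 2.0), np.sqrt(1.0 - rem / 2.0)
                T[i, k:k + 2] = [a, a]
                T[i + 1, k:k + 2] = [b, -b]
                carry = 2.0 - rem      # weight already placed in row i + 1
                k += 2
                rem = 0.0
    return T

T = spectral_tetris(4, 11)
# Columns are unit norm and T @ T.T = (11/4) * I, i.e. a tight frame.
print(bool(np.allclose(np.linalg.norm(T, axis=0), 1.0)),
      bool(np.allclose(T @ T.T, (11 / 4) * np.eye(4))))
```

Every column touches at most two rows, which is the sparsity that makes the construction attractive and that the "Optimally Sparse Frames" entry below proves optimal.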
A quantitative notion of redundancy for infinite frames
Bodmann, Casazza and Kutyniok introduced quantitative notions of redundancy for finite frames, which they called upper and lower redundancies, that match better with an intuitive understanding of redundancy for finite frames in a Hilbert space. The objective of this paper is to see how much of this theory generalizes to infinite frames.
Necessary and sufficient conditions to perform Spectral Tetris
Spectral Tetris has proved to be a powerful tool for constructing sparse equal norm Hilbert space frames. We introduce a new form of Spectral Tetris which works for non-equal norm frames. It is known that this method cannot construct all frames, even in the new case introduced here. Until now, it has been a mystery why Spectral Tetris sometimes works and sometimes fails. We give a complete answer to this mystery by giving necessary and sufficient conditions for Spectral Tetris to construct frames in all cases, including equal norm frames, prescribed norm frames, frames with constant spectrum of the frame operator, and frames with prescribed spectrum for the frame operator. We present a variety of examples as well as special cases where Spectral Tetris always works.
Optimally Sparse Frames
Frames have established themselves as a means to derive redundant, yet stable decompositions of a signal for analysis or transmission, while also promoting sparse expansions. However, when the signal dimension is large, the computation of the frame measurements of a signal typically requires a large number of additions and multiplications, and this makes a frame decomposition intractable in applications with a limited computing budget. To address this problem, in this paper we focus on frames in finite-dimensional Hilbert spaces and introduce sparsity for such frames as a new paradigm. In our terminology, a sparse frame is a frame whose elements have a sparse representation in an orthonormal basis, thereby enabling low-complexity frame decompositions. To give a precise meaning to optimality, we take as sparsity measure the total number of vectors of this orthonormal basis needed to expand all frame vectors. We then analyze the recently introduced Spectral Tetris algorithm for the construction of unit norm tight frames and prove that the tight frames generated by this algorithm are in fact optimally sparse with respect to the standard unit vector basis. Finally, we show that even the generalization of Spectral Tetris for the construction of unit norm frames associated with a given frame operator produces optimally sparse frames.