196 results for "Multidimensional array"
Parsimonious Tensor Response Regression
Motivated by the abundance of scientific and engineering data that combine high dimensionality with complex structure, we study the regression problem with a multidimensional array (tensor) response and a vector predictor. Applications include, among others, comparing tensor images across groups after adjusting for additional covariates, which is of central interest in neuroimaging analysis. We propose parsimonious tensor response regression adopting a generalized sparsity principle. It models all voxels of the tensor response jointly, while accounting for the inherent structural information among the voxels. It effectively reduces the number of free parameters, leading to feasible computation and improved interpretation. We achieve model estimation through a nascent technique called the envelope method, which identifies the immaterial information and bases the estimation on the material information in the tensor response. We demonstrate that the resulting estimator is asymptotically efficient and enjoys competitive finite-sample performance. We also illustrate the new method on two real neuroimaging studies. Supplementary materials for this article are available online.
Tensor Regression with Applications in Neuroimaging Data Analysis
Classical regression methods treat covariates as a vector and estimate a corresponding vector of regression coefficients. Modern applications in medical imaging generate covariates of more complex form such as multidimensional arrays (tensors). Traditional statistical and computational methods are proving insufficient for analysis of these high-throughput data due to their ultrahigh dimensionality as well as complex structure. In this article, we propose a new family of tensor regression models that efficiently exploit the special structure of tensor covariates. Under this framework, ultrahigh dimensionality is reduced to a manageable level, resulting in efficient estimation and prediction. A fast and highly scalable estimation algorithm is proposed for maximum likelihood estimation and its associated asymptotic properties are studied. Effectiveness of the new methods is demonstrated on both synthetic and real MRI imaging data. Supplementary materials for this article are available online.
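The dimension reduction this abstract describes can be illustrated with a small sketch (not the authors' code): imposing a rank-R CP structure on the coefficient tensor replaces the full parameter set with a few small factor matrices. All sizes and names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 64 x 64 x 64 image covariate, CP rank R = 3.
dims, R = (64, 64, 64), 3

# One n_d x R factor matrix per mode; the coefficient tensor is
# B = sum_r beta1[:, r] o beta2[:, r] o beta3[:, r].  We materialize B
# here only to form the linear predictor <B, X>.
factors = [rng.standard_normal((n, R)) for n in dims]
B = np.einsum('ir,jr,kr->ijk', *factors)

X = rng.standard_normal(dims)        # one tensor covariate
linear_pred = np.sum(B * X)          # tensor inner product <B, X>

# Parameter counts: unstructured tensor vs. rank-R CP structure.
full_params = int(np.prod(dims))     # 262144 free parameters
cp_params = R * sum(dims)            # 576 free parameters
print(full_params, cp_params)
```

The reduction from 262,144 to 576 parameters is what makes estimation feasible at imaging resolutions.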
Third-Order Tensors as Operators on Matrices: A Theoretical and Computational Framework with Applications in Imaging
Recent work by Kilmer and Martin [Linear Algebra Appl., 435 (2011), pp. 641--658] and Braman [Linear Algebra Appl., 433 (2010), pp. 1241--1253] provides a setting in which the familiar tools of linear algebra can be extended to better understand third-order tensors. Continuing along this vein, this paper investigates further implications including (1) a bilinear operator on the matrices which is nearly an inner product and which leads to definitions for length of matrices, angle between two matrices, and orthogonality of matrices, and (2) the use of t-linear combinations to characterize the range and kernel of a mapping defined by a third-order tensor and the t-product and the quantification of the dimensions of those sets. These theoretical results lead to the study of orthogonal projections as well as an effective Gram--Schmidt process for producing an orthogonal basis of matrices. The theoretical framework also leads us to consider the notion of tensor polynomials and their relation to tensor eigentuples defined in the recent article by Braman. Implications for extending basic algorithms such as the power method, QR iteration, and Krylov subspace methods are discussed. We conclude with two examples in image processing: using the orthogonal elements generated via a Golub--Kahan iterative bidiagonalization scheme for object recognition and solving a regularized image deblurring problem.
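The t-product underlying this framework is concrete enough to sketch: it multiplies two third-order tensors by taking an FFT along the tube (third) mode, multiplying the frontal faces independently, and inverting the FFT. This is a minimal NumPy illustration of that standard construction, not the authors' implementation.

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors via FFT along the third mode.

    A: (n1, n2, n3), B: (n2, m, n3) -> C: (n1, m, n3).  Equivalent to
    block-circulant matrix multiplication, but O(n3 log n3) in the tube
    dimension instead of forming the circulant explicitly.
    """
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    # Multiply the frontal faces independently in the Fourier domain.
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.fft.ifft(Ch, axis=2).real

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3, 5))
B = rng.standard_normal((3, 2, 5))
C = t_product(A, B)
print(C.shape)  # (4, 2, 5)
```

Because the t-product behaves like matrix multiplication, the identity tensor (identity matrix in the first frontal slice, zeros elsewhere) acts as a multiplicative identity, which is what makes notions like orthogonality and QR-type factorizations carry over.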
Dynamic Tensor Clustering
Dynamic tensor data are becoming prevalent in numerous applications. Existing tensor clustering methods either fail to account for the dynamic nature of the data, or are inapplicable to a general-order tensor. There is also a gap between statistical guarantee and computational efficiency for existing tensor clustering solutions. In this article, we propose a new dynamic tensor clustering method that works for a general-order dynamic tensor, and enjoys both strong statistical guarantee and high computational efficiency. Our proposal is based on a new structured tensor factorization that encourages both sparsity and smoothness in parameters along the specified tensor modes. Computationally, we develop a highly efficient optimization algorithm that benefits from substantial dimension reduction. Theoretically, we first establish a nonasymptotic error bound for the estimator from the structured tensor factorization. Built upon this error bound, we then derive the rate of convergence of the estimated cluster centers, and show that the estimated clusters recover the true cluster structures with high probability. Moreover, our proposed method can be naturally extended to co-clustering of multiple modes of the tensor data. The efficacy of our method is illustrated through simulations and a brain dynamic functional connectivity analysis from an autism spectrum disorder study. Supplementary materials for this article are available online.
An Order-$p$ Tensor Factorization with Applications in Imaging
Operations with tensors, or multiway arrays, are increasingly prevalent in many applications involving multiway data analysis. This paper extends a third-order factorization strategy and tensor operations defined in a recent paper [M. E. Kilmer and C. D. Martin, Linear Algebra Appl., 435 (2011), pp. 641--658] to order-$p$ tensors. The extension to order-$p$ tensors is explained in a recursive way but for computational speed is implemented directly using the fast Fourier transform. A major motivation for considering factorization strategies for order-$p$ tensors is to devise new types of algorithms for general order-$p$ tensors which can be used in applications. We conclude with two applications in imaging. The first application is image deblurring, and the second application is video facial recognition. Both applications involve order-4 tensors.
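The FFT-based route the abstract mentions generalizes naturally in code: transform along every mode beyond the first two, multiply faces, and invert. The sketch below (an illustration under those assumptions, not the paper's implementation) applies it to order-4 tensors, as in the paper's imaging applications.

```python
import numpy as np

def t_product_p(A, B):
    """Order-p t-product: FFT along every mode past the first two,
    facewise multiply, inverse FFT.  A: (n1, n2, *rest), B: (n2, m, *rest).
    """
    higher = tuple(range(2, A.ndim))
    Ah = np.fft.fftn(A, axes=higher)
    Bh = np.fft.fftn(B, axes=higher)
    # Ellipsis broadcasting multiplies each frontal face independently.
    Ch = np.einsum('ij...,jk...->ik...', Ah, Bh)
    return np.fft.ifftn(Ch, axes=higher).real

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3, 5, 6))   # order-4, like the paper's
B = rng.standard_normal((3, 2, 5, 6))   # deblurring/recognition examples
C = t_product_p(A, B)
print(C.shape)  # (4, 2, 5, 6)
```

For `A.ndim == 3` this reduces to the third-order t-product, which is why the recursive definition and the direct FFT implementation agree.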
Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart-Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-$r$ approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step toward more general results. To this end, we present a detailed analysis of equivalence classes of $2 \times 2 \times 2$ tensors, and we develop methods for extending results upward to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular, we make extensive use of the hyperdeterminant $\Delta$ on $\mathbb{R}^{2 \times 2 \times 2}$.
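The failure mode this abstract describes can be seen numerically with the standard textbook example: a rank-3 tensor that is a limit of rank-2 tensors, so the infimum of the rank-2 approximation error is 0 but is never attained. The sketch below constructs such a sequence.

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

def outer3(x, y, z):
    """Rank-1 third-order tensor x o y o z."""
    return np.einsum('i,j,k->ijk', x, y, z)

# T = a o a o b + a o b o a + b o a o a has tensor rank 3 ...
T = outer3(a, a, b) + outer3(a, b, a) + outer3(b, a, a)

# ... yet it is the limit of this sequence of rank-<=2 tensors, so no
# best rank-2 approximation of T exists (the error infimum 0 is never hit).
for n in [1, 10, 100, 1000]:
    c = a + b / n
    Tn = n * outer3(c, c, c) - n * outer3(a, a, a)
    print(n, np.linalg.norm(Tn - T))   # error shrinks like 1/n
```

Expanding $n\,(a + b/n)^{\otimes 3} - n\,a^{\otimes 3}$ gives $T$ plus terms of order $1/n$, which is exactly the divergence-of-factors behavior that makes the best rank-2 problem ill-posed.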
Tensor Generalized Estimating Equations for Longitudinal Imaging Analysis
Longitudinal neuroimaging studies are becoming increasingly prevalent, where brain images are collected on multiple subjects at multiple time points. Analyses of such data are scientifically important, but also challenging. Brain images are in the form of multidimensional arrays, or tensors, which are characterized by both ultrahigh dimensionality and a complex structure. Longitudinally repeated images and induced temporal correlations add a further layer of complexity. Despite some recent efforts, there exist very few solutions for longitudinal imaging analyses. In response to the increasing need to analyze longitudinal imaging data, we propose several tensor generalized estimating equations (GEEs). The proposed GEE approach accounts for intra-subject correlation, and an imposed low-rank structure on the coefficient tensor effectively reduces the dimensionality. We also propose a scalable estimation algorithm, establish the asymptotic properties of the solution to the tensor GEEs, and investigate sparsity regularization for the purpose of region selection. We demonstrate the proposed method using simulations and by analyzing a real data set from the Alzheimer’s Disease Neuroimaging Initiative.
Tensor Envelope Partial Least-Squares Regression
Partial least squares (PLS) is a prominent solution for dimension reduction and high-dimensional regressions. Recent prevalence of multidimensional tensor data has led to several tensor versions of the PLS algorithms. However, none offers a population model and interpretation, and statistical properties of the associated parameters remain intractable. In this article, we first propose a new tensor partial least-squares algorithm, then establish the corresponding population interpretation. This population investigation allows us to gain new insight on how the PLS achieves effective dimension reduction, to build connection with the notion of sufficient dimension reduction, and to obtain the asymptotic consistency of the PLS estimator. We compare our method, both analytically and numerically, with some alternative solutions. We also illustrate the efficacy of the new method on simulations and two neuroimaging data analyses. Supplementary materials for this article are available online.
Tensor Mixed Effects Model With Application to Nanomanufacturing Inspection
The Raman mapping technique has been used to perform in-line quality inspections of nanomanufacturing processes. In such an application, massive high-dimensional Raman mapping data with mixed effects are generated. In general, fixed effects and random effects in the multi-array Raman data are associated with different quality characteristics such as fabrication consistency, uniformity, and defects. Existing tensor decomposition methods cannot separate mixed effects, and existing mixed-effects models can handle only matrix data, not high-dimensional multi-array data. In this article, we propose a tensor mixed effects (TME) model to analyze massive high-dimensional Raman mapping data with complex structure. The proposed TME model can (i) separate fixed effects and random effects in a tensor domain; (ii) explore the correlations along different dimensions; and (iii) realize efficient parameter estimation via a proposed iterative double Flip-Flop algorithm. We also investigate the properties of the TME model, including the existence and identifiability of the parameter estimates. The numerical analysis demonstrates the efficiency and accuracy of parameter estimation in the TME model. Convergence and asymptotic properties are discussed in the simulation and surrogate data analysis. The case study shows an application of the TME model in quantifying the influence of alignment on carbon nanotube buckypaper. Moreover, the TME model can be applied to provide potential solutions for a family of tensor data analytics problems with mixed effects.