Search Results

77 result(s) for "De Lathauwer, Lieven"
Canonical Polyadic Decomposition of Third-Order Tensors: Reduction to Generalized Eigenvalue Decomposition
Canonical polyadic decomposition (CPD) of a third-order tensor is its decomposition into a minimal number of rank-1 tensors. We call an algorithm algebraic if it is guaranteed to find the decomposition when it is exact and if it relies only on standard linear algebra (essentially sets of linear equations and matrix factorizations). The known algebraic algorithms for the computation of the CPD are limited to cases where at least one of the factor matrices has full column rank. In this paper we present an algebraic algorithm for the computation of the CPD in cases where none of the factor matrices has full column rank. In particular, we show that if the famous Kruskal condition holds, then the CPD can be found algebraically.
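The reduction to a generalized eigenvalue decomposition is easiest to see in the classical setting where two factor matrices have full column rank. The sketch below is my own NumPy illustration with arbitrary names and sizes: it builds a rank-3 tensor and recovers one factor matrix from the eigenvectors of a pencil formed by two frontal slices. The paper's contribution, handling the case where no factor matrix has full column rank, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 3                                  # CP rank
A = rng.standard_normal((R, R))        # factor matrices (square, invertible)
B = rng.standard_normal((R, R))
C = rng.standard_normal((4, R))

# Build T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Two frontal slices: T[:, :, k] = A @ diag(C[k, :]) @ B.T
T0, T1 = T[:, :, 0], T[:, :, 1]

# T0 @ inv(T1) = A @ diag(C[0] / C[1]) @ inv(A): eigenvectors ~ columns of A
eigvals, eigvecs = np.linalg.eig(T0 @ np.linalg.inv(T1))

# Each eigenvector should be parallel to some column of A (up to scale/order)
cos = np.abs(eigvecs.T @ A) / (
    np.linalg.norm(eigvecs, axis=0)[:, None] * np.linalg.norm(A, axis=0))
print(np.round(cos, 3))   # values near 1 in a permutation pattern indicate recovery
```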
Optimization-Based Algorithms for Tensor Decompositions: Canonical Polyadic Decomposition, Decomposition in Rank-$(L_r,L_r,1)$ Terms, and a New Generalization
The canonical polyadic and rank-$(L_r,L_r,1)$ block term decomposition (CPD and BTD, respectively) are two closely related tensor decompositions. The CPD and, more recently, the BTD are important tools in psychometrics, chemometrics, neuroscience, and signal processing. We present a decomposition that generalizes these two and develop algorithms for its computation. Among these algorithms are alternating least squares schemes, several general unconstrained optimization techniques, and matrix-free nonlinear least squares methods. In the latter, we exploit the structure of the Jacobian's Gramian to reduce computational and memory cost. Numerical experiments confirm that, combined with an effective preconditioner, these methods are among the most efficient and robust currently available for computing the CPD, the rank-$(L_r,L_r,1)$ BTD, and their generalization.
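As a point of reference for the alternating least squares schemes mentioned in the abstract, here is a minimal NumPy sketch of ALS for a third-order CPD. It is an illustration only, not the paper's matrix-free nonlinear least squares method, and all function names and sizes are made up.

```python
import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker product: (I*J) x R, row index i*J + j
    I, R = U.shape
    J, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

def als_cpd(T, R, n_iter=200, seed=0):
    """One possible ALS loop for a third-order CPD; illustration only."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    T1 = T.reshape(I, J * K)                      # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)   # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Quick check on a synthetic rank-3 tensor: the relative error should typically be small
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 3)) for d in (5, 6, 7))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = als_cpd(T, R=3)
print(np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T))
```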
Unconstrained Optimization of Real Functions in Complex Variables
Nonlinear optimization problems in complex variables are frequently encountered in applied mathematics and engineering applications such as control theory, signal processing, and electrical engineering. Optimization of these problems often requires a first- or second-order approximation of the objective function to generate a new step or descent direction. However, such methods cannot be applied to real functions of complex variables because they are necessarily nonanalytic in their argument, i.e., the Taylor series expansion in their argument alone does not exist. To overcome this problem, the objective function is usually redefined as a function of the real and imaginary parts of its complex argument so that standard optimization methods can be applied. However, this approach may needlessly disguise any inherent structure present in the derivatives of such complex problems. Although little known, it is possible to construct an expansion of the objective function in its original complex variables by noting that functions of complex variables can be analytic in their argument and its complex conjugate as a whole. We use these complex Taylor series expansions to generalize existing optimization algorithms for both general nonlinear optimization problems and nonlinear least squares problems. We then apply these methods to two case studies, which demonstrate that complex derivatives can lead to greater insight into the structure of the problem, and that this structure can often be exploited to improve computational complexity and storage cost.
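To illustrate the idea of optimizing a real function of complex variables without splitting it into real and imaginary parts, here is a small sketch that uses the derivative with respect to the conjugate variable (Wirtinger-style) as the descent direction for the quadratic f(z) = ||Az - b||^2. This is an assumed toy example, not one of the paper's case studies.

```python
import numpy as np

# Toy problem: minimize f(z) = ||A z - b||^2 over complex z.
# Treating f as a function of (z, conj(z)), the descent direction is
# -df/dconj(z) = -A^H (A z - b), with no real/imaginary splitting needed.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) + 1j * rng.standard_normal((20, 5))
b = rng.standard_normal(20) + 1j * rng.standard_normal(20)

z = np.zeros(5, dtype=complex)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # safe step size for this quadratic
for _ in range(500):
    grad_conj = A.conj().T @ (A @ z - b)      # df/dconj(z)
    z = z - step * grad_conj

# Compare against the least squares solution; the difference should be tiny
z_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(z - z_ls))
```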
Simulation Study of the Localization of a Near-Surface Crack Using an Air-Coupled Ultrasonic Sensor Array
The importance of Non-Destructive Testing (NDT) to check the integrity of materials in different fields of industry has increased significantly in recent years. In practice, industry demands NDT methods that allow fast (preferably non-contact) detection and localization of early-stage defects with easy-to-interpret results, so that even a non-expert field worker can carry out the testing. The main challenge is to combine as many of these requirements into one single technique. The concept of acoustic cameras, developed for low frequency NDT, meets most of the above-mentioned requirements. These cameras make use of an array of microphones to visualize noise sources by estimating the Direction Of Arrival (DOA) of the impinging sound waves. Until now, however, because of limitations in the frequency range and the lack of integrated nonlinear post-processing, acoustic camera systems have never been used for the localization of incipient damage. The goal of the current paper is to numerically investigate the capabilities of locating incipient damage by measuring the nonlinear airborne emission of the defect using a non-contact ultrasonic sensor array. We will consider a simple case of a sample with a single near-surface crack and show that, after efficient excitation of the defective sample, the nonlinear defect responses can be detected by a uniform linear sensor array. These responses are then used to determine the location of the defect by means of three different DOA algorithms. The results obtained in this study can be considered as a first step towards the development of a nonlinear ultrasonic camera system, comprising the ultrasonic sensor array as the hardware and nonlinear post-processing and source localization software.
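For orientation, a minimal direction-of-arrival sketch with a uniform linear array is given below. It uses conventional (delay-and-sum) beamforming on synthetic narrowband snapshots; the array geometry, wavelength, source angle, and noise level are assumptions for illustration and do not come from the simulation study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_snapshots = 8, 200
wavelength = 1.0
spacing = wavelength / 2.0                  # half-wavelength element spacing
true_doa = np.deg2rad(25.0)                 # assumed direction of the source

def steering(theta):
    # Array response of a uniform linear array for a plane wave from angle theta
    n = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * n * np.sin(theta) / wavelength)

# Narrowband snapshots: array response times a random source signal, plus noise
s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
X = np.outer(steering(true_doa), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / n_snapshots            # sample covariance matrix

# Scan candidate angles and pick the one with maximum output power a^H R a
grid = np.deg2rad(np.linspace(-90, 90, 721))
power = [np.real(steering(th).conj() @ R @ steering(th)) for th in grid]
print('estimated DOA:', np.rad2deg(grid[int(np.argmax(power))]), 'degrees')
```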
Decompositions of a Higher-Order Tensor in Block Terms—Part II: Definitions and Uniqueness
In this paper we introduce a new class of tensor decompositions. Intuitively, we decompose a given tensor block into blocks of smaller size, where the size is characterized by a set of mode-$n$ ranks. We study different types of such decompositions. For each type we derive conditions under which essential uniqueness is guaranteed. The parallel factor decomposition and Tucker's decomposition can be considered as special cases in the new framework. The paper sheds new light on fundamental aspects of tensor algebra.
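A small NumPy sketch of the idea (my own illustration, not from the paper): a tensor built as a sum of block terms, each a small core tensor multiplied by a factor matrix in every mode. Shrinking the cores to 1x1x1 recovers the parallel factor model, while a single block gives a Tucker decomposition, the two special cases mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K = 6, 7, 8
ranks = [(2, 2, 2), (3, 2, 1)]             # mode-n ranks of the two blocks

T = np.zeros((I, J, K))
for (L, M, N) in ranks:
    G = rng.standard_normal((L, M, N))     # core tensor of this block
    A = rng.standard_normal((I, L))        # mode-1 factor
    B = rng.standard_normal((J, M))        # mode-2 factor
    C = rng.standard_normal((K, N))        # mode-3 factor
    # block term: G x1 A x2 B x3 C
    T += np.einsum('lmn,il,jm,kn->ijk', G, A, B, C)

print(T.shape)   # (6, 7, 8): a sum of two block terms of low mode-n rank
```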
Exploiting Generalized Cyclic Symmetry to Find Fast Rectangular Matrix Multiplication Algorithms Easier
The quest to multiply two large matrices as fast as possible is one that has already intrigued researchers for several decades. However, the ‘optimal’ algorithm for a certain problem size is still not known. The fast matrix multiplication (FMM) problem can be formulated as a non-convex optimization problem—more specifically, as a challenging tensor decomposition problem. In this work, we build upon a state-of-the-art augmented Lagrangian algorithm, which formulates the FMM problem as a constrained least squares problem, by incorporating a new, generalized cyclic symmetric (CS) structure in the decomposition. This structure decreases the number of variables, thereby reducing the large search space and the computational cost per iteration. The constraints are used to find practical solutions, i.e., decompositions with simple coefficients, which yield fast algorithms when implemented in hardware. For the FMM problem, usually a very large number of starting points are necessary to converge to a solution. Extensive numerical experiments for different problem sizes demonstrate that including this structure yields more ‘unique’ practical decompositions for a fixed number of starting points. Uniqueness is defined relative to the known scale and trace invariance transformations that hold for all FMM decompositions. Making it easier to find practical decompositions may lead to the discovery of faster FMM algorithms when used in combination with sufficient computational power. Lastly, we show that the CS structure reduces the cost of multiplying a matrix by itself.
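The tensor formulation mentioned above can be made concrete with a short sketch (an assumed example, not the paper's augmented Lagrangian algorithm): construct the matrix multiplication tensor and check that contracting it with vectorized inputs reproduces ordinary matrix multiplication. A rank-R CP decomposition of this tensor corresponds to an algorithm using R scalar multiplications; Strassen's algorithm corresponds to rank 7 for the 2 x 2 case.

```python
import numpy as np

def matmul_tensor(m, k, n):
    # Tensor encoding the bilinear map (A, B) -> A @ B for A (m x k), B (k x n)
    T = np.zeros((m * k, k * n, m * n))
    for i in range(m):
        for l in range(k):
            for j in range(n):
                # C[i, j] += A[i, l] * B[l, j]
                T[i * k + l, l * n + j, i * n + j] = 1.0
    return T

m, k, n = 2, 3, 4
T = matmul_tensor(m, k, n)

# Sanity check: contracting T with vec(A) and vec(B) reproduces vec(A @ B)
rng = np.random.default_rng(0)
A = rng.standard_normal((m, k))
B = rng.standard_normal((k, n))
vec_C = np.einsum('xyz,x,y->z', T, A.reshape(-1), B.reshape(-1))
print(np.allclose(vec_C, (A @ B).reshape(-1)))   # True
```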
Block term decomposition for modelling epileptic seizures
Recordings of neural activity, such as EEG, are an inherent mixture of different ongoing brain processes as well as artefacts, and are typically characterised by a low signal-to-noise ratio. Moreover, EEG datasets are often inherently multidimensional, comprising information in time, along different channels, subjects, trials, etc. Additional information may be conveyed by expanding the signal into even more dimensions, e.g. by incorporating spectral features via a wavelet transform. The underlying sources might show differences in each of these modes. Therefore, tensor-based blind source separation techniques, which can extract the sources of interest from such multiway arrays while simultaneously exploiting the signal characteristics in all dimensions, have gained increasing interest. Canonical polyadic decomposition (CPD) has been successfully used to extract epileptic seizure activity from wavelet-transformed EEG data (Bioinformatics 23(13):i10–i18, 2007; NeuroImage 37:844–854, 2007), where each source is described by a rank-1 tensor, i.e. by the combination of one particular temporal, spectral and spatial signature. However, in certain scenarios, where the seizure pattern is nonstationary, such a trilinear signal model is insufficient. Here, we present the application of a recently introduced technique, called block term decomposition (BTD), to separate EEG tensors into rank-$(L_r,L_r,1)$ terms, allowing more variability in the data to be modelled than is possible with CPD. In a simulation study, we investigate the robustness of BTD against noise and different choices of model parameters. Furthermore, we show various real EEG recordings where BTD outperforms CPD in capturing complex seizure characteristics.
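To make the rank-$(L_r,L_r,1)$ model concrete, the sketch below builds a synthetic tensor from such terms. This is an illustration only; the sizes and signatures are arbitrary and do not represent real EEG data.

```python
import numpy as np

# Each source is modelled by a rank-L_r matrix (here a time x frequency
# signature) paired with a single spatial vector, rather than the single
# rank-1 triple used by the CPD.
rng = np.random.default_rng(0)
n_time, n_freq, n_chan = 100, 30, 21
sources = [2, 3]                                  # L_r for each of two sources

T = np.zeros((n_time, n_freq, n_chan))
for L in sources:
    A = rng.standard_normal((n_time, L))          # temporal signatures
    B = rng.standard_normal((n_freq, L))          # spectral signatures
    c = rng.standard_normal(n_chan)               # spatial signature (one vector)
    T += np.einsum('tl,fl,c->tfc', A, B, c)       # (A @ B.T) outer c

# With L_r = 1 each term collapses to a rank-1 tensor and the model is a CPD.
print(T.shape)
```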