183 results for "Dimension (vector space)"
Quantum similarity and QSPR in Euclidean-, and Minkowskian–Banach spaces
This paper first describes how Euclidean and Minkowskian–Banach spaces are related via the definition of a metric, or signature, vector. It then discusses how these spaces can be generated using homothecies of the unit sphere or shell, a possibility that allows a process of dimension condensation in such spaces to be proposed. Dimension condensation accounts for the incompleteness of classical QSPR procedures, independently of whether the algorithm used is statistically based or related to AI neural networks. Next, a quantum QSPR framework within Minkowskian vector spaces is discussed. A well-defined set of general isometric vectors is then proposed and connected to the set of molecular density functions generating the quantum similarity metric matrix. A convenient quantum QSPR algorithm emerges from this Minkowskian mathematical structure and isometry.
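As a point of reference for the signature-vector construction mentioned in this abstract, here is a standard sketch of a signature-weighted inner product; the notation g and the diagonal form are conventional assumptions, not taken from the paper itself.

```latex
% A signature vector g = (g_1, ..., g_n), g_i in {+1, -1},
% selects the metric on R^n:
\[
  \langle x, y \rangle_{g} \;=\; \sum_{i=1}^{n} g_i \, x_i \, y_i ,
  \qquad g_i \in \{+1, -1\}.
\]
% All g_i = +1 recovers the Euclidean inner product; a mixed
% signature, e.g. g = (+1, -1, \dots, -1), gives a Minkowskian one.
```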
Some Standard Examples of Vector Spaces
In this article, using the Mizar system, we introduce some standard examples of vector spaces, e.g., the vector space of linear transformations between two vector spaces. We formulate conditions for the isomorphism of finite-dimensional vector spaces and prove that linear transformations are uniquely determined by their values on a basis.
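For context, the classical facts this abstract alludes to can be sketched as follows; the exact Mizar formalization may differ.

```latex
% Finite-dimensional vector spaces over the same field F are
% isomorphic exactly when their dimensions agree:
\[
  V \cong W \iff \dim_F V = \dim_F W .
\]
% A linear map is determined by its values on a basis: if
% (e_1, \dots, e_n) is a basis of V and T(e_i) = S(e_i) for all i,
% then T = S on all of V.
```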
A primer on mapping class groups (Princeton mathematical series)
The study of the mapping class group Mod(S) is a classical topic that is experiencing a renaissance. It lies at the juncture of geometry, topology, and group theory. This book explains as many important theorems, examples, and techniques as possible, quickly and directly, while at the same time giving full details and keeping the text nearly self-contained. The book is suitable for graduate students.
Descent in Buildings (AM-190)
Descent in Buildings begins with the resolution of a major open question about the local structure of Bruhat-Tits buildings. The authors then put their algebraic solution into a geometric context by developing a general fixed point theory for groups acting on buildings of arbitrary type, giving necessary and sufficient conditions for the residues fixed by a group to form a kind of subbuilding or "form" of the original building. At the center of this theory is the notion of a Tits index, a combinatorial version of the notion of an index in the relative theory of algebraic groups. These results are combined at the end to show that every exceptional Bruhat-Tits building arises as a form of a "residually pseudo-split" Bruhat-Tits building. The book concludes with a display of the Tits indices associated with each of these exceptional forms. This is the third and final volume of a trilogy that began with Richard Weiss' The Structure of Spherical Buildings and The Structure of Affine Buildings.
Matrices, Moments and Quadrature with Applications
This computationally oriented book describes and explains the mathematical relationships among matrices, moments, orthogonal polynomials, quadrature rules, and the Lanczos and conjugate gradient algorithms. The book bridges different mathematical areas to obtain algorithms to estimate bilinear forms involving two vectors and a function of the matrix. The first part of the book provides the necessary mathematical background and explains the theory. The second part describes the applications and gives numerical examples of the algorithms and techniques developed in the first part. Applications addressed in the book include computing elements of functions of matrices; obtaining estimates of the error norm in iterative methods for solving linear systems and computing parameters in least squares and total least squares; and solving ill-posed problems using Tikhonov regularization. This book will interest researchers in numerical linear algebra and matrix computations, as well as scientists and engineers working on problems involving computation of bilinear forms.
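To make the central technique concrete, here is a minimal Python sketch of the Lanczos-quadrature idea the book develops: estimating the bilinear form u^T f(A) u for a symmetric matrix A by evaluating f on a small tridiagonal Jacobi matrix. The function names and the choice f = exp are illustrative assumptions, not the book's own code.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_bilinear_form(A, u, k=10, f=expm):
    """Estimate u^T f(A) u for symmetric A via k Lanczos steps."""
    beta0 = np.linalg.norm(u)
    q = u / beta0
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q_prev = np.zeros(len(u))
    b = 0.0
    for j in range(k):
        w = A @ q - b * q_prev          # three-term Lanczos recurrence
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j < k - 1:
            b = np.linalg.norm(w)
            if b < 1e-12:               # invariant subspace found; stop early
                k = j + 1
                alpha, beta = alpha[:k], beta[:k - 1]
                break
            beta[j] = b
            q_prev, q = q, w / b
    # Tridiagonal Jacobi matrix assembled from the recurrence coefficients.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # e1^T f(T) e1 is the Gauss-quadrature estimate of the bilinear form.
    return beta0**2 * f(T)[0, 0]

# Usage: compare against the dense answer on a small random problem.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = (M + M.T) / 10                      # symmetric test matrix, modest spectrum
u = rng.standard_normal(100)
print(lanczos_bilinear_form(A, u, k=20), u @ expm(A) @ u)
```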
Foundations of algebraic topology
The book description for "Foundations of Algebraic Topology" is currently unavailable.
Discriminant component analysis for privacy protection and visualization of big data
Big data has many divergent types of sources, from physical (sensor/IoT) to social and cyber (web) types, rendering it messy, imprecise, and incomplete. Due to its quantitative (volume and velocity) and qualitative (variety) challenges, big data resembles, to its users, "the elephant to the blind men". A major paradigm shift in data mining and learning tools is needed so that information from diversified sources can be integrated to unravel the information hidden in massive and messy big data, so that, metaphorically speaking, the blind men can "see" the elephant. This talk addresses yet another vital "V" paradigm: "Visualization". Visualization tools are meant to supplement (not replace) domain expertise (e.g. that of a cardiologist) and to provide a big picture that helps users formulate critical questions and subsequently postulate heuristic and insightful answers.

For big data, the curse of high feature dimensionality raises grave concerns about computational complexity and over-training. This talk explores various projection methods for dimension reduction, a prelude to visualization of vectorial and non-vectorial data. A popular visualization tool for unsupervised learning is Principal Component Analysis (PCA). PCA aims at the best recoverability of the original data in the Euclidean Vector Space (EVS); however, it is not effective for supervised and collaborative learning environments. Discriminant Component Analysis (DCA), essentially a supervised PCA, can be derived via the notion of a Canonical Vector Space (CVS). The signal-subspace components of DCA are associated with the discriminant distance/power (related to classification effectiveness), while the noise-subspace components are tightly coupled with recoverability and/or privacy protection. DCA enjoys two major merits. First, because the rank of the signal subspace is limited by the number of classes, DCA can effectively support classification with a relatively small dimensionality (i.e. high compression). Second, in DCA the eigenvalues of the noise space are ordered according to their corresponding reconstruction errors and can thus be used to control recoverability or anti-recoverability by applying a negative or positive ridge, respectively. Via DCA, individual data can be highly compressed before being uploaded to the cloud, better enabling privacy protection.

In many practical scenarios, additional privacy protection can be incorporated by allowing individual participants to selectively hide some personal features. The classification of masked data calls for a Kernel Approach to Incomplete Data Analysis (KAIDA). More specifically, we extend PCA/DCA to their kernel variants. The success of kernel machines hinges on the kernel function adopted to characterize the similarity of pairs of partially specified vectors. Simulations on the HAR dataset confirm that DCA far outperforms PCA, in both their conventional and kernelized variants. For the latter, the visualization/classification results suggest favorable performance for the proposed partial-correlation kernels over the imputed RBF kernel. The visualization results further point to a potentially promising approach via multiple kernels, such as combining an imputed Gaussian RBF kernel and a non-imputed partial-correlation kernel.
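As a rough illustration of the DCA idea sketched in this abstract, the following Python snippet assumes DCA reduces to a ridge-regularized scatter-matrix eigenproblem (between-class scatter against regularized total scatter); all function names and parameter values are illustrative, not from the talk.

```python
import numpy as np

def dca_projection(X, y, n_components=2, rho=1e-3):
    """Discriminant components of X (n_samples x n_features)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    Xc = X - mu
    S_total = Xc.T @ Xc                      # total scatter
    S_b = np.zeros_like(S_total)             # between-class scatter
    for c in classes:
        Xk = X[y == c]
        d = (Xk.mean(axis=0) - mu)[:, None]
        S_b += len(Xk) * (d @ d.T)
    # Signal subspace: leading eigenvectors of (S_total + rho I)^{-1} S_b.
    # Its rank is at most (number of classes - 1), as the abstract notes.
    M = np.linalg.solve(S_total + rho * np.eye(X.shape[1]), S_b)
    eigvals, eigvecs = np.linalg.eig(M)
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:n_components]].real
    return Xc @ W                            # projected (visualizable) data

# Usage: project a toy 3-class dataset down to 2D for visualization.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 1.0, (50, 10)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 50)
print(dca_projection(X, y).shape)            # (150, 2)
```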
Frechet Differentiability of Lipschitz Functions and Porous Sets in Banach Spaces (AM-179)
This book makes a significant inroad into the unexpectedly difficult question of existence of Fréchet derivatives of Lipschitz maps of Banach spaces into higher dimensional spaces. Because the question turns out to be closely related to porous sets in Banach spaces, it provides a bridge between descriptive set theory and the classical topic of existence of derivatives of vector-valued Lipschitz functions. The topic is relevant to classical analysis and descriptive set theory on Banach spaces. The book opens several new research directions in this area of geometric nonlinear functional analysis. The new methods developed here include a game approach to perturbational variational principles that is of independent interest. Detailed explanation of the underlying ideas and motivation behind the proofs of the new results on Fréchet differentiability of vector-valued functions should make these arguments accessible to a wider audience. The most important special case of the differentiability results, that Lipschitz mappings from a Hilbert space into the plane have points of Fréchet differentiability, is given its own chapter with a proof that is independent of much of the work done to prove more general results. The book raises several open questions concerning its two main topics.
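For reference, the notion at the center of the book can be stated as a standard sketch; this is the textbook definition, not a result specific to this volume.

```latex
% f: X -> Y (Banach spaces) is Fréchet differentiable at x if there
% exists a bounded linear map T: X -> Y with
\[
  \lim_{\|h\|_X \to 0}
  \frac{\| f(x+h) - f(x) - T h \|_{Y}}{\| h \|_{X}} \;=\; 0 ,
\]
% in which case Df(x) = T. The book concerns when Lipschitz maps f
% admit such points x at all.
```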
Topics in Quaternion Linear Algebra
Quaternions are a number system that has become increasingly useful for representing the rotations of objects in three-dimensional space and has important applications in theoretical and applied mathematics, physics, computer science, and engineering. This is the first book to provide a systematic, accessible, and self-contained exposition of quaternion linear algebra. It features previously unpublished research results with complete proofs and many open problems at various levels, as well as more than 200 exercises to facilitate use by students and instructors. Applications presented in the book include numerical ranges, invariant semidefinite subspaces, differential equations with symmetries, and matrix equations. Designed for researchers and students across a variety of disciplines, the book can be read by anyone with a background in linear algebra, rudimentary complex analysis, and some multivariable calculus. Instructors will find it useful as a complementary text for undergraduate linear algebra courses or as a basis for a graduate course in linear algebra. The open problems can serve as research projects for undergraduates, topics for graduate students, or problems to be tackled by professional research mathematicians. The book is also an invaluable reference tool for researchers in fields where techniques based on quaternion analysis are used.
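As a small illustration of the rotation application mentioned in the blurb, here is a Python sketch of rotating a 3-D vector by a unit quaternion via the standard sandwich product v' = q v q*; the helper names are illustrative assumptions.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, angle):
    """Rotate vector v about a unit axis by angle (radians)."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    v_quat = np.concatenate([[0.0], v])      # embed v as a pure quaternion
    return quat_mul(quat_mul(q, v_quat), q_conj)[1:]

# Usage: a quarter turn about the z-axis sends (1,0,0) to ~(0,1,0).
print(rotate(np.array([1.0, 0.0, 0.0]), [0, 0, 1], np.pi / 2))
```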
Topic Modeling: A Comprehensive Review
Topic modeling is a new revolution in text mining: a statistical technique for revealing the underlying semantic structure in a large collection of documents. After analysing approximately 300 research articles on topic modeling, this paper presents a comprehensive survey of the field. It covers a classification hierarchy, topic modeling methods, posterior inference techniques, the different evolution models of latent Dirichlet allocation (LDA), and applications in areas of technology including scientific literature, bioinformatics, software engineering, and social-network analysis. A detailed quantitative evaluation of topic modeling techniques is also presented for a better understanding of the concept. The paper concludes with a detailed discussion of the challenges of topic modeling, which should give researchers insight for further research.
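As a concrete companion to the survey's subject matter, here is a minimal LDA example using scikit-learn's standard API; the toy corpus and parameter values are illustrative assumptions, not drawn from the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "gene expression and protein sequence analysis",
    "software testing and bug localization in source code",
    "protein folding prediction with sequence models",
    "code review practices in software engineering",
]

# Bag-of-words counts are the usual input to LDA.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)        # per-document topic mixtures

# Print the top words of each inferred topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:4]
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```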