Catalogue Search | MBRL
14,380 result(s) for "Factorization"
Factor Separation in the Atmosphere : Applications and Future Prospects
"Modeling atmospheric processes in order to forecast the weather or future climate change is an extremely complex and computationally intensive undertaking. One of the main difficulties is that there are a huge number of factors that need to be taken into account, some of which are still poorly understood. The Factor Separation (FS) method is a computational procedure that helps deal with these nonlinear factors. In recent years many scientists have applied FS methodology to a range of modeling problems, including paleoclimatology, limnology, regional climate change, rainfall analysis, cloud modeling, pollution, crop growth, and other forecasting applications. This book is the first to describe the fundamentals of the method, and to bring together its many applications in the atmospheric sciences. The main audience is researchers and graduate students using the FS method, but it is also of interest to advanced students, researchers, and professionals across the atmospheric sciences" -- Provided by publisher.
Baryonic and mesonic 3-point functions with open spin indices
2018
We have implemented a new way of computing three-point correlation functions. It is based on a factorization of the entire correlation function into two parts, which are evaluated with open spin (and, to some extent, flavor) indices. This allows us to estimate the two contributions simultaneously for many different initial and final states and momenta, with little computational overhead. We explain this factorization as well as its efficient implementation in a new library, written to provide the necessary functionality on modern parallel architectures and on CPUs, including Intel's Xeon Phi series.
Journal Article
La conjecture de Manin pour une famille de variétés en dimension supérieure
2019
Inspired by a method of La Bretèche relying on unique factorisation, we generalise work of Blomer, Brüdern and Salberger to prove Manin's conjecture, in the strong form conjectured by Peyre, for an infinite family of varieties of higher dimension. The varieties under consideration in this paper correspond to the singular projective varieties defined by the equation
$$
x_1 y_2 y_3 \cdots y_n + x_2 y_1 y_3 \cdots y_n + \cdots + x_n y_1 y_2 \cdots y_{n-1} = 0
$$
in $\mathbb{P}^{2n-1}_{\mathbb{Q}}$ for all $n \geqslant 3$. This paper comes with an Appendix by Per Salberger constructing a crepant resolution of the above varieties.
Journal Article
Proof of the 1-factorization and Hamilton Decomposition Conjectures
by Lo, Allan; Kühn, Daniela; Osthus, Deryk
in Decomposition (Mathematics); Factorization (Mathematics)
2016
In this paper the authors prove the following results (via a unified approach) for all sufficiently large $n$: (i) [1-factorization conjecture] Suppose that $n$ is even and $D \geq 2\lceil n/4 \rceil - 1$. Then every $D$-regular graph $G$ on $n$ vertices has a decomposition into perfect matchings. Equivalently, $\chi'(G) = D$. (ii) [Hamilton decomposition conjecture] Suppose that $D \geq \lfloor n/2 \rfloor$. Then every $D$-regular graph $G$ on $n$ vertices has a decomposition into Hamilton cycles and at most one perfect matching. (iii) [Optimal packings of Hamilton cycles] Suppose that $G$ is a graph on $n$ vertices with minimum degree $\delta \geq n/2$. Then $G$ contains at least $\mathrm{reg}_{\mathrm{even}}(n,\delta)/2 \geq (n-2)/8$ edge-disjoint Hamilton cycles. Here $\mathrm{reg}_{\mathrm{even}}(n,\delta)$ denotes the degree of the largest even-regular spanning subgraph one can guarantee in a graph on $n$ vertices with minimum degree $\delta$. (i) was first explicitly stated by Chetwynd and Hilton. (ii) and the special case $\delta = \lceil n/2 \rceil$ of (iii) answer questions of Nash-Williams from 1970. All of the above bounds are best possible.
Explainable recommendations with nonnegative matrix factorization
2023
Explainable recommendation systems have been shown to improve the persuasiveness of recommendations, enabling users to trust the system more and make better-informed decisions. Nonnegative Matrix Factorization (NMF) produces interpretable solutions for many applications, including collaborative filtering, due to its nonnegativity. However, the latent features make it difficult to interpret recommendation results for users, because we do not know the specific meaning of the features that users are interested in, or the extent to which items or users belong to these features. To overcome this difficulty, we develop a novel method called Partially Explainable Nonnegative Matrix Factorization (PE-NMF), which employs explicit data to replace part of the latent variables of the item-feature matrix, so that users can learn more about the features of the items and make better decisions. The objective function of PE-NMF is composed of two parts: one corresponding to explicit features and the other to implicit features. We develop an iterative method to minimize the objective function and derive the iterative update rules, under which the objective function can be proved to be decreasing. Finally, experiments are executed on the Yelp, Amazon, and Dianping datasets, and the results demonstrate that PE-NMF maintains high prediction performance on both rating prediction and top-N recommendation compared to fully explainable nonnegative matrix factorization (FE-NMF), which is obtained by using explicit opinions instead of the item-feature matrix. PE-NMF also retains almost the same recommendation ability as NMF.
Journal Article
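The PE-NMF abstract above mentions iterative update rules under which the objective provably decreases. As background, here is a minimal sketch of classical NMF with Lee-Seung multiplicative updates, which have the same decreasing-objective property (assuming NumPy; the function name and toy data are illustrative, and the paper's actual PE-NMF updates are not reproduced here):

```python
import numpy as np

def nmf_multiplicative(V, rank, iters=300, seed=0, eps=1e-9):
    """Classical NMF via Lee-Seung multiplicative updates for squared error:
    minimizes ||V - W H||_F^2 with W, H >= 0. Each update rescales entries by
    a nonnegative ratio, so the factors stay nonnegative and the objective is
    non-increasing."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# toy check: approximate a random rank-2 nonnegative matrix
rng = np.random.default_rng(1)
V = rng.random((8, 2)) @ rng.random((2, 6))
W, H = nmf_multiplicative(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The eps terms guard against division by zero when a factor entry collapses to zero.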
Generalized Canonical Polyadic Tensor Decomposition
by Hong, David; Duersch, Jed A.; Kolda, Tamara G.
in Bernoulli tensor factorization; CANDECOMP; Canonical polyadic (CP) tensor decomposition
2020
Tensor decomposition is a fundamental unsupervised machine learning method in data science, with applications including network analysis and sensor data processing. This work develops a generalized canonical polyadic (GCP) low-rank tensor decomposition that allows other loss functions besides squared error. For instance, we can use logistic loss or Kullback-Leibler divergence, enabling tensor decomposition for binary or count data. We present a variety of statistically motivated loss functions for various scenarios. We provide a generalized framework for computing gradients and handling missing data that enables the use of standard optimization methods for fitting the model. We demonstrate the flexibility of the GCP decomposition on several real-world examples including interactions in a social network, neural activity in a mouse, and monthly rainfall measurements in India.
Journal Article
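The GCP abstract above describes swapping squared error for statistically motivated losses and fitting via standard gradient methods. Here is a minimal rank-1 sketch under a Bernoulli (logit-link) loss for binary tensors, assuming NumPy (function names and toy data are illustrative, not the authors' code):

```python
import numpy as np

def bernoulli_loss(X, a, b, c):
    """Sum of elementwise losses f(x, m) = log(1 + e^m) - x*m over the tensor."""
    M = np.einsum('i,j,k->ijk', a, b, c)
    return float(np.sum(np.logaddexp(0.0, M) - X * M))

def gcp_rank1_bernoulli(X, iters=500, lr=0.1, seed=0):
    """Fit a rank-1 CP model to a binary 3-way tensor under Bernoulli loss.

    Model: logit P(X_ijk = 1) = a_i * b_j * c_k. The elementwise loss
    derivative w.r.t. the model value m is sigmoid(m) - x; contracting it
    against the other two factors (chain rule) gives the factor gradients.
    """
    rng = np.random.default_rng(seed)
    a, b, c = (0.3 * rng.standard_normal(s) for s in X.shape)
    for _ in range(iters):
        M = np.einsum('i,j,k->ijk', a, b, c)       # current model tensor
        G = 1.0 / (1.0 + np.exp(-M)) - X           # dloss/dM, elementwise
        ga = np.einsum('ijk,j,k->i', G, b, c)      # gradient w.r.t. a
        gb = np.einsum('ijk,i,k->j', G, a, c)      # gradient w.r.t. b
        gc = np.einsum('ijk,i,j->k', G, a, b)      # gradient w.r.t. c
        a, b, c = a - lr * ga, b - lr * gb, c - lr * gc
    return a, b, c

# toy check: X is the sign pattern of a rank-1 logit tensor
u = np.array([1.0, 1.0, -1.0, -1.0])
X = (np.einsum('i,j,k->ijk', u, u, u) > 0).astype(float)
a, b, c = gcp_rank1_bernoulli(X)
final_loss = bernoulli_loss(X, a, b, c)  # compare to 64*log(2) for the all-zero model
```

Swapping `bernoulli_loss` and the derivative line for squared error recovers ordinary CP fitting; that plug-in structure is the point of the generalized framework.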
Algorithms for nonnegative matrix and tensor factorizations: a unified view based on block coordinate descent framework
2014
We review algorithms developed for nonnegative matrix factorization (NMF) and nonnegative tensor factorization (NTF) from a unified view based on the block coordinate descent (BCD) framework. NMF and NTF are low-rank approximation methods for matrices and tensors in which the low-rank factors are constrained to have only nonnegative elements. The nonnegativity constraints have been shown to enable natural interpretations and allow better solutions in numerous applications including text analysis, computer vision, and bioinformatics. However, the computation of NMF and NTF remains challenging and expensive due to the constraints. Numerous algorithmic approaches have been proposed to efficiently compute NMF and NTF. The BCD framework in constrained nonlinear optimization readily explains the theoretical convergence properties of several efficient NMF and NTF algorithms, which are consistent with experimental observations reported in the literature. In addition, we discuss algorithms that do not fit in the BCD framework, contrasting them with those based on the BCD framework. With insights acquired from the unified perspective, we also propose efficient algorithms for updating NMF when there is a small change in the reduced dimension or in the data. The effectiveness of the proposed updating algorithms is validated experimentally with synthetic and real-world data sets.
Journal Article
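One widely used algorithm that the BCD framework explains is HALS, where each block is a single row of H or column of W and the nonnegativity-constrained subproblem has a closed-form solution. A minimal sketch assuming NumPy (names and toy data are illustrative):

```python
import numpy as np

def nmf_hals(V, rank, iters=200, seed=0, eps=1e-12):
    """NMF via HALS, a block coordinate descent scheme: each block update is
    the unconstrained least-squares minimizer for that row/column, projected
    onto the nonnegative orthant."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        WtV, WtW = W.T @ V, W.T @ W
        for j in range(rank):   # BCD sweep over rows of H
            H[j] = np.maximum(H[j] + (WtV[j] - WtW[j] @ H) / (WtW[j, j] + eps), 0.0)
        VHt, HHt = V @ H.T, H @ H.T
        for j in range(rank):   # BCD sweep over columns of W
            W[:, j] = np.maximum(W[:, j] + (VHt[:, j] - W @ HHt[:, j]) / (HHt[j, j] + eps), 0.0)
    return W, H

# toy check: recover a random rank-2 nonnegative matrix
rng = np.random.default_rng(1)
V = rng.random((8, 2)) @ rng.random((2, 6))
W, H = nmf_hals(V, rank=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because each block solve is exact, HALS typically converges in far fewer sweeps than multiplicative updates, which matches the convergence behavior the BCD analysis predicts.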
Optimization of identifiability for efficient community detection
2020
Many physical and social systems are best described by networks, and the structural properties of these networks often critically determine the properties and function of the resulting mathematical models. An important method to infer the correlations between topology and function is the detection of community structure, which plays a key role in the analysis, design, and optimization of many complex systems. Nonnegative matrix factorization has been used prolifically to that effect in recent years, although it cannot guarantee balanced partitions, and it does not allow a proactive computation of the number of communities in a network. This indicates that nonnegative matrix factorization does not satisfy all the conditions of a nonnegative low-rank approximation. Here we show how to resolve this important open problem by optimizing the identifiability of community structure. We propose a new form of nonnegative matrix decomposition and a probabilistic surrogate learning function that can be solved according to the majorization-minimization principle. Extensive in silico tests on artificial and real-world data demonstrate efficient performance in community detection, regardless of the size and complexity of the network.
Journal Article
Weakly-supervised Semantic Guided Hashing for Social Image Retrieval
2020
Hashing has been widely investigated for large-scale image retrieval due to its search effectiveness and computational efficiency. In this work, we propose a novel Semantic Guided Hashing method coupled with binary matrix factorization to perform more effective nearest-neighbor image search by simultaneously exploring the weakly-supervised rich community-contributed information and the underlying data structures. To uncover the underlying semantic information from the weakly-supervised user-provided tags, the binary matrix factorization model is leveraged for learning the binary features of images while the problem of imperfect tags is well addressed. The uncovered semantic information can then guide the discrete hash code learning. The underlying data structures are discovered by adaptively learning a discriminative data graph, which makes the learned hash codes preserve meaningful neighbors. To the best of our knowledge, the proposed method is the first to incorporate hash code learning, semantic information mining, and data structure discovery into one unified framework. In addition, the proposed method is extended to a deep approach for optimal compatibility of discriminative feature learning and hash code learning. Experiments are conducted on two widely-used social image datasets, and the proposed method achieves encouraging performance compared with state-of-the-art hashing methods.
Journal Article
Spectral and Matrix Factorization Methods for Consistent Community Detection in Multi-layer Networks
2020
We consider the problem of estimating a consensus community structure by combining information from multiple layers of a multi-layer network, using methods based on spectral clustering or low-rank matrix factorization. As a general theme, these "intermediate fusion" methods involve obtaining a low column-rank matrix by optimizing an objective function and then using the columns of the matrix for clustering. However, the theoretical properties of these methods remain largely unexplored. In the absence of statistical guarantees on the objective functions, it is difficult to determine whether the algorithms optimizing the objectives will return good community structures. We investigate the consistency properties of the global optimizer of some of these objective functions under the multi-layer stochastic blockmodel. For this purpose, we derive several new asymptotic results showing consistency of the intermediate fusion techniques, along with the spectral clustering of the mean adjacency matrix, under a high-dimensional setup where the number of nodes, the number of layers, and the number of communities of the multi-layer graph grow. Our numerical study shows that the intermediate fusion techniques outperform late fusion methods, namely spectral clustering on the aggregate spectral kernel and the module allegiance matrix, in sparse networks, while they outperform the spectral clustering of the mean adjacency matrix in multi-layer networks that contain layers with both homophilic and heterophilic communities.
Journal Article
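As a concrete illustration of the mean-adjacency baseline discussed in the abstract above, here is a minimal two-community sketch assuming NumPy (names and toy matrices are illustrative; as the abstract notes, this baseline can fail when layers mix homophilic and heterophilic communities):

```python
import numpy as np

def mean_adjacency_communities(layers):
    """Two-community baseline: average the layer adjacency matrices and split
    nodes by the sign of the eigenvector belonging to the second-largest
    eigenvalue of the mean adjacency matrix."""
    A = np.mean(layers, axis=0)        # (n, n) mean adjacency across layers
    vals, vecs = np.linalg.eigh(A)     # eigenvalues in ascending order
    return (vecs[:, -2] > 0).astype(int)

# toy check: three homophilic layers over the same two planted blocks of 4 nodes,
# with within-block weight p varying per layer and between-block weight q = 0.1
n, q = 8, 0.1
block = np.zeros((n, n))
block[:4, :4] = 1.0
block[4:, 4:] = 1.0
layers = [q + (p - q) * block for p in (0.8, 0.9, 1.0)]
labels = mean_adjacency_communities(layers)
```

For these weighted toy layers the mean matrix has a simple second eigenvalue whose eigenvector is constant on each block, so the sign split recovers the planted partition exactly.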