400 result(s) for "Matrix partitioning"
Properties of the matrix from Kronecker product on the representation of quaternion group
This paper discusses the properties of matrices that are elements of a group derived from applying the Kronecker product to the representation of the quaternion group (the author calls this the Kronecker quaternion group). The properties of new matrices constructed by using matrices from the Kronecker quaternion group as submatrices of a partitioned matrix are discussed with respect to the transpose, the determinant, and permutation matrices.
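As a hedged illustration of the transpose and determinant properties the abstract mentions, the sketch below builds one element of such a group with NumPy's Kronecker product. The 2×2 generator matrices are one standard representation choice and need not match the paper's construction:

```python
import numpy as np

# A standard 2x2 complex representation of two quaternion-group generators
# (an assumption; the paper's exact representation may differ).
i_mat = np.array([[1j, 0], [0, -1j]])
j_mat = np.array([[0, 1], [-1, 0]])

assert np.allclose(i_mat @ j_mat, -(j_mat @ i_mat))  # quaternion relation ij = -ji

# An element of the "Kronecker quaternion group" via the Kronecker product.
A = np.kron(i_mat, j_mat)

# Transpose distributes over the Kronecker product: (P ⊗ Q)^T = P^T ⊗ Q^T.
assert np.allclose(A.T, np.kron(i_mat.T, j_mat.T))

# det(P ⊗ Q) = det(P)^m · det(Q)^n for P (n×n) and Q (m×m); here n = m = 2.
assert np.isclose(np.linalg.det(A),
                  np.linalg.det(i_mat) ** 2 * np.linalg.det(j_mat) ** 2)
```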
Innovative Bayesian and Parsimony Phylogeny of Dung Beetles (Coleoptera, Scarabaeidae, Scarabaeinae) Enhanced by Ontology-Based Partitioning of Morphological Characters
Scarabaeine dung beetles are the dominant dung-feeding group of insects and are widely used as model organisms in conservation, ecology and developmental biology. Due to the conflicts among 13 recently published phylogenies dealing with the higher-level relationships of dung beetles, the phylogeny of this lineage remains largely unresolved. In this study, we conduct rigorous phylogenetic analyses of dung beetles, based on an unprecedented taxon sample (110 taxa) and detailed investigation of morphology (205 characters). We provide the description of morphology and thoroughly illustrate the characters used. Along with parsimony, traditionally used in the analysis of morphological data, we also apply the Bayesian method with a novel approach that uses anatomy ontology for matrix partitioning. This approach allows for heterogeneity in evolutionary rates among characters from different anatomical regions. Anatomy ontology generates a number of parameter-partition schemes which we compare using Bayes factor. We also test the effect of inclusion of autapomorphies in the morphological analysis, which hitherto has not been examined. Generally, schemes with more parameters were favored in the Bayesian comparison, suggesting that characters located on different body regions evolve at different rates and that partitioning of the data matrix using anatomy ontology is reasonable; however, trees from the parsimony and all the Bayesian analyses were quite consistent. The hypothesized phylogeny reveals many novel clades and provides additional support for some clades recovered in previous analyses. Our results provide a solid basis for a new classification of dung beetles, in which the taxonomic limits of the tribes Dichotomiini, Deltochilini and Coprini are restricted and many new tribes must be described.
Based on the consistency of the phylogeny with biogeography, we speculate that dung beetles may have originated in the Mesozoic contrary to the traditional view pointing to a Cenozoic origin.
A Clustering Algorithm for Tunnel Boring Machine Data Based on Ridge Regression and Fuzzy C-Means
A fuzzy clustering data partitioning method based on ridge regression is proposed to address the high correlation between the data attributes of the tunnel boring machine. The method utilizes the fuzzy partition matrix as the weight for ridge regression analysis and then employs the resulting regression equation as the clustering model to iteratively approach the target minimum value. This process enables data clustering based on the relevant characteristics of the data attributes. Experimental results using functional data sets demonstrate that this method achieves higher accuracy in clustering functional correlation data compared to traditional methods. Additionally, the proposed algorithm is evaluated using measured data from a bid section of a city subway. Results indicate that the algorithm effectively addresses the problem of data classification and key performance index prediction, with a misclassification rate and prediction error of 23.8% and 2.11%, respectively. These findings meet the engineering requirements and provide support for the subsequent data analysis of the tunnel boring machine.
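The core step described above, using fuzzy membership grades as sample weights in a ridge regression, can be sketched as follows. All data, the membership vector `u`, and the penalty `lam` are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Weighted ridge regression sketch: fuzzy partition entries act as sample
# weights. Data and parameters are synthetic, for illustration only.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))
y = X @ np.array([2.0, -1.0])            # noiseless targets for the demo
u = rng.uniform(0.1, 1.0, size=30)       # fuzzy partition entries as weights
lam = 1e-3                               # ridge penalty

W = np.diag(u)
w = np.linalg.solve(X.T @ W @ X + lam * np.eye(2), X.T @ W @ y)
assert np.allclose(w, [2.0, -1.0], atol=1e-2)  # weighted ridge recovers the fit
```

In the paper's algorithm this fit is iterated inside the fuzzy clustering loop; here a single closed-form solve is shown.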
Analysis of dynamic transmission of HPV with reproduction numbers R0
In this paper, we discuss how the human papillomavirus (HPV) spreads through the public. We analyze the behavior of the infectious disease in SIR, SEIR, and multistrain models. HPV has many subtypes, some high-risk and some low-risk. Unfortunately, many people in some areas are not aware of it, which causes delays in treatment and aggravation of the disease. Because the harm caused by HPV differs among groups of people, people take various measures according to the severity. We divided them into three cases: case 1, negative, for people who may have caught HPV; case 2, negative, for people who may have caught HPV but excluding exposures; case 3, negative, for people who are sure to catch HPV. We then compute the disease-free equilibrium (DFE) to obtain the values needed to calculate the reproduction number R0 and estimate the stability of each case. Finally, the derivatives and the partitioned matrix are obtained, and the reproduction number is computed from the Jacobian matrix to decide the response to the epidemic.
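The DFE-to-R0 step described in the abstract follows the standard next-generation-matrix method, in which the Jacobian at the disease-free equilibrium is partitioned into a new-infection block F and a transition block V. A sketch for an SEIR-type model (illustrative rates, not the paper's HPV parameters):

```python
import numpy as np

# Next-generation-matrix sketch for an SEIR-type model. R0 is the spectral
# radius of F @ inv(V), where F holds new-infection terms and V holds
# transitions between infected compartments (E, I). Rates are illustrative.
beta, sigma, gamma = 0.6, 0.3, 0.2   # transmission, incubation, recovery

F = np.array([[0.0, beta],
              [0.0, 0.0]])           # new infections entering E
V = np.array([[sigma, 0.0],
              [-sigma, gamma]])      # E -> I progression and recovery

R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
assert np.isclose(R0, beta / gamma)  # for this SEIR form, R0 = beta/gamma
```

The DFE is locally stable when R0 < 1 and unstable when R0 > 1, which is the stability criterion the abstract applies to each case.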
An Adaptive Sequential Phase Optimization Method Based on Coherence Stability Detection and Adjustment Correction
Phase optimization, aimed at enhancing the phase signal-to-noise ratio, is a critical component of the distributed scatterer interferometric synthetic aperture radar technique and directly determines the fineness and reliability of deformation monitoring. As a state-of-the-art method that balances computational efficiency and optimization performance in high-dimensional data environments, sequential phase optimization has been widely studied. However, the improper matrix partitioning and discontinuous sequence compensation in current sequential methods severely restrict their optimization performance. To address these limitations, an adaptive sequential phase optimization method (AdSeq) based on coherence stability detection and adjustment correction is proposed. A submatrix dimension adaptive estimation model driven by coherence stability detection is first established based on persistent exceedance detection analysis. Then, a covariance matrix adaptive sequential partitioning strategy is developed by introducing the submatrix overlap criterion. Finally, a phase reference correction model based on weighted least squares adjustment is proposed to improve phase continuity and overall optimization performance. Experiments with simulated and real datasets are performed to comprehensively evaluate the optimization performance. Experimental results demonstrate that, compared with traditional phase optimization methods, the monitoring point density obtained by AdSeq increased by over 21.07%, and the deformation monitoring accuracy reached 16.49 mm, representing an improvement exceeding 10.09%. These results confirm that the proposed AdSeq method achieves superior noise robustness and phase optimization performance, and provides a higher deformation monitoring accuracy.
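The final correction step is, in spirit, a weighted least-squares adjustment. A generic sketch, with a made-up design matrix, observations, and coherence-like weights that do not reflect AdSeq's actual model:

```python
import numpy as np

# Generic weighted least-squares adjustment: estimate two unknown phase
# offsets from three noisy difference observations, weighting each
# observation by a coherence-like value. All numbers are illustrative.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # design matrix (assumed)
b = np.array([0.10, -0.05, 0.06])     # observed phase differences (rad)
wgt = np.array([0.9, 0.8, 0.5])       # coherence-based weights (assumed)

W = np.diag(wgt)
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)   # weighted normal equations
assert x.shape == (2,)                # two estimated offsets
```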
On Two-Dimensional Sparse Matrix Partitioning: Models, Methods, and a Recipe
We consider two-dimensional partitioning of general sparse matrices for parallel sparse matrix-vector multiply operation. We present three hypergraph-partitioning-based methods, each having unique advantages. The first one treats the nonzeros of the matrix individually and hence produces fine-grain partitions. The other two produce coarser partitions, where one of them imposes a limit on the number of messages sent and received by a single processor, and the other trades that limit for a lower communication volume. We also present a thorough experimental evaluation of the proposed two-dimensional partitioning methods together with the hypergraph-based one-dimensional partitioning methods, using an extensive set of public domain matrices. Furthermore, for the users of these partitioning methods, we present a partitioning recipe that chooses one of the partitioning methods according to some matrix characteristics.
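For intuition, a naive two-dimensional nonzero assignment (not the hypergraph models of the paper, which optimize communication volume) might look like:

```python
from collections import defaultdict

# Toy 2D checkerboard-style assignment of sparse-matrix nonzeros to a P x Q
# processor grid for parallel SpMV. The paper's hypergraph-based methods
# optimize this mapping; here nonzero (i, j) simply goes to (i % P, j % Q).
P, Q = 2, 2
rows = [0, 0, 1, 2, 3, 3]     # COO coordinates of the nonzeros (toy data)
cols = [0, 2, 1, 3, 0, 3]

parts = defaultdict(list)
for i, j in zip(rows, cols):
    parts[(i % P, j % Q)].append((i, j))

# Every nonzero is owned by exactly one of the P * Q processors.
assert sum(len(v) for v in parts.values()) == len(rows)
```

Treating nonzeros individually like this corresponds to the fine-grain view; the coarser methods in the paper assign whole row or column blocks instead.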
Semi-supervised sparse representation collaborative clustering of incomplete data
Sparse subspace clustering (SSC) focuses on revealing the structure and distribution of high-dimensional data from an algebraic perspective. It is a two-phase clustering technique that first performs sparse representation of the high-dimensional data and then cuts the induced affinity graph, which cannot achieve an optimal or expected clustering result. To address this challenge, this paper proposes an approach to subspace representation collaborative clustering (SRCC) for incomplete high-dimensional data. In the proposed model, both phases, sparse subspace representation and clustering, are integrated into a unified optimization, in which a fuzzy partition matrix is introduced as a bridge to cluster the extracted sparse representation features of the data. At the same time, the missing entries are adaptively imputed along with the two phases. To generalize SRCC to a semi-supervised setting, an adjacency matrix of the incomplete data is constructed with the ideas of ‘Must-link’ and ‘Cannot-link’. Meanwhile, a semi-supervised indicator matrix is introduced to promote the discriminative capacity of revealing global and local structures of incomplete data and to enhance the clustering performance. The resulting semi-supervised sparse representation collaborative clustering (S3RCC) is modeled. Extensive experiments on numerous real-world benchmark datasets demonstrate the superior performance of the two proposed models on imputation and clustering of incomplete data compared to state-of-the-art methods.
Bidimensionally partitioned online sequential broad learning system for large-scale data stream modeling
The incremental broad learning system (IBLS) is an effective and efficient incremental learning method based on the broad learning paradigm. Owing to its streamlined network architecture and flexible dynamic update scheme, IBLS can achieve rapid incremental reconstruction on the basis of the previous model without entire retraining from scratch, which makes it adept at handling streaming data. However, two prominent deficiencies still persist in IBLS and constrain its further promotion in large-scale data stream scenarios. Firstly, IBLS needs to retain all historical data and perform the associated calculations in the incremental learning process, which causes its computational overhead and storage burden to increase over time and as such puts the efficacy of the algorithm at risk for massive or unlimited data streams. Additionally, due to the random generation rule of hidden nodes, IBLS generally necessitates a large network size to guarantee approximation accuracy, and the resulting high-dimensional matrix calculation poses a greater challenge to the updating efficiency of the model. To address these issues, we propose a novel bidimensionally partitioned online sequential broad learning system (BPOSBLS) in this paper. The core idea of BPOSBLS is to partition the high-dimensional broad feature matrix bidimensionally along the instance dimension and the feature dimension, and consequently decompose a large least squares problem into multiple smaller ones, which can then be solved individually. By doing so, the scale and computational complexity of the original high-order model are substantially diminished, thus significantly improving its learning efficiency and usability for large-scale complex learning tasks. Meanwhile, an ingenious recursive computation method called partitioned recursive least squares is devised to solve the BPOSBLS.
This method exclusively utilizes the current online samples for iterative updating, while disregarding the previous historical samples, thereby rendering BPOSBLS a lightweight online sequential learning algorithm with consistently low computational costs and storage requirements. Theoretical analyses and simulation experiments demonstrate the effectiveness and superiority of the proposed algorithm.
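The recursive idea described above, updating from the current sample only and never revisiting history, is the classic recursive-least-squares pattern; the paper's partitioned variant further decomposes it by blocks. A plain, unpartitioned sketch with synthetic data:

```python
import numpy as np

# Plain recursive least squares: each new sample updates the weight vector w
# and covariance P without storing or revisiting historical data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5])       # noiseless synthetic targets

w = np.zeros(3)
Pmat = np.eye(3) * 1e6                   # large initial covariance
for x_t, y_t in zip(X, y):
    k = Pmat @ x_t / (1.0 + x_t @ Pmat @ x_t)   # gain vector
    w = w + k * (y_t - x_t @ w)                  # correct with the innovation
    Pmat = Pmat - np.outer(k, x_t @ Pmat)        # covariance downdate

assert np.allclose(w, [1.0, -2.0, 0.5], atol=1e-3)  # converges to the fit
```

Memory and per-step cost stay constant in the stream length, which is the property the abstract emphasizes.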
Constructing Perturbation Matrices of Prototypes for Enhancing the Performance of Fuzzy Decoding Mechanism
Granular computing (GrC) embraces a spectrum of concepts, methodologies, methods, and applications, which dwells upon information granules and their processing. Fuzzy C-means (FCM) based encoding and decoding (granulation-degranulation) mechanism plays a visible role in granular computing. Fuzzy decoding mechanism, also known as the reconstruction (degranulation) problem, has become an intensively studied category in recent years. This study mainly focuses on the improvement of the fuzzy decoding mechanism, and an augmented version achieved through constructing perturbation matrices of prototypes is put forward. Particle swarm optimization is employed to determine a group of optimal perturbation matrices to optimize the prototype matrix and obtain an optimal partition matrix. A series of experiments are carried out to show the enhancement of the proposed method. The experimental results are consistent with the theoretical analysis and demonstrate that the developed method outperforms the traditional FCM-based decoding mechanism.
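A minimal FCM encoding-decoding (granulation-degranulation) round trip can be sketched as follows; the prototypes, datum, and fuzzifier m are assumed values, and the perturbation-matrix optimization itself is omitted:

```python
import numpy as np

# Minimal FCM granulation-degranulation sketch: encode a datum as membership
# grades to fixed prototypes, then reconstruct it as a membership-weighted
# mean. The paper perturbs the prototypes to shrink the reconstruction error.
m = 2.0                                  # fuzzification coefficient
V = np.array([[0.0], [1.0], [2.0]])      # prototype matrix (1-D prototypes)
x = np.array([0.5])                      # datum to encode (not a prototype)

d = np.linalg.norm(x - V, axis=1) ** 2   # squared distances to prototypes
u = (1.0 / d) ** (1.0 / (m - 1.0))       # FCM membership formula (d > 0)
u /= u.sum()                             # grades form a fuzzy partition

x_hat = (u ** m @ V) / (u ** m).sum()    # degranulation (reconstruction)
assert abs(x_hat[0] - x[0]) < 0.05       # approximate recovery of the datum
```

The gap between `x` and `x_hat` is exactly the degranulation error that the perturbation matrices in the paper are optimized, via particle swarm optimization, to reduce.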
Number of targets detection method with FCM‐based granulation–degranulation mechanism
In this letter, the authors elaborate on a novel method to detect the number of targets under low SNR and small snapshot numbers by using a Fuzzy C-Means-clustering-based granulation-degranulation mechanism. In the developed scheme, the eigenvalues of the array output correlation matrix are regarded as a time series and granulated into a pair of ordered information granules. The information about the number of targets is contained in these information granules. Subsequently, the degranulation mechanism is used to modify the cost function of the granulation mechanism to improve the quality of the information granules, which increases the detection accuracy of the number of targets. Finally, the number of targets is determined by the prototype matrix and the partition matrix generated by the enhanced version of the granulation. Explicit analysis and derivation of the proposed scheme are presented.