Catalogue Search | MBRL
Explore the vast range of titles available.
1,818 result(s) for "Sparse networks"
Network histograms and universality of blockmodel approximation
2014
In this paper we introduce the network histogram, a statistical summary of network interactions to be used as a tool for exploratory data analysis. A network histogram is obtained by fitting a stochastic blockmodel to a single observation of a network dataset. Blocks of edges play the role of histogram bins and community sizes that of histogram bandwidths or bin sizes. Just as standard histograms allow for varying bandwidths, different blockmodel estimates can all be considered valid representations of an underlying probability model, subject to bandwidth constraints. Here we provide methods for automatic bandwidth selection, by which the network histogram approximates the generating mechanism that gives rise to exchangeable random graphs. This makes the blockmodel a universal network representation for unlabeled graphs. With this insight, we discuss the interpretation of network communities in light of the fact that many different community assignments can all give an equally valid representation of such a network. To demonstrate the fidelity-versus-interpretability tradeoff inherent in considering different numbers and sizes of communities, we analyze two publicly available networks—political weblogs and student friendships—and discuss how to interpret the network histogram when additional information related to node and edge labeling is present.
Significance: Representing and understanding large networks remains a major challenge across the sciences, with a strong focus on communities: groups of network nodes whose connectivity properties are similar. Here we argue that, independently of the presence or absence of actual communities in the data, this notion leads to something stronger: a histogram representation, in which blocks of network edges that result from community groupings can be interpreted as two-dimensional histogram bins. We provide an automatic procedure to determine bin widths for any given network and illustrate our methodology using two publicly available network datasets.
Journal Article
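The bin-averaging step at the heart of the network histogram is straightforward to sketch. The minimal Python example below (hypothetical, not the authors' code) computes block edge densities for a fixed community assignment, leaving out the paper's automatic bandwidth selection:

```python
import numpy as np

def network_histogram(A, labels):
    """Blockmodel edge densities: entry (a, b) is the empirical
    density of edges between community a and community b."""
    k = labels.max() + 1
    density = np.zeros((k, k))
    for a in range(k):
        for b in range(k):
            rows = np.flatnonzero(labels == a)
            cols = np.flatnonzero(labels == b)
            block = A[np.ix_(rows, cols)]
            if a == b:
                n = len(rows)
                density[a, b] = block.sum() / max(n * (n - 1), 1)  # no self-loops
            else:
                density[a, b] = block.mean()
    return density

# Toy example: two planted communities with assortative structure.
rng = np.random.default_rng(0)
z = np.repeat([0, 1], 50)
P = np.array([[0.4, 0.05], [0.05, 0.3]])
A = (rng.random((100, 100)) < P[np.ix_(z, z)]).astype(int)
A = np.triu(A, 1); A = A + A.T                      # symmetric, no self-loops
print(network_histogram(A, z).round(2))             # recovers P approximately
```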
Consistent Nonparametric Estimation for Heavy-Tailed Sparse Graphs
by Borgs, Christian; Chayes, Jennifer T.; Cohn, Henry
in Algorithms; Clustering; Estimating techniques
2021
We study graphons as a nonparametric generalization of stochastic block models, and show how to obtain compactly represented estimators for sparse networks in this framework. In contrast to previous work, we relax the usual boundedness assumption for the generating graphon and instead assume only integrability, so that we can handle networks that have long tails in their degree distributions. We also relax the usual assumption that the graphon is defined on the unit interval, to allow latent position graphs based on more general spaces.
We analyze three algorithms. The first is a least squares algorithm, which gives a consistent estimator for all square-integrable graphons, with errors expressed in terms of the best possible stochastic block model approximation. Next, we analyze an algorithm based on the cut norm, which works for all integrable graphons. Finally, we show that clustering based on degrees works whenever the underlying degree distribution is atomless.
Journal Article
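The third algorithm, clustering based on degrees, admits a compact sketch. The example below is an illustrative simplification rather than the paper's estimator: it sorts vertices by degree, cuts them into equal-size blocks, and averages edges block by block to obtain a piecewise-constant graphon estimate.

```python
import numpy as np

def degree_sorted_blockmodel(A, k):
    """Histogram-style graphon estimate: order vertices by degree,
    split into k equal-size blocks, and average edges per block pair."""
    order = np.argsort(A.sum(axis=1))        # sort vertices by degree
    blocks = np.array_split(order, k)
    W_hat = np.zeros((k, k))
    for a, ia in enumerate(blocks):
        for b, ib in enumerate(blocks):
            W_hat[a, b] = A[np.ix_(ia, ib)].mean()
    return W_hat
```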
Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks
by Li, Na; Wu, Tao; Li, Xiaoyang
in differential evolution; neural network compression; sparse network
2021
Deep neural networks have evolved significantly over the past decades and can now process sensor data far more effectively. Nonetheless, most deep models follow the ruling maxim of deep learning—bigger is better—and therefore have very complex structures. As the models become more complex, their computational complexity and resource consumption increase significantly, making them difficult to run on resource-limited platforms, such as sensor platforms. In this paper, we observe that different layers often have different pruning requirements, and propose a differential-evolution-based layer-wise weight pruning method. First, the pruning sensitivity of each layer is analyzed, and then the network is compressed by iterating the weight pruning process. Unlike methods that set pruning ratios greedily or by statistical analysis, we establish an optimization model to find the optimal pruning sensitivity for each layer. Differential evolution, an effective population-based optimization method, is used to address this task. Furthermore, we adopt a strategy that recovers some of the removed connections to increase the capacity of the pruned model during the fine-tuning phase. The effectiveness of our method has been demonstrated in experimental studies: it compresses the number of weight parameters in LeNet-300-100, LeNet-5, AlexNet and VGG16 by 24×, 14×, 29× and 12×, respectively.
Journal Article
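A hedged sketch of the search component: the classic DE/rand/1/bin loop below evolves per-layer pruning ratios against a proxy objective (sparsity reward minus pruned-away weight magnitude) that stands in for the accuracy of a fine-tuned model. The toy layers and fitness function are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": random weight matrices standing in for real trained layers.
layers = [rng.normal(size=s) for s in [(300, 100), (100, 100), (100, 10)]]

def prune(w, ratio):
    """Magnitude pruning: zero out the smallest `ratio` fraction of weights."""
    t = np.quantile(np.abs(w), ratio)
    return np.where(np.abs(w) < t, 0.0, w)

def fitness(ratios):
    """Proxy objective: reward sparsity, penalize pruned-away magnitude.
    A real run would instead score the fine-tuned pruned model's accuracy."""
    err = sum(((w - prune(w, r)) ** 2).sum() for w, r in zip(layers, ratios))
    return ratios.mean() - 1e-4 * err

# DE over per-layer pruning ratios in [0, 0.95].
pop = rng.uniform(0.1, 0.9, size=(20, len(layers)))
F, CR = 0.5, 0.9
for _ in range(100):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]  # mutation donors
        trial = np.clip(a + F * (b - c), 0.0, 0.95)
        cross = rng.random(len(layers)) < CR                   # binomial crossover
        trial = np.where(cross, trial, pop[i])
        if fitness(trial) > fitness(pop[i]):                   # greedy selection
            pop[i] = trial
best = max(pop, key=fitness)
print("per-layer pruning ratios:", best.round(2))
```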
Neural Random Forests
by Scornet, Erwan; Biau, Gérard; Welbl, Johannes
in Mathematics; Mathematics and Statistics; Statistical Theory and Methods
2019
Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) in a neural network setting and, in turn, propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have fewer parameters to tune than standard networks, and impose fewer restrictions on the geometry of the decision boundaries than trees. Consistency results are proved, and substantial numerical evidence on both synthetic and real data sets demonstrates the excellent performance of our methods in a large variety of prediction problems.
Journal Article
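The restructuring idea can be illustrated on a hand-coded toy tree (the paper relaxes the hard thresholds below to smooth, trainable activations). Each split becomes a first-layer threshold unit and each leaf a second-layer unit that fires only when every split sign on its root-to-leaf path matches:

```python
import numpy as np

sign = lambda v: np.where(v >= 0.0, 1.0, -1.0)

# Hand-coded regression tree on x in R^2 (a stand-in for a forest's tree):
splits = [(0, 0.5), (1, 0.3), (1, 0.7)]          # (feature, threshold) per split
leaves = [                                       # ({split: ±1 direction}, value)
    ({0: -1, 1: -1}, 0.1), ({0: -1, 1: +1}, 0.4),
    ({0: +1, 2: -1}, 0.6), ({0: +1, 2: +1}, 0.9),
]

def tree_as_network(x):
    # Layer 1: one threshold unit per split, h_k = sign(x[feature] - threshold).
    h = sign(np.array([x[f] - t for f, t in splits]))
    # Layer 2: one unit per leaf, firing iff all split signs on its path match.
    y = 0.0
    for path, value in leaves:
        match = sum(s * h[k] for k, s in path.items())
        y += value * (match >= len(path))        # hard threshold at path length
    return y

print(tree_as_network(np.array([0.2, 0.9])))     # falls in the second leaf -> 0.4
```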
Large, Sparse Optimal Matching With Refined Covariate Balance in an Observational Study of the Health Outcomes Produced by New Surgeons
by Kelz, Rachel R.; Rosenbaum, Paul R.; Silber, Jeffrey H.
in Algorithms; Applications and Case Studies; Clinical outcomes
2015
Every newly trained surgeon performs her first unsupervised operation. How do the health outcomes of her patients compare with those of experienced surgeons' patients? Using data from 498 hospitals, we compare 1252 pairs, each consisting of a new surgeon and an experienced surgeon working at the same hospital. We introduce a new form of matching that matches patients of each new surgeon to patients of an otherwise similar experienced surgeon at the same hospital, perfectly balancing 176 surgical procedures and closely balancing a total of 2.9 million categories of patients; additionally, the individual patient pairs are as close as possible. A new goal for matching is introduced, called "refined covariate balance," in which a sequence of nested, ever more refined, nominal covariates is balanced as closely as possible, with emphasis on the first or coarsest covariate in the sequence. A new matching algorithm is proposed, and the main new results prove that it finds the closest match, in terms of total within-pair covariate distance, among all matches that achieve refined covariate balance. Unlike previous approaches to forcing balance on covariates, the new algorithm creates multiple paths to a match in a network, where paths that introduce imbalances are penalized and hence avoided to the extent possible. The algorithm exploits a sparse network to quickly optimize a match about two orders of magnitude larger than is typical in statistical matching problems, thereby permitting much more extensive use of fine and near-fine balance constraints. The match was constructed in a few minutes using a network optimization algorithm implemented in R. An R package called rcbalance implementing the method is available from CRAN.
Journal Article
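As a rough illustration of the penalize-imbalance idea (not the paper's network-flow algorithm, which is implemented in the rcbalance package), one can add a large penalty for coarse-covariate mismatch to the pairwise distance and solve the resulting assignment problem; the covariates below are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

# Toy data: patients of new vs. experienced surgeons (hypothetical covariates).
n = 30
new_age, exp_age = rng.normal(60, 10, n), rng.normal(60, 10, n)
new_proc, exp_proc = rng.integers(0, 4, n), rng.integers(0, 4, n)  # procedure code

# Pairwise cost: covariate distance plus a large penalty for procedure mismatch,
# so balance on the coarse covariate is enforced to the extent possible.
cost = np.abs(new_age[:, None] - exp_age[None, :])
cost += 1e3 * (new_proc[:, None] != exp_proc[None, :])

rows, cols = linear_sum_assignment(cost)          # optimal pairing
print("mean within-pair age gap:", np.abs(new_age[rows] - exp_age[cols]).mean())
print("procedure mismatches:", int((new_proc[rows] != exp_proc[cols]).sum()))
```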
Mean field limits of co-evolutionary signed heterogeneous networks
2025
Many scientific phenomena are modelled as interacting particle systems (IPS) coupled on static networks. In reality, network connections are far more dynamic: connections among individuals receive feedback from nearby individuals and change to better adapt to the world. Hence, it is reasonable to model myriad real-world phenomena as co-evolutionary (or adaptive) networks. Such networks arise in areas including telecommunication, neuroscience, computer science, biochemistry, social science, and physics, where Kuramoto-type networks have been widely used to model interaction among a set of oscillators. In this paper, we propose a rigorous formulation for limits of a sequence of co-evolutionary Kuramoto oscillators coupled on heterogeneous co-evolutionary networks, which receive both positive and negative feedback from the dynamics of the oscillators on the networks. We show that, under mild conditions, the mean field limit (MFL) of the co-evolutionary network exists and the sequence of co-evolutionary Kuramoto networks converges to this MFL. The MFL is described by solutions of a generalised Vlasov equation. We treat the graph limits as signed graph measures, motivated by the recent work in [Kuehn, Xu. Vlasov equations on digraph measures, JDE, 339 (2022), 261–349]. In comparison to recently emerging work on MFLs of IPS coupled on non-co-evolutionary networks (i.e., static networks or time-dependent networks independent of the dynamics of the IPS), our work appears to be the first to rigorously address the MFL of a co-evolutionary network model. The approach is based on our formulation of a generalisation of the co-evolutionary network as a hybrid system of ODEs and measure differential equations parametrised by a vertex variable, together with an analogue of the variation-of-parameters formula, as well as the generalised in-cell-particle method of Neunzert developed in [Kuehn, Xu. Vlasov equations on digraph measures, JDE, 339 (2022), 261–349].
Journal Article
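A co-evolutionary Kuramoto system of the kind studied here might take the following standard form (the paper's precise coupling and adaptation rules may differ):

```latex
\begin{aligned}
\frac{d\theta_i}{dt} &= \omega_i + \frac{1}{N}\sum_{j=1}^{N} W_{ij}\,\sin\bigl(\theta_j - \theta_i\bigr),\\
\frac{dW_{ij}}{dt} &= -\varepsilon\,\bigl(W_{ij} + g(\theta_i - \theta_j)\bigr),
\end{aligned}
```

where the weights W_ij may take either sign (a signed network) and g encodes positive or negative phase-dependent feedback; as N → ∞, the empirical measure of the oscillators converges to a solution of a generalised Vlasov equation.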
Continual prune-and-select: class-incremental learning with specialized subnetworks
by Bessa, Miguel A; Dekhovich, Aleksandr; Tax, David M.J
in Artificial neural networks; Brain; Cognitive tasks
2023
The human brain is capable of learning tasks sequentially, mostly without forgetting. However, deep neural networks (DNNs) suffer from catastrophic forgetting when learning one task after another. We address this challenge in a class-incremental learning scenario, where the DNN sees test data without knowing the task from which the data originates. During training, Continual Prune-and-Select (CP&S) finds a subnetwork within the DNN that is responsible for solving a given task. Then, during inference, CP&S selects the correct subnetwork to make predictions for that task. A new task is learned by training the available (previously untrained) neuronal connections of the DNN to create a new subnetwork by pruning; this subnetwork can include previously trained connections belonging to other subnetwork(s) because shared connections are never updated. This makes it possible to eliminate catastrophic forgetting by creating specialized regions in the DNN that do not conflict with each other while still allowing knowledge transfer across them. The CP&S strategy is implemented with different subnetwork selection strategies and shows superior performance to state-of-the-art continual learning methods on various datasets (CIFAR-100, CUB-200-2011, ImageNet-100 and ImageNet-1000). In particular, CP&S is capable of sequentially learning 10 tasks from ImageNet-1000 while keeping accuracy around 94% with negligible forgetting, a first-of-its-kind result in class-incremental learning. To the best of the authors' knowledge, this represents an accuracy improvement of more than 10% over the best alternative method.
Journal Article
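The bookkeeping behind CP&S can be sketched with boolean masks over a single weight tensor. In the toy example below the gradient oracle is a random stand-in for backprop on real task data, and each subnetwork is restricted to freshly claimed connections (the actual method also lets a subnetwork reuse frozen shared connections read-only):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))            # one shared weight tensor of the DNN
free = np.ones_like(W, dtype=bool)       # connections no task has claimed yet
task_masks = {}

def learn_task(task_id, grad_fn, steps=100, lr=0.01, keep=0.2):
    """Train only free connections, then keep the strongest `keep` fraction
    as this task's subnetwork; claimed weights are frozen afterwards."""
    global W, free
    for _ in range(steps):
        W -= lr * grad_fn(W) * free      # updates never touch claimed weights
    strength = np.abs(W) * free
    thresh = np.quantile(strength[free], 1.0 - keep)
    mask = free & (strength >= thresh)
    task_masks[task_id] = mask
    free &= ~mask                        # freeze this subnetwork for the future

def predict(task_id, x):
    """Inference: apply only the selected task's subnetwork."""
    return x @ (W * task_masks[task_id])

# Hypothetical gradient oracle standing in for backprop on real task data.
learn_task("task-0", grad_fn=lambda w: rng.normal(size=w.shape))
learn_task("task-1", grad_fn=lambda w: rng.normal(size=w.shape))
print(predict("task-0", rng.normal(size=(1, 64))).shape)
```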
Edge Exchangeable Models for Interaction Networks
2018
Many modern network datasets arise from processes of interactions in a population, such as phone calls, email exchanges, co-authorships, and professional collaborations. In such interaction networks, the edges comprise the fundamental statistical units, making a framework for edge-labeled networks more appropriate for statistical analysis. In this context, we initiate the study of edge exchangeable network models and explore their basic statistical properties. Several theoretical and practical features make edge exchangeable models better suited to many applications in network analysis than more common vertex-centric approaches. In particular, edge exchangeable models allow for sparse structure and power law degree distributions, both of which are widely observed empirical properties that cannot be handled naturally by more conventional approaches. Our discussion culminates in the Hollywood model, which we identify here as the canonical family of edge exchangeable distributions. The Hollywood model is computationally tractable, admits a clear interpretation, exhibits good theoretical properties, and performs reasonably well in estimation and prediction as we demonstrate on real network datasets. As a generalization of the Hollywood model, we further identify the vertex components model as a nonparametric subclass of models with a convenient stick breaking construction.
Journal Article
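The Hollywood model's generative process is a Pitman–Yor urn over vertices, which is easy to sketch for binary edges (the parameter values below are illustrative):

```python
import numpy as np

def hollywood(m, alpha=0.6, theta=1.0, seed=0):
    """Sample m binary edges: each endpoint is drawn from a Pitman-Yor urn
    over vertices, so well-connected vertices attract new edges while the
    vertex set keeps growing (yielding sparsity and power-law degrees)."""
    rng = np.random.default_rng(seed)
    deg, edges, n = [], [], 0            # degrees, edge list, endpoints drawn
    for _ in range(m):
        e = []
        for _ in range(2):               # two endpoint slots per edge
            V = len(deg)
            p_new = (theta + alpha * V) / (theta + n)
            if rng.random() < p_new:     # create a brand-new vertex
                deg.append(0)
                v = V
            else:                        # pick existing vertex w.p. deg - alpha
                w = np.array(deg, dtype=float) - alpha
                v = rng.choice(V, p=w / w.sum())
            deg[v] += 1
            n += 1
            e.append(v)
        edges.append(tuple(e))
    return edges, deg

edges, deg = hollywood(2000)
print("edges:", len(edges), "vertices:", len(deg), "max degree:", max(deg))
```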
An improved label propagation algorithm based on community core node and label importance for community detection in sparse network
2023
Community structure can be used to analyze and understand the structural functions of a network, reveal its implicit information, and predict its dynamic development pattern. Existing community detection algorithms are very sensitive to network sparsity and have difficulty producing stable community detection results. To address these shortcomings, an improved label propagation algorithm combining community core nodes and label importance (CCLI-LPA) is proposed. First, the core nodes of a network are selected by fusing the first-order and second-order structures of the nodes, and the network is initialized with them. Then, a new label selection mechanism is defined that combines the importance of neighboring nodes with the importance of their labels, and node labels are updated accordingly. Validation experiments on six real networks and eight synthetic networks show that CCLI-LPA obtains not only stable results in real networks but also stable and accurate results in sparse networks.
Journal Article
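For contrast, here is the plain label propagation baseline that CCLI-LPA builds on; the paper's additions (core-node initialization and label-importance weighting) are omitted from this sketch:

```python
import random
from collections import Counter

def label_propagation(adj, seed=0, max_iter=100):
    """Plain LPA: each node repeatedly adopts the label most common among
    its neighbors (ties broken at random) until labels stop changing."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}                 # every node starts alone
    for _ in range(max_iter):
        changed = False
        nodes = list(adj); rng.shuffle(nodes)    # asynchronous, random order
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            best = rng.choice([l for l, c in counts.items() if c == top])
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

# Two triangles joined by a bridge; LPA typically finds two communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))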
Dynamic Unstructured Pruning Neural Network Image Super-resolution Reconstruction
2024
Many deep learning-based image super-resolution reconstruction algorithms improve the overall feature expression ability of a network by extending its depth. However, extending the depth excessively leaves the model over-parameterized and complicated, and the redundant parameters increase the instability of feature expression. To address this issue, this paper builds on the unstructured pruning algorithm, changing the weight parameters and adopting a balanced learning strategy, and proposes a neural network unstructured pruning algorithm suited to image super-resolution reconstruction tasks, called the dynamic unstructured pruning algorithm. Without changing the network structure or increasing the computational complexity, the overall feature expression ability of the network is improved by searching for an optimal yet sparse sub-network of the original network, which excludes the influence of redundant parameters and maximizes the ability to capture fine-grained, richer features with limited parameters. Experimental results on the Set5, Set14 and BSD100 test sets show that, compared with the original network model and the static unstructured pruning algorithm, the dynamic unstructured pruning algorithm improves the PSNR and SSIM of the reconstructed images, which exhibit richer detail features and clearer overall and local contours.
Journal Article
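A hedged sketch of one dynamic sparse-training step in the spirit described here (the paper's exact pruning and regrowth criteria may differ): drop the weakest active weights, then regrow an equal number of inactive positions where the gradient is largest, so the sub-network rewires while its density stays fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 128))
mask = rng.random(W.shape) < 0.3          # start from a 30%-dense sub-network
W *= mask

def prune_and_regrow(W, mask, grad, drop=0.05):
    """One dynamic step: prune the weakest active weights, then reactivate an
    equal number of inactive positions with the largest gradient magnitude."""
    k = int(drop * mask.sum())
    weakest = np.where(mask, np.abs(W), np.inf).ravel()
    mask.ravel()[np.argsort(weakest)[:k]] = False         # prune smallest active
    candidates = np.where(mask, -np.inf, np.abs(grad)).ravel()
    mask.ravel()[np.argsort(candidates)[-k:]] = True      # regrow largest-gradient
    W *= mask                                             # regrown weights restart at zero
    return W, mask

# Hypothetical gradient standing in for backprop on a super-resolution loss.
for _ in range(10):
    W, mask = prune_and_regrow(W, mask, grad=rng.normal(size=W.shape))
print("active fraction:", round(mask.mean(), 3))
```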