566 result(s) for "null-models"
Local-scale co-occurrence patterns and feeding behaviour of Phlebotominae (Diptera: Psychodidae) from Falcón state, Venezuela
Sand flies (Phlebotominae) transmit the protozoan parasites of the genus Leishmania, the causal agents of leishmaniasis in humans and other mammals. Using null models, we studied the structure of sand fly communities in endemic leishmaniasis foci of Falcón state, in north-western Venezuela, at a reduced or local scale: in the domicile, peridomicile and wild area of a life zone or of a particular locality. Application of the null models revealed that, at the local scale, sand fly communities are aggregated, suggesting that the species coexist and do not compete. Co-occurrence analyses combined with guild-structure analysis and a test of the favoured-states hypothesis showed that the results were not statistically significant (p > 0.05), suggesting that the sand fly species belong to a single guild in their feeding preferences, which may be because haematophagy is a heterogeneous, circumstantial and opportunistic event. We discuss possible determining factors, such as the transformation and homogenisation of habitats through synanthropic impact, that may be shaping sand fly assemblages in the Falcón region.
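Co-occurrence null models of this kind are commonly implemented as a checkerboard (C-score) test against a swap null that preserves row and column totals. The abstract does not specify the exact algorithm used, so the following is a minimal, hypothetical sketch of that standard approach on a toy species-by-site presence/absence matrix:

```python
import random
from itertools import combinations

def c_score(matrix):
    """Mean number of checkerboard units over all species pairs.
    matrix: presence/absence rows, one row per species, one column per site."""
    pairs = list(combinations(range(len(matrix)), 2))
    total = 0
    for i, j in pairs:
        shared = sum(a and b for a, b in zip(matrix[i], matrix[j]))
        total += (sum(matrix[i]) - shared) * (sum(matrix[j]) - shared)
    return total / len(pairs)

def swap_null(matrix, n_swaps=1000, rng=None):
    """Sequential swap randomisation preserving row and column totals."""
    rng = rng or random.Random(0)
    m = [row[:] for row in matrix]
    for _ in range(n_swaps):
        r1, r2 = rng.sample(range(len(m)), 2)
        c1, c2 = rng.sample(range(len(m[0])), 2)
        # flip only 2x2 checkerboard submatrices, keeping marginals fixed
        if (m[r1][c1] == m[r2][c2] and m[r1][c2] == m[r2][c1]
                and m[r1][c1] != m[r1][c2]):
            m[r1][c1], m[r1][c2] = m[r1][c2], m[r1][c1]
            m[r2][c1], m[r2][c2] = m[r2][c2], m[r2][c1]
    return m

obs = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]]
observed = c_score(obs)
nulls = [c_score(swap_null(obs, rng=random.Random(s))) for s in range(200)]
# one-tailed p against segregation (aggregation shows up as p near 1)
p = sum(x >= observed for x in nulls) / len(nulls)
```

A C-score no higher than the null expectation, as here, is consistent with the aggregated (non-competitive) structure the study reports.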
Comparing spatial null models for brain maps
Technological and data sharing advances have led to a proliferation of high-resolution structural and functional maps of the brain. Modern neuroimaging research increasingly depends on identifying correspondences between the topographies of these maps; however, most standard methods for statistical inference fail to account for their spatial properties. Recently, multiple methods have been developed to generate null distributions that preserve the spatial autocorrelation of brain maps and yield more accurate statistical estimates. Here, we comprehensively assess the performance of ten published null frameworks in statistical analyses of neuroimaging data. To test the efficacy of these frameworks in situations with a known ground truth, we first apply them to a series of controlled simulations and examine the impact of data resolution and spatial autocorrelation on their family-wise error rates. Next, we use each framework with two empirical neuroimaging datasets, investigating their performance when testing (1) the correspondence between brain maps (e.g., correlating two activation maps) and (2) the spatial distribution of a feature within a partition (e.g., quantifying the specificity of an activation map within an intrinsic functional network). Finally, we investigate how differences in the implementation of these null models may impact their performance. In agreement with previous reports, we find that naive null models that do not preserve spatial autocorrelation consistently yield elevated false positive rates and unrealistically liberal statistical estimates. While spatially-constrained null models yielded more realistic, conservative estimates, even these frameworks suffer from inflated false positive rates and variable performance across analyses. Throughout our results, we observe minimal impact of parcellation and resolution on null model performance. 
Altogether, our findings highlight the need for continued development of statistically-rigorous methods for comparing brain maps. The present report provides a harmonised framework for benchmarking and comparing future advancements.
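As a toy illustration of why naive permutation nulls are liberal for spatially autocorrelated maps, the sketch below compares a naive shuffle null against a circular-shift null on a 1-D smoothed "map"; circular shifts preserve the autocorrelation, so that null distribution is much wider. This is an illustrative stand-in, not any of the ten frameworks benchmarked in the paper:

```python
import math
import random
import statistics

def smooth(xs, w=5):
    """Circular moving average: induces spatial autocorrelation in a 1-D 'map'."""
    n = len(xs)
    return [sum(xs[(i + k) % n] for k in range(-w, w + 1)) / (2 * w + 1)
            for i in range(n)]

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

rng = random.Random(1)
n = 200
map_a = smooth([rng.gauss(0, 1) for _ in range(n)])
map_b = smooth([rng.gauss(0, 1) for _ in range(n)])
r_obs = corr(map_a, map_b)

# naive null: shuffle vertices freely -- destroys spatial autocorrelation
naive = []
for _ in range(500):
    perm = map_b[:]
    rng.shuffle(perm)
    naive.append(corr(map_a, perm))

# spatially constrained null: circular shifts keep autocorrelation intact
shifted = [corr(map_a, map_b[s:] + map_b[:s]) for s in range(1, n)]

# the naive null is far narrower, so it yields liberal p-values
p_naive = sum(abs(x) >= abs(r_obs) for x in naive) / len(naive)
p_spatial = sum(abs(x) >= abs(r_obs) for x in shifted) / len(shifted)
```

The spread of `shifted` exceeds that of `naive` by roughly the smoothing factor, which is exactly the false-positive inflation the paper quantifies for naive nulls.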
Unraveling the molecular relevance of brain phenotypes: A comparative analysis of null models and test statistics
Highlights:
• Competitive null models may yield false positives from co-expression.
• Self-contained null models may yield false positives from bimodal correlations.
• Test statistics interact differently with the two types of null models.
• Supplementary analyses with various configurations support the findings.
Correlating transcriptional profiles with imaging-derived phenotypes has the potential to reveal possible molecular architectures associated with cognitive functions, brain development and disorders. Competitive null models built by resampling genes and self-contained null models built by spinning brain regions, along with varying test statistics, have been used to determine the significance of transcriptional associations. However, there has been no systematic evaluation of their performance in imaging transcriptomics analyses. Here, we evaluated the performance of eight different test statistics (mean, mean absolute value, mean squared value, max mean, median, Kolmogorov-Smirnov (KS), weighted KS and the number of significant correlations) in both competitive and self-contained null models. Simulated brain maps (n = 1,000) and gene sets (n = 500) were used to calculate the probability of significance (Psig) for each statistical test. Our results suggest that competitive null models may yield false positive results driven by co-expression within gene sets. Furthermore, we demonstrate that self-contained null models may fail to account for distribution characteristics (e.g., bimodality) of the correlations between all available genes and brain phenotypes, also leading to false positives. These two confounding factors interacted differently with the test statistics, resulting in varying outcomes.
Specifically, the sign-sensitive test statistics (i.e., mean, median, KS, Weighted KS) were influenced by co-expression bias in the competitive null models, while median and sign-insensitive test statistics were sensitive to the bimodality bias in the self-contained null models. Additionally, KS-based statistics produced conservative results in the self-contained null models, which increased the risk of false negatives. Comprehensive supplementary analyses with various configurations, including realistic scenarios, supported the results. These findings suggest utilizing sign-insensitive test statistics such as mean absolute value, max mean in the competitive null models and the mean as the test statistic for the self-contained null models. Additionally, adopting the confounder-matched (e.g., coexpression-matched) null models as an alternative to standard null models can be a viable strategy. Overall, the present study offers insights into the selection of statistical tests for imaging transcriptomics studies, highlighting areas for further investigation and refinement in the evaluation of novel and commonly used tests.
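A competitive null of the kind described (resampling same-sized gene sets from the full gene pool and recomputing the test statistic) can be sketched as follows. The per-gene correlations are simulated, and the sketch deliberately ignores within-set co-expression, which is precisely the confound the paper warns about:

```python
import random
import statistics

rng = random.Random(7)
n_genes, set_size = 1000, 40

# hypothetical per-gene correlations with an imaging phenotype (simulated)
all_corrs = [rng.gauss(0, 0.2) for _ in range(n_genes)]
gene_set = list(range(set_size))  # indices of the gene set of interest
obs = statistics.mean(all_corrs[g] for g in gene_set)  # mean as test statistic

# competitive null: resample same-sized gene sets from the full gene pool;
# co-expression within the real set would make this null too narrow
null = []
for _ in range(2000):
    sample = rng.sample(range(n_genes), set_size)
    null.append(statistics.mean(all_corrs[g] for g in sample))

p = sum(abs(x) >= abs(obs) for x in null) / len(null)
```

Swapping `statistics.mean` for a sign-insensitive statistic (e.g., mean absolute value) is the mitigation the paper recommends for this model class; the self-contained alternative would instead permute or spin the brain map and recompute `all_corrs`.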
Rare species contribute disproportionately to the functional structure of species assemblages
There is broad consensus that the diversity of functional traits within species assemblages drives several ecological processes. It is also widely recognized that rare species are the first to become extinct following human-induced disturbances. Surprisingly, however, the functional importance of rare species is still poorly understood, particularly in tropical species-rich assemblages where the majority of species are rare, and the rate of species extinction can be high. Here, we investigated the consequences of local and regional extinctions on the functional structure of species assemblages. We used three extensive datasets (stream fish from the Brazilian Amazon, rainforest trees from French Guiana, and birds from the Australian Wet Tropics) and built an integrative measure of species rarity versus commonness, combining local abundance, geographical range, and habitat breadth. Using different scenarios of species loss, we found a disproportionate impact of rare species extinction for the three groups, with significant reductions in levels of functional richness, specialization, and originality of assemblages, which may severely undermine the integrity of ecological processes. The whole breadth of functional abilities within species assemblages, which is disproportionately supported by rare species, is certainly critical in maintaining ecosystems particularly under the ongoing rapid environmental transitions.
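The species-loss scenarios can be mimicked with a toy experiment: assign each species a rarity score and a trait value, then compare remaining functional richness (here simply the trait range, a crude stand-in for the paper's richness, specialization, and originality indices) after rare-first versus random extinctions. All numbers are simulated:

```python
import random

def functional_richness(traits):
    """Toy functional richness: range of a single trait (0 if empty)."""
    return max(traits) - min(traits) if traits else 0.0

def erode(order, k):
    """Remove the first k species in `order`; richness of the survivors."""
    return functional_richness([t for _, t in order[k:]])

rng = random.Random(3)
n = 100
# hypothetical assemblage: rarer species tend to carry more extreme traits
species = []
for i in range(n):
    rarity = i / n  # 0 = commonest, ~1 = rarest
    trait = rng.gauss(0, 1) + 3 * rarity * rng.choice([-1, 1])
    species.append((rarity, trait))

rare_first = sorted(species, key=lambda s: -s[0])  # rarest lost first
random_order = species[:]
rng.shuffle(random_order)

k = 50
richness_after_rare_loss = erode(rare_first, k)
richness_after_random_loss = erode(random_order, k)
```

When rarity and functional distinctiveness are linked, as assumed here and found empirically in the study, rare-first loss erodes the trait range faster than random loss.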
treats: A modular R package for simulating trees and traits
Simulating biologically realistic data is an important step in understanding and investigating biodiversity. Simulated data can be used to generate null, baseline or neutral models, which can either be compared with observed data to estimate the mechanisms that generated the data, or be used to explore, understand and develop theoretical advances by proposing toy models. In evolutionary biology, simulations often require an evolutionary process in which descent with modification is at the core of how the simulated data are generated. These evolutionary processes can then be nearly infinitely modified to include complex processes that affect the simulations, such as trait co-evolution, competition mechanisms or mass extinction events. Here I present the treats package, a modular R package for trees and traits simulations. This package is based on a simple birth-death algorithm in which every step can easily be modified by users. treats also provides a tidy interface through the treats object, allowing users to easily run reproducible simulations. It also comes with an extensive manual, regularly updated following users' questions and suggestions.
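treats itself is an R package; as a language-neutral illustration of the birth-death core it builds on, here is a minimal Python sketch of a Gillespie-style birth-death process tracking lineage counts through time (a sketch of the general algorithm, not treats' implementation):

```python
import random

def birth_death(birth=1.0, death=0.3, max_taxa=20, rng=None):
    """Minimal Gillespie birth-death process.
    Returns (event_times, lineage_counts); stops at extinction or max_taxa."""
    rng = rng or random.Random(42)
    t, lineages = 0.0, 1
    times, counts = [0.0], [1]
    while 0 < lineages < max_taxa:
        # waiting time to the next event, rate = lineages * (birth + death)
        t += rng.expovariate(lineages * (birth + death))
        if rng.random() < birth / (birth + death):
            lineages += 1  # speciation
        else:
            lineages -= 1  # extinction
        times.append(t)
        counts.append(lineages)
    return times, counts

times, counts = birth_death()
```

Modular simulators like treats expose hooks at each of these steps (event rates, trait updates at speciation, extinction triggers), which is what makes the processes "nearly infinitely" modifiable.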
Inferring monopartite projections of bipartite networks: an entropy-based approach
Bipartite networks are currently regarded as providing a major insight into the organization of many real-world systems, unveiling the mechanisms driving the interactions occurring between distinct groups of nodes. One of the most important issues encountered when modeling bipartite networks is devising a way to obtain a (monopartite) projection on the layer of interest, which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks, according to which any two nodes sharing a statistically-significant number of neighbors are linked. Since assessing the statistical significance of nodes similarity requires a proper statistical benchmark, here we consider a set of four null models, defined within the exponential random graph framework. Our algorithm outputs a matrix of link-specific p-values, from which a validated projection is straightforwardly obtainable, upon running a multiple hypothesis testing procedure. Finally, we test our method on an economic network (i.e. the countries-products World Trade Web representation) and a social network (i.e. MovieLens, collecting the users' ratings of a list of movies). In both cases non-trivial communities are detected: while projecting the World Trade Web on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be individuated.
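The validation idea (link two nodes in the projection only if their number of shared neighbours is statistically significant, then correct for multiple testing) can be sketched with a simplified benchmark. The hypergeometric null below is a stand-in for the paper's exponential random graph null models, chosen only because it has a closed form:

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P[X >= k] for X ~ Hypergeometric(N, K, n): chance that two nodes with
    degrees K and n share at least k neighbours in an opposite layer of size N."""
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / comb(N, n)

def validated_projection(adj, alpha=0.05):
    """adj: dict node -> set of opposite-layer neighbours.
    Returns the projected edges that survive Benjamini-Hochberg correction."""
    nodes = sorted(adj)
    N = len(set().union(*adj.values()))
    tests = []
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            k = len(adj[u] & adj[v])
            tests.append((hypergeom_sf(k, N, len(adj[u]), len(adj[v])), (u, v)))
    tests.sort()
    m, keep = len(tests), 0
    for rank, (p, _) in enumerate(tests, 1):
        if p <= alpha * rank / m:  # BH step-up criterion
            keep = rank
    return [edge for _, edge in tests[:keep]]
```

On tiny toy layers no overlap can reach significance after correction, which is the expected conservative behaviour of the validated projection.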
Lightning network: a second path towards centralisation of the Bitcoin economy
The Bitcoin lightning network (BLN), a so-called ‘second layer’ payment protocol, was launched in 2018 to scale up the number of transactions between Bitcoin owners. In this paper, we analyse the structure of the BLN over a period of 18 months, ranging from 12th January 2018 to 17th July 2019, at the end of which the network had reached 8,216 users, 122,517 active channels and 2,732.5 transacted bitcoins. Here, we consider three representations of the BLN: the daily snapshot, the weekly snapshot and the daily-block snapshot. By studying the topological properties of the binary and weighted versions of these three representations, we find that the total volume of transacted bitcoins grows approximately as the square of the network size; however, despite the huge activity characterising the BLN, the distribution of bitcoins is very unequal: the average Gini coefficient of the node strengths (computed across the entire history of the Bitcoin lightning network) is ≃0.88, causing the top 10% (50%) of the nodes to hold 80% (99%) of the bitcoins at stake in the BLN (on average, across the entire period). This concentration raises the question of which minimalist network model can explain the network's topological structure. As for other economic systems, we hypothesise that local properties of nodes, such as the degree, ultimately determine part of its characteristics. We have therefore tested the goodness of the undirected binary configuration model (UBCM) in reproducing the structural features of the BLN: the UBCM recovers the disassortative and hierarchical character of the BLN but underestimates the centrality of nodes; this suggests that the BLN is becoming an increasingly centralised network, more and more compatible with a core-periphery structure.
Further inspection of the resilience of the BLN shows that removing hubs leads to the collapse of the network into many components, evidence suggesting that this network may be a target for so-called split attacks.
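The inequality statistics quoted above (Gini coefficient of node strengths, share of the total held by the top 10%/50% of nodes) are straightforward to compute; a minimal sketch on hypothetical node strengths:

```python
def gini(values):
    """Gini coefficient of non-negative values (0 = perfect equality)."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    # G = 2 * sum_i(i * x_i) / (n * total) - (n + 1) / n, i starting at 1
    weighted = sum(i * x for i, x in enumerate(xs, 1))
    return 2 * weighted / (n * total) - (n + 1) / n

def top_share(values, frac):
    """Fraction of the total held by the top `frac` share of nodes."""
    xs = sorted(values, reverse=True)
    k = max(1, round(frac * len(xs)))
    return sum(xs[:k]) / sum(xs)

# hypothetical node strengths (transacted volume per node), not BLN data
strengths = [50, 20, 10, 5, 5, 4, 3, 1, 1, 1]
g = gini(strengths)
share_top10 = top_share(strengths, 0.10)
```

Applied to the actual BLN strength sequences, these two functions would reproduce the ≃0.88 Gini and the top-10%/top-50% holdings reported in the abstract.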
Corrigendum: Lightning network: a second path towards centralisation of the bitcoin economy (2020 New J. Phys. 22 083022)
Community size can affect the signals of ecological drift and niche selection on biodiversity
Ecological drift can override the effects of deterministic niche selection on small populations and drive the assembly of some ecological communities. We tested this hypothesis with a unique data set sampled identically in 200 streams in two regions (tropical Brazil and boreal Finland) that differ in macroinvertebrate community size by fivefold. Null models allowed us to estimate the magnitude by which β-diversity deviates from the expectation under a random assembly process while taking differences in richness and relative abundance into account, i.e., β-deviation. We found that both abundance- and incidence-based β-diversity was negatively related to community size only in Brazil. Also, β-diversity of small tropical communities was closer to stochastic expectations compared with β-diversity of large communities. We suggest that ecological drift may drive variation in some small communities by changing the expected outcome of niche selection, increasing the chances of species with low abundance and narrow distribution occurring in some communities. Habitat destruction, overexploitation, pollution, and reductions in connectivity have been reducing the size of biological communities. These environmental pressures might make smaller communities more vulnerable to novel conditions and render community dynamics more unpredictable. Incorporating community size into ecological models should provide conceptual and applied insights into a better understanding of the processes driving biodiversity.
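The β-deviation approach (observed β-diversity standardised against a null that randomises species identities while holding site richness fixed) can be sketched as follows; the community data here are toy sets of species IDs, and mean pairwise Jaccard dissimilarity stands in for whichever β-diversity metric the study used:

```python
import random
import statistics

def jaccard_dissim(a, b):
    """Jaccard dissimilarity between two sets of species IDs."""
    union = len(a | b)
    return 1 - len(a & b) / union if union else 0.0

def beta(sites):
    """Mean pairwise Jaccard dissimilarity across all site pairs."""
    pairs = [(i, j) for i in range(len(sites)) for j in range(i + 1, len(sites))]
    return statistics.mean(jaccard_dissim(sites[i], sites[j]) for i, j in pairs)

def null_assembly(sites, pool, rng):
    """Redraw each site's species at random from the regional pool,
    preserving site richness but randomising species identity."""
    return [set(rng.sample(sorted(pool), len(s))) for s in sites]

rng = random.Random(11)
sites = [{1, 2, 3}, {1, 2, 4}, {5, 6, 7}, {5, 6, 8}]
pool = set().union(*sites)
obs = beta(sites)
nulls = [beta(null_assembly(sites, pool, rng)) for _ in range(500)]
mu, sd = statistics.mean(nulls), statistics.pstdev(nulls)
beta_deviation = (obs - mu) / sd  # standardised effect size
```

A β-deviation near zero means the observed turnover is close to the stochastic expectation, which is the signature the study reports for small tropical communities.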