Search Results

5,252 results for "1925"
Biological constraints on neural network models of cognitive function
Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative, hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning to implementation of inhibition and control, along with neuroanatomical properties including areal structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, on the basis of these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling.

Neural network models have potential for improving our understanding of brain functions. In this Perspective, Pulvermüller and colleagues examine various aspects of such models that may need to be constrained to make them more neurobiologically realistic and therefore better tools for understanding brain function.
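The kind of biologically grounded learning mechanism the abstract alludes to can be made concrete in a few lines. The NumPy sketch below implements a generic Hebbian update with weight decay, as an illustration of local, activity-driven plasticity in general; it is not a model from the paper, and every name and constant in it is hypothetical.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """One Hebbian plasticity step: strengthen synapses whose pre- and
    postsynaptic units are co-active, with passive weight decay.

    w    : (n_post, n_pre) synaptic weight matrix
    pre  : (n_pre,)  presynaptic firing rates
    post : (n_post,) postsynaptic firing rates
    """
    # Local rule: each weight change depends only on the activity of the
    # two units it connects -- no global error signal, unlike backprop.
    dw = lr * np.outer(post, pre) - decay * w
    return w + dw

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 6))
pre = rng.random(6)
post = w @ pre                     # simple linear rate response
w = hebbian_update(w, pre, post)
```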
Brain Cell Type Specific Gene Expression and Co-expression Network Architectures
Elucidating brain cell type-specific gene expression patterns is critical to a better understanding of how cell-cell communications may influence brain functions and dysfunctions. We set out to compare and contrast five human and murine cell type-specific transcriptome-wide RNA expression datasets that were generated within the past several years. We defined three measures of brain cell type-relative expression, including specificity, enrichment, and absolute expression, and identified corresponding consensus brain cell "signatures," which were well conserved across datasets. We validated that the relative expression of top cell type markers is associated with proxies for cell type proportions in bulk RNA expression data from postmortem human brain samples. We further validated novel marker genes using an orthogonal ATAC-seq dataset. We performed multiscale co-expression network analysis of the single-cell datasets and identified robust cell-specific gene modules. To facilitate the use of the cell type-specific genes for cell type proportion estimation and deconvolution from bulk brain gene expression data, we developed an R package, BRETIGEA. In summary, we identified a set of novel brain cell consensus signatures and robust networks by integrating multiple datasets, thereby transcending limitations related to the technical issues characteristic of each individual study.
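BRETIGEA itself is an R package; as a language-neutral illustration, the Python sketch below shows one common way marker genes can yield surrogate cell-type proportions from bulk expression, by taking the first singular vector of the z-scored marker submatrix. This is a sketch of the general approach under stated assumptions, not the package's API, and all function and variable names are hypothetical.

```python
import numpy as np

def marker_based_proportion(expr, marker_genes, gene_index):
    """Estimate a relative cell-type proportion per sample from bulk
    expression using one cell type's marker genes.

    expr         : (n_genes, n_samples) bulk expression matrix
    marker_genes : marker gene names for a single cell type
    gene_index   : dict mapping gene name -> row index in expr
    """
    rows = [gene_index[g] for g in marker_genes if g in gene_index]
    sub = expr[rows, :]
    # z-score each marker gene across samples
    z = (sub - sub.mean(axis=1, keepdims=True)) / sub.std(axis=1, keepdims=True)
    # The first right singular vector summarizes the variation the
    # markers share, a surrogate for cell-type abundance per sample.
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    surrogate = vt[0]
    # Orient so larger values correspond to higher marker expression.
    if np.corrcoef(surrogate, z.mean(axis=0))[0, 1] < 0:
        surrogate = -surrogate
    return surrogate

rng = np.random.default_rng(0)
expr = rng.lognormal(size=(100, 12))          # toy bulk data
gene_index = {f"G{i}": i for i in range(100)}  # hypothetical gene names
prop = marker_based_proportion(expr, ["G1", "G2", "G3"], gene_index)
```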
Incorporating neuro-inspired adaptability for continual learning in artificial intelligence
Continual learning aims to empower artificial intelligence with strong adaptability to the real world. For this purpose, a desirable solution should properly balance memory stability with learning plasticity, and acquire sufficient compatibility to capture the observed distributions. Existing advances mainly focus on preserving memory stability to overcome catastrophic forgetting, but it remains difficult to flexibly accommodate incremental changes as biological intelligence does. Here, by modelling a robust Drosophila learning system that actively regulates forgetting with multiple learning modules, we propose a generic approach that appropriately attenuates old memories in parameter distributions to improve learning plasticity, and accordingly coordinates a multi-learner architecture to ensure solution compatibility. Through extensive theoretical and empirical validation, our approach not only enhances the performance of continual learning, especially over synaptic regularization methods in task-incremental settings, but also potentially advances the understanding of neurological adaptive mechanisms.

Continual learning is an innate ability in biological intelligence to accommodate real-world changes, but it remains challenging for artificial intelligence. Wang, Zhang and colleagues model key mechanisms of a biological learning system, in particular active forgetting and parallel modularity, to incorporate neuro-inspired adaptability to improve continual learning in artificial intelligence systems.
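The idea of attenuating old memories in parameter distributions can be illustrated with a generic synaptic-regularization scheme. The Python sketch below adds a decay factor gamma to the accumulated importance weights of an EWC-style quadratic penalty; the paper's actual method differs in detail, and all names and values here are hypothetical.

```python
import numpy as np

def regularized_loss(task_loss, params, anchor, importance, lam=1.0):
    """Task loss plus a quadratic penalty that keeps parameters near
    values that were important for previously learned tasks."""
    penalty = np.sum(importance * (params - anchor) ** 2)
    return task_loss + 0.5 * lam * penalty

def consolidate(importance, new_importance, params, gamma=0.9):
    """After finishing a task, fold in fresh importance estimates.

    gamma < 1 actively attenuates old memories, trading some memory
    stability for the plasticity needed to fit future tasks; gamma = 1
    recovers plain accumulation with no forgetting.
    """
    importance = gamma * importance + new_importance
    anchor = params.copy()           # new reference point for the penalty
    return importance, anchor

# Toy usage with a 5-parameter model
params = np.zeros(5)
importance, anchor = np.zeros(5), np.zeros(5)
new_importance = np.ones(5)          # e.g. diagonal Fisher estimates
importance, anchor = consolidate(importance, new_importance, params)
loss = regularized_loss(0.42, params + 0.1, anchor, importance)
```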
GeNN: a code generation framework for accelerated brain simulations
Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, though the speedup differs for other models. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
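For readers unfamiliar with conductance-based models, the NumPy sketch below steps a population of classic Hodgkin-Huxley neurons with forward Euler. This is the kind of per-neuron update that a framework like GeNN turns into GPU code; it is not GeNN's own API (see the project website for that), just a plain reference implementation of the standard equations.

```python
import numpy as np

# Classic Hodgkin-Huxley constants (mV, ms, mS/cm^2, uF/cm^2)
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler step for a vector of HH neurons.
    (The removable singularities at V = -40 and -55 mV are left
    untreated in this sketch.)"""
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)

    # Conductance-based ionic currents: sodium, potassium, leak
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V = V + dt * (I_ext - I_ion) / C
    m = m + dt * (a_m * (1 - m) - b_m * m)
    h = h + dt * (a_h * (1 - h) - b_h * h)
    n = n + dt * (a_n * (1 - n) - b_n * n)
    return V, m, h, n

# 1,000 neurons at rest, driven by a constant current for 50 ms
N = 1000
V = np.full(N, -65.0); m = np.full(N, 0.05)
h = np.full(N, 0.6);   n = np.full(N, 0.32)
for _ in range(5000):
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
```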
Hybrid computing using a neural network with dynamic external memory
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.

A 'differentiable neural computer' is introduced that combines the learning capabilities of a neural network with an external memory analogous to the random-access memory in a conventional computer.

A neural network/computer program hybrid: Conventional computer algorithms can process extremely large and complex data structures such as the worldwide web or social networks, but they must be programmed manually by humans. Neural networks can learn from examples to recognize complex patterns, but they cannot easily parse and organize complex data structures. Now Alex Graves, Greg Wayne and colleagues have developed a hybrid learning machine, called a differentiable neural computer (DNC), that is composed of a neural network that can read from and write to an external memory structure analogous to the random-access memory in a conventional computer. The DNC can thus learn to plan routes on the London Underground, and to achieve goals in a block puzzle, merely by trial and error, without prior knowledge or ad hoc programming for such tasks.
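The core of the DNC's memory access is differentiable content-based addressing, which is compact enough to sketch. The NumPy fragment below implements a single content-based read (cosine similarity sharpened by a softmax); the full DNC adds write heads, usage-based allocation and temporal links, so this is an illustrative simplification with hypothetical names, not the published implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta):
    """Content-based read from an external memory matrix.

    memory : (N, W) matrix of N slots, each a W-dim vector
    key    : (W,) lookup key emitted by the controller network
    beta   : scalar sharpness; larger -> more focused attention

    Returns (read_vector, weights). Every step is differentiable, so
    the controller can be trained end to end through the memory.
    """
    sim = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = softmax(beta * sim)          # soft attention over memory slots
    return w @ memory, w

rng = np.random.default_rng(1)
M = rng.normal(size=(16, 8))                         # 16 slots of width 8
noisy_key = M[3] + 0.1 * rng.normal(size=8)          # query near slot 3
r, w = content_read(M, noisy_key, beta=5.0)          # w peaks at slot 3
```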