MBRL Search Results

180 results for "Brunel, Nicolas"
Is cortical connectivity optimized for storing information?
Maximizing information storage in recurrent networks leads to connectivity matrices whose statistics reproduce experimentally observed features of the connectivity between pyramidal cells in cortex. These include a large fraction of potential synapses and an over-representation of bidirectionally connected pairs of neurons, as compared to random networks. Cortical networks are thought to be shaped by experience-dependent synaptic plasticity. Theoretical studies have shown that synaptic plasticity allows a network to store a memory of patterns of activity such that they become attractors of the dynamics of the network. Here we study the properties of the excitatory synaptic connectivity in a network that maximizes the number of stored patterns of activity in a robust fashion. We show that the resulting synaptic connectivity matrix has the following properties: it is sparse, with a large fraction of zero synaptic weights ('potential' synapses); bidirectionally coupled pairs of neurons are over-represented in comparison to a random network; and bidirectionally connected pairs have stronger synapses on average than unidirectionally connected pairs. All these features quantitatively reproduce available data on connectivity in cortex. This suggests that synaptic connectivity in cortex is optimized to store a large number of attractor states in a robust fashion.
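As an editorial illustration of the connectivity statistic this abstract discusses (not code from the paper), the sketch below measures how over-represented bidirectionally connected pairs are in a directed connectivity matrix relative to a random network, where the chance level for a reciprocal pair is simply the squared connection probability.

```python
# Minimal sketch: quantify over-representation of bidirectional pairs in a
# directed connectivity matrix. All names and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, p = 500, 0.1                              # neurons, connection probability

# Example connectivity: a random directed graph; in the paper's setting this
# would be replaced by the optimized (learned) connectivity matrix.
C = (rng.random((N, N)) < p).astype(int)
np.fill_diagonal(C, 0)

pairs = N * (N - 1) / 2                      # number of unordered neuron pairs
bidir = np.sum((C == 1) & (C.T == 1)) / 2    # reciprocally connected pairs
p_hat = C.sum() / (N * (N - 1))              # empirical connection probability

observed = bidir / pairs
expected = p_hat ** 2                        # chance level if directions were independent
print(f"bidirectional fraction: {observed:.4f}, chance level: {expected:.4f}, "
      f"over-representation: {observed / expected:.2f}x")
```

For a random graph this ratio is close to 1; the abstract's claim is that connectivity matrices optimized for storage (like measured cortical connectivity) show a ratio well above 1.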
Calcium-based plasticity model explains sensitivity of synaptic changes to spike pattern, rate, and dendritic location
Multiple stimulation protocols have been found to be effective in changing synaptic efficacy by inducing long-term potentiation or depression. In many of those protocols, increases in postsynaptic calcium concentration have been shown to play a crucial role. However, it is still unclear whether and how the dynamics of the postsynaptic calcium alone determine the outcome of synaptic plasticity. Here, we propose a calcium-based model of a synapse in which potentiation and depression are activated above calcium thresholds. We show that this model gives rise to a large diversity of spike timing-dependent plasticity curves, most of which have been observed experimentally in different systems. It accounts quantitatively for plasticity outcomes evoked by protocols involving patterns with variable spike timing and firing rate in hippocampus and neocortex. Furthermore, it allows us to predict that differences in plasticity outcomes in different studies are due to differences in parameters defining the calcium dynamics. The model provides a mechanistic understanding of how various stimulation protocols provoke specific synaptic changes through the dynamics of the calcium concentration and thresholds that implement, in simplified fashion, the protein signaling cascades leading to long-term potentiation and long-term depression. The combination of biophysical realism and analytical tractability makes it the ideal candidate to study plasticity at the synapse, neuron, and network levels.
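For readers unfamiliar with this class of models, here is a deliberately simplified sketch of a calcium-threshold plasticity rule of the kind the abstract describes: the weight is potentiated while a spike-driven calcium trace exceeds a high threshold and depressed while it exceeds a lower one. The published model's exact equations and parameters differ; everything below is illustrative only.

```python
# Simplified calcium-threshold plasticity sketch (illustrative parameters).
import numpy as np

dt, T = 0.1, 1000.0                      # time step and duration (ms)
t = np.arange(0.0, T, dt)
tau_ca = 20.0                            # calcium decay time constant (ms)
c_pre, c_post = 0.6, 1.0                 # calcium jumps for pre/post spikes
theta_d, theta_p = 1.0, 1.2              # depression / potentiation thresholds
gamma_d, gamma_p = 0.02, 0.1             # learning rates (per ms)

pre_spikes = np.arange(50.0, T, 100.0)   # presynaptic spike times (ms)
post_spikes = pre_spikes + 10.0          # postsynaptic spikes 10 ms later

c, w = 0.0, 0.5                          # calcium trace and synaptic weight
for tk in t:
    c *= np.exp(-dt / tau_ca)                        # calcium decay
    if np.any(np.abs(pre_spikes - tk) < dt / 2):
        c += c_pre                                   # pre-triggered transient
    if np.any(np.abs(post_spikes - tk) < dt / 2):
        c += c_post                                  # post-triggered transient
    w += dt * (gamma_p * (1 - w) * (c > theta_p)     # potentiation above theta_p
               - gamma_d * w * (c > theta_d))        # depression above theta_d

print(f"synaptic weight after 10 pre-post pairings: {w:.2f} (started at 0.50)")
```

With these (assumed) parameters a pre-before-post pairing drives net potentiation; changing the thresholds, calcium amplitudes, or rates changes the sign and shape of the outcome, which is the parameter dependence the abstract emphasizes.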
Multiple forms of working memory emerge from synapse–astrocyte interactions in a neuron–glia network model
Persistent activity in populations of neurons, time-varying activity across a neural population, or activity-silent mechanisms carried out by hidden internal states of the neural population have been proposed as different mechanisms of working memory (WM). Whether these mechanisms could be mutually exclusive or occur in the same neuronal circuit remains, however, elusive, and so do their biophysical underpinnings. While WM is traditionally regarded to depend purely on neuronal mechanisms, cortical networks also include astrocytes that can modulate neural activity. We propose and investigate a network model that includes both neurons and glia and show that glia–synapse interactions can lead to multiple stable states of synaptic transmission. Depending on parameters, these interactions can lead in turn to distinct patterns of network activity that can serve as substrates for WM.
From Spiking Neuron Models to Linear-Nonlinear Models
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
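The structure of an LN cascade is easy to state in code. The sketch below convolves an input signal with a linear filter and passes the result through a static nonlinearity to obtain a firing-rate prediction; note that in the paper both stages are derived analytically from the spiking models, whereas the exponential filter and threshold-linear function used here are placeholders chosen only for illustration.

```python
# Minimal LN-cascade sketch: linear filtering followed by a static nonlinearity.
import numpy as np

dt = 0.1e-3                                   # time step (s)
t = np.arange(0.0, 2.0, dt)
signal = 0.5 * np.sin(2 * np.pi * 5 * t)      # slow time-varying input (a.u.)

# Assumed linear stage: exponential filter with an illustrative timescale
tau_f = 10e-3
kernel = np.exp(-np.arange(0.0, 0.1, dt) / tau_f)
kernel /= kernel.sum()                        # normalize to unit area
filtered = np.convolve(signal, kernel)[: len(t)]

# Assumed static nonlinearity: threshold-linear mapping to a rate (Hz)
r0, gain = 20.0, 40.0
rate = np.maximum(r0 + gain * filtered, 0.0)

print(f"mean predicted rate: {rate.mean():.1f} Hz, "
      f"range: {rate.min():.1f}-{rate.max():.1f} Hz")
```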
Attractor neural networks with double well synapses
It is widely believed that memory storage depends on activity-dependent synaptic modifications. Classical studies of learning and memory in neural networks describe synaptic efficacy either as continuous or discrete. However, recent results suggest an intermediate scenario in which synaptic efficacy is described by a continuous variable whose distribution is peaked around a small set of discrete values. Motivated by these results, we explored a model in which each synapse is described by a continuous variable that evolves in a potential with multiple minima. External inputs to the network can switch synapses from one potential well to another. Our analytical and numerical results show that this model can interpolate between models with discrete synapses, which correspond to the deep potential limit, and models in which synapses evolve in a single quadratic potential. We find that the storage capacity of the network with double well synapses exhibits a power law dependence on the network size, rather than the logarithmic dependence observed in models with single well synapses. In addition, synapses with deeper potential wells lead to more robust information storage in the presence of noise. When memories are sparsely encoded, the scaling of the capacity with network size is similar to previously studied network models in the sparse coding limit.
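A minimal illustration of the central ingredient, a synaptic variable evolving in a double-well potential and switched between wells by external input, is sketched below; the potential shape, noise model, and parameters are assumptions chosen for clarity, not the paper's.

```python
# Illustrative double-well synapse: efficacy w diffuses in U(w) = (a/4)(w^2 - 1)^2,
# integrated with Euler-Maruyama; a transient input switches it between wells.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 20.0
a, noise = 4.0, 0.5                    # well-depth scale and noise amplitude
drive = 2.5                            # external input applied mid-simulation

w = -1.0                               # start in the lower ("depressed") well
trace = []
for k in range(int(T / dt)):
    force = -a * w * (w**2 - 1)        # -dU/dw for the double-well potential
    inp = drive if 5.0 < k * dt < 7.0 else 0.0
    w += dt * (force + inp) + np.sqrt(dt) * noise * rng.normal()
    trace.append(w)

trace = np.array(trace)
print(f"mean w before input: {trace[:int(5/dt)].mean():+.2f}, "
      f"after input: {trace[int(7/dt):].mean():+.2f}")
```

With a deep enough well (large a) relative to the noise, the synapse stays put until driven, which is the discrete-like regime the abstract refers to; a shallow potential recovers essentially continuous behavior.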
Coupled ripple oscillations between the medial temporal lobe and neocortex retrieve human memory
Episodic memory retrieval relies on the recovery of neural representations of waking experience. This process is thought to involve a communication dynamic between the medial temporal lobe memory system and the neocortex. How this occurs is largely unknown, however, especially as it pertains to awake human memory retrieval. Using intracranial electroencephalographic recordings, we found that ripple oscillations were dynamically coupled between the human medial temporal lobe (MTL) and temporal association cortex. Coupled ripples were more pronounced during successful verbal memory retrieval and recovered the cortical neural representations of remembered items. Together, these data provide direct evidence that coupled ripples between the MTL and association cortex may underlie successful memory retrieval in the human brain.
Optimal Control and Additive Perturbations Help in Estimating Ill-Posed and Uncertain Dynamical Systems
Ordinary differential equations (ODEs) are routinely calibrated on real data for estimating unknown parameters or for reverse-engineering. Nevertheless, standard statistical techniques can give disappointing results because of the complex relationship between parameters and states, which makes the corresponding estimation problem ill-posed. Moreover, ODEs are mechanistic models that are prone to modeling errors, whose influences on inference are often neglected during statistical analysis. We propose a regularized estimation framework, called Tracking, which consists of adding a perturbation (an L2 function) to the original ODE. This perturbation facilitates data fitting and also represents possible model misspecifications, so that parameter estimation is done by solving a trade-off between data fidelity and model fidelity. We show that the underlying optimization problem is an optimal control problem that can be solved by the Pontryagin maximum principle for general nonlinear and partially observed ODEs. The same methodology can be used for the joint estimation of finite-dimensional and time-varying parameters. We show, in the case of a well-specified parametric model, that our estimator is consistent and reaches the root-n rate. In addition, numerical experiments considering various sources of model misspecification show that Tracking still furnishes accurate estimates. Finally, we consider semiparametric estimation on both simulated data and a real data example. Supplementary materials for this article are available online.
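A crude, discretized illustration of the trade-off the abstract describes is sketched below: an additive perturbation u(t) is appended to a deliberately misspecified candidate ODE, and the parameter and perturbation are jointly estimated by minimizing a data-fidelity term plus a penalty on u. The paper solves this as an optimal control problem via the Pontryagin maximum principle; the brute-force optimization here is only a stand-in, and all model choices and values are assumptions.

```python
# Toy "data fidelity + model fidelity" trade-off with an additive perturbation.
import numpy as np
from scipy.optimize import minimize

dt, T = 0.1, 5.0
t = np.arange(0.0, T, dt)
n = len(t)

# Synthetic data from dx/dt = -1.0 * x + 0.5 * sin(t); the fitted model below
# omits the sine forcing, so it is deliberately misspecified.
x_true = np.zeros(n)
x_true[0] = 2.0
for k in range(n - 1):
    x_true[k + 1] = x_true[k] + dt * (-1.0 * x_true[k] + 0.5 * np.sin(t[k]))
y = x_true + 0.05 * np.random.default_rng(2).normal(size=n)

lam = 1.0                                   # weight of the model-fidelity penalty

def cost(z):
    theta, u = z[0], z[1:]                  # parameter and discretized perturbation
    x = np.zeros(n)
    x[0] = y[0]
    for k in range(n - 1):                  # Euler integration of dx/dt = theta*x + u
        x[k + 1] = x[k] + dt * (theta * x[k] + u[k])
    return np.sum((y - x) ** 2) + lam * dt * np.sum(u ** 2)

z0 = np.zeros(n)                            # [theta, u_0, ..., u_{n-2}]
res = minimize(cost, z0, method="L-BFGS-B")
print(f"estimated theta: {res.x[0]:.2f} (data generated with theta = -1.0)")
```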
Dynamic control of sequential retrieval speed in networks with heterogeneous learning rules
Temporal rescaling of sequential neural activity has been observed in multiple brain areas during behaviors involving time estimation and motor execution at variable speeds. Temporally asymmetric Hebbian rules have been used in network models to learn and retrieve sequential activity, with characteristics that are qualitatively consistent with experimental observations. However, in these models sequential activity is retrieved at a fixed speed. Here, we investigate the effects of a heterogeneity of plasticity rules on network dynamics. In a model in which neurons differ by the degree of temporal symmetry of their plasticity rule, we find that retrieval speed can be controlled by varying external inputs to the network. Neurons with temporally symmetric plasticity rules act as brakes and tend to slow down the dynamics, while neurons with temporally asymmetric rules act as accelerators of the dynamics. We also find that such networks can naturally generate separate ‘preparatory’ and ‘execution’ activity patterns with appropriate external inputs.
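The qualitative difference between temporally symmetric and asymmetric Hebbian rules can be seen in a toy calculation: with random binary patterns, an asymmetric rule maps each pattern onto the next one in the sequence under a single synchronous update, while a symmetric rule leaves the pattern fixed. The sketch below illustrates only that point and is not the paper's rate network or its mixed-rule model.

```python
# Symmetric vs. temporally asymmetric Hebbian rules on a sequence of patterns.
import numpy as np

rng = np.random.default_rng(3)
N, P = 1000, 10
xi = rng.choice([-1, 1], size=(P, N))        # a sequence of random binary patterns

W_sym = xi.T @ xi / N                        # temporally symmetric Hebbian rule
W_asym = xi[1:].T @ xi[:-1] / N              # temporally asymmetric rule (mu -> mu+1)

s = xi[0]
next_asym = np.sign(W_asym @ s)              # one synchronous update under each rule
next_sym = np.sign(W_sym @ s)

print("asymmetric rule, overlap with NEXT pattern:",
      round(float(next_asym @ xi[1]) / N, 3))
print("symmetric rule,  overlap with SAME pattern:",
      round(float(next_sym @ xi[0]) / N, 3))
```

Mixing the two terms with different weights across neurons, as the abstract describes, then gives some neurons a "brake" (symmetric) and others an "accelerator" (asymmetric) role in the sequence dynamics.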
Forgetting Leads to Chaos in Attractor Networks
Attractor networks are an influential theory for memory storage in brain systems. This theory has recently been challenged by the observation of strong temporal variability in neuronal recordings during memory tasks. In this work, we study a sparsely connected attractor network where memories are learned according to a Hebbian synaptic plasticity rule. After recapitulating known results for the continuous, sparsely connected Hopfield model, we investigate a model in which new memories are learned continuously and old memories are forgotten, using an online synaptic plasticity rule. We show that for a forgetting timescale that optimizes storage capacity, the qualitative features of the network's memory retrieval dynamics are age dependent: most recent memories are retrieved as fixed-point attractors while older memories are retrieved as chaotic attractors characterized by strong heterogeneity and temporal fluctuations. Therefore, fixed-point and chaotic attractors coexist in the network phase space. The network presents a continuum of statistically distinguishable memory states, where chaotic fluctuations appear abruptly above a critical age and then increase gradually until the memory disappears. We develop a dynamical mean field theory to analyze the age-dependent dynamics and compare the theory with simulations of large networks. We compute the optimal forgetting timescale for which the number of stored memories is maximized. We find that the maximum age at which memories can be retrieved is given by an instability at which old memories destabilize and the network converges instead to a more recent one. Our numerical simulations show that a high degree of sparsity is necessary for the dynamical mean field theory to accurately predict the network capacity. To test the robustness and biological plausibility of our results, we study numerically the dynamics of a network with learning rules and transfer function inferred from in vivo data in the online learning scenario. We find that all aspects of the network's dynamics characterized analytically in the simpler model also hold in this model. These results are highly robust to noise. Finally, our theory provides specific predictions for delay response tasks with aging memoranda. In particular, it predicts a higher degree of temporal fluctuations in retrieval states associated with older memories, and it also predicts that fluctuations should be faster in older memories. Overall, our theory of attractor networks that continuously learn new information at the price of forgetting old memories can account for the observed diversity of retrieval states in the cortex, and in particular, the strong temporal fluctuations of cortical activity.
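To make the forgetting (palimpsest) idea concrete, the sketch below implements a toy Hopfield-style network in which memories are learned online with exponential forgetting and retrieval quality is then measured as a function of memory age; the network, learning rule, and parameters are illustrative and much simpler than the sparsely connected model analyzed in the paper.

```python
# Toy palimpsest: online Hebbian learning with exponential forgetting,
# W <- lam * W + xi xi^T / N, then retrieval quality versus memory age.
import numpy as np

rng = np.random.default_rng(4)
N, P, lam = 500, 300, 0.99
patterns = rng.choice([-1, 1], size=(P, N))

W = np.zeros((N, N))
for xi in patterns:                              # online learning with forgetting
    W = lam * W + np.outer(xi, xi) / N
np.fill_diagonal(W, 0)

def retrieve(cue, steps=10):
    s = cue.copy().astype(float)
    for _ in range(steps):                       # synchronous sign updates
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

for age in (0, 10, 30, 60, 120):
    xi = patterns[P - 1 - age]                   # age 0 = most recently learned
    m = retrieve(xi) @ xi / N                    # overlap with the stored pattern
    print(f"memory age {age:3d}: retrieval overlap = {m:+.2f}")
```

Recent memories come back with overlap near 1 while sufficiently old ones do not; the paper's analysis goes further, characterizing the intermediate ages at which retrieval states become chaotic rather than fixed points.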