Catalogue Search | MBRL
88 result(s) for "Zhu, Vicky"
Nonlinear stimulus representations in neural circuits with approximate excitatory-inhibitory balance
by Zhu, Vicky; Rosenbaum, Robert; Baker, Cody
in Animals; Artificial neural networks; Biology and Life Sciences
2020
Balanced excitation and inhibition is widely observed in cortex. How does this balance shape neural computations and stimulus representations? This question is often studied using computational models of neuronal networks in a dynamically balanced state. But balanced network models predict a linear relationship between stimuli and population responses. So how do cortical circuits implement nonlinear representations and computations? We show that every balanced network architecture admits stimuli that break the balanced state, and these breaks in balance push the network into a "semi-balanced state" characterized by excess inhibition to some neurons, but an absence of excess excitation. The semi-balanced state produces nonlinear stimulus representations and nonlinear computations, is unavoidable in networks driven by multiple stimuli, is consistent with cortical recordings, and has a direct mathematical relationship to artificial neural networks.
Journal Article
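A minimal toy sketch in Python (illustrative only, not the authors' model or code) of the linear-versus-nonlinear point in the abstract: in a threshold-linear rate network, a fixed point where every unit stays active depends linearly on the stimulus, while a stimulus that silences a unit through excess inhibition makes the population response non-additive. The weights and stimuli below are arbitrary choices for illustration.

import numpy as np

# Two-unit threshold-linear rate network: dr/dt = -r + relu(W r + x).
W = np.array([[0.5, -1.5],
              [1.0, -1.2]])

def fixed_point(x, steps=5000, dt=0.01):
    # Iterate the dynamics to an (approximate) fixed point.
    r = np.zeros(2)
    for _ in range(steps):
        r = r + dt * (-r + np.maximum(W @ r + x, 0.0))
    return r

x_a = np.array([2.0, 0.0])   # stimulus A: both units stay active (balanced-like regime)
x_b = np.array([0.0, 3.0])   # stimulus B: unit 1 is silenced by excess inhibition

r_a, r_b, r_ab = fixed_point(x_a), fixed_point(x_b), fixed_point(x_a + x_b)
print(r_a, r_b, r_ab)        # r_ab != r_a + r_b: the representation is nonlinear

When all units remain active, the fixed point solves r = (I - W)^{-1} x, a linear map of the stimulus; once a unit is pushed below threshold, the response becomes only piecewise linear in x, which is the kind of nonlinearity the semi-balanced state supplies in this toy picture.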
The cells are all-right: Regulation of the Lefty genes by separate enhancers in mouse embryonic stem cells
by Taylor, Tiegh; Zhu, Hongyu Vicky; Moorthy, Sakthi D.
in Animals; Chromatin; Chromatin - genetics
2024
Enhancers play a critical role in regulating precise gene expression patterns essential for development and cellular identity; however, how gene-enhancer specificity is encoded within the genome is not clearly defined. To investigate how this specificity arises within topologically associated domains (TADs), we performed allele-specific genome editing of sequences surrounding the Lefty1 and Lefty2 paralogs in mouse embryonic stem cells. The Lefty genes arose from a tandem duplication event and interact with each other in chromosome conformation capture assays, which place them within the same TAD. Despite their physical proximity, we demonstrate that these genes are primarily regulated by separate enhancer elements. Using CRISPR-Cas9-mediated deletions to remove the intervening chromatin between the Lefty genes, we reveal a distance-dependent dosage effect of the Lefty2 enhancer on Lefty1 expression. These findings indicate a role for chromatin distance in insulating gene expression domains in the Lefty locus in the absence of architectural insulation.
Journal Article
Evaluating the extent to which homeostatic plasticity learns to compute prediction errors in unstructured neuronal networks
2022
The brain is believed to operate in part by making predictions about sensory stimuli and encoding deviations from these predictions in the activity of “prediction error neurons.” This principle defines the widely influential theory of predictive coding. The precise circuitry and plasticity mechanisms through which animals learn to compute and update their predictions are unknown. Homeostatic inhibitory synaptic plasticity is a promising mechanism for training neuronal networks to perform predictive coding. Homeostatic plasticity causes neurons to maintain a steady, baseline firing rate in response to inputs that closely match the inputs on which a network was trained, but firing rates can deviate from this baseline in response to stimuli that are mismatched from training. We combine computer simulations and mathematical analysis to systematically test the extent to which randomly connected, unstructured networks compute prediction errors after training with homeostatic inhibitory synaptic plasticity. We find that homeostatic plasticity alone is sufficient for computing prediction errors for trivial time-constant stimuli, but not for more realistic time-varying stimuli. We use a mean-field theory of plastic networks to explain our findings and characterize the assumptions under which they apply.
Journal Article
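A small hedged sketch in Python (a toy illustration, not the article's network model or code): rate-based homeostatic inhibitory plasticity on a single threshold-linear neuron. Inhibitory weights adapt until the trained stimulus evokes a target baseline rate; a mismatched stimulus then evokes a deviation from that baseline, the "prediction error" signal the abstract describes. The stimulus values, learning rate, and target rate are arbitrary assumptions.

import numpy as np

w_exc = np.array([1.0, 1.0])   # fixed excitatory weights
w_inh = np.zeros(2)            # plastic inhibitory weights
r_target, eta = 0.5, 0.02      # homeostatic set point and learning rate

def rate(x):
    # Threshold-linear response; inhibition is driven by the same stimulus.
    return max(w_exc @ x - w_inh @ x, 0.0)

x_trained = np.array([2.0, 1.0])
for _ in range(2000):
    r = rate(x_trained)
    w_inh += eta * x_trained * (r - r_target)   # homeostatic inhibitory rule
    w_inh = np.maximum(w_inh, 0.0)              # inhibitory weights stay non-negative

print("trained stimulus   :", rate(x_trained))             # ~0.5, the baseline
print("mismatched stimulus:", rate(np.array([1.0, 2.0])))  # deviates from baseline

In this toy, the rule only constrains the response along the trained stimulus direction, which is one way to see why such plasticity can report deviations from trained inputs without, on its own, handling richer time-varying predictions.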
Fixed Points, Learning, and Plasticity in Recurrent Neuronal Network Models
by Zhu, Vicky R; Hauenstein, Jonathan
in Applied Mathematics; Artificial intelligence; Neurosciences
2023
Recurrent neural network models (RNNs) are widely used in machine learning and in computational neuroscience. While recurrent artificial neural networks (ANNs) share some basic building blocks with cortical neuronal networks in the brain, they differ in some fundamental ways. For example, neurons communicate and learn differently. In ANNs, neurons communicate through activations. In comparison, biological neurons communicate via synapses, with signal processing carried out through spiking behavior. To link neuroscience and machine learning, I study models of recurrent neuronal networks to establish direct, one-to-one analogs between artificial and biological neuronal networks. I first showed their connection by formalizing the features of cortical networks into theorems that link to machine learning activations. This work extended the traditional excitatory-inhibitory balance network theory into a “semi-balanced” state in which networks implement high-dimensional and nonlinear stimulus representations. To understand brain operations and neuron plasticity, I combined numerical simulations of biological networks and mean-field rate models to evaluate the extent to which homeostatic inhibitory plasticity learns to compute prediction errors in randomly connected, unstructured neuronal networks. I found that homeostatic synaptic plasticity alone is not sufficient to learn and perform non-trivial predictive coding tasks in unstructured neuronal network models. To further investigate learning, I derived two new biologically-inspired RNN learning rules for the fixed points of recurrent dynamics. Under a natural re-parameterization of the network model, they can be interpreted as steepest descent and gradient descent on the weight matrix with respect to a non-Euclidean metric and gradient, respectively. Moreover, compared with the standard gradient-based learning methods, one of our alternative learning rules is robust and computationally more efficient. These learning rules produce results that have implications for training RNNs to be used in computational neuroscience studies and machine learning applications.
Dissertation
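As a point of comparison for the fixed-point learning rules described above, a hedged Python sketch of the standard gradient-based baseline (not the dissertation's new rules): train the recurrent weights W so that the fixed point r* of r = tanh(W r + x) matches a target, using the implicit-function (Almeida-Pineda style) gradient of the squared error. Network size, targets, and step sizes are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(1)
n = 10
W = 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)   # small weights: contractive dynamics
x = rng.standard_normal(n)

def fixed_point(W, x, iters=200):
    r = np.zeros(n)
    for _ in range(iters):
        r = np.tanh(W @ r + x)   # converges for small ||W||
    return r

# Target fixed point generated by a nearby "true" network, so it is reachable.
W_true = W + 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
y = fixed_point(W_true, x)

lr = 0.1
for step in range(300):
    r = fixed_point(W, x)
    g = r - y                                       # dL/dr* for L = 0.5 * ||r* - y||^2
    D = np.diag(1.0 - r**2)                         # f'(W r* + x) for f = tanh
    adj = np.linalg.solve(np.eye(n) - W.T @ D, g)   # implicit-function adjoint
    W -= lr * np.outer(D @ adj, r)                  # gradient step on the weight matrix
    if step % 100 == 0:
        print(step, 0.5 * g @ g)                    # squared error should decrease

The dissertation reportedly re-parameterizes this problem so that its rules become steepest descent or gradient descent under a non-Euclidean metric; the sketch above shows only the ordinary Euclidean gradient such rules are compared against.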