Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
23 result(s) for "Bonacchi, Niccolò"
Spatial maps in piriform cortex during olfactory navigation
2022
Odours are a fundamental part of the sensory environment used by animals to guide behaviours such as foraging and navigation1,2. Primary olfactory (piriform) cortex is thought to be the main cortical region for encoding odour identity3–8. Here, using neural ensemble recordings in freely moving rats performing an odour-cued spatial choice task, we show that posterior piriform cortex neurons carry a robust spatial representation of the environment. Piriform spatial representations have features of a learned cognitive map, being most prominent near odour ports, stable across behavioural contexts and independent of olfactory drive or reward availability. The accuracy of spatial information carried by individual piriform neurons was predicted by the strength of their functional coupling to the hippocampal theta rhythm. Ensembles of piriform neurons concurrently represented odour identity as well as spatial locations of animals, forming an odour–place map. Our results reveal a function for piriform cortex in spatial cognition and suggest that it is well-suited to form odour–place associations and guide olfactory-cued spatial navigation.
Studies using neural ensemble recordings in rats show that cells in the piriform cortex carry a spatial representation of the environment and link locations to olfactory sensory inputs.
Journal Article
Partitioning variability in animal behavioral videos using semi-supervised variational autoencoders
2021
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
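The paper's model is a semi-supervised variational autoencoder; as a rough, purely illustrative linear analogue of its core idea — splitting video variability into a part explained by supervised pose estimates and a residual part summarized by unsupervised dimensionality reduction — one might sketch the following in NumPy. All shapes and data here are synthetic stand-ins, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical shapes): 200 video frames flattened to 64
# pixels, with 4 pose coordinates (2 keypoints, x/y) tracked per frame.
poses = rng.normal(size=(200, 4))
frames = poses @ rng.normal(size=(4, 64)) + 0.1 * rng.normal(size=(200, 64))

# Partition frame variability: the part linearly explained by the pose
# estimates vs. the residual ("unsupervised") part.
coef, *_ = np.linalg.lstsq(poses, frames, rcond=None)
residual = frames - poses @ coef

# Low-dimensional summary of the residual variability via PCA (SVD).
centered = residual - residual.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
latents = centered @ vt[:2].T  # 2 residual dimensions per frame
```

The `latents` here play the role of the interpretable features the paper extracts beyond the pose estimates themselves.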
Journal Article
Open multi-center intracranial electroencephalography dataset with task probing conscious visual perception
by Kahraman, Kyle; Lepauvre, Alex; Devinsky, Orrin
in 631/378/2613/2616; 631/378/2649/1398; Adult
2025
We introduce an intracranial EEG (iEEG) dataset collected as part of an adversarial collaboration between proponents of two theories of consciousness: Global Neuronal Workspace Theory and Integrated Information Theory. The data were recorded from 38 patients undergoing intracranial monitoring of epileptic seizures across three research centers using the same experimental protocol. Participants were presented with suprathreshold visual stimuli belonging to four different categories (faces, objects, letters, false fonts) in three orientations (front, left, right view), and for three durations (0.5, 1.0, 1.5 s). Participants engaged in a non-speeded Go/No-Go target detection task to identify infrequent targets, with some stimuli becoming task-relevant and others task-irrelevant. Participants also engaged in a motor localizer task. The data were checked for quality and converted to the Brain Imaging Data Structure (BIDS) format. The de-identified dataset contains demographics, clinical information, electrode reconstruction, behavioral performance, and eye-tracking data. We also provide code to preprocess and analyze the data. This dataset holds promise for reuse in consciousness science and vision neuroscience to answer questions related to stimulus processing, target detection, and task-relevance, among many others.
Journal Article
A modular architecture for organizing, processing and sharing neurophysiology data
by Steven J. West; Valeria Aguillon-Rodriguez; Kamron Saniee
in 631/114/2401; 631/1647/2198; 631/1647/334/1874/345
2023
We describe an architecture for organizing, integrating and sharing neurophysiology data within a single laboratory or across a group of collaborators. It comprises a database linking data files to metadata and electronic laboratory notes; a module collecting data from multiple laboratories into one location; a protocol for searching and sharing data; and a module for automatic analyses that populates a website. These modules can be used together or individually, by single laboratories or worldwide collaborations.
A modular architecture for managing and sharing electrophysiology, behavior, colony management and other data has been built to support individual laboratories or large consortia.
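The abstract's first module — a database linking data files to metadata — can be sketched with a minimal SQLite schema. Table and column names below are illustrative inventions, not the architecture's actual schema.

```python
import sqlite3

# Minimal sketch: sessions (metadata + lab notes) linked to dataset files.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sessions (
    id INTEGER PRIMARY KEY,
    lab TEXT, subject TEXT, start_time TEXT, notes TEXT
);
CREATE TABLE datasets (
    id INTEGER PRIMARY KEY,
    session_id INTEGER REFERENCES sessions(id),
    file_path TEXT, dataset_type TEXT
);
""")
con.execute("INSERT INTO sessions VALUES "
            "(1, 'lab_A', 'mouse_01', '2023-05-01T09:00', 'probe in V2')")
con.execute("INSERT INTO datasets VALUES "
            "(1, 1, 'spikes.npy', 'ephys'), (2, 1, 'wheel.csv', 'behavior')")

# A shared search protocol then reduces to queries such as:
# "all ephys files recorded from a given subject".
rows = con.execute("""
    SELECT d.file_path FROM datasets d
    JOIN sessions s ON d.session_id = s.id
    WHERE s.subject = 'mouse_01' AND d.dataset_type = 'ephys'
""").fetchall()
```

Keeping files on disk and only their paths plus metadata in the database is what lets the collection and analysis modules operate independently of any one lab's storage layout.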
Journal Article
Reproducibility of in vivo electrophysiological measurements in mice
2025
Understanding brain function relies on the collective work of many labs generating reproducible results. However, reproducibility has not been systematically assessed within the context of electrophysiological recordings during cognitive behaviors. To address this, we formed a multi-lab collaboration using a shared, open-source behavioral task and experimental apparatus. Experimenters in 10 laboratories repeatedly targeted Neuropixels probes to the same location (spanning secondary visual areas, hippocampus, and thalamus) in mice making decisions; this generated a total of 121 experimental replicates, a unique dataset for evaluating reproducibility of electrophysiology experiments. Despite standardizing both behavioral and electrophysiological procedures, some experimental outcomes were highly variable. A closer analysis uncovered that variability in electrode targeting hindered reproducibility, as did the limited statistical power of some routinely used electrophysiological analyses, such as single-neuron tests of modulation by individual task parameters. Reproducibility was enhanced by histological and electrophysiological quality-control criteria. Our observations suggest that data from systems neuroscience is vulnerable to a lack of reproducibility, but that across-lab standardization, including metrics we propose, can serve to mitigate this.
Journal Article
An olfactory self-test effectively screens for COVID-19
by Abebe Medhanie; Cindy Poo; Sara Spinelli
in 631/378/2624; 692/1807; [SDV.NEU.PC] Life Sciences [q-bio]/Neurons and Cognition [q-bio.NC]/Psychology and behavior
2022
Background
Key to curtailing the COVID-19 pandemic are wide-scale screening strategies. An ideal screen is one that would not rely on transporting, distributing, and collecting physical specimens. Given the olfactory impairment associated with COVID-19, we developed a perceptual measure of olfaction that relies on smelling household odorants and rating them online.
Methods
Each participant was instructed to select 5 household items and rate their perceived odor pleasantness and intensity using an online visual analogue scale. We used these data to assign an olfactory perceptual fingerprint, a value that reflects the perceived difference between odorants. We tested the performance of this real-time tool in a total of 13,484 participants (462 COVID-19 positive) from 134 countries who provided 178,820 perceptual ratings of 60 different household odorants.
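One simple way to turn per-odorant (pleasantness, intensity) ratings into a single value reflecting perceived differences between odorants is a mean pairwise distance in rating space. The study's actual fingerprint computation may differ; the numbers and odorant labels below are hypothetical.

```python
import numpy as np

# Hypothetical ratings for 5 household odorants: each row is one odorant's
# (pleasantness, intensity) on a 0-100 visual analogue scale.
ratings = np.array([
    [80.0, 55.0],   # e.g. vanilla
    [20.0, 85.0],   # e.g. vinegar
    [65.0, 40.0],
    [30.0, 70.0],
    [75.0, 60.0],
])

# Pairwise Euclidean distances between odorants in rating space.
diffs = ratings[:, None, :] - ratings[None, :, :]
dist = np.sqrt((diffs ** 2).sum(axis=-1))

# One scalar summary: mean distance over the 10 unique odorant pairs.
iu = np.triu_indices(len(ratings), k=1)
fingerprint = dist[iu].mean()
```

A collapsed fingerprint (all odorants rated alike, as with anosmia) would drive this value toward zero, which is the intuition behind using it as a screen.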
Results
We observe that olfactory ratings are indicative of COVID-19 status in a country, significantly correlating with national infection rates over time. More importantly, we observe indicative power at the individual level (79% sensitivity and 87% specificity). Critically, this olfactory screen remains effective in participants with COVID-19 but without symptoms, and in participants with symptoms but without COVID-19.
Conclusions
The current odorant-based olfactory screen adds a component to online symptom-checkers, to potentially provide an added first line of defense that can help fight disease progression at the population level. The data derived from this tool may allow better understanding of the link between COVID-19 and olfaction.
Plain language summary
From early on in the COVID-19 pandemic, a symptom associated with infection was rapid and often complete loss of the sense of smell. This rendered smell testing a potentially helpful tool in large-scale screening for SARS-CoV-2 infection. We built an online tool (smelltracker.org) that enables assessment of the sense of smell using commonly available household odorants. Initial use by 13,484 participants (462 COVID-19 positive) from 134 countries corroborated that SARS-CoV-2 infection is associated with impaired smell. Moreover, the tool detected infection in the absence of any other symptoms, including subjective loss in smell. Use of this tool may provide an added instrument for screening SARS-CoV-2 infection, and the data generated by the tool may provide for deeper understanding of the brain mechanisms involved with loss of smell associated with COVID-19.
Snitz et al. develop a web-based olfactory screening tool for COVID-19, which relies on users smelling household odorants. Based on data from participants in 134 countries, the authors report that olfactory ratings are indicative of COVID-19 status.
Journal Article
Standardized and reproducible measurement of decision-making in mice
by Zador, Anthony M; Forrest, Hamish; Vergara, Hernando
in Animal behavior; Animal experimentation; Animals
2021
Progress in science requires standardized assays whose results can be readily shared, compared, and reproduced across laboratories. Reproducibility, however, has been a concern in neuroscience, particularly for measurements of mouse behavior. Here, we show that a standardized task to probe decision-making in mice produces reproducible results across multiple laboratories. We adopted a task for head-fixed mice that assays perceptual and value-based decision making, and we standardized training protocol and experimental hardware, software, and procedures. We trained 140 mice across seven laboratories in three countries, and we collected 5 million mouse choices into a publicly available database. Learning speed was variable across mice and laboratories, but once training was complete there were no significant differences in behavior across laboratories. Mice in different laboratories adopted similar reliance on visual stimuli, on past successes and failures, and on estimates of stimulus prior probability to guide their choices. These results reveal that a complex mouse behavior can be reproduced across multiple laboratories. They establish a standard for reproducible rodent behavior, and provide an unprecedented dataset and open-access tools to study decision-making in mice. More generally, they indicate a path toward achieving reproducibility in neuroscience through collaborative open-science approaches.

In science, it is of vital importance that multiple studies corroborate the same result. Researchers therefore need to know all the details of previous experiments in order to implement the procedures as exactly as possible. However, this is becoming a major problem in neuroscience, as animal studies of behavior have proven to be hard to reproduce, and most experiments are never replicated by other laboratories.
Mice are increasingly being used to study the neural mechanisms of decision making, taking advantage of the genetic, imaging and physiological tools that are available for mouse brains. Yet, the lack of standardized behavioral assays is leading to inconsistent results between laboratories. This makes it challenging to carry out large-scale collaborations of the kind that have led to massive breakthroughs in other fields such as physics and genetics. To help make these studies more reproducible, the International Brain Laboratory (a collaborative research group) developed a standardized approach for investigating decision making in mice that incorporates every step of the process, from the training protocol to the software used to analyze the data. In the experiment, mice were shown an image of varying contrast and had to indicate, using a steering wheel, whether it appeared on their right or left. The mice then received a drop of sugar water for every correct decision. When the image contrast was high, mice could rely on their vision. However, when the image contrast was very low or zero, they needed to consider the information from previous trials and choose the side that had recently appeared more frequently. This method was used to train 140 mice in seven laboratories from three different countries. The results showed that learning speed was different across mice and laboratories, but once training was complete the mice behaved consistently, relying on visual stimuli or experiences to guide their choices in a similar way. These results show that complex behaviors in mice can be reproduced across multiple laboratories, providing an unprecedented dataset and open-access tools for studying decision making. This work could serve as a foundation for other groups, paving the way to a more collaborative approach in the field of neuroscience that could help to tackle complex research challenges.
Journal Article
Bonsai: an event-based framework for processing and controlling data streams
by Matias, Sara; Correia, Patrícia A.; Kampff, Adam R.
in Augmented reality; Automation; Behavior Control
2015
The design of modern scientific experiments requires the control and monitoring of many different data streams. However, the serial execution of programming instructions in a computer makes it a challenge to develop software that can deal with the asynchronous, parallel nature of scientific data. Here we present Bonsai, a modular, high-performance, open-source visual programming framework for the acquisition and online processing of data streams. We describe Bonsai's core principles and architecture and demonstrate how it allows for the rapid and flexible prototyping of integrated experimental designs in neuroscience. We specifically highlight some applications that require the combination of many different hardware and software components, including video tracking of behavior, electrophysiology and closed-loop control of stimulation.
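Bonsai itself is a visual, reactive framework built on observable sequences; the following is only a minimal Python analogue of its core idea, in which data sources push events through composable operators and sinks subscribe to the transformed stream. The `Stream` class and the closed-loop example are inventions for illustration, not Bonsai's API.

```python
class Stream:
    """A tiny push-based event stream with composable operators."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)

    def emit(self, value):
        for fn in self._subscribers:
            fn(value)

    def map(self, fn):
        out = Stream()
        self.subscribe(lambda v: out.emit(fn(v)))
        return out

    def filter(self, pred):
        out = Stream()
        self.subscribe(lambda v: pred(v) and out.emit(v))
        return out

# Example: a simulated camera stream of (x, y) animal positions; keep
# frames where the animal crosses x > 100 and turn them into commands,
# sketching closed-loop control of stimulation.
camera = Stream()
triggers = []
(camera
    .filter(lambda pos: pos[0] > 100)
    .map(lambda pos: ("stimulate", pos))
    .subscribe(triggers.append))

for frame_pos in [(50, 4), (120, 7), (90, 2), (140, 9)]:
    camera.emit(frame_pos)
```

Because each operator only reacts to incoming events, independent sources (video, electrophysiology, serial ports) can run concurrently and be combined without blocking one another — the asynchronous, parallel character the abstract describes.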
Journal Article
Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling and cloud-native open-source tools
by Steven J. West; Valeria Aguillon-Rodriguez; Robert Campbell
in 631/378/116; 631/378/2632; Algorithms
2024
Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce ‘Lightning Pose’, an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We released a cloud application that allows users to label data, train networks and process new videos directly from the browser.
Lightning Pose is an efficient pose estimation approach that requires few labeled training data owing to its semi-supervised learning strategy and ensembling.
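The third contribution described in the abstract — post hoc refinement by combining ensembling with Kalman smoothing — can be sketched in one dimension: average several (here simulated) network predictions, then run a random-walk Kalman filter followed by a Rauch–Tung–Striebel smoothing pass. This is a simplified stand-in, not the package's actual implementation.

```python
import numpy as np

def kalman_smooth(y, q=0.1, r=1.0):
    """Random-walk Kalman filter + RTS smoother for a 1-D trajectory."""
    n = len(y)
    x_pred = np.zeros(n); p_pred = np.zeros(n)
    x_filt = np.zeros(n); p_filt = np.zeros(n)
    x_pred[0], p_pred[0] = y[0], r
    for t in range(n):
        if t > 0:                         # predict: state carries over,
            x_pred[t] = x_filt[t - 1]     # uncertainty grows by q
            p_pred[t] = p_filt[t - 1] + q
        k = p_pred[t] / (p_pred[t] + r)   # Kalman gain
        x_filt[t] = x_pred[t] + k * (y[t] - x_pred[t])
        p_filt[t] = (1 - k) * p_pred[t]
    x_smooth = x_filt.copy()
    for t in range(n - 2, -1, -1):        # backward RTS pass
        g = p_filt[t] / p_pred[t + 1]
        x_smooth[t] = x_filt[t] + g * (x_smooth[t + 1] - x_filt[t])
    return x_smooth

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 3, 100))          # hypothetical true keypoint path
ensemble = truth + 0.3 * rng.normal(size=(5, 100))  # 5 noisy "network" outputs
smoothed = kalman_smooth(ensemble.mean(axis=0))     # ensemble, then smooth
```

Averaging reduces independent network noise, and the smoother uses both past and future frames, which is why the refined trajectory is more usable for downstream analyses than any single network's raw output.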
Journal Article