68 result(s) for "Walker, Kerry M. M."
Cortical adaptation to sound reverberation
In almost every natural environment, sounds are reflected by nearby objects, producing many delayed and distorted copies of the original sound, known as reverberation. Our brains usually cope well with reverberation, allowing us to recognize sound sources regardless of their environments. In contrast, reverberation can cause severe difficulties for speech recognition algorithms and hearing-impaired people. The present study examines how the auditory system copes with reverberation. We trained a linear model to recover a rich set of natural, anechoic sounds from their simulated reverberant counterparts. The model neurons achieved this by extending the inhibitory component of their receptive filters for more reverberant spaces, and did so in a frequency-dependent manner. These predicted effects were observed in the responses of auditory cortical neurons of ferrets in the same simulated reverberant environments. Together, these results suggest that auditory cortical neurons adapt to reverberation by adjusting their filtering properties in a manner consistent with dereverberation.
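The linear dereverberation model described above can be illustrated with a toy version: a linear filter fit by ridge regression to map a simulated reverberant signal back to its anechoic original. This is a hedged sketch with synthetic data, not the study's actual model or stimuli; the decay constant, filter length and regularisation strength are illustrative choices.

```python
# Toy linear dereverberation: fit a filter by ridge regression that maps a
# simulated reverberant signal back to its anechoic source. All signals and
# parameters here are synthetic/illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic anechoic source and a simple room impulse response (decaying tail)
anechoic = rng.standard_normal(2000)
rir = np.exp(-np.arange(64) / 10.0)          # exponentially decaying reverb
reverberant = np.convolve(anechoic, rir)[:2000]

def fit_dereverb_filter(x, y, taps=64, lam=1e-3):
    """Fit w so that the last `taps` samples of x (newest first) predict y[t]."""
    X = np.stack([x[t - taps + 1:t + 1][::-1] for t in range(taps - 1, len(x))])
    Y = y[taps - 1:len(x)]
    w = np.linalg.solve(X.T @ X + lam * np.eye(taps), X.T @ Y)  # ridge solution
    return w, X, Y

w, X, Y = fit_dereverb_filter(reverberant, anechoic)
recovered = X @ w
corr = np.corrcoef(recovered, Y)[0, 1]
print(f"correlation with anechoic target: {corr:.3f}")
```

With this simple exponential reverb tail the inverse filter is nearly exact, so the recovered signal correlates very strongly with the anechoic target.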
Ultrasonic vocalisation rate tracks the diurnal pattern of activity in winter phenotype Djungarian hamsters (Phodopus sungorus)
Vocalisations are increasingly being recognised as an important aspect of normal rodent behaviour, yet little is known about how they interact with other spontaneous behaviours such as sleep and torpor, particularly in a social setting. We obtained chronic recordings of the vocal behaviour of adult male and female Djungarian hamsters (Phodopus sungorus) housed under short photoperiod (8 h light, 16 h dark, square wave transitions) in different social contexts. The animals were kept in isolation or in same-sex sibling pairs, separated by a grid which allowed non-physical social interaction. On approximately 20% of days, hamsters spontaneously entered torpor, a state of metabolic depression that coincides with the rest phase of many small mammal species in response to actual or predicted energy shortages. Animals produced ultrasonic vocalisations (USVs) with a peak frequency of 57 kHz in both social and asocial conditions, and there was a high degree of variability in vocalisation rate between subjects. Vocalisation rate was correlated with locomotor activity across the 24-h light cycle, occurring more frequently during the dark period when the hamsters were more active and peaking around light transitions. Solitary-housed animals did not vocalise whilst torpid, and socially-housed animals remained in torpor even when torpor bouts overlapped with vocalisations. Besides a minor decrease in peak USV frequency when isolated hamsters were re-paired with their siblings, changing social contexts did not influence vocalisation behaviour or structure. In rare instances, socially-housed animals produced temporally overlapping USVs in a manner that could indicate coordination. We did not observe broadband calls (BBCs) contemporaneous with USVs in this paradigm, corroborating their correlation with physical aggression, which was absent from our experiment. Overall, we find little evidence to suggest a direct social function of hamster USVs.
We conclude that understanding the effects of vocalisations on spontaneous behaviours, such as sleep and torpor, will inform experimental design of future studies, especially where the role of social interactions is investigated.
Integrating information from different senses in the auditory cortex
Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies.
Complexity of frequency receptive fields predicts tonotopic variability across species
Primary cortical areas contain maps of sensory features, including sound frequency in primary auditory cortex (A1). Two-photon calcium imaging in mice has confirmed the presence of these global tonotopic maps, while uncovering an unexpected local variability in the stimulus preferences of individual neurons in A1 and other primary regions. Here we show that local heterogeneity of frequency preferences is not unique to rodents. Using two-photon calcium imaging in layers 2/3, we found that local variance in frequency preferences is equivalent in ferrets and mice. Neurons with multipeaked frequency tuning are less spatially organized than those tuned to a single frequency in both species. Furthermore, we show that microelectrode recordings may describe a smoother tonotopic arrangement due to a sampling bias towards neurons with simple frequency tuning. These results help explain previous inconsistencies in cortical topography across species and recording techniques.
Across-species differences in pitch perception are consistent with differences in cochlear filtering
Pitch perception is critical for recognizing speech, music and animal vocalizations, but its neurobiological basis remains unsettled, in part because of divergent results across species. We investigated whether species-specific differences exist in the cues used to perceive pitch and whether these can be accounted for by differences in the auditory periphery. Ferrets accurately generalized pitch discriminations to untrained stimuli whenever temporal envelope cues were robust in the probe sounds, but not when resolved harmonics were the main available cue. By contrast, human listeners exhibited the opposite pattern of results on an analogous task, consistent with previous studies. Simulated cochlear responses in the two species suggest that differences in the relative salience of the two pitch cues can be attributed to differences in cochlear filter bandwidths. The results support the view that cross-species variation in pitch perception reflects the constraints of estimating a sound’s fundamental frequency given species-specific cochlear tuning.
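The role of cochlear filter bandwidths in resolving harmonics can be sketched numerically. Human filter bandwidths below follow the Glasberg and Moore ERB formula; the "ferret-like" case is modelled here only as a uniformly broadened version of the human filters, and the broadening factor is an assumed, illustrative value, not a measured one.

```python
# Hedged sketch: count "resolved" harmonics under narrower vs broader
# cochlear filters. A harmonic counts as resolved when the harmonic spacing
# (f0) exceeds the filter bandwidth at that harmonic's frequency.
def erb_human(f_hz):
    """Equivalent rectangular bandwidth (Hz) of a human auditory filter
    (Glasberg & Moore formula, frequency in Hz)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def resolved_harmonics(f0, n_harmonics=20, broadening=1.0):
    """Harmonic numbers whose spacing (f0) exceeds the local bandwidth."""
    return [k for k in range(1, n_harmonics + 1)
            if f0 > erb_human(k * f0) * broadening]

f0 = 200.0  # Hz
print("human-like filters, resolved harmonics:", resolved_harmonics(f0))
print("assumed 3x-broader filters:", resolved_harmonics(f0, broadening=3.0))
```

With narrow (human-like) filters many low harmonics remain resolved, while broadening the filters leaves almost none, consistent with the abstract's account of why the two species weight pitch cues differently.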
Harmonic Training and the Formation of Pitch Representation in a Neural Network Model of the Auditory Brain
Attempting to explain the perceptual qualities of pitch has proven to be, and remains, a difficult problem. The wide range of sounds which elicit pitch and a lack of agreement across neurophysiological studies on how pitch is encoded by the brain have made this attempt more difficult. In describing the potential neural mechanisms by which pitch may be processed, a number of neural networks have been proposed and implemented. However, no unsupervised neural networks with biologically accurate cochlear inputs have yet been demonstrated. This paper proposes a simple system in which pitch representing neurons are produced in a biologically plausible setting. Purely unsupervised regimes of neural network learning are implemented and these prove to be sufficient in identifying the pitch of sounds with a variety of spectral profiles, including sounds with missing fundamental frequencies and iterated rippled noises.
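The missing-fundamental phenomenon the network is tested on can be demonstrated with a much simpler classical baseline (not the paper's model): an autocorrelation pitch estimate recovers F0 = 200 Hz from a complex containing only harmonics 2-5, even though the signal has no energy at 200 Hz itself.

```python
# Classical autocorrelation pitch estimate on a missing-fundamental complex.
# The stimulus contains harmonics 2-5 of a 200 Hz fundamental only.
import numpy as np

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
f0 = 200.0
signal = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 6))  # no 200 Hz energy

# Autocorrelation; keep non-negative lags only
ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
min_lag = int(fs / 1000)                    # ignore lags above 1000 Hz
peak_lag = min_lag + np.argmax(ac[min_lag:])
estimated_f0 = fs / peak_lag
print(f"estimated F0: {estimated_f0:.1f} Hz")
```

The autocorrelation peaks at the period shared by all the harmonics (5 ms), so the estimate lands at the absent fundamental.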
Pitch perception is adapted to species-specific cochlear filtering
Pitch perception is critical for recognizing speech, music and animal vocalizations, but its neurobiological basis remains unsettled, in part because of divergent results from different species. We used a combination of behavioural measurements and cochlear modelling to investigate whether species-specific differences exist in the cues used to perceive pitch and whether these can be accounted for by differences in the auditory periphery. Ferrets performed a pitch discrimination task well whenever temporal envelope cues were robust, but not when resolved harmonics only were available. By contrast, human listeners exhibited the opposite pattern of results on an analogous task, consistent with previous studies. Simulated cochlear responses in the two species suggest that the relative salience of the two types of pitch cues can be attributed to differences in cochlear filter bandwidths. Cross-species variation in pitch perception may therefore reflect the constraints of estimating a sound's fundamental frequency given species-specific cochlear tuning.
Auditory training alters the cortical representation of complex sounds
Auditory learning is supported by long-term changes in the neural processing of sound. We examined these task-dependent changes in auditory cortex by mapping neural sensitivity to timbre, pitch and location cues in trained ferrets (n = 5) and untrained control ferrets (n = 5). Trained animals either identified vowels in a two-alternative forced choice task (n = 3) or discriminated when a repeating vowel changed in identity or pitch (n = 2). Neural responses were recorded under anesthesia in two primary auditory cortical fields and two tonotopically organized non-primary fields. In trained animals, the overall sensitivity to sound timbre was reduced across three cortical fields compared to control animals, but maintained in a non-primary field (the posterior pseudosylvian field). While training did not increase sensitivity to timbre across auditory cortex, it did change the way in which neurons integrated spectral information: neural responses in trained animals increased in sensitivity to both the first and second formant frequencies, whereas cortical sensitivity to spectral timbre in control animals depended mostly on the second formant. Animals trained on timbre identification were required to generalize across pitch when discriminating timbre, and their neurons became less modulated by fundamental frequency relative to control animals. Finally, both trained groups showed increased spatial sensitivity and an enhanced response to sound source locations close to the midline, where the loudspeaker was located in the training chamber. These results demonstrate that training elicited widespread alterations in the cortical representation of complex sounds.
Auditory training alters the cortical representation of both learned and task irrelevant sound features
Auditory learning is supported by long-term changes in the neural processing of sound. We mapped neural sensitivity to timbre, pitch and location in animals trained to discriminate the identity of artificial vowels based on their spectral timbre in a two-alternative forced choice task (T2AFC, n = 3, female ferrets) or to detect changes in fundamental frequency or timbre of repeating artificial vowels in a go/no-go task (n = 2, female ferrets). Neural responses were recorded under anaesthesia in two primary cortical fields and two tonotopically organised non-primary fields, and these data were compared to those of naïve control animals. We observed that in both groups of trained animals the overall sensitivity to sound timbre was reduced across three cortical fields but enhanced in non-primary field PSF. Neural responses in trained animals were able to discriminate vowels that differed in either their first or second formant frequency, unlike control animals, whose sensitivity was mostly driven by changes in the second formant. Neural responses in the T2AFC animals, who were required to generalise across pitch when discriminating timbre, became less modulated by fundamental frequency, while those in the go/no-go animals were unchanged relative to controls. Finally, both trained groups showed increased spatial sensitivity and altered tuning, with an enhanced representation of the midline, where the speaker was located in the experimental chamber. Overall, these results demonstrate that training elicited widespread changes in the way in which auditory cortical neurons represent complex sounds, altering how both task-relevant and task-irrelevant features were represented.
The perception and cortical processing of communication sounds
The neural processes used to extract perceptual features of vocal calls, and subsequently to re-integrate those features to form a coherent auditory object, are poorly understood. In this thesis, extracellular recordings were carried out in order to investigate how the temporal envelope, pitch, timbre and spatial location of communication sounds are represented by neurons in two core and three belt areas of ferret (Mustela putorius furo) auditory cortex. Potential neural underpinnings of auditory perception were tested using neurometric analysis to relate the reliability of neural responses to the performance of ferret and human listeners on psychophysical tasks. I found that human listeners' discrimination of the temporal envelopes of vocalization sounds matched the best neurometrics calculated from the temporal spiking patterns of ferret cortical neurons. Neurometric scores based on the spike rates of cortical neurons accounted for ferrets' discrimination of the pitch of artificial vowels. I show that most auditory cortical neurons are modulated by a number of stimulus features, rather than being tuned to only one feature. Neurons in the core auditory cortical fields often respond uniquely to particular combinations of pitch and timbre features, while those in belt regions respond more linearly to feature combinations. Subtle differences in the sensitivity of neurons to pitch, timbre and azimuthal cues were found across cortical areas and depths. These results suggest that auditory cortical neurons provide widely distributed representations of vocalizations, and a single neuron can often use combinations of spike rate and temporal spiking responses to encode multiple sound features.
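The neurometric analysis mentioned above can be sketched in miniature: a spike-rate neurometric is often computed as an ROC area, i.e. the probability that a spike count drawn from one stimulus condition exceeds one drawn from the other. The Poisson rates below are illustrative placeholders, not data from the thesis.

```python
# Minimal spike-rate neurometric: ROC area between simulated spike-count
# distributions for two stimulus conditions (rates are illustrative).
import numpy as np

rng = np.random.default_rng(1)
counts_a = rng.poisson(20, size=200)   # spike counts on stimulus-A trials
counts_b = rng.poisson(12, size=200)   # spike counts on stimulus-B trials

def neurometric_auc(a, b):
    """P(random A count > random B count), with ties counted as half."""
    a = np.asarray(a)[:, None]
    b = np.asarray(b)[None, :]
    return (a > b).mean() + 0.5 * (a == b).mean()

auc = neurometric_auc(counts_a, counts_b)
print(f"neurometric (ROC area): {auc:.2f}")
```

An ROC area of 0.5 means the counts carry no information about the stimulus; values near 1 mean a rate-based decoder, like a trained listener, could discriminate the two conditions almost perfectly.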