82 result(s) for "Mishkin, Mortimer"
A new neural framework for visuospatial processing
Key Points Originally, the dorsal visual processing stream was proposed as a 'Where' pathway, supporting spatial processing, but later accounts proposed that it is a 'How' pathway subserving primarily non-conscious visually-guided action. We resolve this debate by showing that at least three pathways emerge from the dorsal stream, supporting three different forms of spatial processing. The parieto–prefrontal pathway connects the posterior parietal with the prefrontal cortex and supports eye movements and spatial working memory. The parieto–premotor pathway connects the posterior parietal with the premotor cortices and supports visually guided action. The parieto–medial temporal pathway is the most complex projection from the posterior parietal cortex. It is a multisynaptic projection emerging from the caudal portion of the inferior parietal lobule and terminating in the parahippocampal cortex and hippocampus, supporting navigation. The intermediate areas along the parieto–medial temporal pathway — the posterior cingulate and retrosplenial cortices — seem to aid in the coordination of allocentric and egocentric spatial representations. Various proposals have defined the dorsal visual stream as a 'Where' or 'How' pathway. Synthesizing data from anatomical and functional studies, Mishkin and colleagues propose that in the posterior parietal cortex, three different pathways emerge from the dorsal stream, each supporting a different aspect of spatial processing. The division of cortical visual processing into distinct dorsal and ventral streams is a key framework that has guided visual neuroscience. The characterization of the ventral stream as a 'What' pathway is relatively uncontroversial, but the nature of dorsal stream processing is less clear. Originally proposed as mediating spatial perception ('Where'), more recent accounts suggest it primarily serves non-conscious visually guided action ('How'). 
Here, we identify three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices. These three pathways support both conscious and non-conscious visuospatial processing, including spatial working memory, visually guided action and navigation, respectively.
Monkeys have a limited form of short-term memory in audition
A stimulus trace may be temporarily retained either actively [i.e., in working memory (WM)] or by the weaker mnemonic process we will call passive short-term memory, in which a given stimulus trace is highly susceptible to “overwriting” by a subsequent stimulus. It has been suggested that WM is the more robust process because it exploits long-term memory (i.e., a current stimulus activates a stored representation of that stimulus, which can then be actively maintained). Recent studies have suggested that monkeys may be unable to store acoustic signals in long-term memory, raising the possibility that they may therefore also lack auditory WM. To explore this possibility, we tested rhesus monkeys on a serial delayed match-to-sample (DMS) task using a small set of sounds presented with ∼1-s interstimulus delays. Performance was accurate whenever a match or a nonmatch stimulus followed the sample directly, but it fell precipitously if a single nonmatch stimulus intervened between sample and match. The steep drop in accuracy was found to be due not to passive decay of the sample’s trace, but to retroactive interference from the intervening nonmatch stimulus. This “overwriting” effect was far greater than that observed previously in serial DMS with visual stimuli. The results, which accord with the notion that WM relies on long-term memory, indicate that monkeys perform serial DMS in audition remarkably poorly and that whatever success they had on this task depended largely, if not entirely, on the retention of stimulus traces in the passive form of short-term memory.
Test of a motor theory of long-term auditory memory
Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75–80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve.
FOXP2 and the neuroanatomy of speech and language
Key Points Familial disorders of speech and language provided early evidence that genetic mutations could impair these abilities, but a causative mutation in a single gene was only recently identified. Mutations in FOXP2 cause an inherited verbal dyspraxia associated with an orofacial movement disorder in a family known by the label KE. Although the behavioural phenotype has been carefully studied, it is still unclear whether all the effects of the mutation are caused by a single core deficit in orofacial movement, or whether there are additional core deficits that can account for the grammatical, semantic and cognitive impairments that are found in affected family members. MRI scans of affected individuals showed no obvious focal abnormalities on conventional neuroradiological assessments, but more detailed analyses have revealed reductions in the volumes of several brain areas that are involved in motor functions, including the caudate nuclei, Broca's area, the precentral gyrus and the ventral cerebellum. Functional neuroimaging studies have also shown some abnormalities in patterns of activation. FOXP2 encodes a transcription factor that is expressed in the brain, lungs, heart and gut. In the brain, it is widely expressed in sensory, limbic and motor structures. The effects of a mutation in FOXP2, together with data on its expression, allow us to propose a model of FOXP2-dependent circuitry. We assume that the circuitry that underlies normal speech is similar to the frontostriatal and frontocerebellar circuits that modulate and control the motor cortex in the performance of other types of movement. Most of the areas in the proposed circuit express FOXP2, and several of these show abnormalities in affected members of the KE family. Much work is needed to clarify the details of the deficits caused by mutations in FOXP2 and to provide evidence that supports or contradicts our proposed circuitry. 
This work will involve behavioural, imaging, gene expression and gene knockout studies. That speech and language are innate capacities of the human brain has long been widely accepted, but only recently has an entry point into the genetic basis of these remarkable faculties been found. The discovery of a mutation in FOXP2 in a family with a speech and language disorder has enabled neuroscientists to trace the neural expression of this gene during embryological development, track the effects of this gene mutation on brain structure and function, and so begin to decipher that part of our neural inheritance that culminates in articulate speech.
Two processes support visual recognition memory in rhesus monkeys
A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans.
Frontal and Insular Input to the Dorsolateral Temporal Pole in Primates: Implications for Auditory Memory
The temporal pole (TP) has been implicated in multiple functions, from emotional and social behavior, semantic processing, memory, language in humans, and epilepsy surgery, to the fronto-temporal neurodegenerative disorder semantic dementia. However, the role of the TP subdivisions is still unclear, in part owing to the lack of quantitative data about TP connectivity. This study focuses on the dorsolateral subdivision of the TP: area 38. Area 38's main input originates in the auditory processing areas of the rostral superior temporal gyrus. Among other connections, area 38 conveys this highly processed auditory information to the entorhinal, rostral perirhinal, and posterior parahippocampal cortices, presumably for storage in long-term memory (Muñoz-López et al., 2015). However, the connections of the TP with cortical areas beyond the temporal cortex suggest that this area is part of a wider network. With the aim of quantitatively determining the topography, laminar pattern, and weighting of the lateral TP afferents from the frontal and insular cortices, we placed a total of 11 injections of the fluorescent retrograde neuronal tracers Fast Blue and Diamidino Yellow at different levels of the lateral TP in rhesus monkeys. The results showed that circa 50% of the total cortical input to area 38 originates in medial frontal areas 14, 25, 32, and 24 (25%); orbitofrontal areas Pro and PAll (15%); and the agranular, parainsular, and dysgranular insula (10%). This study sets the anatomical basis for better understanding the function of the dorsolateral division of the TP. More specifically, these results suggest that area 38 forms part of a wider limbic circuit that might contribute an auditory component to multimodal memory processing, among other functions.
In Search of an Auditory Engram
Monkeys trained preoperatively on a task designed to assess auditory recognition memory were impaired after removal of either the rostral superior temporal gyrus or the medial temporal lobe but were unaffected by lesions of the rhinal cortex. Behavioral analysis indicated that this result occurred because the monkeys did not or could not use long-term auditory recognition, and so depended instead on short-term working memory, which is unaffected by rhinal lesions. The findings suggest that monkeys may be unable to place representations of auditory stimuli into a long-term store, and thus question whether the monkey's cerebral memory mechanisms in audition are intrinsically different from those in other sensory modalities. Furthermore, they raise the possibility that language is unique to humans not only because it depends on speech but also because it requires long-term auditory memory.
Extent of hippocampal atrophy predicts degree of deficit in recall
Which specific memory functions are dependent on the hippocampus is still debated. The availability of a large cohort of patients who had sustained relatively selective hippocampal damage early in life enabled us to determine which type of mnemonic deficit showed a correlation with extent of hippocampal injury. We assessed our patient cohort on a test that provides measures of recognition and recall that are equated for difficulty and found that the patients’ performance on the recall tests correlated significantly with their hippocampal volumes, whereas their performance on the equally difficult recognition tests did not and, indeed, was largely unaffected regardless of extent of hippocampal atrophy. The results provide new evidence in favor of the view that the hippocampus is essential for recall but not for recognition.
Participation of the Classical Speech Areas in Auditory Long-Term Memory
Accumulating evidence suggests that storing speech sounds requires transposing rapidly fluctuating sound waves into more easily encoded oromotor sequences. If so, then the classical speech areas in the caudalmost portion of the superior temporal gyrus (pSTG) and in the inferior frontal gyrus (IFG) may be critical for performing this acoustic-oromotor transposition. We tested this proposal by applying repetitive transcranial magnetic stimulation (rTMS) to each of these left-hemisphere loci, as well as to a nonspeech locus, while participants listened to pseudowords. After 5 minutes these stimuli were re-presented together with new ones in a recognition test. Compared to control-site stimulation, pSTG stimulation produced a highly significant increase in recognition error rate, without affecting reaction time. By contrast, IFG stimulation led only to a weak, non-significant trend toward recognition memory impairment. Importantly, the impairment after pSTG stimulation was not due to interference with perception, since the same stimulation failed to affect pseudoword discrimination examined with short interstimulus intervals. Our findings suggest that the pSTG is essential for transforming speech sounds into stored motor plans for reproducing the sound. Whether the IFG also plays a role in speech-sound recognition could not be determined from the present results.
Functional Mapping of the Primate Auditory System
Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.