77 results for "Fattori, Patrizia"
Structural connectivity and functional properties of the macaque superior parietal lobule
Despite the consolidated belief that the macaque superior parietal lobule (SPL) is entirely occupied by Brodmann’s area 5, recent data show that macaque SPL also hosts a large cortical region with structural and functional features similar to those of Brodmann’s area 7. According to these data, the anterior part of SPL is occupied by a somatosensory-dominated cortical region that hosts three architecturally and functionally distinct areas (PE, PEci, PEip), and the caudal half of SPL by a bimodal somato-visual region that hosts four areas: PEc, MIP, PGm, and V6A. To date, the most studied areas of SPL are PE, PEc, and V6A. PE is essentially a high-order somatomotor area, while PEc and V6A are bimodal somatomotor–visuomotor areas, the former with predominant somatosensory input and the latter with predominant visual input. The functional properties of these areas and their anatomical connectivity strongly suggest their involvement in the control of limb movements. PE is thought to be involved in the preparation and execution of limb movements, particularly those of the upper limb; PEc in the control of movements of both the upper and lower limbs, as well as in their interaction with the visual environment; and V6A in the control of reach-to-grasp movements performed with the upper limb. In humans, SPL is traditionally considered to have a different organization from that of macaques. Here, we review several lines of evidence suggesting that this is not the case, showing a similar structure for human and non-human primate SPLs.
Vision for action: thalamic and cortical inputs to the macaque superior parietal lobule
The dorsal visual stream, the cortical circuit that in the primate brain is mainly dedicated to the visual control of actions, is split into two routes, a lateral and a medial one, both involved in coding different aspects of the sensorimotor control of actions. The lateral route, named the “lateral grasping network”, is mainly involved in the control of the distal part of prehension, namely grasping and manipulation. The medial route, named the “reach-to-grasp network”, is involved in the control of the full deployment of the prehension act, from the direction of arm movement to the shaping of the hand according to the object to be grasped. In macaque monkeys, the reach-to-grasp network (the target of this review) includes areas of the superior parietal lobule (SPL) that host visual and somatosensory neurons well suited to control goal-directed limb movements toward stationary as well as moving objects. After a brief summary of the neuronal functional properties of these areas, we analyze their cortical and thalamic inputs, as revealed by retrograde neuronal tracers separately injected into the SPL areas V6, V6A, PEc, and PE. These areas receive visual and somatosensory information distributed in a caudorostral, visuosomatic trend, and some of them are directly connected with the dorsal premotor cortex. This review is particularly focused on the origin and type of visual information reaching the SPL, and on the functional role this information can play in guiding limb interaction with objects in structured and dynamic environments.
Time-dependent enhancement of corticospinal excitability during cortico-cortical paired associative stimulation of the hV6A-M1 network in the human brain
• Brain stimulation induces time-dependent plasticity in the medial parietofrontal circuit.
• These effects were observed in healthy participants at rest.
• Only the inter-pulse interval of 12 ms was effective in strengthening the functional connections.

Cortico-cortical paired associative stimulation (ccPAS) is a powerful transcranial magnetic stimulation (TMS) protocol thought to rely on Hebbian plasticity and known to strengthen effective connectivity, mainly within frontal lobe networks. Here, we expand on previous work by exploring the effects of ccPAS on the pathway linking the medial posterior parietal area hV6A with the primary motor cortex (M1), whose plasticity mechanisms remain largely unexplored. To assess the effective connectivity of the hV6A-M1 network, we measured motor-evoked potentials (MEPs) in 30 right-handed volunteers at rest during dual-site, paired-pulse TMS. Consistent with previous findings, we found that MEPs were inhibited when the conditioning stimulus over hV6A preceded the test stimulus over M1 by 12 ms, highlighting inhibitory hV6A-M1 causal interactions. We then manipulated the hV6A-M1 circuit via ccPAS using inter-stimulus intervals (ISIs) never tested before. Our results revealed a time-dependent modulation. Specifically, only when the conditioning stimulus preceded the test stimulus by 12 ms did we find a gradual increase of MEP amplitude during ccPAS, together with excitatory aftereffects. In contrast, when ccPAS was applied with an ISI of 4 ms or 500 ms, no corticospinal excitability changes were observed, suggesting that temporal specificity is a critical factor in modulating the hV6A-M1 network. These results suggest that ccPAS can induce time-dependent Hebbian plasticity in the dorsomedial parieto-frontal network at rest, offering novel insights into the network’s plasticity and temporal dynamics.
The effect of viewing-only, reaching, and grasping on size perception in virtual reality
In virtual environments (VEs), distance perception is often inaccurate but can be improved through active engagement, such as walking. While prior research suggests that action planning and execution can enhance the perception of action-related features, the effects of specific actions on perception in VEs remain unclear. This study investigates how different interactions – viewing-only, reaching, and grasping – affect size perception in Virtual Reality (VR) and whether teleportation (Experiment 1) and smooth locomotion (Experiment 2) influence these effects. Participants approached a virtual object using either teleportation or smooth locomotion and interacted with the target using a virtual hand. They then estimated the target’s size before and after the approach by adjusting the size of a comparison object. Results revealed that size perception improved after interaction across all conditions in both experiments, with viewing-only leading to the most accurate estimations. This suggests that, unlike in real environments, additional manual interaction does not significantly enhance size perception in VR when only visual input is available. Additionally, teleportation was more effective than smooth locomotion for improving size estimations. These findings extend action-based perceptual theories to VR, showing that interaction type and approach method can influence size perception accuracy in the absence of tactile feedback. Further, by analysing the spatial distribution of gaze during the different interaction conditions, this study suggests that specific motor responses combined with movement approaches affect gaze behaviour, offering insights for applied VR settings that prioritize perceptual accuracy.
A common neural substrate for processing scenes and egomotion-compatible visual motion
Neuroimaging studies have revealed two separate classes of category-selective regions specialized in optic flow (egomotion-compatible) processing and in scene/place perception. Despite the importance of both optic flow and scene/place recognition for estimating changes in position and orientation within the environment during self-motion, the possible functional link between egomotion- and scene-selective regions has not yet been established. Here we reanalyzed functional magnetic resonance images from a large sample of participants performing two well-known “localizer” fMRI experiments, consisting of the passive viewing of navigationally relevant stimuli such as buildings and places (scene/place stimulus) and coherently moving fields of dots simulating the visual stimulation during self-motion (flow fields). After interrogating the egomotion-selective areas with respect to the scene/place stimulus and the scene-selective areas with respect to flow fields, we found that the egomotion-selective areas V6+ and pIPS/V3A responded bilaterally more to scenes/places compared to faces, and all the scene-selective areas (parahippocampal place area or PPA, retrosplenial complex or RSC, and occipital place area or OPA) responded more to egomotion-compatible optic flow compared to random motion. The conjunction analysis between scene/place and flow field stimuli revealed that the most important focus of common activation was found in the dorsolateral parieto-occipital cortex, spanning the scene-selective OPA and the egomotion-selective pIPS/V3A. Individual inspection of the relative locations of these two regions revealed a partial overlap and a similar response profile to an independent low-level visual motion stimulus, suggesting that OPA and pIPS/V3A may be part of a unique motion-selective complex specialized in encoding both egomotion- and scene-relevant information, likely for the control of navigation in a structured environment.
Distinct modulation of microsaccades in motor planning and covert attention
The degree of overlap between the mechanisms underlying attention control and motor planning remains debated. In this study, we examined whether microsaccades—tiny gaze shifts occurring during fixation—are modulated differently by covert attention and motor intention. Eye movements were recorded using high-precision eye-tracking. Our results reveal that whereas in a covert attention task microsaccade direction was biased toward the attended location, in a motor planning task microsaccades were not directionally biased toward the cued location. Further, the rate of microsaccades over time varied between the two tasks: whereas in the attention task a clear correlation emerged between microsaccade rate and visual detection reaction times across subjects, there was no such relationship between microsaccade rate and reach/saccade reaction times. This study advances our understanding of the relationship between attention and motor processes, suggesting that the mechanisms governing microsaccade generation are differentially influenced by motor planning versus spatial covert attention engagement.
Visuospatial performance and its neural substrates in Dementia with Lewy Bodies during a pointing task
Dementia with Lewy Bodies (DLB) is characterized by motor and cognitive deficits that often overlap with other neurodegenerative disorders, complicating its diagnosis. This study combined linear mixed-effects modeling and machine learning to investigate key parameters of pointing movements, saccadic behavior, and superior parietal lobule (SPL) volumetry in differentiating DLB patients from controls. DLB patients exhibited distinct motor impairments, including increased movement times, greater pointing errors, and spatially modulated deficits in pointing accuracy. Saccadic analysis revealed prolonged saccade latencies, larger amplitudes, and pervasive hypermetria, with notable spatial asymmetries in accuracy and amplitude. Specifically, reduced hypermetria for upward-directed saccades suggests direction-specific modulation in DLB, highlighting potential disruptions in visuomotor pathways. Brain volumetric analysis demonstrated significant volumetric loss in the SPL, particularly in the left hemisphere, further implicating this region in the visuospatial and motor deficits observed in DLB. Interestingly, an inverse relationship between SPL volumetry and task performance was found, which was more evident for hand-related parameters. The integration of behavioral, saccadic, and volumetric data showed that this combined approach captures the complementary contributions of motor, oculomotor, and neural changes in distinguishing patients from controls. This study provides novel insights into the visuomotor and neural substrates underlying DLB and emphasizes the importance of adopting a multimodal approach to its diagnosis. The results go beyond traditional visuospatial assessments, offering a robust framework for the identification of DLB-specific biomarkers. Future research should explore the generalizability of this combined model across other neurodegenerative conditions to refine diagnostic tools and improve patient outcomes.
Exploring the impact of visual function degradation on manual prehension movements in normal-sighted individuals
Impairments of visual function abilities, such as visual acuity and contrast sensitivity, can negatively impact our ability to perform manual prehension tasks. Despite the clear link between visual input and motor output, there is still limited understanding of how visual function deficits affect hand motor behavior. This study aimed to explore the impact of different levels of visual function degradation, specifically in terms of visual acuity and contrast sensitivity, on the reach and grasp components of manual prehension. To this end, visual function degradation was induced in young participants with normal vision using five different densities of Bangerter occlusion foils. Participants were instructed to perform a natural and accurate reach-to-grasp task towards a cylindrical object with two different diameters (3.5 or 7 cm) positioned at two distances (25 or 50 cm). The effects of visual function degradation, object size, and distance were evaluated by recording the position and trajectory of the right hand using an optoelectronic motion capture system. Three-dimensional kinematic analysis revealed that visual function degradation in normal-sighted individuals directly altered the reach and grasp components of prehension movements. These alterations included longer movement durations, lower velocity and acceleration profiles, slower deceleration phases, over-scaled hand grip apertures, and greater trajectory deviations. The effects depended on the level of visual degradation induced and on the intrinsic (size) and extrinsic (distance) object properties. Reductions exceeding 70% in visual acuity and 55% in contrast sensitivity had the most pronounced impact on prehension components; however, more modest reductions, greater than 30% in visual acuity and 15% in contrast sensitivity, were already sufficient to trigger compensatory mechanisms. These findings provide further understanding of how visual function degradation affects prehension movement strategies, highlighting the crucial relationship between visual feedback quality and object properties in the online motor control of the transport, manipulation, and spatial components of movement. Our results offer new insights into the implications of visual impairments for manual prehension movements.
Horizontal target size perturbations during grasping movements are described by subsequent size perception and saccade amplitude
Perception and action are essential in our day-to-day interactions with the environment. Despite the dual-stream theory of action and perception, it is now accepted that action and perception processes interact with each other. However, little is known about the impact of unpredicted changes in target size during grasping actions on perception. We assessed whether size perception and saccade amplitude were affected before and after grasping a target that changed its horizontal size during action execution, in the presence or absence of tactile feedback. We tested twenty-one participants in 4 blocks of 30 trials. Blocks were divided into two tactile feedback paradigms: tactile and non-tactile. Trials consisted of 3 sequential phases: pre-grasping size perception, grasping, and post-grasping size perception. During the pre- and post-grasping phases, participants executed a saccade towards a horizontal bar and made a manual estimation of the bar's size. During the grasping phase, participants were asked to execute a saccade towards the bar and to make a grasping action towards the screen. While grasping, 3 horizontal size perturbation conditions were applied: non-perturbation, shortening, and lengthening. Perturbations occurred in 30% of the trials, symmetrically shortening or lengthening the bar by 33% of its original size. Participants' hand and eye positions were recorded by a motion capture system and a mobile eye-tracker, respectively. After grasping, in both tactile and non-tactile feedback paradigms, size estimation was significantly reduced in the lengthening (p = 0.002) and non-perturbation (p < 0.001) conditions, whereas shortening did not induce significant adjustments (p = 0.86). After grasping, saccade amplitude became significantly longer in shortening (p < 0.001) and significantly shorter in lengthening (p < 0.001); the non-perturbation condition did not display adjustments (p = 0.95). Tactile feedback did not generate changes in the collected perceptual responses, but horizontal size perturbations did, suggesting that all relevant target information used in the movement can be extracted from the post-action target perception.