Search Results
14,465 results for "Object processing"
The representational dynamics of task and object processing in humans
Despite the importance of an observer’s goals in determining how a visual object is categorized, surprisingly little is known about how humans process the task context in which objects occur and how it may interact with the processing of objects. Using magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI) and multivariate techniques, we studied the spatial and temporal dynamics of task and object processing. Our results reveal a sequence of separate but overlapping task-related processes spread across frontoparietal and occipitotemporal cortex. Task exhibited late effects on object processing by selectively enhancing task-relevant object features, with limited impact on the overall pattern of object representations. Combining MEG and fMRI data, we reveal a parallel rise in task-related signals throughout the cerebral cortex, with an increasing dominance of task over object representations from early to higher visual areas. Collectively, our results reveal the complex dynamics underlying task and object representations throughout human cortex.
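For readers unfamiliar with MEG-fMRI fusion, the sketch below illustrates the general representational-similarity logic such studies rely on: the MEG representational dissimilarity matrix (RDM) at each timepoint is correlated with the RDM of each fMRI region, yielding a time course of similarity per region. All array names, shapes, and helper functions are illustrative assumptions, not the authors' code.

    # A minimal sketch of MEG-fMRI fusion via representational similarity
    # analysis; all shapes and names are assumptions for illustration.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(patterns):
        # Representational dissimilarity (condensed upper triangle):
        # 1 - Pearson correlation between condition patterns.
        return pdist(patterns, metric="correlation")

    def fusion_timecourse(meg_patterns, fmri_patterns):
        # meg_patterns: (n_timepoints, n_conditions, n_sensors) array;
        # fmri_patterns: dict of ROI name -> (n_conditions, n_voxels) array.
        # Returns, per ROI, a time course of MEG-fMRI representational similarity.
        fmri_rdms = {roi: rdm(p) for roi, p in fmri_patterns.items()}
        return {roi: np.array([spearmanr(rdm(meg_patterns[t]), frdm)[0]
                               for t in range(meg_patterns.shape[0])])
                for roi, frdm in fmri_rdms.items()}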
Cultural differences in spatial frequency tunings to faces do not generalize to visual scenes and object stimuli
Previous research has identified cultural differences in visual perception: East Asians focus more on global object structure and display a larger breadth of attention than Westerners. East Asians also rely on lower spatial frequencies (SFs) than Westerners for face recognition, which may be linked to this broader attentional focus. Investigating whether such differences extend to other high-level stimulus categories would clarify whether SF tuning differences reflect more general or face-specific cognitive processes. The present study compared the SF tunings of Canadians and Chinese during object (Exp. 1; N = 50) and scene (Exp. 2; N = 47) categorization. In both experiments, results did not indicate a significant difference between groups. In Experiment 3 (N = 128), we conducted an online replication of Experiment 1 while also measuring the SF tunings of the same participants during face perception. Again, no significant difference between the groups was found during object categorization, but the finding that East Asians rely on lower SFs than Westerners was replicated. Together, these results suggest that unique mechanisms may underlie the cultural differences in face processing, though alternative explanations, such as the feature consistency of faces, could also account for these findings.
The involvement of monocular channels in the face pareidolia effect
Studies examining the neural mechanisms of face perception in humans have mainly focused on cortical networks of face-selective regions. However, subcortical regions are known to play a significant role in face perception as well. For instance, when pairs of faces are presented sequentially to the same eye or to different eyes, superior performance is observed in the same-eye condition. This superiority has been explained by monocular, pre-striate processing of face stimuli. One intriguing face-related effect is the face pareidolia phenomenon, wherein observers perceive faces in inanimate objects. In this study, we examined whether face pareidolia involves low-level neural substrates similar to those involved in face perception. We presented participants with pairs of houses or face-like houses through a stereoscope to manipulate the information presented to each eye and asked them to determine whether the stimuli were similar or different. This design allowed us to examine the contribution of monocular (mostly subcortical) channels to the processing of face-like stimuli. We hypothesized that besides their involvement in actual face perception, subcortical structures are engaged in face pareidolia as well. To test this hypothesis, we conducted three experiments to replicate and strengthen the reliability of our results and to rule out alternative explanations. We demonstrated a perceptual benefit when similar face-like houses were presented to the same eye compared with different eyes. This finding matches previous results for images of real faces and indicates subcortical involvement not only in face perception but also in the processing of face-like objects.
Are the early stages of orthographic processing universal? Insights from masked priming with Semitic words
Two competing views seek to account for the processes at play during the early stages of visual word recognition. The first holds that these stages are not modulated by the idiosyncratic properties of different languages. The second maintains that the structural properties of the language determine the weighting of the different domains of linguistic knowledge (e.g., orthographic and morphological domains may be differentially weighted across languages). To explore this question, we focused on orthographic priming in Arabic. In this Semitic language, lexical representations are claimed to be based on morphological similarity, with little or no role for orthographic similarity. We conducted two masked priming experiments using the yes–no and go/no-go versions of the lexical decision task to determine whether Arabic target words (e.g., مدير ‘mudyr’, director) are facilitated by nonword primes that are orthographically but not morphologically related (i.e., pairs share neither a root nor a word pattern; e.g., ماير ‘maAyr’) relative to unrelated primes. Results showed faster responses for the orthographically related target words than for the unrelated target words in both experiments. These findings favor the view that the early phases of visual word processing in Semitic and Indo-European languages are fundamentally the same.
Agent-based representations of objects and actions in the monkey pre-supplementary motor area
Information about objects around us is essential for planning our own actions and for predicting those of others. Here, we studied pre-supplementary motor area F6 neurons with a task in which monkeys viewed and grasped (or refrained from grasping) objects, and then observed a human doing the same task. We found “action-related neurons” selectively encoding the monkey’s own action [self-type (ST)], another agent’s action [other-type (OT)], or both [self- and other-type (SOT)]. Interestingly, we found “object-related neurons” exhibiting the same types of selectivity before action onset: distinct sets of neurons discharged when visually presented objects were targeted by the monkey’s own action (ST), another agent’s action (OT), or both (SOT). Notably, object-related neurons appear to signal one’s own and the other’s intention to grasp and the most likely grip type that will be performed, whereas action-related neurons encode a general goal-attainment signal devoid of any specificity for the observed grip type. Time-resolved cross-modal population decoding revealed that F6 neurons first integrate information about object and context to generate an agent-shared signal specifying whether and how the object will be grasped, which progressively turns into a broader agent-based goal-attainment signal as the action unfolds. Importantly, the shared representation of objects critically depends upon their location in the observer’s peripersonal space, suggesting an “object-mirroring” mechanism through which observers could accurately predict others’ impending actions by recruiting the same motor representation they would activate if they were to act upon the same object in the same context.
Peripersonal and extrapersonal space encoding in virtual reality: Insights from an fMRI study
•Presentation of objects in PPS and EPS was controlled for object size.
•Objects in PPS are coded in the dorsal visual stream; objects in EPS in more ventral regions.
•Affordances are activated in human CIP for objects in PPS.
•Spatial context, rather than the retinal image, determines object representation.

The brain processes objects in reachable peripersonal space (PPS) and non-reachable extrapersonal space (EPS) in different neural networks. In contrast to extrapersonal space, spatial processing in peripersonal space is linked to the activation of affordances in dorsal visual pathways. However, it remains unclear how object characteristics such as size, graspability, and stereoscopic presentation influence object processing in virtual environments. In the current study, 44 healthy participants performed a visual discrimination task involving graspable objects presented in peripersonal and extrapersonal space. The paradigm was presented via MRI-compatible goggles during fMRI scanning. The four sessions alternated between monoscopic and stereoscopic presentation, and stimuli varied within sessions in apparent distance, size, and orientation. To validate the effect of distance, the pixel size of the objects was also controlled. Stereoscopic presentation enhanced dorsal stream activation, particularly in V5/MT, the lateral occipital cortex, and the posterior intraparietal sulcus, regions associated with depth processing, suggesting increased peripersonal space processing. In addition, analyses revealed characteristic bilateral activation patterns of primary to tertiary visual areas, extending dorsally from the lateral occipital cortex to the posterior intraparietal sulcus for stimuli in peripersonal space, while extrapersonal space mostly activated ventral regions of the tertiary visual cortex. Notably, as this is the first study to control for object size in pixels, these patterns persisted under that control, indicating that stimuli in peripersonal space engage the dorsal visual stream, potentially reflecting action-oriented encoding of grasping features linked to their interactive affordances, while stimuli in extrapersonal space engage ventral regions primarily mediating semantic aspects and scene analysis.
Differential destinations, dynamics, and functions of high- and low-order features in the feedback signal during object processing
The brain is a hierarchical information-processing system in which feedback signals from high-level to low-level regions are critical. Feedback signals may convey complex high-order features (e.g., category, identity) and simple low-order features (e.g., orientation, spatial frequency) to sensory cortex, where they interact with feedforward information, but how these types of feedback information are represented and how they differ in facilitating visual processing remains unclear. The current study used a peripheral object discrimination task, 7T fMRI, and MEG to isolate feedback from feedforward signals in human early visual cortex. The results showed that feedback signals conveyed both low-order features natively encoded in early visual cortex and high-order features generated in high-level regions, but with different spatial and temporal properties. High-order feedback information targeted both superficial and deep layers, whereas low-order feedback information reached only deep layers in V1. In addition, MEG results revealed that feedback information from occipitotemporal to early visual cortex emerged around 200 ms after stimulus onset, and only the representational strength of high-order feedback information was significantly correlated with behavioral performance. These results indicate that the complex and simple components of feedback information play different roles in predictive processing mechanisms that facilitate sensory processing.
Improved Multimedia Object Processing for the Internet of Vehicles
The combination of edge computing and deep learning makes possible intelligent edge devices that can make conditional decisions using comparatively secure and fast machine-learning algorithms. An automated car acting as the data-source node of an intelligent Internet of Vehicles (IoV) system is one such example. Our motivation is to obtain more accurate and rapid object detection using the intelligent cameras of a smart car. The supervision camera of the smart-automobile model uses multimedia data for real-time automation and threat detection. The corresponding network combines cooperative multimedia data processing, Internet of Things (IoT) data handling, validation, computation, precise detection, and decision making. These actions face real-time delays when offloading data to the cloud and synchronizing with other nodes. The proposed model follows a cooperative machine-learning technique: it distributes the computational load by slicing real-time object data among analogous intelligent IoT nodes and parallelizes vision processing across connected edge clusters. As a result, the system increases the computational rate and improves accuracy through responsible resource utilization and active–passive learning. We achieved lower latency and higher accuracy for object identification through real-time multimedia data objectification.
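The core load-slicing idea, distributing pieces of one frame's object data across cooperating nodes for parallel detection, can be sketched roughly as follows. The worker pool, the per-node detector, and every name below are hypothetical stand-ins, not the paper's implementation.

    # Hypothetical sketch of cooperative load slicing: one frame is split
    # into strips, each strip is handed to a worker standing in for an
    # edge node, and the partial detections are merged.
    from concurrent.futures import ThreadPoolExecutor

    def slice_frame(frame, n_slices):
        # Split a frame (a list of pixel rows) into roughly equal strips.
        step = max(1, len(frame) // n_slices)
        return [frame[i:i + step] for i in range(0, len(frame), step)]

    def detect_objects(strip):
        # Placeholder for the per-node detector (e.g., a quantized CNN
        # running on the edge device).
        return [cell for row in strip for cell in row if cell == "car"]

    def cooperative_detect(frame, n_nodes=4):
        # Fan the strips out to the "nodes" in parallel, then merge results.
        with ThreadPoolExecutor(max_workers=n_nodes) as pool:
            partials = pool.map(detect_objects, slice_frame(frame, n_nodes))
        return [obj for partial in partials for obj in partial]

In a real IoV deployment the thread pool would be replaced by network calls to neighboring nodes, but the slice-process-merge structure is the same.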
Task alters category representations in prefrontal but not high-level visual cortex
A central question in neuroscience is how cognitive tasks affect category representations across the human brain. Regions in lateral occipito-temporal cortex (LOTC), ventral temporal cortex (VTC), and ventro-lateral prefrontal cortex (VLPFC) constitute the extended “what” pathway, which is considered instrumental for visual category processing. However, it is unknown (1) whether distributed responses across LOTC, VTC, and VLPFC explicitly represent category, task, or some combination of both, and (2) in what way representations across these subdivisions of the extended “what” pathway may differ. To fill these gaps in knowledge, we scanned 12 participants using fMRI to test the effect of category and task on distributed responses across LOTC, VTC, and VLPFC. Results reveal that task and category modulate responses in both high-level visual regions and prefrontal cortex. However, we found fundamentally different types of representations across the brain. Distributed responses in high-level visual regions are more strongly driven by category than task and exhibit task-independent category representations. In contrast, distributed responses in prefrontal cortex are more strongly driven by task than category and contain task-dependent category representations. Together, these findings of differential representations across the brain support a new idea: LOTC and VTC maintain stable category representations allowing efficient processing of visual information, while prefrontal cortex contains flexible representations in which category information may emerge only when relevant to the task.
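Task-independent versus task-dependent category representations of this kind are typically assessed by cross-decoding: train a category classifier on patterns from one task and test it on the other, with above-chance transfer indicating task-independent category information. The sketch below shows that logic under assumed data shapes and names; it is not the authors' analysis code.

    # A minimal cross-decoding sketch (assumed shapes and names).
    from sklearn.linear_model import LogisticRegression

    def cross_task_accuracy(patterns_a, labels_a, patterns_b, labels_b):
        # patterns_*: (n_trials, n_voxels) ROI response patterns per task;
        # labels_*: category label for each trial.
        clf = LogisticRegression(max_iter=1000).fit(patterns_a, labels_a)
        return clf.score(patterns_b, labels_b)  # accuracy on the held-out task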
Semantic and perceptual priming activate partially overlapping brain networks as revealed by direct cortical recordings in humans
Facilitation of object processing in the brain by a related context (priming) can be influenced by both semantic connections and perceptual similarity. It is thus important to discern these two factors when evaluating the spatio-temporal dynamics of primed object processing. The repetition-priming paradigm frequently used to study perceptual priming is, however, unable to differentiate between these priming effects, possibly leading to confounded results. In the current study, we recorded brain signals from the scalp and cerebral convexity of nine patients with refractory epilepsy in response to related and unrelated image pairs, all of which shared perceptual features while only the related pairs had a semantic connection. While previous studies employing a repetition-priming paradigm observed largely overlapping networks for semantic and perceptual priming effects, our results suggest that this overlap is only partial, both temporally and spatially. These findings stress the importance of controlling for perceptual features when studying semantic priming.
•Perceptual priming and semantic priming overlap in repetition paradigms.
•Intracranial EEG was used to discern semantic and perceptual priming in time and space.
•By testing similar but unrelated object pairs we studied pure perceptual priming.
•Perceptual priming engages both hemispheres early in object processing.
•Semantic priming is observed later and is more confined to the left temporal cortex.