300 result(s) for "Tactile discrimination learning"
Tactile sensory coding and learning with bio-inspired optoelectronic spiking afferent nerves
The integration and cooperation of mechanoreceptors, neurons and synapses in somatosensory systems enable humans to efficiently sense and process tactile information. Inspired by biological somatosensory systems, we report an optoelectronic spiking afferent nerve with neural coding, perceptual learning and memorizing capabilities to mimic tactile sensing and processing. Our system senses pressure with MXene-based sensors, converts pressure information to light pulses by coupling light-emitting diodes to analog-to-digital circuits, then integrates the light pulses using a synaptic photomemristor. With neural coding, our spiking nerve can not only detect simultaneous pressure inputs but also recognize Morse code, braille, and object movement. Furthermore, with dimensionality-reduced feature extraction and learning, our system can recognize and memorize handwritten letters and words, providing a promising approach towards e-skin, neurorobotics and human-machine interaction technologies. Designing artificial somatosensory systems that efficiently emulate biological tactile information sensing, coding, and processing remains a challenge. Here, the authors demonstrate a tactile sensory system based on optoelectronic spiking afferent nerves with both coding and learning capabilities.
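The rate-coding idea described in this abstract (pressure mapped to pulse trains whose density tracks intensity) can be illustrated with a minimal sketch. This is a generic rate coder, not the paper's optoelectronic circuit; the function name and the `max_pressure` and `window` parameters are illustrative assumptions:

```python
def rate_encode(pressure, max_pressure=10.0, window=10):
    """Convert one pressure sample into a binary spike train of length `window`.

    Higher pressure produces more (evenly spaced) spikes, i.e. a higher
    firing rate. Parameters are illustrative, not from the paper.
    """
    # Normalise pressure to a firing rate in [0, 1]
    rate = max(0.0, min(1.0, pressure / max_pressure))
    n_spikes = round(rate * window)
    train = [0] * window
    if n_spikes == 0:
        return train
    step = window / n_spikes  # spread spikes evenly over the window
    for i in range(n_spikes):
        train[int(i * step)] = 1
    return train
```

For example, `rate_encode(10.0)` fills every slot with a spike, while `rate_encode(0.0)` yields an all-zero train; intermediate pressures fall in between.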
A star-nose-like tactile-olfactory bionic sensing array for robust object recognition in non-visual environments
Object recognition is among the basic survival skills of humans and other animals. To date, artificial intelligence (AI)-assisted high-performance object recognition has been primarily visual-based, empowered by the rapid development of sensing and computational capabilities. Here, we report a tactile-olfactory sensing array, inspired by the natural sense-fusion system of the star-nosed mole, which permits real-time acquisition of the local topography, stiffness, and odor of a variety of objects without visual input. The tactile-olfactory information is processed by a bioinspired olfactory-tactile associated machine-learning algorithm, essentially mimicking the biological fusion procedures in the neural system of the star-nosed mole. Aiming at human identification during rescue missions in challenging environments such as dark or buried scenarios, our tactile-olfactory intelligent sensing system classified 11 typical objects with an accuracy of 96.9% in a simulated rescue scenario at a fire department test site. The bionic sensing system required no visual input and showed superior tolerance to environmental interference, highlighting its great potential for robust object recognition in difficult environments where other methods fall short. For object recognition in lightless environments, the authors propose an olfactory-tactile machine-learning approach inspired by the star-nosed mole’s neural system. They show how bionic flexible sensor arrays allow real-time acquisition of an object’s form and odor on touch.
Tactile-GAT: tactile graph attention networks for robot tactile perception classification
As one of the most important senses in human beings, touch can also help robots better perceive and adapt to complex environmental information, improving their autonomous decision-making and execution capabilities. Compared to other perception methods, tactile perception needs to handle multi-channel tactile signals simultaneously, such as pressure, bending, temperature, and humidity. However, directly transferring deep learning algorithms that work well on temporal signals to tactile signal tasks does not effectively utilize the physical spatial connectivity information of tactile sensors. In this paper, we propose a tactile perception framework based on graph attention networks, which incorporates explicit and latent relation graphs. This framework can effectively utilize the structural information between different tactile signal channels. We constructed a tactile glove and collected a dataset of pressure and bending tactile signals during grasping and holding objects, and our method achieved 89.58% accuracy in object tactile signal classification. Compared to existing time-series signal classification algorithms, our graph-based tactile perception algorithm can better utilize and learn sensor spatial information, making it more suitable for processing multi-channel tactile data. Our method can serve as a general strategy to improve a robot’s tactile perception capabilities.
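The core idea behind graph attention over tactile channels — weighting each sensor's neighbours by learned similarity before aggregating — can be sketched without any deep-learning framework. The dot-product attention, the toy graph, and the name `attention_aggregate` below are illustrative simplifications, not the Tactile-GAT architecture itself:

```python
import math

def attention_aggregate(features, adjacency):
    """One simplified graph-attention step over tactile sensor channels.

    features:  dict node -> feature vector (list of floats)
    adjacency: dict node -> list of neighbour nodes (standing in for the
               glove's physical wiring graph; illustrative only)

    Each node's new feature is a softmax-weighted sum of its neighbours'
    features, with attention scores given by dot-product similarity.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    out = {}
    for node, neigh in adjacency.items():
        scores = [dot(features[node], features[m]) for m in neigh]
        mx = max(scores)                       # subtract max for stability
        exps = [math.exp(s - mx) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]        # softmax attention weights
        out[node] = [
            sum(w * features[m][k] for w, m in zip(weights, neigh))
            for k in range(len(features[node]))
        ]
    return out
```

A real GAT learns the attention scoring function; here it is fixed to a dot product purely to show how the spatial connectivity of the sensor graph enters the computation.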
Human orbitofrontal cortex signals decision outcomes to sensory cortex during behavioral adaptations
The ability to respond flexibly to an ever-changing environment relies on the orbitofrontal cortex (OFC). However, how the OFC associates sensory information with predicted outcomes to enable flexible sensory learning in humans remains elusive. Here, we combine a probabilistic tactile reversal learning task with functional magnetic resonance imaging (fMRI) to investigate how the lateral OFC (lOFC) interacts with the primary somatosensory cortex (S1) to guide flexible tactile learning in humans. fMRI results reveal that lOFC and S1 exhibit distinct task-dependent engagement: while the lOFC responds transiently to unexpected outcomes immediately following reversals, S1 is persistently engaged during re-learning. Unlike the contralateral stimulus-selective S1, activity in ipsilateral S1 mirrors behavioral outcomes during re-learning, closely related to top-down signals from lOFC. These findings suggest that the lOFC provides teaching signals that dynamically update representations in sensory areas, which implement computations critical for adaptive behavior. How the prefrontal cortex interacts with sensory cortex for behavioral adaptation in humans is unclear. Here, Wang et al. show that prediction-error-related activity in the lateral orbitofrontal cortex is conveyed as a teaching signal to update the outcome representation in sensory cortex.
Capturing forceful interaction with deformable objects using a deep learning-powered stretchable tactile array
Capturing forceful interaction with deformable objects during manipulation benefits applications like virtual reality, telemedicine, and robotics. Replicating full hand-object states with complete geometry is challenging because of occluded object deformations. Here, we report a visual-tactile recording and tracking system for manipulation featuring a stretchable tactile glove with 1152 force-sensing channels and a visual-tactile joint learning framework to estimate dynamic hand-object states during manipulation. To overcome the strain interference caused by contact with deformable objects, we propose an active suppression method based on symmetric response detection and adaptive calibration, which achieves 97.6% accuracy in force measurement, an improvement of 45.3%. The learning framework processes the visual-tactile sequence and reconstructs hand-object states. We experimented on 24 objects from 6 categories, including both deformable and rigid ones, with an average reconstruction error of 1.8 cm across all sequences, demonstrating a universal ability to replicate human knowledge in manipulating objects with varying degrees of deformability. The authors report a stretchable tactile array with strain insensitivity and a visual-tactile joint learning framework, achieving high-accuracy force measurement and replicating the full states of the hand and manipulated objects with fine-grained geometry.
Artificial organic afferent nerves enable closed-loop tactile feedback for intelligent robot
The emulation of tactile sensory nerves to achieve advanced sensory functions in robotics with artificial intelligence is of great interest. However, such devices remain bulky and lack the reliable competence to further functionalize synaptic devices with proprioceptive feedback. Here, we report an artificial organic afferent nerve with a low operating bias (−0.6 V), achieved by integrating a pressure-activated organic electrochemical synaptic transistor with artificial mechanoreceptors. A dendritic integration function for neurorobotics is achieved to perceive the directional movement of objects, further reducing control complexity by exploiting distributed and parallel networks. An intelligent robot assembled with the artificial afferent nerve, coupled with a closed-loop feedback program, is demonstrated to rapidly implement slip recognition and prevention actions upon occurrence of object slippage. The spatiotemporal features of tactile patterns are well differentiated, with high recognition accuracy after processing spike-encoded signals with a deep learning model. This work represents a breakthrough in mimicking synaptic behaviors, which is essential for next-generation intelligent neurorobotics and low-power biomimetic electronics. Intelligent artificial tactile systems for neurorobotics remain challenging. Here, Chen et al. develop an artificial organic afferent nerve that implements slip recognition and prevention actions by learning the real-time spatial information of directional touch.
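The closed-loop slip-recognition-and-prevention behavior described in this abstract can be caricatured as a simple feedback rule: watch for a sudden drop in contact pressure and raise grip force in response. Everything below (names, thresholds, the control rule) is a hypothetical sketch of the general idea, not the paper's controller:

```python
def control_grip(readings, slip_drop=0.3, base_force=1.0, boost=0.5):
    """Return the grip force applied after each tactile reading.

    A slip is flagged whenever pressure falls by more than `slip_drop`
    between consecutive readings; the controller then raises grip force
    by `boost`. All parameters are illustrative.
    """
    force = base_force
    history = []
    prev = None
    for p in readings:
        if prev is not None and prev - p > slip_drop:
            force += boost  # slip detected: tighten grip
        history.append(force)
        prev = p
    return history
```

With readings `[1.0, 1.0, 0.5, 0.5]`, the drop from 1.0 to 0.5 exceeds the threshold, so the sketched controller steps the grip force up from 1.0 to 1.5 and holds it there.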
Boost event-driven tactile learning with location spiking neurons
Tactile sensing is essential for a variety of daily tasks. Inspired by the event-driven nature and sparse spiking communication of biological systems, recent advances in event-driven tactile sensors and Spiking Neural Networks (SNNs) have spurred research in related fields. However, SNN-enabled event-driven tactile learning is still in its infancy due to the limited representation abilities of existing spiking neurons and the high spatio-temporal complexity of event-driven tactile data. In this paper, to improve the representation capability of existing spiking neurons, we propose a novel neuron model called the “location spiking neuron,” which enables us to extract features of event-based data in a novel way. Specifically, based on the classical Time Spike Response Model (TSRM), we develop the Location Spike Response Model (LSRM). In addition, based on the most commonly used Time Leaky Integrate-and-Fire (TLIF) model, we develop the Location Leaky Integrate-and-Fire (LLIF) model. To demonstrate the representation effectiveness of the proposed neurons and capture the complex spatio-temporal dependencies in event-driven tactile data, we exploit the location spiking neurons to propose two hybrid models for event-driven tactile learning. The first hybrid model combines a fully-connected SNN with TSRM neurons and a fully-connected SNN with LSRM neurons; the second fuses a spatial spiking graph neural network with TLIF neurons and a temporal spiking graph neural network with LLIF neurons. Extensive experiments demonstrate significant improvements of our models over state-of-the-art methods on event-driven tactile learning, including event-driven tactile object recognition and event-driven slip detection.
Moreover, compared to counterpart artificial neural networks (ANNs), our SNN models are 10× to 100× more energy-efficient, which shows the superior energy efficiency of our models and may bring new opportunities to the spike-based learning community and neuromorphic engineering. Finally, we thoroughly examine the advantages and limitations of various spiking neurons and discuss the broad applicability and potential impact of this work on other spike-based learning applications.
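The leaky integrate-and-fire dynamics underlying models such as TLIF can be sketched in a few lines: the membrane potential leaks each step, integrates input current, and emits a spike when it crosses a threshold. This is a generic discrete-time LIF neuron, not the paper's exact TLIF/LLIF formulation; `tau` and `threshold` are illustrative parameters:

```python
def lif_spikes(inputs, tau=0.9, threshold=1.0):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    Each step the membrane potential decays by factor `tau`, adds the
    input current, and fires (resetting to 0) once it reaches `threshold`.
    Returns the binary spike train. Parameters are illustrative.
    """
    v = 0.0
    spikes = []
    for current in inputs:
        v = tau * v + current     # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)      # threshold crossed: emit a spike
            v = 0.0               # reset membrane potential
        else:
            spikes.append(0)
    return spikes
```

A constant sub-threshold input of 0.5 accumulates across steps and first fires on the third step (`[0, 0, 1]`), showing how temporal integration, rather than any single input, drives spiking.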
Exploring motor skill acquisition in bimanual coordination: insights from navigating a novel maze task
In this study, we introduce a novel maze task designed to investigate naturalistic motor learning in bimanual coordination. We developed and validated an extended set of movement primitives tailored to capture the full spectrum of scenarios encountered in a maze game. Over a 3-day training period, we evaluated participants’ performance using these primitives and custom-developed software, enabling precise quantification of performance. Our methodology integrated the primitives with in-depth kinematic analyses and thorough thumb pressure assessments, charting participants’ progression from novice to proficient stages. Results demonstrated consistent improvement in maze performance, significant adaptive changes in joint behaviors, and strategic recalibrations in thumb pressure distribution. These findings highlight the central nervous system’s adaptability in orchestrating sophisticated motor strategies and the crucial role of tactile feedback in precision tasks. The maze platform and setup provide a valuable foundation for future experiments and a tool for exploring motor learning and coordination dynamics. This research underscores the complexity of bimanual motor learning in naturalistic environments, enhancing our understanding of skill acquisition and task efficiency while emphasizing the need for deeper investigation into these adaptive mechanisms.
An automated homecage system for multiwhisker detection and discrimination learning in mice
Automated, homecage behavioral training for rodents has many advantages: it is low stress, requires little interaction with the experimenter, and can be easily manipulated to adapt to different experimental conditions. We have developed an inexpensive, Arduino-based, homecage training apparatus for sensory association training in freely-moving mice using multiwhisker air current stimulation coupled to a water reward. Animals learn this task readily, within 1–2 days of training, and performance progressively improves with training. We examined the parameters that regulate task acquisition using different stimulus intensities, directions, and reward valence. Learning was assessed by comparing anticipatory licking on stimulus trials to that on no-stimulus (blank) trials. At high stimulus intensities (>9 psi), animals showed markedly less participation in the task. Conversely, very weak air current intensities (1–2 psi) were not sufficient to generate rapid learning behavior. At intermediate stimulus intensities (5–6 psi), a majority of mice learned that the multiwhisker stimulus predicted the water reward after 24–48 hrs of training. Both exposure to isoflurane and lack of whiskers decreased animals’ ability to learn the task. Following training at an intermediate stimulus intensity, mice were able to transfer learning behavior when exposed to a lower stimulus intensity, an indicator of perceptual learning. Mice learned to discriminate between two directions of stimulation rapidly and accurately, even when the angular distance between the stimuli was <15 degrees. Switching to a more desirable reward, aspartame, had little effect on the learning trajectory. Our results show that a tactile association task in an automated homecage environment can be monitored by anticipatory licking to reveal rapid and progressive behavioral change.
These Arduino-based, automated mouse cages enable high-throughput training that facilitates analysis of large numbers of genetically modified mice with targeted manipulations of neural activity.
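The learning measure described in this abstract, anticipatory licking on stimulus versus blank trials, can be summarized with a simple contrast index. The function name and input format below are illustrative, not the authors' analysis code:

```python
def discrimination_index(stim_licks, blank_licks):
    """Contrast anticipatory lick rates on stimulus vs. blank trials.

    Inputs are lists of anticipatory lick counts per trial (illustrative
    format). Returns (stim rate - blank rate) / (stim rate + blank rate),
    ranging from -1 to 1; values near 1 mean the animal licks mostly in
    anticipation of the stimulus, i.e. it has learned the association.
    """
    s = sum(stim_licks) / len(stim_licks)   # mean licks on stimulus trials
    b = sum(blank_licks) / len(blank_licks) # mean licks on blank trials
    if s + b == 0:
        return 0.0  # no licking at all: no evidence either way
    return (s - b) / (s + b)
```

Tracking this index across sessions would show the rapid, progressive improvement the abstract reports, rising from near 0 (indiscriminate licking) toward 1 as training proceeds.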
Resting‐State Network Plasticity Following Category Learning Depends on Sensory Modality
Learning new categories is fundamental to cognition, occurring in daily life through various sensory modalities. However, it is not well known how acquiring new categories modulates brain networks. Resting‐state functional connectivity is an effective method for detecting short‐term brain alterations induced by various modality‐based learning experiences. Using fMRI, our study investigated the intricate link between novel category learning and brain network reorganization. Eighty‐four adults participated in an object categorization experiment utilizing visual (n = 41, with 20 females and a mean age of 23.91 ± 3.11 years) or tactile (n = 43, with 21 females and a mean age of 24.57 ± 2.58 years) modalities. Resting‐state networks (RSNs) were identified using independent component analysis across the group of participants, and their correlation with individual differences in object category learning across modalities was examined using dual regression. Our results reveal increased functional connectivity of the frontoparietal network with the left superior frontal gyrus after visual category learning, and with the right superior occipital gyrus and the left middle temporal gyrus after tactile category learning. Moreover, the somatomotor network demonstrated increased functional connectivity with the left parahippocampus exclusively after tactile category learning. These findings illuminate the neural mechanisms of novel category learning, emphasizing distinct brain networks' roles in diverse modalities. The dynamic nature of RSNs emphasizes the ongoing adaptability of the brain, which is essential for efficient novel object category learning. This research provides valuable insights into the dynamic interplay between sensory learning, brain plasticity, and network reorganization, advancing our understanding of cognitive processes across different modalities. Novel category learning is necessary for cognitive activities such as navigation and interaction.
We used functional MRI to investigate resting‐state network plasticity across sensory modalities (visual and tactile). Our findings underscore the dynamic nature of resting‐state networks in facilitating novel object category acquisition across visual and tactile modalities.