Catalogue Search | MBRL
Explore the vast range of titles available.
36 result(s) for "Robotics (cs.RO)"
Progress and prospects of the human–robot collaboration
by Zanchettin, Andrea Maria; Ivaldi, Serena; Ajoudani, Arash
in Collaboration; Control stability; Economic impact
2018
Recent technological advances in the hardware design of robotic platforms have enabled the implementation of various control modalities for improved interaction with humans and unstructured environments. An important application area for robots with such advanced interaction capabilities is human–robot collaboration. This area has a high socio-economic impact and preserves the sense of purpose of the people involved, since the robots do not completely remove humans from the work process. The research community's recent surge of interest in this area has been devoted to implementing various methodologies that achieve intuitive and seamless human–robot–environment interaction by combining the collaborating partners' complementary strengths, e.g. the human's cognitive abilities and the robot's capacity for physical power generation. The main purpose of this paper is to review the state of the art in intermediate (bi-directional) human–robot interfaces, robot control modalities, system stability, benchmarking, and relevant use cases, and to outline the developments still required in the realm of human–robot collaboration.
Journal Article
On Designing Expressive Robot Behavior: The Effect of Affective Cues on Interaction
2020
Creating convincing affective robot behavior is a challenging task. In this paper, we coordinate different modalities of communication, namely speech, facial expressions, and gestures, so that the robot interacts with human users in an expressive manner. The proposed system uses videos to induce target emotions in the participants and thereby start an interactive discussion between each participant and the robot around the content of each video. During each interaction experiment, the expressive ALICE robot generates a multimodal behavior adapted to the affective content of the video, and the participant evaluates its characteristics at the end of the experiment. This study discusses the multimodality of the robot behavior and its positive effect on the clarity of the emotional content of the interaction. Moreover, it provides personality- and gender-based evaluations of the emotional expressivity of the generated behavior, investigating how introverted versus extroverted and male versus female participants perceived it within a human–robot interaction context.
Journal Article
Modeling the Minimal Newborn's Intersubjective Mind: The Visuotopic-Somatotopic Alignment Hypothesis in the Superior Colliculus
2013
Whether newborns possess inborn social skills is a long-standing debate in developmental psychology. Fetal behavioral and anatomical observations show evidence of the control of eye movements and facial behaviors during the third trimester of pregnancy, while specific sub-cortical areas, such as the superior colliculus (SC) and the striatum, appear to be functionally mature enough to support these behaviors. These observations suggest that the newborn is potentially mature enough to develop minimal social skills. In this manuscript, we propose that the mechanism of sensory alignment observed in the SC is particularly important for enabling the social skills observed at birth, such as facial preference and facial mimicry. In a computational simulation of the maturing superior colliculus connected to simulated facial tissue of a fetus, we model how incoming tactile information is used to direct visual attention toward faces. We suggest that the unisensory superficial visual layer (eye-centered) and the deep somatotopic layer (face-centered) in the SC are combined into an intermediate layer for visuo-tactile integration, and that multimodal alignment in this third layer gives newborns a sensitivity to the configuration of eyes and mouth. We show that the visual and tactile maps align through a Hebbian learning stage and strengthen their synaptic links into the intermediate layer. As a result, the global network exhibits emergent properties such as sensitivity to the spatial configuration of face-like patterns and the detection of eye and mouth movements.
Journal Article
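The alignment mechanism described in the entry above lends itself to a compact illustration. Below is a minimal Python sketch of Hebbian alignment between a visual map and a tactile map converging on an intermediate layer; the map sizes, learning rate, Gaussian stimulus model, and normalization scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_VIS, N_TAC, N_MID = 64, 64, 64               # map sizes (assumed)
W_vis = rng.uniform(0.0, 0.1, (N_MID, N_VIS))  # visual -> intermediate weights
W_tac = rng.uniform(0.0, 0.1, (N_MID, N_TAC))  # tactile -> intermediate weights
eta = 0.05                                     # Hebbian learning rate (assumed)

def bump(center, n, width=3.0):
    """Gaussian bump of activity centered on the stimulated site."""
    idx = np.arange(n)
    return np.exp(-((idx - center) ** 2) / (2 * width ** 2))

for _ in range(2000):
    c = rng.integers(0, N_VIS)                 # one co-located visuo-tactile event
    v, t = bump(c, N_VIS), bump(c, N_TAC)
    m = W_vis @ v + W_tac @ t                  # intermediate-layer activation
    m /= m.max() + 1e-9
    # Hebbian update: intermediate units co-active with an input strengthen their links
    W_vis += eta * np.outer(m, v)
    W_tac += eta * np.outer(m, t)
    # Row-wise normalization keeps the weights bounded (a common stabilizer)
    W_vis /= np.linalg.norm(W_vis, axis=1, keepdims=True)
    W_tac /= np.linalg.norm(W_tac, axis=1, keepdims=True)

# After learning, a tactile-only bump evokes an aligned visual expectation.
resp = W_vis.T @ (W_tac @ bump(10, N_TAC))
print("peak of visual back-projection:", int(np.argmax(resp)))  # near site 10
```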
Spatial Calibration of Humanoid Robot Flexible Tactile Skin for Human–Robot Interaction
by Cisneros-Limón, Rafael; Nobeshima, Taiki; Kaminaga, Hiroshi
in Arrays; Artificial intelligence; Calibration
2023
Recent developments in robotics have enabled humanoid robots to be used in tasks where they must physically interact with humans, including robot-supported caregiving. This interaction, referred to as physical human–robot interaction (pHRI), requires physical contact between the robot and the human body; one way to improve it is to use efficient methods for sensing that contact. In this paper, we use a flexible tactile sensing array and integrate it as a tactile skin for the humanoid robot HRP-4C. Because the sensor is flexible and can take any shape, particular focus is given to its spatial calibration, i.e., the determination of the locations of the sensor cells and their normals when attached to the robot. For this purpose, a novel method of spatial calibration using B-spline surfaces has been developed. We demonstrate, using two evaluation approaches, that this calibration gives a good approximation of the sensor position, and show that our flexible tactile sensor can be fully integrated on a robot and used as input for robot control tasks. These contributions are a first step toward the use of flexible tactile sensors in pHRI applications.
Journal Article
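Since the entry above centers on recovering sensor-cell positions and normals from a B-spline surface, here is a hedged sketch of that geometric step: fit one bivariate spline per coordinate over a (u, v) parameterization, then read positions off the surface and normals off the cross product of its partial derivatives. The toy surface, the grid, and the use of SciPy's RectBivariateSpline are assumptions for illustration, not the paper's calibration pipeline.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Assumed calibration data: 3D probe points on the curved skin, on a 10x10 parameter grid.
us = np.linspace(0.0, 1.0, 10)
vs = np.linspace(0.0, 1.0, 10)
U, V = np.meshgrid(us, vs, indexing="ij")
X, Y = U, V
Z = 0.05 * np.sin(3 * U) * np.cos(2 * V)      # toy curved patch standing in for real data

# One bivariate spline per coordinate, parameterized over (u, v).
splines = [RectBivariateSpline(us, vs, C) for C in (X, Y, Z)]

def cell_pose(ui, vi):
    """Position and unit normal of the skin surface at parameters (ui, vi)."""
    p  = np.array([s.ev(ui, vi) for s in splines])
    du = np.array([s.ev(ui, vi, dx=1) for s in splines])  # tangent along u
    dv = np.array([s.ev(ui, vi, dy=1) for s in splines])  # tangent along v
    n = np.cross(du, dv)
    return p, n / np.linalg.norm(n)

p, n = cell_pose(0.4, 0.7)    # pose of one hypothetical sensor cell
print("position:", p, "normal:", n)
```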
Self-organizing neural network for reproducing human postural mode alternation through deep reinforcement learning
2023
Self-organization in postural coordination is essential for understanding how in-phase and anti-phase postural coordination modes switch automatically during standing and related supra-postural activities. Previously, a model-based approach was proposed to reproduce this self-organized phenomenon. However, if the problem is posed to include how the central nervous system establishes its internal predictive model, the learning process becomes critical for building a neural network that manages adaptive postural control. Particularly when body characteristics change due to growth or aging, or are initially unknown, as in infants, a learning capability can improve the hyper-adaptivity of human motor control for maintaining postural stability and saving energy in daily living. This study attempted to generate a self-organizing neural network that can adaptively coordinate the postural mode without assuming a prior model of body dynamics and kinematics. Postural coordination modes are reproduced in head-target tracking tasks through a deep reinforcement learning algorithm. Transitions between the postural coordination types, i.e. the in-phase and anti-phase modes, could be reproduced by changing the task condition, namely the frequency of the moving target tracked by the head. These modes are considered emergent phenomena in human head-tracking tasks. Various evaluation indices, such as the correlation and relative phase of the hip and ankle joints, are analyzed to verify that the self-organizing neural network produces the coordination transition between the in-phase and anti-phase modes. In addition, after learning, the network can adapt to continuous changes in task conditions, and even to unlearned body-mass conditions, while keeping a consistent in-phase and anti-phase mode alternation.
Journal Article
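One of the evaluation indices named in the entry above, the relative phase between hip and ankle, is straightforward to compute. The sketch below uses the analytic signal (Hilbert transform) to label a coordination pattern as in-phase or anti-phase; the synthetic joint trajectories, the sway frequency, and the 90° threshold are assumptions for illustration.

```python
import numpy as np
from scipy.signal import hilbert

fs, f = 100.0, 0.6                            # sample rate (Hz) and sway frequency, assumed
t = np.arange(0.0, 30.0, 1.0 / fs)
hip   = np.sin(2 * np.pi * f * t + np.pi)     # synthetic anti-phase pair
ankle = np.sin(2 * np.pi * f * t)

def relative_phase(a, b):
    """Instantaneous phase difference (rad) between two joint-angle signals."""
    pa = np.angle(hilbert(a - a.mean()))
    pb = np.angle(hilbert(b - b.mean()))
    return np.angle(np.exp(1j * (pa - pb)))   # wrapped to (-pi, pi]

phi = np.degrees(relative_phase(hip, ankle))
mode = "in-phase" if np.abs(phi).mean() < 90.0 else "anti-phase"
print(f"mean |relative phase| = {np.abs(phi).mean():.1f} deg -> {mode}")
```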
More than just co-workers: Presence of humanoid robot co-worker influences human performance
by Abderrahmane Kheddar; Ashesh Vasalya; Gowrishankar Ganesh
in [INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO]; [INFO]Computer Science [cs]; Adult
2018
Does the presence of a robot co-worker influence the performance of the humans around it? Studies of motor contagions during human-robot interactions have examined either how the observation of a robot affects a human's movement velocity, or how it affects the human's movement variance, but never both together. Performance, however, has to be measured considering both task speed (or frequency) and task accuracy. Here we empirically examine a repetitive industrial task in which a human participant and a humanoid robot work near each other. We systematically varied the robot's behavior and observed whether and how the performance of a human participant is affected by the robot's presence. To investigate the effect of physical form, we added conditions in which the robot co-worker's torso and head were covered, and only the moving arm was visible to the human participants. Finally, we compared these behaviors with those of a human co-worker, and examined how the observed behavioral effects scale with experience of robots. Our results show that human task frequency, but not task accuracy, is affected by the observation of a humanoid robot co-worker, provided the robot's head and torso are visible.
Journal Article
Estimating Muscle Activity from the Deformation of a Sequential 3D Point Cloud
by Niu, Hui; Ayusawa, Ko; Desclaux, Damien
in Artificial neural networks; Computer Science; Correspondence
2022
Estimating muscle activity is important because it provides a cue for assessing a person's movements and intentions. If muscle activity can be obtained through non-contact measurement, for example with visual measurement systems, it can supply supporting data for various fields of study. In the present paper, we propose a method to predict human muscle activity from skin-surface strain. This requires a 3D reconstruction with high relative accuracy, yet reconstruction errors caused by noise in the raw data of a visual measurement system are inevitable. In particular, noise that is independent across frames of the time series makes it difficult to track the motion accurately. To obtain more precise information about the human skin surface, we propose a method that introduces a temporal constraint into the non-rigid registration process: constraining the point-cloud motion over the time series yields more accurate tracking of shape and motion. Using surface strain as input, we build a multilayer perceptron artificial neural network for inferring muscle activity, training it on simple lower-limb movements. As a result, we successfully estimate muscle activity from surface strain.
Journal Article
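As the entry above maps surface strain to muscle activity with a multilayer perceptron, a minimal regression sketch may help. Everything here is assumed for illustration: the synthetic strain features, the stand-in activation labels, the layer sizes, and the use of scikit-learn's MLPRegressor in place of the authors' network.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_frames, n_strain, n_muscles = 2000, 120, 8          # assumed dimensions
strain = rng.normal(size=(n_frames, n_strain))        # per-frame surface-strain features
W_true = rng.normal(size=(n_strain, n_muscles))
activity = np.tanh(0.1 * strain @ W_true)             # stand-in for measured activations

X_tr, X_te, y_tr, y_te = train_test_split(strain, activity, test_size=0.2, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)                                   # strain -> muscle-activity regression
print("held-out R^2:", round(mlp.score(X_te, y_te), 3))
```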
Spatio-Temporal Tolerance of Visuo-Tactile Illusions in Artificial Skin by Recurrent Neural Network with Spike-Timing-Dependent Plasticity
by Philippe Gaussier; Alexandre Pitti; Ganna Pugach
in 631/114/116/1925; 631/114/1305; [INFO.INFO-RB]Computer Science [cs]/Robotics [cs.RO]
2017
Perceptual illusions across multiple modalities, such as the rubber-hand illusion, show how dynamically the brain adapts its body image and determines what is part of it (the self) and what is not (others). Several studies have shown that redundancy and contingency among sensory signals are essential for perceiving the illusion, and that a lag of 200–300 ms is the brain's critical limit for representing one's own body. In an experimental setup with an artificial skin, we replicate the visuo-tactile illusion within artificial neural networks. Our model is composed of an associative map and a recurrent map of spiking neurons that learn to predict the contingent activity across the visuo-tactile signals. Depending on the temporal delay introduced between the visuo-tactile signals, or on the spatial distance between two distinct stimuli, the two maps detect contingency differently. Spiking neurons organized into complex networks, together with synchrony detection at different temporal intervals, can well explain multisensory integration with respect to the self-body.
Journal Article
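To make the plasticity rule named in the title above concrete, here is a minimal sketch of a pairwise spike-timing-dependent plasticity (STDP) update: potentiation when the presynaptic spike precedes the postsynaptic one, with exponential decay in the spike-time difference, and depression otherwise. The amplitudes and the 20 ms time constant are common textbook values, assumed here rather than taken from the paper.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes (assumed)
TAU = 20.0                      # STDP time constant in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (ms)."""
    dt = t_post - t_pre
    if dt > 0:                               # pre before post -> potentiate
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)       # post before pre -> depress

# A lag of 200-300 ms pushes a pairing far outside this plasticity window,
# consistent with the critical delay for the illusion noted in the abstract.
for dt in (5.0, 20.0, 100.0, 250.0):
    print(f"dt = {dt:5.1f} ms -> dw = {stdp_dw(0.0, dt):.2e}")
```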