444 results for "human–robot interface"
Human-robot interaction strategies for walker-assisted locomotion
This book presents the development of a new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation. The aim is to achieve closer interaction between the robotic device and the individual, enhancing the rehabilitation potential of such devices in clinical applications. Trends and opportunities for future advances in the field of assistive locomotion via the development of hybrid solutions based on the combination of smart walkers and biomechatronic exoskeletons are also discussed.
Progress and prospects of the human–robot collaboration
Recent technological advances in the hardware design of robotic platforms have enabled the implementation of various control modalities for improved interaction with humans and unstructured environments. An important application area for robots with such advanced interaction capabilities is human–robot collaboration. This area carries high socio-economic impact and maintains the sense of purpose of the people involved, as the robots do not completely replace humans in the work process. The research community's recent surge of interest in this area has been devoted to methodologies for achieving intuitive and seamless human–robot-environment interaction by combining the collaborative partners' complementary strengths, e.g., the human's cognitive abilities and the robot's capacity for physical power generation. This paper reviews the state of the art in bi-directional human–robot interfaces, robot control modalities, system stability, benchmarking, and relevant use cases, and extends views on the future developments required in the realm of human–robot collaboration.
Semi-Autonomous Robotic Arm Reaching With Hybrid Gaze–Brain Machine Interface
Recent developments in non-muscular human-robot interfaces (HRIs) and shared control strategies have shown potential for allowing people with no residual movement or muscular activity in the upper limbs to control an assistive robotic arm. However, most non-muscular HRIs produce only discrete-valued commands, resulting in non-intuitive and less effective control of a dexterous assistive robotic arm. Furthermore, in the shared control strategies of such applications, control usually switches between user commands and robot autonomy commands; previous user studies have found that this switching reduces the user's sense of agency and causes frustration. In this study, we first propose an intuitive and easy-to-learn hybrid HRI that combines a brain-machine interface (BMI) with a gaze-tracking interface. In the proposed hybrid gaze-BMI, continuous modulation of movement speed via motor intention occurs seamlessly and simultaneously with unconstrained movement-direction control via gaze signals. We then propose a shared control paradigm that always blends user input with robot autonomy under a dynamically regulated combination. The proposed hybrid gaze-BMI and shared control paradigm were validated in a robotic arm reaching task performed by healthy subjects. All users were able to employ the hybrid gaze-BMI to move the end-effector sequentially to targets across the horizontal plane while avoiding collisions with obstacles. The shared control paradigm maintained as much volitional control as possible while providing assistance for the most difficult parts of the task. The presented semi-autonomous robotic system yielded continuous, smooth, and collision-free motion trajectories for the end-effector approaching the target. Compared to a system without assistance from robot autonomy, it significantly reduced the failure rate as well as the time and effort spent by the user to complete the tasks.
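The continuously blended shared control described in this abstract can be sketched as follows. Everything below is a hypothetical reconstruction of the general idea, assuming a potential-field autonomy policy and a distance-based blending weight; none of the names or gains come from the paper:

```python
# Hypothetical sketch: blend the user's gaze direction and BMI-modulated speed
# with an autonomous obstacle-avoiding policy, instead of switching between them.
import numpy as np

def autonomy_command(pos, target, obstacles, repulse_gain=0.5):
    """Simple potential field: attract to the target, repel from nearby obstacles."""
    cmd = (target - pos) / (np.linalg.norm(target - pos) + 1e-9)
    for obs in obstacles:
        d = pos - obs
        dist = np.linalg.norm(d)
        if dist < 0.3:                          # repel only when close (m)
            cmd += repulse_gain * d / (dist**2 + 1e-9)
    return cmd

def blended_velocity(user_dir, user_speed, pos, target, obstacles):
    """User input and autonomy are always combined; the weight shifts smoothly."""
    auto = autonomy_command(pos, target, obstacles)
    d_min = min(np.linalg.norm(pos - o) for o in obstacles)
    alpha = np.clip(d_min / 0.3, 0.2, 1.0)      # user's share grows away from obstacles
    return alpha * user_speed * user_dir + (1 - alpha) * auto

# Gaze supplies the direction, the BMI modulates the speed.
v = blended_velocity(np.array([1.0, 0.0]), 0.1,
                     np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                     [np.array([0.5, 0.05])])
print(v)
```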
Cooperative and Multimodal Capabilities Enhancement in the CERNTAURO Human–Robot Interface for Hazardous and Underwater Scenarios
The use of remote robotic systems for inspection and maintenance in hazardous environments is a priority for all tasks potentially dangerous for humans. However, currently available robotic systems lack the level of usability that would allow inexperienced operators to accomplish complex tasks. Moreover, the task's complexity increases drastically when a single operator is required to control multiple remote agents (for example, when picking up and transporting large objects). In this paper, a system allowing an operator to prepare and configure cooperative behaviours for multiple remote agents is presented. The system is part of a human–robot interface that was designed at CERN, the European Organization for Nuclear Research, to perform remote interventions in its particle accelerator complex as part of the CERNTAURO project. The modalities of interaction with the remote robots are presented in detail. The multimodal user interface enables the user to activate assisted cooperative behaviours according to a mission plan. The multi-robot interface was validated at CERN in a mockup of the Large Hadron Collider (LHC) using a team of two mobile robotic platforms, each equipped with a robotic manipulator. Moreover, strong similarities were identified between the CERNTAURO and TWINBOT projects, which aim to create usable robotic systems for underwater manipulation. The cooperative behaviours were therefore also validated in a multi-robot pipe-transport scenario in a simulated underwater environment, experimenting with more advanced vision techniques. Cooperative teleoperation can be coupled with additional assistive tools such as vision-based tracking, grasp determination for metallic objects, and dedicated communication protocols. The results show that the cooperative behaviours enable a single user to carry out a robotic intervention with more than one robot in a safer way.
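In its simplest form, the rigid coupling that such cooperative transport relies on (two manipulators holding one pipe) reduces to chaining homogeneous transforms; the sketch below is an illustrative assumption, not CERNTAURO code:

```python
# Illustrative sketch: with a rigid grasp, the follower's target pose is the
# leader's pose composed with the fixed leader-to-follower transform recorded
# at grasp time, so one operator command moves both robots consistently.
import numpy as np

def follower_pose(T_world_leader, T_leader_follower):
    """Chain 4x4 homogeneous transforms to enforce the rigid-grasp constraint."""
    return T_world_leader @ T_leader_follower

T_grasp = np.eye(4); T_grasp[0, 3] = 1.0        # follower grasps 1 m along the pipe
T_leader = np.eye(4); T_leader[:3, 3] = [0.2, 0.0, 0.5]
print(follower_pose(T_leader, T_grasp))
```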
Robot adaptation to human physical fatigue in human–robot co-manipulation
In this paper, we propose a novel method for human–robot collaboration in which the robot's physical behaviour is adapted online to the human's motor fatigue. The robot starts as a follower and imitates the human. As the collaborative task is performed under the human's lead, the robot gradually learns the parameters and trajectories related to the task execution, while monitoring the human's fatigue during task execution. When a predefined level of fatigue is indicated, the robot uses the learnt skill to take over the physically demanding aspects of the task and lets the human recover strength. The human remains present to perform the aspects of the collaborative task that the robot cannot fully take over, and maintains overall supervision. The robot adaptation system is based on Dynamical Movement Primitives, Locally Weighted Regression, and Adaptive Frequency Oscillators. The human's motor fatigue is estimated with a proposed online model based on muscle activity measured by electromyography. We demonstrate the proposed approach with experiments on real-world co-manipulation tasks: material sawing and surface polishing.
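A widely used fatigue indicator for sEMG is the downward drift of the spectrum's median frequency; the sketch below shows that generic indicator under assumed thresholds, and is not necessarily the online model proposed in the paper:

```python
# Generic sEMG fatigue indicator (assumed, not the paper's model): the median
# frequency of the power spectrum drops as a muscle fatigues, so a fall below
# a calibrated fraction of the rested baseline can trigger the robot takeover.
import numpy as np
from scipy.signal import welch

def median_frequency(emg_window, fs=1000):
    f, pxx = welch(emg_window, fs=fs, nperseg=256)
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2)]

def is_fatigued(emg_window, baseline_mdf, fs=1000, drop=0.8):
    """Flag fatigue when the median frequency falls below 80% of baseline."""
    return median_frequency(emg_window, fs) < drop * baseline_mdf
```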
Detection of movement onset using EMG signals for upper-limb exoskeletons in reaching tasks
Background: To assist people with disabilities, exoskeletons must be provided with human-robot interfaces and smart algorithms capable of identifying the user's movement intentions. Surface electromyographic (sEMG) signals could be suitable for this purpose, but their applicability in shared control schemes for real-time operation of assistive devices in daily-life activities is limited by high inter-subject variability, which requires custom calibration and training. Here, we developed a machine-learning-based algorithm for detecting the user's motion intention from electromyographic signals, and discuss its applicability for controlling an upper-limb exoskeleton for people with severe arm disabilities. Methods: Ten healthy participants, sitting in front of a screen while wearing the exoskeleton, were asked to perform several reaching movements toward three LEDs, presented in random order. EMG signals from seven upper-limb muscles were recorded. Data were analyzed offline and used to develop an algorithm that identifies the onset of movement across two events: moving from a resting position toward the LED (Go-forward), and going back to the resting position (Go-backward). A set of subject-independent time-domain EMG features was selected according to information theory, and their probability distributions corresponding to the rest and movement phases were modeled by a two-component Gaussian Mixture Model (GMM). Two types of movement-onset detectors were tested: the first based on features extracted from single muscles, the second from multiple muscles. Their performance in terms of sensitivity, specificity, and latency was assessed for the two events with a leave-one-subject-out test method. Results: Movement onset was detected with a maximum sensitivity of 89.3% for Go-forward and 60.9% for Go-backward events. The best specificities were 96.2% and 94.3%, respectively. For both events the algorithm detected the onset before the actual movement, and the computational load was compatible with real-time applications. Conclusions: The detection performance and low computational load make the proposed algorithm promising for the control of upper-limb exoskeletons in real-time applications. Fast initial calibration also makes it suitable for helping people with severe arm disabilities perform assisted functional tasks.
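A minimal sketch of the detection scheme described above, assuming scikit-learn, a windowed-RMS feature, and illustrative thresholds (the paper's exact features and decision rule differ):

```python
# Sketch: fit a two-component GMM on a time-domain EMG feature, label the
# higher-mean component as "movement", and declare onset when its posterior
# stays above a threshold for a few consecutive windows.
import numpy as np
from sklearn.mixture import GaussianMixture

def rms_windows(emg, win=100):
    n = len(emg) // win
    return np.sqrt(np.mean(emg[:n * win].reshape(n, win) ** 2, axis=1))

def fit_detector(calibration_emg, win=100):
    feats = rms_windows(calibration_emg, win).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
    movement = int(np.argmax(gmm.means_.ravel()))   # higher mean = movement phase
    return gmm, movement

def detect_onset(emg, gmm, movement, win=100, thresh=0.9, consec=3):
    p = gmm.predict_proba(rms_windows(emg, win).reshape(-1, 1))[:, movement]
    hits = 0
    for i, pi in enumerate(p):
        hits = hits + 1 if pi > thresh else 0
        if hits >= consec:
            return (i - consec + 1) * win           # onset sample index
    return None                                     # no onset detected
```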
A Socially Assistive Robot for Long-Term Cardiac Rehabilitation in the Real World
What are the benefits of using a socially assistive robot for long-term cardiac rehabilitation? To answer this question we designed and conducted a real-world long-term study, in collaboration with medical specialists, at the Fundación Cardioinfantil-Instituto de Cardiología clinic (Bogotá, Colombia), lasting 2.5 years. The study took place within the outpatient phase of patients' cardiac rehabilitation programme and aimed to compare the patients' progress and adherence in the conventional cardiac rehabilitation programme (control condition) against rehabilitation supported by a fully autonomous socially assistive robot that continuously monitored the patients during exercise to provide immediate feedback and motivation based on sensory measures (robot condition). The explicit aim of the social robot is to improve patient motivation and increase adherence to the programme to ensure a complete recovery. We recruited 15 patients per condition. The cardiac rehabilitation programme was designed to last 36 sessions (18 weeks) per patient. The findings suggest that the robot increases adherence (by 13.3%) and leads to faster completion of the programme. In addition, the patients assisted by the robot showed more rapid improvement in their recovery heart rate, better physical activity performance, and greater improvement in cardiovascular functioning, indicating successful performance of the cardiac rehabilitation programme. Moreover, the medical staff and the patients acknowledged that the robot improved patient motivation and adherence to the programme, supporting its potential for addressing the major challenges in rehabilitation programmes.
Review of sEMG for Exoskeleton Robots: Motion Intention Recognition Techniques and Applications
The global aging trend is becoming increasingly severe, and the demand for life assistance and medical rehabilitation for frail and disabled elderly people is growing. As a leading solution for assisting limb movement, guiding limb rehabilitation, and enhancing limb strength, exoskeleton robots are attracting broad attention. This paper reviews the progress of research on upper-limb exoskeleton robots, sEMG technology, and intention recognition technology. It analyzes the literature using keyword clustering analysis and comprehensively discusses the application of sEMG technology, deep learning methods, and machine learning methods in exoskeleton robots' recognition of human movement intention. The focus of current research is found to be algorithms with strong adaptability and high classification accuracy. Finally, traditional machine learning and deep learning algorithms are discussed, and future research directions are proposed, such as multi-information-fusion deep learning that combines EEG signals, electromyographic signals, and basic reference signals to train a model with stronger generalization ability, thereby improving the accuracy of human movement intention recognition based on sEMG technology and providing important support for realizing the human–machine-fusion embodied intelligence of exoskeleton robots.
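As a rough illustration of the multi-information-fusion direction proposed here, the sketch below concatenates per-modality convolutional encoders for sEMG and EEG windows before a shared classifier head; all modalities, layer sizes, and channel counts are assumptions:

```python
# Assumed fusion architecture, for illustration only: one 1-D conv encoder per
# signal modality, features concatenated, then a shared classification head.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, emg_ch=8, eeg_ch=32, n_classes=6):
        super().__init__()
        self.emg_enc = nn.Sequential(
            nn.Conv1d(emg_ch, 16, 5), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.eeg_enc = nn.Sequential(
            nn.Conv1d(eeg_ch, 16, 5), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(32, n_classes)        # 16 + 16 fused features

    def forward(self, emg, eeg):
        z = torch.cat([self.emg_enc(emg).flatten(1),
                       self.eeg_enc(eeg).flatten(1)], dim=1)
        return self.head(z)

net = FusionNet()
logits = net(torch.randn(4, 8, 200), torch.randn(4, 32, 200))  # 200-sample windows
print(logits.shape)                                            # torch.Size([4, 6])
```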
Graphene‐based dual‐function acoustic transducers for machine learning‐assisted human–robot interfaces
Human–robot interface (HRI) electronics are critical for realizing robotic intelligence. Here, we report graphene-based dual-function acoustic transducers for machine learning-assisted human–robot interfaces (GHRI). The GHRI functions both as an artificial ear, through a triboelectric acoustic sensing mechanism, and as an artificial mouth, through a thermoacoustic sound emission mechanism. The success of the integrated device also rests on multifunctional laser-induced graphene, which serves as the triboelectric material, the electrodes, and the thermoacoustic source. By systematically optimizing the structural parameters, the GHRI achieves high sensitivity (4500 mV Pa⁻¹) and operating durability (1,000,000 cycles and 60 days), and is capable of recognizing speaker identity, emotion, content, and other information in human speech. With the assistance of machine learning, a convolutional neural network is trained on 30 speech categories, reaching 99.66% accuracy on the training set and 96.63% on the test set. Furthermore, the GHRI is used for artificial intelligence communication based on the recognized speech features. Our work shows broad prospects for the development of robotic intelligence.
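A 30-category speech classifier of the kind the abstract describes could be sketched as below; the mel-spectrogram front end, network shape, and all sizes are illustrative assumptions rather than the paper's architecture:

```python
# Assumed pipeline: mel spectrograms of the transducer's acoustic signal fed
# to a small 2-D CNN that predicts one of the 30 trained speech categories.
import torch
import torch.nn as nn
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

classifier = nn.Sequential(
    nn.Conv2d(1, 8, 3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 30))                # 30 speech categories

wave = torch.randn(4, 1, 16000)                     # batch of 1-s recordings
print(classifier(mel(wave)).shape)                  # torch.Size([4, 30])
```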
Robotic assembly solution by human-in-the-loop teaching method based on real-time stiffness modulation
We propose a novel human-in-the-loop approach for teaching robots how to solve assembly tasks in unpredictable and unstructured environments. In the proposed method the human sensorimotor system is integrated into the robot control loop through a teleoperation setup. The approach combines 3-DoF end-effector force feedback with an interface for modulating the robot end-effector stiffness. When operating in unpredictable and unstructured environments, modulation of limb impedance is essential for successful task execution, stability, and safety. We developed a novel hand-held stiffness control interface that is driven by the motion of the human finger. A teaching approach was then used to achieve autonomous robot operation. In the experiments, we analysed and solved two part-assembly tasks: sliding a bolt fitting inside a groove, and driving a self-tapping screw into a material of unknown properties. We experimentally compared the proposed method to complementary robot-learning methods and analysed the potential benefits of direct stiffness modulation in force-feedback teleoperation.
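The stiffness-modulation idea can be illustrated with a minimal Cartesian impedance law whose stiffness gain tracks a normalized finger-motion signal from the hand-held interface; all names and gains below are hypothetical:

```python
# Hypothetical sketch: F = K(finger) * (x_des - x) + D * (xd_des - xd), where
# the stiffness K is scaled online by the operator's finger flexion in [0, 1].
import numpy as np

def impedance_force(x_des, x, xd_des, xd, finger,
                    k_min=50.0, k_max=800.0, d=40.0):
    K = k_min + np.clip(finger, 0.0, 1.0) * (k_max - k_min)   # stiffness, N/m
    return K * (x_des - x) + d * (xd_des - xd)                # commanded force, N

# Stiff tracking of a small position error with the finger fully flexed.
print(impedance_force(np.array([0.40, 0.0, 0.30]), np.array([0.38, 0.0, 0.30]),
                      np.zeros(3), np.zeros(3), finger=1.0))
```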