7 results for "Strazdas, Dominykas"
Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction
This paper presents an implementation of RoSA, a Robot System Assistant, for safe and intuitive human-machine interaction. The interaction modalities were selected based on a prior Wizard of Oz study, which showed a strong preference for speech and pointing gestures. Building on these findings, we design and implement a new multi-modal system for contactless human-machine interaction based on speech, facial, and gesture recognition. We evaluate the proposed system in an extensive multi-subject study of user experience and interaction efficiency, which shows that our method achieves usability scores similar to those of the fully human-operated, remote-controlled robot in our Wizard of Oz study. Furthermore, the framework is implemented on the Robot Operating System (ROS), providing modularity and extensibility for our multi-device and multi-user approach.
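The abstract publishes no code, but since the framework is built on ROS, a minimal sketch may help illustrate how such modality fusion could be wired up as a ROS node. All topic names, message choices, and the fusion rule below are hypothetical assumptions for illustration, not RoSA's actual implementation.

```python
#!/usr/bin/env python
# Minimal sketch of a multi-modal fusion node in ROS 1 (rospy).
# Topic names and the fusion rule are hypothetical, not RoSA's code.
import rospy
from std_msgs.msg import String

class ModalityFusion:
    def __init__(self):
        self.last_identity = None   # most recent recognized user
        self.last_gesture = None    # most recent pointing target
        self.pub = rospy.Publisher('/rosa/command', String, queue_size=10)
        rospy.Subscriber('/rosa/speech', String, self.on_speech)
        rospy.Subscriber('/rosa/gesture', String, self.on_gesture)
        rospy.Subscriber('/rosa/face_id', String, self.on_face)

    def on_face(self, msg):
        self.last_identity = msg.data

    def on_gesture(self, msg):
        self.last_gesture = msg.data

    def on_speech(self, msg):
        # Speech drives the interaction; gesture and identity add context,
        # e.g. "pick that up" + pointing target + an authorized user.
        if self.last_identity is None:
            return  # ignore commands from unrecognized users
        command = '%s|%s|%s' % (self.last_identity, msg.data, self.last_gesture)
        self.pub.publish(String(data=command))

if __name__ == '__main__':
    rospy.init_node('modality_fusion')
    ModalityFusion()
    rospy.spin()
```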
Assessing the Value of Multimodal Interfaces: A Study on Human–Machine Interaction in Weld Inspection Workstations
Multimodal user interfaces promise natural and intuitive human–machine interactions. But is the extra effort of developing a complex multisensor system justified, or can users be equally satisfied with a single input modality? This study investigates interactions at an industrial weld inspection workstation. Three unimodal interfaces were tested individually and in multimodal combination: spatial interaction with buttons augmented on a workpiece, spatial interaction with buttons augmented on a worktable, and speech commands. Within the unimodal conditions, users preferred the augmented worktable, but overall, the multimodal condition, in which individuals used the input technologies as they preferred, was ranked best. Our findings indicate that implementing and supporting multiple input modalities is valuable, and that the usability of individual input modalities for complex systems is difficult to predict.
Robo-HUD: Interaction Concept for Contactless Operation of Industrial Cobotic Systems
Designing intuitive and safe robot interfaces remains a challenge in robotics. Robo-HUD is a gadget-free interaction concept for the contactless operation of industrial systems. It uses virtual collision detection based on time-of-flight sensor data, combined with augmented reality and audio feedback, to let operators navigate a virtual menu through "hover and hold" gestures. Combined with virtual safety barriers, the collision detection also serves as a safety feature, slowing or stopping the robot if a barrier is breached. Additionally, a user-focus recognition module monitors the operator's attention, enabling interaction only when it is intended. Early case studies show that these features suit inspection tasks and operation in difficult environments where contactless control is required.
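As an illustration of the "hover and hold" mechanism, here is a minimal sketch of dwell-based selection over a virtual button, assuming 3D hand positions from a time-of-flight sensor. The geometry, dwell time, and radius below are invented for the example, not taken from Robo-HUD.

```python
# Dwell-based "hover and hold" selection over a virtual button.
# Thresholds and button layout are illustrative assumptions.
import numpy as np

DWELL_SECONDS = 1.0   # how long the hand must hover to trigger
BUTTON_RADIUS = 0.05  # virtual button radius in meters

class HoverButton:
    def __init__(self, name, center):
        self.name = name
        self.center = np.asarray(center, dtype=float)
        self.hover_since = None

    def update(self, hand_pos, t):
        """Return True once when the hand has dwelled long enough."""
        if np.linalg.norm(hand_pos - self.center) <= BUTTON_RADIUS:
            if self.hover_since is None:
                self.hover_since = t
            elif t - self.hover_since >= DWELL_SECONDS:
                self.hover_since = None  # re-arm after triggering
                return True
        else:
            self.hover_since = None  # hand left the button: reset timer
        return False

# Example: one menu button, fed with (timestamp, hand position) samples.
button = HoverButton('start_inspection', center=[0.3, 0.0, 0.5])
samples = [(0.0, [0.30, 0.0, 0.5]), (0.5, [0.31, 0.0, 0.5]), (1.1, [0.30, 0.0, 0.5])]
for t, pos in samples:
    if button.update(np.asarray(pos), t):
        print('selected:', button.name)
```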
RELAY: Robotic EyeLink AnalYsis of the EyeLink 1000 Using an Artificial Eye
The impact of ambient brightness on the peak velocities of visually guided saccades remains a topic of debate in eye-tracking research. While some studies suggest that saccades in darkness are slower than in light, others question this finding, citing inconsistencies influenced by factors such as pupil deformation during saccades, gaze position, or the measurement technique itself. To investigate these factors, we developed RELAY (Robotic EyeLink AnalYsis), a low-cost, stepper-motor-driven artificial eye capable of simulating human saccades with controlled pupil size, gaze direction, and brightness. Using the EyeLink 1000, a widely employed eye tracker, we assessed accuracy and precision across three illumination settings. Our results confirm the reliability of the EyeLink 1000, demonstrating no brightness-related artifacts in pupil-based eye tracking. This suggests that previously observed changes in peak velocities under varying brightness are likely due to human factors, warranting further investigation. However, we observed systematic deviations in measured pupil size depending on gaze direction. These findings emphasize the importance of reporting illumination conditions and gaze parameters in eye-tracking experiments to ensure data consistency and comparability. Our novel artificial eye provides a robust and reproducible platform for evaluating eye-tracking systems and deepening our understanding of the human visual system.
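For readers unfamiliar with the quantity under study, a short sketch of how saccadic peak velocity is typically derived from sampled gaze positions may help. The data here are synthetic, with the sampling rate matching the EyeLink 1000's 1000 Hz mode; the trajectory and amplitude are illustrative, not RELAY's measurements.

```python
# Illustrative computation of saccadic peak velocity from gaze samples.
import numpy as np

def peak_velocity(gaze_deg, fs=1000.0):
    """Peak angular velocity (deg/s) of a 1-D gaze trace sampled at fs Hz."""
    velocity = np.gradient(gaze_deg) * fs  # central differences, in deg/s
    return np.max(np.abs(velocity))

# Synthetic 10-degree saccade with a roughly bell-shaped velocity profile.
t = np.arange(0.0, 0.05, 1.0 / 1000.0)                 # 50 ms window
position = 10.0 / (1.0 + np.exp(-(t - 0.025) * 200.0))  # sigmoid trajectory
print('peak velocity: %.0f deg/s' % peak_velocity(position))
```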
Face Recognition and Tracking Framework for Human–Robot Interaction
Face recognition has recently become a key element of social cognition, used in various applications including human–robot interaction (HRI), pedestrian identification, and surveillance systems. Deep convolutional neural networks (CNNs) have achieved notable progress in recognizing faces. However, achieving accurate, real-time face recognition is still challenging, especially in unconstrained environments, due to occlusion, lighting conditions, and the diversity of head poses. In this paper, we present a robust face recognition and tracking framework for unconstrained settings. We built the framework on lightweight CNNs for all face recognition stages, including face detection, alignment, and feature extraction, to achieve higher accuracy under these challenging conditions while maintaining the real-time performance required for HRI systems. To maintain accuracy, single-shot multi-level face localization in the wild (RetinaFace) is used for face detection, and an additive angular margin loss (ArcFace) is employed for recognition. For further enhancement, we introduce a face tracking algorithm that combines information from tracked faces with recognized identities for use in subsequent frames; this tracking improves both overall processing time and accuracy. The proposed system's performance was tested in real-time experiments as part of an HRI study. Our framework achieves real-time operation with average precision, recall, and F-score of 99%, 95%, and 97%, respectively. In addition, we implemented the system as a modular ROS package, making it straightforward to integrate into different real-world HRI systems.
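The abstract names the pipeline's building blocks (RetinaFace for detection, ArcFace for recognition). A hedged sketch of such a pipeline follows, using the open-source insightface package, which bundles both models; the gallery handling, threshold, and model choice are assumptions for illustration, and the paper's tracking stage is omitted. This is the shape of such a pipeline, not the authors' implementation.

```python
# Detection + embedding + matching sketch via the `insightface` package
# (RetinaFace detector plus ArcFace embeddings). Illustrative only.
import cv2
import numpy as np
from insightface.app import FaceAnalysis

app = FaceAnalysis(name='buffalo_l')        # detector + recognition model
app.prepare(ctx_id=0, det_size=(640, 640))  # ctx_id=0: first GPU, -1 for CPU

gallery = {}  # identity name -> L2-normalized ArcFace embedding

def enroll(name, image_path):
    """Store one reference embedding per identity (first detected face)."""
    faces = app.get(cv2.imread(image_path))
    gallery[name] = faces[0].normed_embedding

def identify(frame, threshold=0.4):
    """Return (name, bbox) pairs via cosine similarity against the gallery."""
    results = []
    for face in app.get(frame):
        # Embeddings are unit-normalized, so the dot product is the cosine.
        sims = {n: float(np.dot(face.normed_embedding, e))
                for n, e in gallery.items()}
        name, score = max(sims.items(), key=lambda kv: kv[1])
        results.append((name if score > threshold else 'unknown', face.bbox))
    return results
```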
IM HERE: Interaction Model for Human Effort Based Robot Engagement
The effectiveness of human-robot interaction often hinges on the ability to cultivate engagement, a dynamic process of cognitive involvement that supports meaningful exchanges. Many existing definitions and models of engagement are either too vague or fail to generalize across contexts. We introduce IM HERE, a novel framework that models engagement in human-human, human-robot, and robot-robot interactions. By describing bilateral relationships between entities in terms of effort, we provide an accurate breakdown of relationship patterns, reducing them to focus placement and four key states. The framework captures mutual relationships, group behaviors, and actions that conform to social norms, translating them into specific directives for autonomous systems. By integrating both subjective perceptions and objective states, the model precisely identifies and describes miscommunication. The primary objective of this paper is to automate the analysis, modeling, and description of social behavior, and to determine how autonomous systems can behave in accordance with social norms, achieving full social integration while pursuing their own social goals.
RELAY: Robotic EyeLink AnalYsis of the EyeLink 1000 using an Artificial Eye
There is a widespread assumption that the peak velocities of visually guided saccades made in the dark are up to 10% slower than those made in the light. Studies questioning the impact of surrounding brightness conditions come to differing conclusions about whether these conditions have an influence and, if so, in what manner. The problem is complex, since the illumination condition may affect measured peak velocities not on its own but in combination with the estimation of pupil size, which deforms during saccades, with gaze position, or with the video-based eye-tracking measurement technique itself. To investigate this issue, we constructed a stepper-motor-driven artificial eye with a fixed pupil size that mimics human saccades with predetermined peak velocities and amplitudes under three different brightness conditions, measured with the EyeLink 1000, one of the most commonly used eye trackers. The aim was to control the pupil and the brightness. With our device, the overall good accuracy and precision of the EyeLink 1000 could be confirmed. Furthermore, we found no brightness-related artifact in pupil-based eye tracking, neither for the pupil size nor for the peak velocities. What we did find was a small but systematic and significant change in the measured pupil size as a function of gaze direction.