903 result(s) for "Man machine interaction"
Human-robot interaction strategies for walker-assisted locomotion
This book presents the development of a new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation. The aim is to achieve a closer interaction between the robotic device and the individual, empowering the rehabilitation potential of such devices in clinical applications. Trends and opportunities for future advances in the field of assistive locomotion via the development of hybrid solutions based on the combination of smart walkers and biomechatronic exoskeletons are also discussed.
Experienced mental workload, perception of usability, their interaction and impact on task performance
Past research in HCI has generated a number of procedures for assessing the usability of interactive systems. These procedures tend to omit characteristics of the users, aspects of the context and peculiarities of the tasks, and building a cohesive model that incorporates these features is not obvious. A construct frequently invoked in Human Factors is mental workload, whose assessment is fundamental for predicting human performance. Despite the many uses of usability and mental workload, little has been done to explore their relationship. This empirical research focused on (I) the investigation of such a relationship and (II) the investigation of the impact of the two constructs on human performance. A user study was carried out with participants executing a set of information-seeking tasks on three popular websites. For (I), a correlation analysis of usability and mental workload was performed by task, by user and by class of objective task performance. For (II), a number of supervised machine learning techniques based on different learning strategies were employed to build models aimed at predicting classes of task performance. Findings strongly suggest that usability and mental workload are two non-overlapping constructs that can be jointly employed to greatly improve the prediction of human performance.
Imperceptible magnetoelectronics
Future electronic skin aims to mimic nature's original in both functionality and appearance. Although some of the multifaceted properties of human skin may remain exclusive to the biological system, electronics opens a unique path that leads beyond imitation and could equip us with unfamiliar senses. Here we demonstrate giant magnetoresistive sensor foils with high sensitivity, unmatched flexibility and mechanical endurance. They are <2 μm thick, extremely flexible (bending radii <3 μm), lightweight (≈3 g m⁻²) and wearable as an imperceptible magneto-sensitive skin that enables proximity detection, navigation and touchless control. On elastomeric supports, they can be stretched uniaxially or biaxially, reaching strains of >270%, and endure over 1,000 cycles without fatigue. These ultrathin magnetic field sensors readily conform to ubiquitous objects, including human skin, and offer a new sense for soft robotics, safety and healthcare monitoring, consumer electronics and electronic skin devices. Birds and many other animals can sense the Earth's magnetic field, but human beings cannot. Here, Melzer et al. develop a type of artificial skin based on giant magnetoresistive sensor foils of micrometre thickness, which can be stretched up to >250% without sacrificing device performance.
The UI Design of the Picture Logo of the Terminal Human-Computer Interaction Interface
With the advancement of the information society and the development of China's economy, enterprises face ever higher requirements for digital, information-based and intelligent production, and industrial software is gradually becoming widespread in enterprise production. In the Internet era, however, while research on man-machine interface design is flourishing, the man-machine interface design of domestic industrial software has received little attention. The quality of industrial software interfaces lags seriously behind the times and runs contrary to high-technology, high-efficiency industrial execution concepts. On the one hand, this phenomenon affects staff working efficiency and user experience when operating the software, and reduces the ability of manufacturing enterprises to attract talent; on the other hand, at the same technical level, it leaves domestic industrial software products insufficiently attractive to customers and at a disadvantage in competition with foreign brand products.
Human Control Model Estimation in Physical Human–Machine Interaction: A Survey
The study of human–machine interaction as a single control system was one of the first research interests in the engineering field, with almost a century having passed since the first works appeared in this area. At the same time, it is a crucial aspect of the most recent technological developments in application fields such as collaborative robotics and artificial intelligence. Learning the processes and dynamics underlying human control strategies when interacting with controlled elements or objects of different natures has been the subject of research in neuroscience, aerospace, robotics, and artificial intelligence. The cross-domain nature of this field of study can make it difficult to find a guiding line that links motor control theory, modelling approaches in physiological control systems, and the identification of general human–machine control models in manipulative tasks. The discussed models have varying levels of complexity, from the first quasi-linear models in the frequency domain to the later optimal control models, and include detailed descriptions of physiological subsystems and biomechanics. The motivation behind this work is to provide a complete view of the linear models that can be easily handled in both the time domain and the frequency domain, using well-established methodology from classical linear systems and control theory.
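The "first quasi-linear model in the frequency domain" referenced in this abstract is usually associated with McRuer's crossover model. As an illustrative sketch only (the survey itself covers a range of formulations), the model states that near the crossover frequency the combined operator-plus-plant dynamics approximate an integrator with a time delay:

```latex
Y_p(j\omega)\,Y_c(j\omega) \approx \frac{\omega_c}{j\omega}\, e^{-j\omega \tau_e}
```

where \(Y_p\) is the human operator's describing function, \(Y_c\) the controlled element, \(\omega_c\) the crossover frequency, and \(\tau_e\) the operator's effective time delay; the key observation is that the operator adapts \(Y_p\) so that the open-loop product keeps this form largely independently of \(Y_c\).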
CNN-SVM-based Human-Computer Interaction Model for Automotive Systems in Complex Driving Environments
In complex driving environments, as task difficulty increases and task diversity and relevance change, drivers dealing with tasks experience perceptual mode conflict, heavy cognitive load or increased difficulty of operation, which affects the execution of primary and secondary tasks. Information expressed and transmitted by multimedia technology is real-time, and only in real time can it support interaction and information exchange with users. This paper discusses the design of automotive man-machine interaction based on multimedia information acquisition technology in complex driving environments. Based on users' situational awareness, it studies users' interaction needs and experiences in different driving situations and proposes a CNN-SVM (Convolutional Neural Network-Support Vector Machine) emotion perception model. After spectral features are automatically extracted by the CNN, a support vector machine (SVM) is used instead of the traditional softmax classifier to achieve accurate classification of multiple emotion classes. The experiments focus on identifying the core emotion categories Anger, Neutral, Joy, and Anxiety, and the model's generalization ability in driving scenarios is verified through cross-validation. In the spectrum segmentation experiment, the same network structure as CNN-Net was used, with an SVM in place of the softmax classifier: after each training run of CNN-Net, the test sample set was used to compute the input features of the softmax classifier, and these features were fed into the SVM to obtain the CNN-SVM classification result. For testing, 500 images were selected from the CityScapes dataset, with MIoU (mean intersection over union) as the metric. The experimental results show that the model improves segmentation accuracy for roads, sidewalks, buildings, etc. Compared with several current mainstream segmentation algorithms, the improvements for smaller target objects such as lights, signs, vegetation, and riders are relatively small, at 1%, 0.5%, and 0.9%, respectively. In driving scenarios, users' judgment of "safety" mainly depends on whether secondary tasks occupy the cognitive and interaction channels of the primary driving task. This article explores how avoiding these two factors can give users a sense of security and improve their interaction experience.
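The CNN-features-plus-SVM classification step described in this abstract can be illustrated with a minimal sketch. Everything below is hypothetical: the features are synthetic stand-ins for CNN penultimate-layer activations (the actual spectral features, network and data are not given), and the SVM is a bare-bones linear one trained with Pegasos-style subgradient steps rather than any particular library implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for CNN penultimate-layer features of two
# emotion classes (e.g. "neutral" vs "anger"); in the described
# pipeline these would be produced by the trained CNN.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 8)),
               rng.normal(+1.0, 0.5, (100, 8))])
y = np.hstack([-np.ones(100), np.ones(100)])   # SVM labels in {-1, +1}

# Minimal linear SVM (hinge loss, Pegasos-style subgradient steps),
# used in place of a softmax output layer.
lam, w = 0.01, np.zeros(8)
for t in range(1, 2001):
    i = int(rng.integers(len(y)))
    eta = 1.0 / (lam * t)
    margin = y[i] * (X[i] @ w)
    w *= (1.0 - eta * lam)            # shrinkage from the L2 regularizer
    if margin < 1.0:                  # hinge-loss margin violation
        w += eta * y[i] * X[i]

acc = float((np.sign(X @ w) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

On this deliberately separable toy data the linear SVM reaches near-perfect training accuracy; the point of the sketch is only the pipeline shape (fixed feature extractor, margin-based classifier on top), not the numbers.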
Predicting the Valence of a Scene from Observers’ Eye Movements
Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene, such as its valence. To determine the emotional category of images from eye movements, existing methods often learn a classifier using several features extracted from the eye movements. Although eye movement has been shown to be potentially useful for recognizing scene valence, the contribution of each feature is not well studied. To address this issue, we study the contribution of features extracted from eye movements to the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion: histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of the features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that 'saliency map', 'fixation histogram', 'histogram of fixation duration', and 'histogram of saccade slope' are the most contributing features. The selected features signify the influence of fixation information and the angular behavior of eye movements in the recognition of the valence of images.
The design of tourism product CAD three-dimensional modeling system using VR technology
In view of the high homogeneity of tourism products across the country, an attempt is made to design virtual-visit tourism products with a cultural experience background that reflect the "culture + tourism" characteristics of different scenic spots, so that tourists can deeply experience the local culture. Combined with computer-aided design (CAD), a virtual three-dimensional (3D) modeling system for scenic spots is designed, along with VR real-scene interactive tourism products suited to different scenic spots. 360° VR panoramic display technology is used for panoramic video shooting and display-system production of the Elephant Trunk Hill park scenery. A total of 157 images are collected, and the 720 Cloud panoramic interactive H5 tool is selected to produce a display system suitable for the 360° VR panoramic display of scenic spots. Meanwhile, based on single-view RGB-D images, the latest convolutional neural network (CNN) and point cloud processing algorithms are used to design an indoor 3D scene reconstruction algorithm based on semantic understanding. Experiments show that the pixel accuracy and mean intersection over union of the indoor scene layout segmentation network are 89.5% and 60.9%, respectively; that is, the network has high accuracy. The VR real-scene interactive tourism product can give tourists a more immersive sense of interaction and experience before, during and after the tour.
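Pixel accuracy and mean intersection over union (MIoU), the two segmentation metrics reported in this abstract, have standard definitions that can be sketched in a few lines. The label maps below are toy stand-ins, not the paper's data:

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float((pred == target).mean())

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4x4 label maps with 3 classes, standing in for real segmentations.
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 2, 2],
                   [2, 2, 2, 2]])
pred = target.copy()
pred[0, 2] = 0                         # one mislabelled pixel

print(pixel_accuracy(pred, target))    # 0.9375 (15 of 16 pixels correct)
print(round(mean_iou(pred, target, 3), 4))   # 0.85
```

The one flipped pixel hurts MIoU (0.85) more than pixel accuracy (0.9375) because the error is charged against both affected classes, which is why MIoU is the preferred metric for class-imbalanced scenes.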
A comprehensive evaluation of explainable Artificial Intelligence techniques in stroke diagnosis: A systematic review
Stroke presents a formidable global health threat, carrying significant risks and challenges. Timely intervention and improved outcomes hinge on the integration of Explainable Artificial Intelligence (XAI) into medical decision-making. XAI, an evolving field, enhances the transparency of conventional Artificial Intelligence (AI) models. This systematic review addresses key research questions: How is XAI applied in the context of stroke diagnosis? To what extent can XAI elucidate the outputs of machine learning models? Which systematic evaluation methodologies are employed, and which categories of explainable approaches (Model Explanation, Outcome Explanation, Model Inspection) are prevalent? We conducted this review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our search encompassed five databases: Google Scholar, PubMed, IEEE Xplore, ScienceDirect, and Scopus, spanning studies published between January 1988 and June 2023. Various combinations of search terms, including "stroke," "explainable," "interpretable," "machine learning," "artificial intelligence," and "XAI," were employed. This study identified 17 primary studies employing explainable machine learning techniques for stroke diagnosis. Among these studies, 94.1% incorporated XAI for model visualization, and 47.06% employed model inspection. It is noteworthy that none of the studies employed evaluation metrics such as D, R, F, or S to assess the performance of their XAI systems. Furthermore, none evaluated human confidence in utilizing XAI for stroke diagnosis. Explainable Artificial Intelligence serves as a vital tool in enhancing trust among both patients and healthcare providers in the diagnostic process. The effective implementation of systematic evaluation metrics is crucial for harnessing the potential of XAI in improving stroke diagnosis.
Measuring recognition of body changes over time: A human-computer interaction tool using dynamic morphing and body ownership illusion
Measuring body image is crucial at both the personal and the social level. Previous studies have attempted to measure body image quantitatively, but methods for measuring the recognition of body changes over time have not yet been established. The present study proposes a novel human-computer interaction technique using dynamic morphing and the body ownership illusion, and we conducted a user study to investigate how body ownership illusion and gender affect body change recognition. The results showed that a participant's body change recognition was weak when the body ownership illusion was strong. In addition, female participants were less sensitive than male participants. With our proposed technique, we demonstrated that we were able to quantitatively measure body change recognition, and our empirical data indicated that body change recognition varied depending on body ownership illusion and gender, suggesting that our methodology could be used not only in future body image studies but also in eating disorder treatments.