Catalogue Search | MBRL
Explore the vast range of titles available.
27,743 result(s) for "Human-computer interface"
WiGeR: WiFi-Based Gesture Recognition System
2016
Recently, researchers around the world have been striving to develop and modernize human–computer interaction systems by exploiting advances in modern communication systems. The priority in this field involves exploiting radio signals so that human–computer interaction requires neither special devices nor vision-based technology. In this context, hand gesture recognition is one of the most important issues in human–computer interfaces. In this paper, we present a novel device-free WiFi-based gesture recognition system (WiGeR) that leverages the fluctuations in the channel state information (CSI) of WiFi signals caused by hand motions. We extract CSI from any common WiFi router and then filter out the noise to obtain the CSI fluctuation trends generated by hand motions. We design a novel and agile segmentation and windowing algorithm based on wavelet analysis and short-time energy to reveal the specific pattern associated with each hand gesture and detect the duration of the hand motion. Furthermore, we design a fast dynamic time warping algorithm to classify the hand gestures proposed for our system. We implement and test our system through experiments involving various scenarios. The results show that WiGeR can classify gestures with high accuracy, even in scenarios where the signal passes through multiple walls.
Journal Article
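The dynamic-time-warping matching behind systems like WiGeR can be sketched with a plain DTW distance plus nearest-template classification. The paper's actual algorithm is an accelerated DTW variant; the function names and template values below are illustrative, not taken from the paper.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    e.g. denoised CSI amplitude traces of two hand motions."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

def classify(sample, templates):
    """Return the label of the gesture template with the smallest DTW cost."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))
```

Nearest-template DTW tolerates speed variation between executions of the same gesture, which is why it suits hand motions of varying duration.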
Human-robot interaction strategies for walker-assisted locomotion
This book presents the development of a new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation. The aim is to achieve a closer interaction between the robotic device and the individual, empowering the rehabilitation potential of such devices in clinical applications. Trends and opportunities for future advances in the field of assistive locomotion via the development of hybrid solutions based on the combination of smart walkers and biomechatronic exoskeletons are also discussed.
Hand Gestures Recognition Using Radar Sensors for Human-Computer-Interaction: A Review
2021
Human–Computer Interfaces (HCI) deal with the study of the interface between humans and computers. The use of radar and other RF sensors to develop HCI based on Hand Gesture Recognition (HGR) has gained increasing attention over the past decade. Today, devices have built-in radars for recognizing and categorizing hand movements. In this article, we present the first review related to HGR using radar sensors. We review the available techniques for multi-domain hand gesture data representation for different signal-processing and deep-learning-based HGR algorithms. We classify the radars used for HGR as pulsed and continuous-wave radars, and both the hardware and the algorithmic details of each category are presented in detail. Quantitative and qualitative analyses of ongoing trends related to radar-based HCI and of available radar hardware and algorithms are also presented. Finally, developed devices and applications based on gesture recognition through radar are discussed, along with the limitations, future aspects, and research directions of this field.
Journal Article
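For the continuous-wave case the review covers, the frequency shift a moving hand imprints on the radar return follows the standard two-way Doppler relation; a minimal sketch (names are illustrative):

```python
C = 3.0e8  # speed of light in m/s

def doppler_shift(v_mps, carrier_hz):
    """Two-way Doppler shift (Hz) for a target moving radially at v_mps
    toward a monostatic radar with the given carrier frequency."""
    return 2.0 * v_mps * carrier_hz / C
```

At a 60 GHz carrier, a hand moving at 1 m/s shifts the return by about 400 Hz; gesture classifiers pick such micro-Doppler signatures out of the spectrogram.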
CNN based feature extraction and classification for sign language
by
Jain, Rahul
,
Barbhuiya, Abul Abbas
,
Karsh, Ram Kumar
in
Accuracy
,
Artificial neural networks
,
Character recognition
2021
Hand gestures have been one of the most prominent means of communication since the beginning of the human era. Hand gesture recognition makes human-computer interaction (HCI) more convenient and flexible. It is therefore important to identify each character correctly for smooth and error-free HCI. A literature survey reveals that most existing hand gesture recognition (HGR) systems have considered only a few simple discriminating gestures when reporting recognition performance. This paper applies deep-learning-based convolutional neural networks (CNNs) for robust modeling of static signs in the context of sign language recognition. In this work, CNNs are employed for HGR where both alphabets and numerals of ASL are considered simultaneously. The pros and cons of CNNs used for HGR are also highlighted. The CNN architecture is based on modified AlexNet and modified VGG16 models for classification. Modified pre-trained AlexNet and modified pre-trained VGG16 based architectures are used for feature extraction, followed by a multiclass support vector machine (SVM) classifier. The results are evaluated for different layer features to find the best recognition performance. To examine the accuracy of the HGR schemes, both the leave-one-subject-out and a random 70–30 form of cross-validation approach were adopted. This work also highlights the recognition accuracy of each character and the similarities between identical gestures. The experiments are performed on a simple CPU system instead of high-end GPU systems to demonstrate the cost-effectiveness of this work. The proposed system achieves a recognition accuracy of 99.82%, which is better than some state-of-the-art methods.
Journal Article
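The leave-one-subject-out protocol mentioned in the abstract can be sketched as a generator of train/test splits; the tuple layout here is an assumption for illustration, and the classifier itself (CNN features plus SVM in the paper) is left abstract.

```python
def loso_splits(samples):
    """Leave-one-subject-out cross-validation splits.

    samples: list of (subject_id, features, label) tuples.
    Yields (held_out_subject, train_samples, test_samples) per subject,
    so the model is never tested on a subject it has seen in training.
    """
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test
```

LOSO is the stricter of the two protocols in the paper: unlike a random 70–30 split, it measures how well the model generalizes to an unseen signer.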
A novel muscle-computer interface for hand gesture recognition using depth vision
by
Hu, Yingbai
,
Zhang, Longbin
,
Ferrigno, Giancarlo
in
Accuracy
,
Algorithms
,
Artificial Intelligence
2020
The muscle-computer interface (muCI), one of the most widespread human-computer interfaces, has been widely adopted for the identification of hand gestures using the electrical activity of muscles. Although multi-modal theory and machine learning algorithms have made enormous progress in muCI over the last decades, collecting and labeling large data sets creates a high workload and leads to time-consuming implementations. In this paper, a novel muCI was developed that integrates the advantages of EMG signals and depth vision, using depth vision to automatically label clusters of collected EMG data. A three-layer hierarchical k-medoids approach was designed to extract and label the clustering features of ten hand gestures. A multi-class linear discriminant analysis algorithm was applied to build the hand gesture classifier. The results showed that the proposed algorithm had high accuracy and that the muCI performed well, automatically labeling the hand gestures in all experiments. The proposed muCI can be utilized for hand gesture recognition without labeling the data in advance and has potential for robot manipulation and virtual reality applications.
Journal Article
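A single k-medoids layer, the building block of the three-layer hierarchical clustering described above, can be sketched as follows. This is a plain Voronoi-iteration variant; the initialization, distance function, and names are illustrative assumptions, not the paper's implementation.

```python
def assign(points, medoids, dist):
    """Index of the nearest medoid for each point."""
    return [min(range(len(medoids)), key=lambda j: dist(p, medoids[j]))
            for p in points]

def k_medoids(points, k, dist, iters=20):
    """Alternate assignment and recentering: each cluster's medoid becomes
    the member minimizing total in-cluster distance, until convergence."""
    medoids = points[:k]  # naive init: first k points
    for _ in range(iters):
        labels = assign(points, medoids, dist)
        new = []
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            new.append(min(members, key=lambda c: sum(dist(c, q) for q in members))
                       if members else medoids[j])
        if new == medoids:
            break
        medoids = new
    return medoids
```

Unlike k-means, the medoid is always a real data point, which matters when averaging EMG feature vectors would produce a physiologically meaningless centroid.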
Hand posture and gesture recognition techniques for virtual reality applications: a survey
2017
Gesture recognition is a topic in computer science and language technology whose goal is to interpret human gestures through mathematical algorithms. The hand gesture is a strategy for nonverbal communication, as it expresses more freely than other body parts. Hand gesture recognition is therefore of great significance in designing an efficient human-computer interaction framework, using gestures as a natural interface suited to the situation of movements. More broadly, the identification and recognition of posture, gait, proxemics, and human behavior are also subjects of gesture recognition, which aims to understand human nonverbal communication and thus build a richer bridge between machines and humans than primitive text user interfaces or even graphical user interfaces, which still limit the majority of input to electronic devices. In this paper, a study of various gesture recognition methodologies is given, with specific emphasis on hand gestures. A survey of hand posture and gesture recognition is presented, with a detailed comparative analysis of the hidden Markov model approach against other classifier techniques. Difficulties and future research directions are also examined.
Journal Article
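The hidden Markov model approach that the survey compares against other classifiers scores a gesture's observation sequence with the forward algorithm; a minimal discrete-HMM sketch with toy, illustrative parameters:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: total probability of an observation sequence
    under a discrete hidden Markov model. In gesture recognition, one
    HMM is trained per gesture and the highest-scoring model wins."""
    # Initialize with start distribution times first emission.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    # Propagate: sum over predecessor states, weight by transition, emit.
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[t] * trans_p[t][s] for t in states)
                 for s in states}
    return sum(alpha.values())
```

Because the forward pass marginalizes over all hidden state paths, HMMs handle gestures whose sub-movements vary in timing, which is the property the survey's comparison hinges on.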
Factors influencing students' intention to adopt and use ChatGPT in higher education: A study in the Vietnamese context
by
Maheshwari, Greeni
in
Chatbots
,
Computer Appl. in Social and Behavioral Sciences
,
Computer Science
2024
ChatGPT, an extensively recognised language model created by OpenAI, has gained significant prominence across various industries, particularly in education. This study aimed to investigate the factors that influence students' intentions to adopt and utilise ChatGPT for their academic studies. The study used a Structural Equation Model (SEM) for analysing the data gathered from 108 participants, comprising both undergraduate and postgraduate students enrolled in public and private universities in Vietnam. The findings indicated that students' inclination to adopt ChatGPT (referred to as adoption intention or AI) was influenced by their perception of its user-friendliness (PEU). However, the perceived usefulness (PU) of ChatGPT did not have a direct impact on students' adoption intention; instead, it had an indirect influence through personalisation (with a positive effect) and interactivity (with a negative effect). Importantly, there was no significant indirect effect of PU on AI mediated by perceived trust and perceived intelligence. This study is one of the initial empirical inquiries into ChatGPT adoption within an Asian context, providing valuable insights in this emerging area of research. As the use of ChatGPT by students becomes increasingly inevitable, educational institutions should carefully consider integrating it into the assessment process. It is crucial to design assessments that encourage responsible usage of ChatGPT, preserving students' critical thinking abilities and creativity in their assessment writing. Moving forward, educators will play a pivotal role by offering clear guidelines and instructions that set out the appropriate and ethical use of artificial intelligence tools in the assessments.
Journal Article
Immersive Virtual Reality in K-12 and Higher Education: A systematic review of the last decade scientific literature
2021
There has been an increasing interest in applying immersive virtual reality (VR) applications to support various instructional design methods and outcomes not only in K-12 (Primary and Secondary) but also in higher education (HE) settings. However, there is a scarcity of studies on the potentials and challenges of VR-supported instructional design strategies and/or techniques that can influence teaching and learning. This systematic review presents a variety of studies that provide qualitative and/or quantitative data to investigate current practices with VR support, focusing on students’ outcomes and performance, along with the benefits and challenges of this technology, concerning the analysis of visual features and design elements with mobile and desktop computing devices across different learning subjects. During the selection and screening process, forty-six (n = 46) articles published from mid-2009 until mid-2020 were included for detailed analysis and synthesis, of which twenty-one were in K-12 and twenty-five in HE. The majority of studies focused on describing and evaluating the appropriateness or effectiveness of the applied instructional design processes using various VR applications, disseminating findings on user experience, usability issues, students’ outcomes, and/or learning performance. This study contributes by reviewing how instructional design strategies and techniques can potentially benefit students’ learning performance using a wide range of VR applications. It also proposes recommendations to guide effective instructional design in several teaching and learning contexts and to outline a more accurate and up-to-date picture of the current state of the literature.
Journal Article
Development of a Low-Cost, Modular Muscle–Computer Interface for At-Home Telerehabilitation for Chronic Stroke
by
Phanord, Coralie
,
Marin-Pardo, Octavio
,
Laine, Christopher M.
in
Biofeedback
,
Computers
,
Digitization
2021
Stroke is a leading cause of long-term disability in the United States. Recent studies have shown that high doses of repeated task-specific practice can be effective at improving upper-limb function at the chronic stage. Providing at-home telerehabilitation services with therapist supervision may allow higher dose interventions targeted to this population. Additionally, muscle biofeedback to train patients to avoid unwanted simultaneous activation of antagonist muscles (co-contractions) may be incorporated into telerehabilitation technologies to improve motor control. Here, we present the development and feasibility of a low-cost, portable, telerehabilitation biofeedback system called Tele-REINVENT. We describe our modular electromyography acquisition, processing, and feedback algorithms to train differentiated muscle control during at-home therapist-guided sessions. Additionally, we evaluated the performance of low-cost sensors for our training task with two healthy individuals. Finally, we present the results of a case study with a stroke survivor who used the system for 40 sessions over 10 weeks of training. In line with our previous research, our results suggest that using low-cost sensors provides similar results to those using research-grade sensors for low forces during an isometric task. Our preliminary case study data with one patient with stroke also suggest that our system is feasible, safe, and enjoyable to use during 10 weeks of biofeedback training, and that improvements in differentiated muscle activity during volitional movement attempt may be induced during a 10-week period. Our data provide support for using low-cost technology for individuated muscle training to reduce unintended coactivation during supervised and unsupervised home-based telerehabilitation for clinical populations, and suggest this approach is safe and feasible. Future work with larger study populations may expand on the development of meaningful and personalized chronic stroke rehabilitation.
Journal Article
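The rectify-and-smooth envelope extraction that underlies EMG biofeedback systems like the one above can be sketched in a few lines, together with a simple co-contraction ratio. The window size, names, and the ratio definition are illustrative assumptions, not taken from the Tele-REINVENT paper.

```python
def emg_envelope(signal, win=5):
    """Full-wave rectification followed by a causal moving average,
    yielding a smooth amplitude envelope of raw EMG samples."""
    rect = [abs(x) for x in signal]
    env = []
    for i in range(len(rect)):
        lo = max(0, i - win + 1)
        env.append(sum(rect[lo:i + 1]) / (i + 1 - lo))
    return env

def cocontraction_ratio(agonist_env, antagonist_env):
    """Per-sample fraction of antagonist activity relative to the total;
    values near 0 indicate well-differentiated (agonist-only) activation,
    the behavior the biofeedback training rewards."""
    return [b / (a + b) if (a + b) else 0.0
            for a, b in zip(agonist_env, antagonist_env)]
```

Feeding such a ratio back to the patient in real time is one plausible way a system can make unwanted co-contractions visible during training.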