Catalogue Search | MBRL
Explore the vast range of titles available.
2,005 result(s) for "hand gestures"
A systematic review on hand gesture recognition techniques, challenges and applications
2019
With the development of today's technology, and because humans naturally use hand gestures in communication to clarify their intentions, hand gesture recognition is considered an important part of Human Computer Interaction (HCI): it gives computers the ability to capture and interpret hand gestures and then execute commands. The aim of this study is to perform a systematic literature review identifying the most prominent techniques, applications and challenges in hand gesture recognition.
To conduct this systematic review, we screened 560 papers retrieved from IEEE Xplore and published from 2016 to 2018; keywords such as "hand gesture recognition" and "hand gesture techniques" were used in the search process. To narrow the scope of the study, 465 papers were excluded; only the hand gesture recognition works most relevant to the research questions, and the best-organized papers, were studied.
The results of this paper can be summarized as follows: surface electromyography (sEMG) sensors with wearable hand gesture devices were the most commonly used acquisition tools in the works studied; the Artificial Neural Network (ANN) was the most frequently applied classifier; the most popular application was using hand gestures for sign language; the dominant environmental factor affecting accuracy was the background color; and, finally, overfitting on the datasets was frequently experienced.
The paper discusses the gesture acquisition methods, the feature extraction process, the classification of hand gestures, recently proposed applications, the challenges researchers face in the hand gesture recognition process, and the future of hand gesture recognition. We also introduce, for the first time, the most recent research in the field from 2016 to 2018.
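The survey's central finding above (sEMG feature vectors fed to an ANN classifier) can be illustrated with a minimal sketch. This is an assumption-laden toy: the feature count, class count, and random weights are placeholders, not parameters from any surveyed paper.

```python
import numpy as np

# Hypothetical sketch: a tiny feedforward ANN classifying sEMG feature
# vectors (e.g. mean absolute value, zero crossings) into gesture classes.
# Weights here are random placeholders, not trained parameters.
rng = np.random.default_rng(0)

N_FEATURES, N_HIDDEN, N_CLASSES = 8, 16, 5
W1 = rng.normal(size=(N_FEATURES, N_HIDDEN))
W2 = rng.normal(size=(N_HIDDEN, N_CLASSES))

def classify_semg(features: np.ndarray) -> int:
    """One forward pass: hidden ReLU layer, then argmax over class scores."""
    hidden = np.maximum(0.0, features @ W1)  # ReLU activation
    scores = hidden @ W2
    return int(np.argmax(scores))

gesture = classify_semg(rng.normal(size=N_FEATURES))
```

In practice the weights would be learned from labeled sEMG recordings; the sketch only shows the inference path the surveyed systems share.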
Journal Article
Motion Estimation and Hand Gesture Recognition-Based Human–UAV Interaction Approach in Real Time
2022
As an alternative to traditional remote controllers, research on vision-based hand gesture recognition is being actively conducted in the field of interaction between humans and unmanned aerial vehicles (UAVs). However, vision-based gesture systems face a challenging problem in recognizing the motion of dynamic gestures, because it is difficult to estimate the pose of multi-dimensional hand gestures in 2D images. This leads to complex algorithms, including tracking in addition to detection, to recognize dynamic gestures, but these are not suitable for human–UAV interaction (HUI) systems that require a safe design with high real-time performance. Therefore, in this paper, we propose a hybrid hand gesture system that combines an inertial measurement unit (IMU)-based motion capture system and a vision-based gesture system to increase real-time performance. First, IMU-based commands and vision-based commands are divided according to whether drone operation commands are input continuously. Second, IMU-based control commands are intuitively mapped so that the UAV moves in the same direction, utilizing the orientation estimated by a thumb-mounted micro-IMU, while vision-based control commands are mapped to the hand's appearance through real-time object detection. The proposed system is verified in a simulation environment through an efficiency evaluation against the dynamic gestures of the existing vision-based system, in addition to a usability comparison with a traditional joystick controller conducted with participants who had no manipulation experience. As a result, it proves to be a safer and more intuitive HUI design, with a 0.089 ms processing speed and an average lap time about 19 s shorter than the joystick controller's. In other words, it is viable as an alternative to existing HUIs.
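The intuitive mapping described above — thumb-mounted IMU orientation translated into a same-direction UAV command — can be sketched as follows. The deadzone threshold and command names are illustrative assumptions, not the authors' values.

```python
# Hypothetical sketch of an IMU-to-command mapping: the estimated
# orientation of a thumb-mounted IMU (roll/pitch in degrees) is mapped to
# a UAV motion command in the same direction. Thresholds are assumptions.
DEADZONE_DEG = 15.0  # ignore small tilts so the UAV hovers by default

def imu_to_command(roll_deg: float, pitch_deg: float) -> str:
    """Map thumb tilt to a motion command; the dominant axis wins."""
    if abs(pitch_deg) >= abs(roll_deg):
        if pitch_deg > DEADZONE_DEG:
            return "forward"
        if pitch_deg < -DEADZONE_DEG:
            return "backward"
    else:
        if roll_deg > DEADZONE_DEG:
            return "right"
        if roll_deg < -DEADZONE_DEG:
            return "left"
    return "hover"
```

A real controller would emit continuous velocity setpoints rather than discrete labels; the discrete mapping just makes the "same direction" idea concrete.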
Journal Article
An overview of hand gesture recognition based on computer vision
2024
Hand gesture recognition has emerged as one of the foremost sectors within pattern recognition and has gone through several developments. Numerous studies and research endeavors have explored methodologies grounded in computer vision within this domain. Despite extensive research, there is still a need for a more thorough evaluation of the efficiency of various methods in different environments, along with the challenges encountered when applying them. The focal point of this paper is the comparison of different research in the domain of vision-based hand gesture recognition. The objective is to find the most prominent methods by reviewing their efficiency. Concurrently, the paper presents potential solutions for challenges faced in different research. A comparative analysis, particularly centered on traditional methods and convolutional neural networks, covers approaches such as random forest, long short-term memory (LSTM), heatmap-based methods, and You Only Look Once (YOLO), considering their efficacy. Convolutional neural network-based algorithms performed best at recognizing gestures and gave effective solutions for the challenges faced by researchers. In essence, the findings of this review aim to contribute to future implementations and the discovery of more efficient approaches in the gesture recognition sector.
Journal Article
A Novel Bilateral Data Fusion Approach for EMG-Driven Deep Learning in Post-Stroke Paretic Gesture Recognition
by Anastasiev, Alexey; Zaboronok, Alexander; Nishiyama, Hiroyuki
in Aged; Analysis; Artificial intelligence
2025
We introduce a hybrid deep learning model for recognizing hand gestures from electromyography (EMG) signals in subacute stroke patients: the one-dimensional convolutional long short-term memory neural network (CNN-LSTM). The proposed network was trained, tested, and cross-validated on seven hand gesture movements, collected via EMG from 25 patients exhibiting clinical features of paresis. EMG data from these patients were collected twice post-stroke, at least one week apart, and divided into datasets A and B to assess performance over time while balancing subject-specific content and minimizing training bias. Dataset A had a median post-stroke time of 16.0 ± 8.6 days, while dataset B had a median of 19.2 ± 13.7 days. In classification tests based on the number of gesture classes (ranging from two to seven), the hybrid model achieved accuracies ranging from 85.66% to 82.27% in dataset A and from 88.36% to 81.69% in dataset B. To address the limitations of deep learning with small datasets, we developed a novel bilateral data fusion approach that incorporates EMG signals from the non-paretic limb during training. This approach significantly enhanced model performance across both datasets, as evidenced by improvements in sensitivity, specificity, accuracy, and F1-score metrics. The most substantial gains were observed in the three-gesture subset, where classification accuracy increased from 73.01% to 78.42% in dataset A, and from 77.95% to 85.69% in dataset B. In conclusion, although these results may be slightly lower than those of traditional supervised learning algorithms, the combination of bilateral data fusion and the absence of feature engineering offers a novel perspective for neurorehabilitation, where every data segment is critically significant.
Journal Article
Spatio-Temporal Transformer with Kolmogorov–Arnold Network for Skeleton-Based Hand Gesture Recognition
2025
Manually crafted features often suffer from being subjective, having an inadequate accuracy, or lacking in robustness in recognition. Meanwhile, existing deep learning methods often overlook the structural and dynamic characteristics of the human hand, failing to fully explore the contextual information of joints in both the spatial and temporal domains. To effectively capture dependencies between the hand joints that are not adjacent but may have potential connections, it is essential to learn long-term relationships. This study proposes a skeleton-based hand gesture recognition framework, the ST-KT, a spatio-temporal graph convolution network, and a transformer with the Kolmogorov–Arnold Network (KAN) model. It incorporates spatio-temporal graph convolution network (ST-GCN) modules and a spatio-temporal transformer module with KAN (KAN–Transformer). ST-GCN modules, which include a spatial graph convolution network (SGCN) and a temporal convolution network (TCN), extract primary features from skeleton sequences by leveraging the strength of graph convolutional networks in the spatio-temporal domain. A spatio-temporal position embedding method integrates node features, enriching representations by including node identities and temporal information. The transformer layer includes a spatial KAN–Transformer (S-KT) and a temporal KAN–Transformer (T-KT), which further extract joint features by learning edge weights and node embeddings, providing richer feature representations and the capability for nonlinear modeling. We evaluated the performance of our method on two challenging skeleton-based dynamic gesture datasets: our method achieved an accuracy of 97.5% on the SHREC’17 track dataset and 94.3% on the DHG-14/28 dataset. These results demonstrate that our proposed method, ST-KT, effectively captures dynamic skeleton changes and complex joint relationships.
Journal Article
Survey on vision-based dynamic hand gesture recognition
2024
Hand gestures are very important for communicating with one another. Using hand gestures in technology draws on a very common way humans communicate with their natural environment. Recognizing the hand and estimating its pose fall under the area of hand gesture analysis. Finding the gesturing hand is more difficult than finding other parts of the human body because the hand is smaller. The hand also poses greater complexity and more challenges due to cultural and individual differences between users and gestures invented ad hoc. These complications and divergences deeply affect the recognition rate and accuracy. This paper summarizes hand gesture techniques, recognition methods, merits and demerits, various applications, available datasets, achieved accuracy rates, classifiers, algorithms, and gesture types. It also scrutinizes the performance of traditional and deep learning methods on dynamic hand gesture recognition.
Journal Article
Ethnomathematics in Balinese Traditional Dance: A Study of Angles in Hand Gestures
by Apsari, R A; Nurmawanti, I; Gunawan, G
in Angles (geometry); Angles in Hand Gestures; Balinese Traditional Dance
2021
This study aimed to explore the ethnomathematics in the hand gestures of Balinese traditional dances. The scope of the ethnomathematics discussed was limited to geometry, specifically angles. The object of the study was Pendet Dance, one of the most popular traditional dances from Bali, usually performed to welcome guests at formal and informal events. The research method was a case study in which we deeply analysed the hand gestures in Pendet Dance. The data were gathered from observation of the dance and interviews with experts in Balinese dance, and were analysed qualitatively using a descriptive method. From the analysis, it was found that the hand gestures of the dancers in Pendet Dance form three types of angles: acute angles (0° < a < 90°), right angles (a = 90°) and obtuse angles (90° < a < 180°). The angles can be seen in a number of hand gesture patterns such as the ngumbang, agem, ulap-ulap and ngelung movements.
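The angle classification described above is easy to make concrete: given three joint coordinates, compute the angle at the middle joint and sort it into the acute/right/obtuse categories. The joint names and coordinates below are illustrative, not taken from the study.

```python
import math

# Sketch of the angle analysis: given three 2D hand-joint coordinates
# (e.g. wrist, knuckle, fingertip — illustrative names), compute the angle
# at the middle joint and classify it as acute, right, or obtuse.
def joint_angle_deg(a, b, c):
    """Angle at vertex b formed by rays b->a and b->c, in degrees."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def classify_angle(deg: float, tol: float = 1e-6) -> str:
    """Categories from the study: 0° < a < 90°, a = 90°, 90° < a < 180°."""
    if abs(deg - 90.0) <= tol:
        return "right"
    return "acute" if deg < 90.0 else "obtuse"
```

For example, an arm bent so that forearm and upper arm are perpendicular yields a 90° joint angle and classifies as "right".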
Journal Article
Implementing a Hand Gesture Recognition System Based on Range-Doppler Map
2022
There have been several studies of hand gesture recognition for human–machine interfaces. In early work, most solutions were vision-based and usually had privacy problems that made them unusable in some scenarios. To address the privacy issues, more and more research on non-vision-based hand gesture recognition techniques has been proposed. This paper proposes a dynamic hand gesture system based on 60 GHz FMCW radar that can be used for contactless device control. We receive the radar signals of hand gestures and transform them into human-understandable domains such as range, velocity, and angle. With these signatures, we can customize our system to different scenarios. We propose an end-to-end trained deep learning model (a neural network with long short-term memory) that extracts features from the transformed radar signals and classifies them into hand gesture labels. In our training data collection effort, a camera is used only to support labeling the hand gesture data. The accuracy of our model can reach 98%.
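The range-Doppler transform at the heart of the system above is a 2D FFT over a frame of de-chirped FMCW beat signals. This sketch uses a synthetic single-target frame; the array sizes and target bins are assumptions for demonstration, not the paper's radar parameters.

```python
import numpy as np

# Illustrative sketch: build a range-Doppler map from an FMCW radar frame
# (chirps x samples) with a 2D FFT. The synthetic target is an assumption.
N_CHIRPS, N_SAMPLES = 64, 128
t = np.arange(N_SAMPLES)
c = np.arange(N_CHIRPS)[:, None]

# Synthetic beat signal: one target at range bin 20 with Doppler bin 5.
frame = np.exp(2j * np.pi * (20 * t / N_SAMPLES + 5 * c / N_CHIRPS))

# Range FFT along fast time (samples), Doppler FFT along slow time (chirps).
range_fft = np.fft.fft(frame, axis=1)
rd_map = np.abs(np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0))

# The peak of the map reveals the target's range and Doppler bins.
doppler_bin, range_bin = np.unravel_index(np.argmax(rd_map), rd_map.shape)
doppler_bin -= N_CHIRPS // 2  # undo the fftshift centering
```

A sequence of such maps over time is what the paper's neural network consumes to classify dynamic gestures.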
Journal Article
Real-Time Monocular Skeleton-Based Hand Gesture Recognition Using 3D-Jointsformer
2023
Automatic hand gesture recognition in video sequences has widespread applications, ranging from home automation to sign language interpretation and clinical operations. The primary challenge lies in achieving real-time recognition while managing temporal dependencies that can impact performance. Existing methods employ 3D convolutional or Transformer-based architectures with hand skeleton estimation, but both have limitations. To address these challenges, a hybrid approach that combines 3D Convolutional Neural Networks (3D-CNNs) and Transformers is proposed. The method involves using a 3D-CNN to compute high-level semantic skeleton embeddings, capturing local spatial and temporal characteristics of hand gestures. A Transformer network with a self-attention mechanism is then employed to efficiently capture long-range temporal dependencies in the skeleton sequence. Evaluation of the Briareo and Multimodal Hand Gesture datasets resulted in accuracy scores of 95.49% and 97.25%, respectively. Notably, this approach achieves real-time performance using a standard CPU, distinguishing it from methods that require specialized GPUs. The hybrid approach’s real-time efficiency and high accuracy demonstrate its superiority over existing state-of-the-art methods. In summary, the hybrid 3D-CNN and Transformer approach effectively addresses real-time recognition challenges and efficient handling of temporal dependencies, outperforming existing methods in both accuracy and speed.
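The Transformer stage described above relies on scaled dot-product self-attention, which lets every frame of the skeleton-embedding sequence attend to every other frame. A minimal NumPy sketch, with illustrative dimensions and random projections standing in for learned weights:

```python
import numpy as np

# Minimal sketch of scaled dot-product self-attention over a sequence of
# skeleton embeddings (long-range temporal context). Dimensions and the
# random projection matrices are illustrative assumptions, not trained weights.
rng = np.random.default_rng(42)
T, D = 10, 16  # frames in the sequence, embedding size

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """x: (T, D) skeleton embeddings -> (T, D) context-mixed embeddings."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (T, T) frame-to-frame weights
    return attn @ v

x = rng.normal(size=(T, D))
out = self_attention(x, *(rng.normal(size=(D, D)) for _ in range(3)))
```

Unlike a 3D convolution's fixed temporal window, the (T, T) attention matrix gives each frame a learned, data-dependent view of the whole sequence.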
Journal Article
Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions
by Kim, Joongrock; Kim, Kwangtaek; Lee, Sangyoun
in 3D gesture control; 3D hand gesture tracking; Accuracy
2015
Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user’s hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user’s gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.
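The DTW (dynamic time warping) recognition step mentioned above compares a tracked gesture trajectory against a stored template while tolerating differences in execution speed. A classic textbook implementation, with illustrative trajectories (not the paper's data):

```python
import numpy as np

# Sketch of DTW matching for gesture recognition: align a tracked
# trajectory with a template, allowing the two to differ in speed.
def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) DTW with Euclidean point distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # stretch a
                                 cost[i, j - 1],      # stretch b
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# A gesture replayed at half speed still matches its template exactly.
template = np.array([[0, 0], [1, 1], [2, 2], [3, 3]], dtype=float)
slow = np.array([[0, 0], [0, 0], [1, 1], [1, 1], [2, 2], [2, 2], [3, 3]], dtype=float)
```

A recognizer would compute this distance against each stored template and pick the nearest one, which is why DTW pairs naturally with the tracking output described above.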
Journal Article