2,909 result(s) for "gesture recognition"
Real-Time Hand Gesture Recognition Using Fine-Tuned Convolutional Neural Network
Hand gesture recognition is one of the most effective modes of interaction between humans and computers, being highly flexible and user-friendly. A real-time hand gesture recognition system should aim to provide a user-independent interface with high recognition performance. Convolutional neural networks (CNNs) now show high recognition rates in image classification problems. Because large sets of labeled static hand gesture images are unavailable, training deep CNNs such as AlexNet, VGG-16 and ResNet from scratch is challenging. Therefore, inspired by CNN performance, an end-to-end fine-tuning method for a pre-trained CNN model with a score-level fusion technique is proposed here to recognize hand gestures from a dataset with a small number of gesture images. The effectiveness of the proposed technique is evaluated using leave-one-subject-out cross-validation (LOO CV) and regular CV tests on two benchmark datasets. A real-time American Sign Language (ASL) recognition system is developed and tested using the proposed technique.
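The score-level fusion idea mentioned in this abstract can be sketched as follows. The abstract does not specify the fusion rule, so simple averaging of per-model softmax scores is assumed here, with illustrative score values:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw class scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_scores(per_model_scores):
    """Average the softmax score vectors produced by several fine-tuned CNNs
    and return the index of the winning gesture class (assumed fusion rule)."""
    probs = [softmax(s) for s in per_model_scores]
    n_classes = len(probs[0])
    fused = [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]
    return fused.index(max(fused))

# Two hypothetical models disagree on their top class; fusion picks the
# class with the best averaged probability.
prediction = fuse_scores([[2.0, 1.0, 0.1], [0.5, 3.0, 0.2]])
```

With these scores, the second model's confidence in class 1 outweighs the first model's preference for class 0, so the fused prediction is class 1.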
Real-Time Human Detection and Gesture Recognition for On-Board UAV Rescue
Unmanned aerial vehicles (UAVs) play an important role in numerous technical and scientific fields, especially in wilderness rescue. This paper presents work on real-time UAV-based human detection and recognition of body and hand rescue gestures. We use body-feature-based solutions, such as YOLOv3-tiny for human detection, to establish biometric communication. When the presence of a person is detected, the system enters the gesture recognition phase, where the user and the drone can communicate briefly and effectively, avoiding the drawbacks of speech communication. A dataset of ten body rescue gestures (Kick, Punch, Squat, Stand, Attention, Cancel, Walk, Sit, Direction, and PhoneCall) was created with a UAV on-board camera. The two most important gestures are the novel dynamic Attention and Cancel, which represent the set and reset functions, respectively. When the body rescue gesture is recognized as Attention, the drone gradually approaches the user to capture higher-resolution images for hand gesture recognition. Using deep learning, the system achieves 99.80% accuracy on the body gesture test set and 94.71% accuracy on the hand gesture test set. Experiments with real-time UAV cameras confirm that our solution can achieve the intended UAV rescue purpose.
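The Attention/Cancel set and reset behaviour described above amounts to a small two-phase state machine. The following sketch illustrates that control flow; the state names and function shape are assumptions for illustration, not taken from the paper:

```python
# Two interaction phases: long-range body-gesture recognition, and
# close-range hand-gesture recognition after the drone approaches the user.
BODY_PHASE, HAND_PHASE = "body", "hand"

def step(state, body_gesture=None):
    """Advance the interaction state machine by one recognized body gesture.

    'Attention' switches from body-gesture mode to close-range hand-gesture
    mode (the set function); 'Cancel' resets back to body mode (the reset
    function). All other gestures leave the phase unchanged.
    """
    if state == BODY_PHASE and body_gesture == "Attention":
        return HAND_PHASE      # approach user, start hand recognition
    if state == HAND_PHASE and body_gesture == "Cancel":
        return BODY_PHASE      # reset to long-range body recognition
    return state

state = BODY_PHASE
state = step(state, "Walk")        # ordinary gesture: still body phase
state = step(state, "Attention")   # set: switch to hand phase
```

A subsequent `step(state, "Cancel")` would return the machine to the body phase, matching the set/reset pairing the abstract describes.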
A systematic review on hand gesture recognition techniques, challenges and applications
With the development of today's technology, and as humans naturally use hand gestures in communication to clarify their intentions, hand gesture recognition is considered an important part of Human-Computer Interaction (HCI), giving computers the ability to capture and interpret hand gestures and execute commands accordingly. The aim of this study is to perform a systematic literature review identifying the most prominent techniques, applications and challenges in hand gesture recognition. To conduct this systematic review, we screened 560 papers retrieved from IEEE Xplore published from 2016 to 2018; keywords such as "hand gesture recognition" and "hand gesture techniques" were used in the search process. To focus the scope of the study, 465 papers were excluded; only the hand gesture recognition works most relevant to the research questions and the best-organized papers were studied. The results can be summarized as follows: surface electromyography (sEMG) sensors with wearable hand gesture devices were the most commonly used acquisition tool in the works studied; the Artificial Neural Network (ANN) was the most frequently applied classifier; the most popular application was hand gestures for sign language; the dominant environmental factor affecting accuracy was background color; and the problem of overfitting in the datasets was frequently encountered. The paper discusses gesture acquisition methods, the feature extraction process, the classification of hand gestures, recently proposed applications, the challenges facing researchers in the hand gesture recognition process, and the future of hand gesture recognition. We also introduce, for the first time, the most recent research in the field from 2016 to 2018.
Computer vision-based hand gesture recognition for human-robot interaction: a review
As robots have become more pervasive in our daily life, natural human-robot interaction (HRI) has had a positive impact on the development of robotics. Thus, there has been growing interest in the development of vision-based hand gesture recognition for HRI to bridge human-robot barriers. The aim is for interaction with robots to be as natural as that between individuals. Accordingly, incorporating hand gestures in HRI is a significant research area. Hand gestures can provide natural, intuitive, and creative methods for communicating with robots. This paper provides an analysis of hand gesture recognition using both monocular cameras and RGB-D cameras for this purpose. Specifically, the main process of visual gesture recognition includes data acquisition, hand gesture detection and segmentation, feature extraction and gesture classification, which are discussed in this paper. Experimental evaluations are also reviewed. Furthermore, algorithms of hand gesture recognition for human-robot interaction are examined in this study. In addition, the advances required for improvement in the present hand gesture recognition systems, which can be applied for effective and efficient human-robot interaction, are discussed.
Hand Gestures Recognition Using Radar Sensors for Human-Computer-Interaction: A Review
Human–Computer Interfaces (HCI) deals with the study of interface between humans and computers. The use of radar and other RF sensors to develop HCI based on Hand Gesture Recognition (HGR) has gained increasing attention over the past decade. Today, devices have built-in radars for recognizing and categorizing hand movements. In this article, we present the first ever review related to HGR using radar sensors. We review the available techniques for multi-domain hand gestures data representation for different signal processing and deep-learning-based HGR algorithms. We classify the radars used for HGR as pulsed and continuous-wave radars, and both the hardware and the algorithmic details of each category is presented in detail. Quantitative and qualitative analysis of ongoing trends related to radar-based HCI, and available radar hardware and algorithms is also presented. At the end, developed devices and applications based on gesture-recognition through radar are discussed. Limitations, future aspects and research directions related to this field are also discussed.
Hand gesture recognition based on convolution neural network
Feature extraction for hand gesture recognition is complicated by factors such as variations in lighting and background. In this paper, a convolutional neural network is applied to gesture recognition; its characteristics are exploited to avoid a separate feature extraction process, to reduce the number of parameters that need to be trained, and ultimately to achieve unsupervised learning. The error back-propagation algorithm is incorporated into the network to adjust its thresholds and weights and reduce the model's error. In the classifier, a support vector machine is added to optimize the classification function of the convolutional neural network and improve the validity and robustness of the whole model.
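The back-propagation update this abstract refers to reduces model error by moving each weight against its error gradient. A minimal sketch of one such gradient-descent step, with illustrative weight and gradient values, is:

```python
def sgd_update(weights, gradients, lr=0.1):
    """One plain gradient-descent step: w <- w - lr * dE/dw.

    In back-propagation, `gradients` would come from propagating the
    output error backwards through the network; here they are given
    directly for illustration.
    """
    return [w - lr * g for w, g in zip(weights, gradients)]

w = [0.5, -0.2, 0.0]
w = sgd_update(w, [1.0, -1.0, 0.5])   # weights move against the gradient
```

Each weight shifts by minus the learning rate times its gradient, so repeated steps drive the model's error downward.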
Multistage Spatial Attention-Based Neural Network for Hand Gesture Recognition
The scope of human-computer interaction (HCI) has changed in recent years as people interact through an increasing variety of ergonomic devices. Many researchers have worked to develop hand gesture recognition systems with kinetic sensor-based datasets, but their accuracy has not been satisfactory. In our work, we propose a multistage spatial attention-based neural network for hand gesture recognition to overcome these challenges. The proposed model comprises three stages, each building on a CNN. In the first stage, we apply a feature extractor and a spatial attention module using self-attention on the original dataset, then multiply the feature vector with the attention map to highlight effective features. These features are concatenated with the original dataset to obtain the modality feature embedding. In the same way, we generate a feature vector and attention map in the second stage with the feature extraction architecture and self-attention technique. After multiplying the attention map and features, we produce the final feature, which is fed into the third stage, a classification module that predicts the label of the corresponding hand gesture. Our model achieved 99.67%, 99.75%, and 99.46% accuracy on the senz3D, Kinematic, and NTU datasets, respectively.
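The core attention step described above, multiplying the feature vector by an attention map to highlight effective features, can be sketched as a softmax re-weighting; the values are illustrative, not the paper's:

```python
import math

def attention_reweight(features, attention_logits):
    """Multiply features element-wise by a softmax-normalized attention map.

    Positions with higher attention logits keep more of their feature value;
    the map sums to 1, so it acts as a soft spatial selection.
    """
    m = max(attention_logits)
    exps = [math.exp(a - m) for a in attention_logits]
    total = sum(exps)
    attn = [e / total for e in exps]          # attention map, sums to 1
    return [f * a for f, a in zip(features, attn)]

# Uniform logits give each of the three positions weight 1/3.
weighted = attention_reweight([1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
```

Non-uniform logits would concentrate the weight on a few positions, which is what "highlighting effective features" amounts to here.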
An overview of hand gesture recognition based on computer vision
Hand gesture recognition has emerged as one of the foremost areas of development within pattern recognition. Numerous studies have explored methodologies grounded in computer vision within this domain. Despite extensive research, a more thorough evaluation is still needed of the efficiency of various methods in different environments, along with the challenges encountered when applying them. The focal point of this paper is a comparison of different research in the domain of vision-based hand gesture recognition, with the objective of identifying the most prominent methods by reviewing their efficiency. Concurrently, the paper presents potential solutions to challenges faced in different research. A comparative analysis, centered on traditional methods and convolutional neural networks, covering approaches such as random forest, long short-term memory (LSTM), heatmap-based methods, and you only look once (YOLO), considers their efficacy. Convolutional neural network-based algorithms performed best at recognizing gestures and offered effective solutions to the challenges faced by researchers. In essence, the findings of this review aim to contribute to future implementations and the discovery of more efficient approaches in the gesture recognition sector.
Deep Learning Based Hand Gesture Recognition and UAV Flight Controls
Dynamic hand gesture recognition is a desired alternative means for human-computer interaction. This paper presents a hand gesture recognition system designed for controlling the flight of unmanned aerial vehicles (UAVs). A data representation model is introduced that represents a dynamic gesture sequence by converting the 4-D spatiotemporal data to a 2-D matrix and a 1-D array. To train the system to recognize the designed gestures, skeleton data collected from a Leap Motion Controller are converted to the two data models. In all, 9,124 training samples and 1,938 testing samples are created to train and test the three proposed deep neural networks: a 2-layer fully connected network, a 5-layer fully connected network and an 8-layer convolutional network. Static testing shows that the 2-layer fully connected network achieves an average accuracy of 96.7% on scaled datasets and 12.3% on non-scaled datasets; the 5-layer fully connected network achieves 98.0% on scaled datasets and 89.1% on non-scaled datasets; and the 8-layer convolutional network achieves 89.6% on scaled datasets and 96.9% on non-scaled datasets. Testing on a drone-kit simulator and a real drone shows that the system is feasible for drone flight control.
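The described conversion of 4-D spatiotemporal data to a 2-D matrix can be illustrated as follows, assuming per-frame lists of 3-D joint positions; the joint count and coordinate values are illustrative, not the paper's exact format:

```python
def sequence_to_matrix(frames):
    """Flatten a gesture sequence into a 2-D matrix.

    frames: a list of frames, each frame a list of (x, y, z) joint tuples,
    i.e. the spatiotemporal data (time x joints x coordinates).
    Returns one row per frame, with the columns being the flattened
    per-joint coordinates, so a fixed-size 2-D input for a network.
    """
    return [[coord for joint in frame for coord in joint] for frame in frames]

seq = [
    [(0.0, 0.1, 0.2), (1.0, 1.1, 1.2)],   # frame 0: two joints
    [(0.5, 0.6, 0.7), (1.5, 1.6, 1.7)],   # frame 1
]
matrix = sequence_to_matrix(seq)
```

Each row then encodes one time step, which is the kind of 2-D layout a fully connected or convolutional network can consume directly.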
Deep learning in vision-based static hand gesture recognition
Hand gestures have proven to be an effective means of communication for humans, and active research is ongoing to replicate that success in computer vision systems. Human-computer interaction can be significantly improved by systems capable of recognizing different hand gestures. In contrast to many earlier works, which consider the recognition of significantly differentiable hand gestures and therefore often select only a few gestures from the American Sign Language (ASL) alphabet, we propose applying deep learning to the recognition of all 24 hand gestures in Thomas Moeslund's gesture recognition database. We show that more biologically inspired deep neural networks, such as the convolutional neural network and the stacked denoising autoencoder, are capable of learning this complex hand gesture classification task with lower error rates. The networks are trained and tested on data from the above-mentioned public database; the results are then compared against earlier works that consider only small subsets of the ASL hand gestures.
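The stacked denoising autoencoder mentioned above is trained to reconstruct clean inputs from corrupted ones. A minimal sketch of the usual masking-noise corruption step follows; the corruption rate and fixed seed are illustrative assumptions, not the paper's settings:

```python
import random

def corrupt(inputs, rate=0.3, seed=0):
    """Masking noise: zero out roughly `rate` of the input values.

    A denoising autoencoder receives this corrupted vector and is trained
    to reproduce the original, which forces it to learn robust features.
    """
    rng = random.Random(seed)   # fixed seed for a reproducible example
    return [0.0 if rng.random() < rate else x for x in inputs]

clean = [0.2, 0.8, 0.5, 0.9]
noisy = corrupt(clean)          # some entries of `clean` are zeroed
```

Training then minimizes the reconstruction error between the autoencoder's output for `noisy` and the original `clean` vector.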