1,718 result(s) for "gait recognition"
Multi-Biometric Feature Extraction from Multiple Pose Estimation Algorithms for Cross-View Gait Recognition
Gait recognition is a behavioral biometric technique that identifies individuals based on their unique walking patterns, enabling long-distance identification. Traditional gait recognition methods rely on appearance-based approaches that utilize background-subtracted silhouette sequences to extract gait features. While effective and easy to compute, these methods are susceptible to variations in clothing, carried objects, and illumination changes, compromising the extraction of discriminative features in real-world applications. In contrast, model-based approaches using skeletal key points offer robustness against these covariates. Advances in human pose estimation (HPE) algorithms using convolutional neural networks (CNNs) have facilitated the extraction of skeletal key points, addressing some challenges of model-based approaches. However, the performance of skeleton-based methods still lags behind that of appearance-based approaches. This paper aims to bridge this performance gap by introducing a multi-biometric framework that extracts features from multiple HPE algorithms for gait recognition, employing feature-level fusion (FLF) and decision-level fusion (DLF) by leveraging a single-source multi-sample technique. We utilized state-of-the-art HPE algorithms, OpenPose, AlphaPose, and HRNet, to generate diverse skeleton data samples from a single source video. Subsequently, we employed a residual graph convolutional network (ResGCN) to extract features from the generated skeleton data. In the FLF approach, the features extracted by ResGCN from the skeleton data samples generated by multiple HPE algorithms are aggregated point-wise for gait recognition, while in the DLF approach, the decisions of ResGCN applied to each skeleton data sample are integrated using majority voting for the final recognition. Our proposed method demonstrated state-of-the-art skeleton-based cross-view gait recognition performance on the popular CASIA-B dataset.
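The two fusion strategies described above are simple to sketch. Below is a minimal numpy illustration, assuming hypothetical 128-D per-estimator embeddings; the names, dimensions, and random values are stand-ins, not the paper's actual ResGCN outputs.

```python
import numpy as np

# Hypothetical gait features: one 128-D vector per HPE algorithm
# (OpenPose, AlphaPose, HRNet). Random stand-ins for ResGCN embeddings.
rng = np.random.default_rng(0)
features = {name: rng.normal(size=128) for name in ("openpose", "alphapose", "hrnet")}

# Feature-level fusion (FLF): aggregate the embeddings point-wise (mean here).
flf_vector = np.mean(np.stack(list(features.values())), axis=0)

# Decision-level fusion (DLF): majority vote over per-estimator identity decisions.
def majority_vote(decisions):
    """Return the most frequent predicted identity label."""
    ids, counts = np.unique(decisions, return_counts=True)
    return ids[np.argmax(counts)]

print(flf_vector.shape)                                           # (128,)
print(majority_vote(["subject_07", "subject_07", "subject_12"]))  # subject_07
```

Point-wise aggregation keeps a single fixed-size vector for matching, while majority voting only needs each branch's final decision, which is why the two operate at different stages of the pipeline.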
Exploration of Effective Time-Velocity Distribution for Doppler-Radar-Based Personal Gait Identification Using Deep Learning
Personal identification based on radar gait measurement is an important application of biometric technology because it enables remote and continuous identification of people, irrespective of the lighting conditions and subjects’ outfits. This study explores an effective time-velocity distribution and its relevant parameters for Doppler-radar-based personal gait identification using deep learning. Most conventional studies on radar-based gait identification used a short-time Fourier transform (STFT), which is a general method to obtain time-velocity distribution for motion recognition using Doppler radar. However, the length of the window function that controls the time and velocity resolutions of the time-velocity image was empirically selected, and several other methods for calculating high-resolution time-velocity distributions were not considered. In this study, we compared four types of representative time-velocity distributions calculated from the Doppler-radar-received signals: STFT, wavelet transform, Wigner–Ville distribution, and smoothed pseudo-Wigner–Ville distribution. In addition, the identification accuracies of various parameter settings were also investigated. We observed that the optimally tuned STFT outperformed other high-resolution distributions, and a short length of the window function in the STFT process led to a reasonable accuracy; the best identification accuracy was 99% for the identification of twenty-five test subjects. These results indicate that STFT is the optimal time-velocity distribution for gait-based personal identification using the Doppler radar, although the time and velocity resolutions of the other methods were better than those of the STFT.
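The window-length trade-off the authors tune is easy to see with `scipy.signal.stft`. A short sketch on a toy micro-Doppler-like signal; the sampling rate and modulation below are illustrative, not the paper's radar parameters.

```python
import numpy as np
from scipy.signal import stft

fs = 1000  # Hz, assumed baseband sampling rate (illustrative)
t = np.arange(0, 2.0, 1 / fs)
# Toy signal with sinusoidal frequency modulation, mimicking gait micro-Doppler.
x = np.cos(2 * np.pi * (100 * t + 20 * np.sin(2 * np.pi * 1.0 * t)))

# Short window -> fine time resolution, coarse velocity (frequency) resolution.
f_short, t_short, Z_short = stft(x, fs=fs, nperseg=64)
# Long window -> coarse time resolution, fine velocity resolution.
f_long, t_long, Z_long = stft(x, fs=fs, nperseg=512)

print(Z_short.shape, Z_long.shape)  # frequency bins: nperseg//2 + 1 each
```

The study's finding that a short window suffices corresponds to choosing a small `nperseg`, trading velocity resolution for the temporal detail that distinguishes individual gait cycles.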
GaitMGL: Multi-Scale Temporal Dimension and Global–Local Feature Fusion for Gait Recognition
Gait recognition has received widespread attention due to its non-intrusive recognition mechanism. Currently, most gait recognition methods use appearance-based recognition methods, and such methods are easily affected by occlusions when facing complex environments, which in turn affects the recognition accuracy. With the maturity of pose estimation techniques, model-based gait recognition methods have received more and more attention due to their robustness in complex environments. However, the current model-based gait recognition methods mainly focus on modeling the global feature information in the spatial dimension, ignoring the importance of local features and their influence on recognition accuracy. Meanwhile, in the temporal dimension, these methods usually use single-scale temporal information extraction, which does not take into account the inconsistency of the motion cycles of the limbs when a human body is walking (e.g., arm swing and leg pace), leading to the loss of some limb temporal information. To solve these problems, we propose a gait recognition network based on a Global–Local Graph Convolutional Network, called GaitMGL. Specifically, we introduce a new spatio-temporal feature extraction module, MGL (Multi-scale Temporal and Global–Local Spatial Extraction Module), which consists of GLGCN (Global–Local Graph Convolutional Network) and MTCN (Multi-scale Temporal Convolutional Network). GLGCN models both global and local features, and extracts global–local motion information. MTCN, on the other hand, takes into account the inconsistency of local limb motion cycles, and facilitates multi-scale temporal convolution to capture the temporal information of limb motion. In short, our GaitMGL solves the problems of loss of local information and loss of temporal information at a single scale that exist in existing model-based gait recognition networks. 
We evaluated our method on three publicly available datasets, CASIA-B, Gait3D, and GREW; the experimental results show that our method performs strongly, achieving an accuracy of 63.12% on the GREW dataset and exceeding all existing model-based gait recognition networks.
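The multi-scale temporal idea behind MTCN can be illustrated with plain numpy: convolve one joint's motion signal with kernels of several lengths and stack the results. The kernel sizes and the box-smoothing kernels here are illustrative assumptions, not GaitMGL's learned filters.

```python
import numpy as np

def temporal_conv(seq, kernel):
    """'Same'-length 1-D convolution over a single joint's time series."""
    pad = len(kernel) // 2
    padded = np.pad(seq, pad, mode="edge")
    return np.convolve(padded, kernel, mode="valid")[: len(seq)]

rng = np.random.default_rng(1)
seq = rng.normal(size=100)  # toy per-joint motion signal, 100 frames

# Multi-scale temporal branch: kernels of different lengths capture fast
# (arm swing) vs. slow (stride) cycles; outputs stack channel-wise.
scales = [3, 7, 15]
multi_scale = np.stack([temporal_conv(seq, np.ones(k) / k) for k in scales])
print(multi_scale.shape)  # (3, 100)
```

Each row responds to a different temporal scale, which is how a multi-scale branch avoids committing to a single motion-cycle length for all limbs.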
Machine-learning-based children’s pathological gait classification with low-cost gait-recognition system
Background Pathological gaits in children may lead to serious conditions, such as osteoarthritis or scoliosis. By monitoring a child's gait pattern, proper therapeutic measures can be recommended to avoid these consequences. However, low-cost systems that automatically recognize children's pathological gaits are not yet on the market. Our goal was to design a low-cost gait-recognition system for children using only pressure information. Methods In this study, we designed a pathological gait-recognition system (PGRS) with an 8 × 8 pressure-sensor array. An intelligent gait-recognition method (IGRM) based on machine learning and pure plantar-pressure information is also proposed, in static and dynamic sections, to achieve high accuracy and good real-time performance. To verify the recognition effect, a total of 17 children were recruited to wear the PGRS while three pathological gaits (toe-in, toe-out, and flat) and normal gait were recognized. Children were asked to walk naturally on level ground in the dynamic section or to stand naturally and comfortably in the static section. Recognition performance was evaluated by stratified tenfold cross-validation with recall, precision, and time cost as metrics. Results The experimental results show that all of the IGRMs achieved a practically applicable level of average accuracy in both the dynamic and static sections. The IGRM attained 92.41% and 97.79% intra-subject recognition accuracy, and 85.78% and 78.81% inter-subject recognition accuracy, in the static and dynamic sections, respectively. We also found that the static-section methods had lower recognition accuracy due to children's unnatural posture when standing. Conclusions In this study, a low-cost PGRS was verified, demonstrating feasibility, high average precision, and good real-time gait-recognition performance. The experimental results reveal the potential for computer supervision of non-pathological and pathological gaits via children's plantar-pressure patterns and for providing feedback in gait-abnormality rectification.
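The evaluation protocol above (stratified tenfold cross-validation scored with recall and precision) can be reproduced with scikit-learn. The feature matrix below is a random placeholder for the real plantar-pressure features; the 64 dimensions mirror the 8 × 8 sensor array and the four classes mirror the toe-in/toe-out/flat/normal labels.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic stand-in: 400 samples, 64 pressure features, 4 gait classes.
X, y = make_classification(n_samples=400, n_features=64, n_informative=12,
                           n_classes=4, random_state=0)

# Stratified tenfold CV keeps the class balance identical in every fold.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(RandomForestClassifier(random_state=0), X, y, cv=cv,
                        scoring=("recall_macro", "precision_macro"))
print(scores["test_recall_macro"].mean(), scores["test_precision_macro"].mean())
```

Macro-averaged recall and precision weight each gait class equally, which matters here because misclassifying a rare pathological gait is the costly error.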
An optimized feature selection using bio-geography optimization technique for human walking activities recognition
A bipedal walking robot is a kind of humanoid robot. It mimics human behavior and is devised to perform human-specific tasks. Currently, humanoid robots cannot walk properly like human beings. In this paper, a technique to identify different human walking activities using human gait patterns is suggested. Human locomotion is a manifestation of changes in the joint angles of the hip, knee, and ankle. To achieve this objective, data from 25 different subjects were first collected for the identification of seven walking activities, namely natural walking, walking on toes, walking on heels, walking upstairs, walking downstairs, sit-ups, and jogging. Next, the important features for gait-activity recognition were selected using bio-geography-based optimization, in which classification accuracy is used as the fitness function. Finally, we explored six machine learning algorithms for the classification of gait activities: support vector machine (SVM), K-nearest neighbor (KNN), random forest (RF), decision tree (DT), gradient boosting (GB), and extra tree classifier (ET). All of these algorithms were tested rigorously on our HAG dataset, achieving accuracies of 91.64% with RF, 90.41% with SVM, 82.6% with KNN, 86.51% with DT, 88.34% with ET, and 89.97% with GB. The proposed technique is also validated on the WISDM dataset for comparative analysis.
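The core of wrapper-style feature selection with classification accuracy as the fitness function can be sketched as follows. Note the optimizer here is a plain random search standing in for bio-geography-based optimization (BBO would evolve the candidate masks via migration and mutation instead of sampling them independently), and the data is a synthetic placeholder for the joint-angle features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for the joint-angle feature matrix (sizes are illustrative).
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           random_state=0)

def fitness(mask):
    """Cross-validated RF accuracy on the selected feature subset (the fitness)."""
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Random-search stand-in for the BBO loop: keep the best-scoring feature mask.
rng = np.random.default_rng(0)
best_mask, best_acc = None, -1.0
for _ in range(10):
    mask = rng.random(X.shape[1]) < 0.5
    acc = fitness(mask)
    if acc > best_acc:
        best_mask, best_acc = mask, acc
print(best_mask.sum(), round(best_acc, 3))
```

Whatever the search strategy, the key design choice is the same as in the paper: the classifier's accuracy on a candidate subset, not any intrinsic feature score, drives the selection.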
Intelligent attendance monitoring system with spatio-temporal human action recognition
This paper proposes an intelligent attendance monitoring system based on spatio-temporal human action recognition, which includes human skeleton gait recognition, multi-action body silhouette recognition and face recognition. Our system solves several problems, for example, when a mask is worn to conceal the face, which reduces recognition accuracy, and when a 3D face mask is used to fake an identity. The skeleton gait feature of our intelligent attendance monitoring system uses a temporal weighted K-nearest neighbours algorithm to train the recognition model and carry out identification, while the multi-action body silhouette feature uses a multiple K-nearest neighbours algorithm to train the recognition model, identify the person and vote on the outcome. Using the proposed system, which integrates skeleton gait features, action silhouette features and face features, more effective recognition can be achieved. When the system encounters a situation with feature masking, such as when an individual is wearing a mask or has changed their clothes, or when the viewing angle is masked, it can continue to deliver good recognition ability through multi-angle skeleton synthesis gait recognition. Our experimental results show that the recognition accuracy of the system is 83.33% when a specific person wears a mask and passes through a monitored area. The intelligent attendance monitoring system uses a LINE messaging API as the access control notification function and provides a responsive web platform that allows managers to perform follow-up management and monitoring.
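The "multiple K-nearest neighbours, then vote" step can be sketched with scikit-learn. The data and the per-model settings below are illustrative assumptions (the paper's models are trained on different modalities and actions, not on different `k` values over one dataset).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

# Stand-in features; the three classes play the role of enrolled identities.
X, y = make_classification(n_samples=200, n_features=16, n_classes=3,
                           n_informative=6, random_state=0)

# Several KNN classifiers (different k / weighting as a simple proxy for the
# system's per-action models; 'distance' weighting is one way to weight votes).
models = [KNeighborsClassifier(n_neighbors=k, weights=w).fit(X, y)
          for k, w in [(3, "uniform"), (5, "distance"), (7, "distance")]]

# Vote on the outcome: the identity predicted by the most classifiers wins.
preds = np.stack([m.predict(X[:5]) for m in models])  # (n_models, n_queries)
votes = [int(np.bincount(col).argmax()) for col in preds.T]
print(votes)
```

Voting across independently trained models is what lets the system keep recognizing someone when one modality (e.g., the face) is masked.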
Spatio-temporal silhouette sequence reconstruction for gait recognition against occlusion
Gait-based features provide the potential for a subject to be recognized even from a low-resolution image sequence, and they can be captured at a distance without the subject’s cooperation. Person recognition using gait-based features (gait recognition) is a promising real-life application. However, several body parts of the subjects are often occluded because of beams, pillars, cars and trees, or another walking person. Therefore, gait-based features are not applicable to approaches that require an unoccluded gait image sequence. Occlusion handling is a challenging but important issue for gait recognition. In this paper, we propose silhouette sequence reconstruction from an occluded sequence (sVideo) based on a conditional deep generative adversarial network (GAN). From the reconstructed sequence, we estimate the gait cycle and extract the gait features from a single gait cycle image sequence. To regularize the training of the proposed generative network, we use an adversarial loss based on triplet hinge loss incorporating Wasserstein GAN (WGAN-hinge). To the best of our knowledge, WGAN-hinge is the first adversarial loss that supervises the generator network during training by incorporating pairwise similarity ranking information. The proposed approach was evaluated on multiple challenging occlusion patterns. The experimental results demonstrate that the proposed approach outperforms the existing state-of-the-art benchmarks.
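The triplet hinge loss that underlies the pairwise ranking supervision can be written in a few lines of numpy. The margin, distance metric, and toy embeddings below are illustrative, not the paper's training configuration.

```python
import numpy as np

def triplet_hinge_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a,p) - d(a,n) + margin): push the positive closer than the
    negative by at least `margin` in embedding space."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])  # same subject (close)
n = np.array([2.0, 0.0])  # different subject (far)
print(triplet_hinge_loss(a, p, n))  # 0.0 -> ranking already satisfied by > margin
```

The hinge goes to zero once the ranking constraint holds with the required margin, so only violating triplets contribute gradient, which is the similarity-ranking signal the generator receives.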
Human Gait Recognition: A Deep Learning and Best Feature Selection Framework
Background—Human Gait Recognition (HGR) is a biometric-based approach that is widely used for surveillance and has been studied by researchers for the past several decades. Several factors affect system performance, such as walking variation due to clothes, a person carrying luggage, and variations in the view angle. Proposed—In this work, a new method is introduced to overcome several problems of HGR. A hybrid method is proposed for efficient HGR using deep learning and selection of the best features. Four major steps are involved in this work: preprocessing of the video frames, manipulation of the pre-trained CNN model VGG-16 for the computation of the features, removal of redundant features extracted from the CNN model, and classification. For the reduction of irrelevant features, a Principal Score and Kurtosis based approach, named PSbK, is proposed. After that, the PSbK features are fused into one matrix. Finally, this fused vector is fed to the One-against-All Multi Support Vector Machine (OAMSVM) classifier for the final results. Results—The system is evaluated on the CASIA B database using six angles (0°, 18°, 36°, 54°, 72°, and 90°), attaining accuracies of 95.80%, 96.0%, 95.90%, 96.20%, 95.60%, and 95.50%, respectively. Conclusion—The comparison with recent methods shows that the proposed method works better.
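The one-against-all multiclass SVM step maps directly onto scikit-learn's `OneVsRestClassifier`: one binary SVM per identity, with the highest decision score winning. The feature matrix below is a random placeholder for the fused PSbK vectors.

```python
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Stand-in for the fused feature vectors; the 5 classes play the role of
# subject identities (CASIA B has far more in practice).
X, y = make_classification(n_samples=300, n_features=50, n_classes=5,
                           n_informative=10, random_state=0)

# One-against-all: fits one binary SVM per class under the hood.
ova_svm = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)
print(len(ova_svm.estimators_), ova_svm.score(X, y))
```

One-against-all keeps the number of binary SVMs linear in the number of identities, unlike one-against-one schemes, which scale quadratically.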
A Region-Aware Deep Learning Model for Dual-Subject Gait Recognition in Occluded Surveillance Scenarios
Surveillance systems can take various forms, but gait-based surveillance is emerging as a powerful approach due to its ability to identify individuals without requiring their cooperation. Several approaches to gait recognition have been suggested in existing studies; nevertheless, the performance of existing systems often degrades in real-world conditions due to covariate factors such as occlusions, clothing changes, walking speed, and varying camera viewpoints. Furthermore, most existing research focuses on single-person gait recognition, while counting, tracking, detecting, and recognizing individuals in dual-subject settings with occlusions remains a challenging task. Therefore, this research proposes an automated gait model for occluded dual-subject walk scenarios. More precisely, we have designed a deep learning (DL)-based dual-subject gait model (DSG) involving three modules. The first module handles silhouette segmentation, localization, and counting (SLC) using Mask-RCNN with MobileNetV2. The next stage uses a Convolutional Block Attention Module (CBAM)-based Siamese network for frame-level tracking with a modified gallery setting. Finally, region-based deep learning is applied for dual-subject gait recognition. The proposed method, tested on the Shri Mata Vaishno Devi University (SMVDU) Multi-Gait and Single-Gait datasets, shows strong performance with 94.00% segmentation, 58.36% tracking, and 63.04% gait recognition accuracy in dual-subject walk scenarios.
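The gallery-matching step of Siamese-style frame-level tracking can be sketched in numpy: embed each silhouette crop and assign it to the most similar gallery identity. The embeddings below are random stand-ins; in the actual system they would come from the CBAM-based Siamese network.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 64-D gallery templates for the two tracked subjects.
rng = np.random.default_rng(0)
gallery = {"subject_A": rng.normal(size=64), "subject_B": rng.normal(size=64)}

# A new silhouette crop: a noisy view of subject A's template.
query = gallery["subject_A"] + 0.05 * rng.normal(size=64)

# Frame-level tracking: assign the crop to the most similar gallery identity.
best = max(gallery, key=lambda k: cosine_similarity(query, gallery[k]))
print(best)  # subject_A
```

Keeping a per-identity gallery of templates is what lets the tracker re-associate subjects after they separate from a mutual occlusion.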
Beyond view transformation: feature distribution consistent GANs for cross-view gait recognition
Gait recognition systems have shown great potential in the field of biometric recognition. Unfortunately, the accuracy of gait recognition is easily affected by large changes in view angle. To address this problem, this study proposes a feature distribution consistent generative adversarial network (FDC-GAN) to transform gait images from arbitrary views to the target view and then perform identity recognition. Besides the reconstruction loss, view classification and identity preserving losses are also introduced to guide the generator to produce gait images of the target views while keeping identity information. To further encourage the network to generate gait images whose feature distribution aligns well with the true distribution, we also exploit the recently proposed recurrent cycle consistency loss, which helps to remove the unnoticed and useless content preserved in the generated gait images. The experimental results on the CASIA-B and OU-MVLP datasets demonstrate the state-of-the-art performance of our model compared to other GAN-based cross-view gait recognition models.
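Generators trained with several objectives, as above, typically optimize a weighted sum of the individual loss terms. A minimal sketch; the weights and per-batch loss values are illustrative assumptions, not taken from the FDC-GAN paper.

```python
import numpy as np

def generator_loss(recon, view_cls, identity, cycle,
                   weights=(1.0, 0.1, 0.5, 0.5)):
    """Weighted sum of the generator's loss terms: reconstruction, view
    classification, identity preservation, recurrent cycle consistency.
    The weights here are hypothetical tuning knobs."""
    w = np.asarray(weights)
    terms = np.asarray([recon, view_cls, identity, cycle])
    return float(w @ terms)

# Toy per-batch loss values for the four terms.
print(generator_loss(0.8, 2.1, 0.3, 0.4))  # 1.36
```

The relative weights encode the trade-off the abstract describes: reconstruction fidelity versus steering the output toward the target view while preserving identity.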