137 result(s) for "two-dimensional face recognition"
RGB‐D face recognition using LBP with suitable feature dimension of depth image
This study proposes a robust method for face recognition from images acquired with low-resolution red, green, and blue-depth (RGB-D) cameras, which exhibit wide variations in head pose, illumination, and facial expression, and occlusion in some cases. The local binary pattern (LBP) of the RGB-D images, with a suitable feature dimension for the depth image, is employed to extract the facial features. On the basis of error-correcting output codes, these features are fed to multiclass support vector machines (MSVMs) for offline training and validation, and then online classification. The proposed method is called LBP-RGB-D-MSVM with a suitable feature dimension of the depth image. Its effectiveness is evaluated on four databases: Indraprastha Institute of Information Technology, Delhi (IIIT-D) RGB-D; visual analysis of people (VAP) RGB-D-T; EURECOM; and the authors' own. In addition, an extended database merging the first three is employed to compare the proposed method with several existing two-dimensional (2D) and 3D face recognition algorithms. The proposed method delivers satisfactory performance (up to 99.10 ± 0.52% Rank-5 recognition rate on the authors' database) with low computation (62 ms for feature extraction), which is desirable for real-time applications.
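As a rough illustration of the kind of texture feature involved, the sketch below computes basic 3x3 LBP codes and their histogram, the vector that would be fed to a classifier such as an SVM. The toy image and sizes are made up, and this is plain LBP, not the paper's depth-feature-dimension selection.

```python
def lbp_code(img, r, c):
    """LBP code at pixel (r, c): threshold the 8 neighbours against
    the centre value and pack the results into an 8-bit code."""
    center = img[r][c]
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over interior pixels: the
    texture feature vector handed to a classifier."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

toy = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(lbp_code(toy, 1, 1))  # 120: the four brighter neighbours set bits 3-6
```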
The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking
In this article, we present the Menpo 2D and Menpo 3D benchmarks, two new datasets for multi-pose 2D and 3D facial landmark localisation and tracking. In contrast to previous benchmarks such as 300W and 300VW, the proposed benchmarks contain facial images in both semi-frontal and profile poses. We introduce an elaborate semi-automatic methodology for providing high-quality annotations for both benchmarks. In the Menpo 2D benchmark, different visible-landmark configurations are designed for semi-frontal and profile faces, making 2D face alignment full-pose. In the Menpo 3D benchmark, a unified landmark configuration is designed for both semi-frontal and profile faces based on correspondence with a 3D face model, making face alignment not only full-pose but also consistent with real-world 3D space. Based on the considerable number of annotated images, we organised the Menpo 2D Challenge and Menpo 3D Challenge for face alignment under large pose variations in conjunction with CVPR 2017 and ICCV 2017, respectively. The results of these challenges demonstrate that recent deep learning architectures, when trained with abundant data, lead to excellent results. We also provide a very simple yet effective solution, named Cascade Multi-view Hourglass Model, for 2D and 3D face alignment. In our method, we take advantage of all 2D and 3D facial landmark annotations jointly: we not only capitalise on the correspondences between the semi-frontal and profile 2D facial landmarks but also employ joint supervision from both 2D and 3D facial landmarks. Finally, we discuss future directions on the topic of face alignment.
A Comparison of CNN-based 2D Static Facial Emotion Recognition Techniques
In the field of computer vision, recognizing expressions in 2D static facial images is a crucial aspect of facial emotion recognition (FER), with broad applications in real-world scenarios such as mental health diagnosis and security monitoring. Despite the various convolutional neural network (CNN) architectures proposed for FER, there is still a lack of systematic comparative studies on the classification capabilities of different CNN architectures. In this paper, we systematically compare the classification performance of four commonly used CNN architectures in FER research for 2D static facial expression recognition on the classic FER2013 dataset. Through experiments, we evaluate the classification accuracy of these architectures and analyze the recognition results for different emotion categories.
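The per-category analysis described above amounts to reading per-class recognition rates off a confusion matrix. The sketch below shows that computation; the three "emotion classes" and the counts are made-up toy data, not results from the paper.

```python
def per_class_accuracy(conf):
    """Row i holds true class i; diagonal / row sum = per-class recall."""
    return [row[i] / sum(row) if sum(row) else 0.0
            for i, row in enumerate(conf)]

def overall_accuracy(conf):
    """Fraction of all samples on the diagonal."""
    correct = sum(conf[i][i] for i in range(len(conf)))
    total = sum(sum(row) for row in conf)
    return correct / total

conf = [[50,  5,  5],   # toy class, e.g. "happy"
        [ 4, 40,  6],   # toy class, e.g. "sad"
        [ 6,  4, 30]]   # toy class, e.g. "neutral"
print(per_class_accuracy(conf))   # happy recognised best, neutral worst
print(overall_accuracy(conf))     # 0.8
```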
Automated face recognition using deep learning technique and center symmetric multivariant local binary pattern
Researchers have recently developed numerous deep learning strategies for a variety of tasks, and face recognition has made remarkable progress by employing these techniques. Face recognition is a non-contact, non-intrusive, and widely accepted biometric recognition method with a promising future in national and social security. The purpose of this paper is to improve on existing face recognition algorithms, investigate large-scale data-driven face recognition methods, and propose a novel automated face recognition methodology based on generative adversarial networks (GANs) and the center-symmetric multivariant local binary pattern (CS-MLBP). First, this paper employs the CS-MLBP algorithm to extract the texture features of the face, addressing the issue that C2DPCA (column-based two-dimensional principal component analysis) does an excellent job of extracting the global characteristics of the face but struggles to process its local features under large samples. The extracted texture features are combined with the global features retrieved using C2DPCA to generate a multi-featured face representation. The proposed method, GAN-CS-MLBP, combines the power of GANs with the robustness of CS-MLBP, resulting in an accurate and efficient face recognition system. Deep learning algorithms, mainly neural networks, automatically extract discriminative properties from facial images. The learned features capture both low-level information and high-level semantics, permitting the model to distinguish among different persons more successfully. To assess the performance of the proposed GAN-CS-MLBP technique, extensive experiments are performed on benchmark face recognition datasets such as LFW, YTF, and CASIA-WebFace. According to the findings, the method exceeds state-of-the-art facial recognition systems in recognition accuracy and resilience. The proposed automatic face recognition system thus provides a solid basis for accurate and efficient face recognition, paving the way for biometric breakthroughs and increasing the security and convenience of many applications.
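As a rough illustration of the center-symmetric idea, the sketch below implements plain center-symmetric LBP (CS-LBP), which compares the four opposite neighbour pairs instead of each neighbour against the centre; the paper's multivariant CS-MLBP extends this, and the toy image here is made up.

```python
def cs_lbp_code(img, r, c, t=0):
    """Center-symmetric LBP at pixel (r, c): compare the 4 opposite
    neighbour pairs, giving a compact 4-bit code (16 bins rather than
    the 256 of standard LBP)."""
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    code = 0
    for bit, ((dr1, dc1), (dr2, dc2)) in enumerate(pairs):
        # set the bit when the first pixel of the pair exceeds the
        # second by more than the threshold t
        if img[r + dr1][c + dc1] - img[r + dr2][c + dc2] > t:
            code |= 1 << bit
    return code

toy = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(cs_lbp_code(toy, 1, 1))  # 8: only the right-vs-left pair exceeds t
```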
Towards Fine-Grained Optimal 3D Face Dense Registration: An Iterative Dividing and Diffusing Method
Dense vertex-to-vertex correspondence (i.e. registration) between 3D faces is a fundamental and challenging issue for 2D and 3D face analysis. While sparse landmarks are definite, with anatomically ground-truth correspondence, the dense vertex correspondences on most facial regions are unknown. As a result, current methods commonly produce reasonable but diverse solutions, which deviate from the optimum of the dense registration problem. In this paper, we revisit dense registration through a dimension-degraded problem, i.e. proportional segmentation of a line, and employ an iterative dividing-and-diffusing method to reach an optimal solution that is robust to different initializations. We formulate a local registration problem for dividing and a linear least-squares problem for diffusing, with constraints on fixed features of a 3D facial surface. We further propose a multi-resolution algorithm to accelerate the computation. The proposed method is linked to a novel local scaling metric, whose physical significance we illustrate as smooth adaptations of the local cells of 3D facial shapes. Extensive experiments on public datasets demonstrate the effectiveness of the proposed method in various respects. Overall, it leads not only to significantly better representations of 3D facial data but also to coherent local deformations with an elegant grid architecture for fine-grained registration.
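The dimension-degraded analogy the abstract mentions can be sketched in one dimension: with both endpoints fixed, repeatedly diffusing each interior point to the midpoint of its neighbours converges to a proportional (even) segmentation of the line, regardless of where the points started. This is only an illustration of that analogy, not the authors' 3D implementation.

```python
def diffuse(points, iters=500):
    """Iteratively relax interior points toward the midpoint of their
    neighbours, keeping the two endpoints fixed (a 1D diffusion)."""
    pts = list(points)
    for _ in range(iters):
        for i in range(1, len(pts) - 1):   # endpoints stay fixed
            pts[i] = 0.5 * (pts[i - 1] + pts[i + 1])
    return pts

init = [0.0, 0.9, 0.95, 1.0]   # badly clustered initial placement
print(diffuse(init))            # converges near [0.0, 1/3, 2/3, 1.0]
```

The same fixed-point behaviour (solving a Laplacian least-squares system with boundary constraints) is what makes the diffusing step robust to initialization.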
What Does 2D Geometric Information Really Tell Us About 3D Face Shape?
A face image contains geometric cues in the form of configurational information and contours that can be used to estimate 3D face shape. While it is clear that 3D reconstruction from 2D points is highly ambiguous if no further constraints are enforced, one might expect that the face-space constraint solves this problem. We show that this is not the case and that geometric information is an ambiguous cue. There are two sources for this ambiguity. The first is that, within the space of 3D face shapes, there are flexibility modes that remain when some parts of the face are fixed. The second occurs only under perspective projection and is a result of perspective transformation as camera distance varies. Two different faces, when viewed at different distances, can give rise to the same 2D geometry. To demonstrate these ambiguities, we develop new algorithms for fitting a 3D morphable model to 2D landmarks or contours under either orthographic or perspective projection and show how to compute flexibility modes for both cases. We show that both fitting problems can be posed as a separable nonlinear least squares problem and solved efficiently. We demonstrate both quantitatively and qualitatively that the ambiguity is present in reconstructions from geometric information alone but also in reconstructions from a state-of-the-art CNN-based method.
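The perspective ambiguity described above can be demonstrated with a toy pinhole camera: scaling a 3D shape and its camera distance by the same factor leaves the 2D projection unchanged, so two different 3D faces can produce identical 2D geometry. All points and parameters below are made up.

```python
def project(points3d, f, d):
    """Pinhole projection of (X, Y, Z) points pushed back to distance d."""
    return [(f * x / (z + d), f * y / (z + d)) for x, y, z in points3d]

shape = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.2), (-1.0, 0.5, 0.3)]
k = 2.0                                    # scale shape AND distance by k
bigger = [(k * x, k * y, k * z) for x, y, z in shape]

a = project(shape, f=1.0, d=5.0)
b = project(bigger, f=1.0, d=k * 5.0)
same = all(abs(ax - bx) < 1e-9 and abs(ay - by) < 1e-9
           for (ax, ay), (bx, by) in zip(a, b))
print(same)  # True: two different 3D shapes, identical 2D geometry
```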
Age-invariant face network (AFN): a discriminative model towards age-invariant face recognition
Age-invariant face recognition (AIFR) is a significant task within general face recognition, aiming to eliminate the gradual discrepancy in an individual's facial appearance caused by the aging process. Previous discriminative methods decompose facial components over one-dimensional feature vectors, which overlooks critical facial information, and hypothesize linear relationships over the aging process, which is inadequate to describe its complex correlations. In this paper, we propose an enhanced AIFR model, the age-invariant face network (AFN), to eliminate the discrepancy the aging process introduces into facial appearance. Specifically, we propose an attentive factorization module (AFM) that leverages an attention mechanism to decompose facial features into identity-related and age-related features in two-dimensional space, on both local and contextual levels. We take both linear and nonlinear correlation analyses into account for a better reflection of the aging/rejuvenation process, and hence propose a hybrid correlation regularizer (HCR) to supervise the decorrelation between the factorized features. Both identity features and age features are supervised simultaneously in a multi-task learning framework, where only the identity features are used in the test phase to evaluate AIFR performance. Experiments across common cross-age datasets (e.g., FG-NET, CACD-VS, CALFW, AgeDB-30) show the effectiveness of the proposed AFN. Further, AFN is validated on the LFW dataset to demonstrate its effectiveness on the general face recognition task.
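The linear part of such a decorrelation objective can be sketched as the squared Pearson correlation between an identity feature and an age feature, which training would push toward zero. The vectors below are toy values; this is not the paper's full HCR, which also models nonlinear correlation.

```python
import math

def pearson(u, v):
    """Pearson correlation coefficient between two feature vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

identity_feat = [0.2, 0.9, 0.4, 0.7]   # toy identity-related feature
age_feat      = [0.5, 0.1, 0.8, 0.3]   # toy age-related feature

# a penalty term: nonzero whenever the two features are linearly correlated
loss = pearson(identity_feat, age_feat) ** 2
print(round(loss, 4))
```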
Research on Current Situation of 3D face reconstruction based on 3D Morphable Models
3D face reconstruction based on 3D morphable models (3DMM) uses single or multiple 2D RGB face images to reconstruct the 3D information of the target and restore the spatial structure of the face, which has important research significance for face recognition, the film industry, the medical field, and beyond. This paper introduces the main technical methods and research status of 3DMM-based 3D face reconstruction over the past 20 years, summarizes the advantages, disadvantages, and applicability of the various approaches, and analyzes current research hotspots and future development trends in 3D face reconstruction.
Enhancing 2D-3D facial recognition accuracy of truncated-hidden faces using fused multi-model biometric deep features
Facial recognition based on truncated and obscured data is a challenging topic, raising various issues in computer vision and biometrics. It requires determining the features of a face that is truncated or obscured, based on synchronous or asynchronous facial changes. The central issue is how to develop robust algorithms capable of handling the various problems related to information loss in images. In this paper, we propose a novel model for 2D and 3D facial recognition based on deep learning, called HResAlex-Net. The proposed model is a hybrid Convolutional Neural Network (CNN) architecture designed for face identification that leverages multimodal biometric feature fusion, with the aim of enhancing the recognition system's performance. The approach applies feature-level fusion, combining elements from both the ResNet and AlexNet CNN structures, synergistically combining the strengths of the AlexNet and ResNet-50 architectures to amplify their individual advantages while minimizing overall computational complexity. The method was assessed on 2D and 3D data. A brand-new 2D YaleFace dataset was generated that includes hidden truncated images, asynchronous face changes, brightness-level changes, and lighting-condition changes (center light, left light, right light). Other complex and challenging 2D/3D databases, varied in facial expression, position, and asynchronous variation, were also used. Experiments on blurred, truncated, and masked images show that the proposed method achieves high recognition rates of up to 98.31% and 99.99% for 2D and 3D face data, respectively.
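The feature-level fusion step described above can be sketched as L2-normalising the embedding from each backbone and concatenating them into one fused vector. The branch names and dimensions here are illustrative stand-ins, not the actual HResAlex-Net layers.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit Euclidean length (no-op on zero vectors)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def fuse(feat_a, feat_b):
    """Feature-level fusion: normalise each branch, then concatenate,
    so neither backbone's magnitude dominates the fused feature."""
    return l2_normalize(feat_a) + l2_normalize(feat_b)

alexnet_like = [3.0, 4.0]           # stand-in for an AlexNet-branch feature
resnet_like  = [1.0, 0.0, 0.0]      # stand-in for a ResNet-branch feature
fused = fuse(alexnet_like, resnet_like)
print(fused)   # [0.6, 0.8, 1.0, 0.0, 0.0]
```

The fused vector would then feed the identification head; normalising first is a common choice so each modality contributes on a comparable scale.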
Real time face recognition system based on YOLO and InsightFace
Face recognition is an important research topic in Machine Learning and Artificial Intelligence. It analyses and compares a person's facial traits against a database of faces to automatically recognise and verify that person's identity. It has attracted a lot of attention recently due to its non-intrusive nature: unlike other biometric identification systems, face recognition does not require physical contact with the individual being identified, making it more convenient and hygienic. Although existing face recognition systems have achieved good performance, recognizing obscured and disguised faces remains difficult. To deal with these problems, this paper presents a new real-time face recognition network called YOLO-InsightFace, which combines YOLO-V7, a cutting-edge deep learning model, with InsightFace, a 2D and 3D face analysis Python module. YOLO-V7 is highly accurate and fast, making it ideal for real-time applications, while InsightFace recognizes faces by generating highly discriminative face embeddings.
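The matching step such a pipeline relies on can be sketched as thresholded cosine similarity between face embeddings: accept a match when the probe and gallery embeddings point in nearly the same direction. The vectors and threshold below are toy values, not InsightFace output.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def is_same_person(emb_a, emb_b, threshold=0.5):
    """Verification decision: similar embeddings => same identity."""
    return cosine(emb_a, emb_b) >= threshold

probe   = [0.9, 0.1, 0.4]   # toy embedding of the detected face
gallery = [0.8, 0.2, 0.5]   # toy enrolled embedding
print(is_same_person(probe, gallery))   # True
```

In practice the threshold is tuned on a validation set to trade off false accepts against false rejects.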