Search Results

121 results for "multimodal biometric recognition"
Comparative Evaluation of ECG and Motion Signals in the Context of Activity Recognition and Human Identification
This study presents a comparative analysis of electrocardiogram (ECG) and accelerometer (ACC) data in the context of unsupervised human activity recognition and subject identification. Recordings were obtained from 30 participants performing activities of daily living such as walking, sitting, lying, cleaning the floor, and climbing stairs. Distance-based signal comparison methods and clustering techniques were employed to evaluate the ability of each modality, individually and in combination, to discriminate between individuals and activities. Results indicate that ACC signals provide superior performance in activity recognition (NMI = 0.728, accuracy = 0.817), while ECG signals show higher discriminative power for subject identification (NMI = 0.641, accuracy = 0.500). In contrast, combining ACC and ECG signals yielded lower scores on both tasks, suggesting that multimodal fusion introduced additional variability. These findings highlight the importance of selecting the most appropriate modality for the recognition objective and emphasize the challenges associated with multimodal approaches in unsupervised scenarios.
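The clustering-plus-NMI evaluation described above is easy to reproduce in outline. A minimal scikit-learn sketch, assuming synthetic stand-in features rather than the paper's actual ECG/ACC recordings:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
# Stand-in features: e.g. summary statistics of windowed ACC/ECG signals.
features = rng.normal(size=(300, 16))
true_activity = rng.integers(0, 5, size=300)  # 5 activities of daily living

# Unsupervised clustering, then agreement with ground truth via NMI.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
print("NMI:", normalized_mutual_info_score(true_activity, labels))
```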
Multimodal biometric identification based on overlapped fingerprints, palm prints, and finger knuckles using BM-KMA and CS-RBFNN techniques in forensic applications
In several scenarios, such as forensic and civilian applications, biometrics has emerged as a powerful technology for person authentication. Multimodal Biometric (MB) solutions combine information extracted from different biometric traits, and hence show high resilience against presentation attacks. Additionally, they offer enhanced biometric performance and the increased population coverage required for larger-scale recognition. Here, an MB authentication system centered on overlapped Fingerprints (FPs), Palm Prints (PPs), and Finger Knuckles (FKs) is proposed, employing a Brownian Motion enabled K-Means Algorithm (BM-KMA) and a Cosine Swish activation-based Radial Basis Function Neural Network (CS-RBFNN). First, the overlapped FP images and hand images are taken from publicly available datasets. Next, the Region of Interest (ROI) of the hand image is estimated to separate the PPs and FKs. Then, pre-processing, feature extraction, and feature reduction are carried out. Noise is removed from the overlapped FP using BF; after that, the FP's contrast is enriched using SMF-CLAHE to improve the clarity of the minutiae structure of the ridges. Following this, normalization is performed using the Min-Max operation. Minutiae features are extracted by separating the overlapped FP using BM-KMA; separating the overlap also keeps the system free of additional complexity. From these, features of interest are selected using KRC-PCA. Next, feature fusion is conducted. Finally, CS-RBFNN is employed to distinguish genuine biometrics from imposter ones. The proposed system is further validated via performance metrics, and the outcomes show that it surpasses other prevailing methodologies.
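The contrast-enhancement and normalization steps named in this abstract follow a standard pattern. A minimal OpenCV sketch, using plain CLAHE as a stand-in for the paper's SMF-CLAHE variant and a synthetic image in place of a real fingerprint:

```python
import cv2
import numpy as np

# Stand-in for an overlapped fingerprint image (the paper uses public datasets).
fingerprint = (np.random.rand(256, 256) * 255).astype(np.uint8)

# Contrast-limited adaptive histogram equalization sharpens ridge/minutiae detail.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(fingerprint).astype(np.float32)

# Min-max normalization to [0, 1].
normalized = (enhanced - enhanced.min()) / (enhanced.max() - enhanced.min() + 1e-8)
```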
Deep multimodal biometric recognition using contourlet derivative weighted rank fusion with human face, fingerprint and iris images
The goal of a multimodal biometric recognition system is to identify a person from physiological and behavioural traits. Nevertheless, the decision-making process can be extremely complex due to high-dimensional unimodal features in the temporal domain. This paper describes a deep multimodal biometric system for human recognition using three traits: face, fingerprint, and iris. To reduce the feature vector dimension in the temporal domain, pre-processing is first performed using a Contourlet Transform model. Next, a Local Derivative Ternary Pattern model is applied to the pre-processed features; feature discrimination power is improved by retaining the coefficients with maximum variation across the pre-processed multimodal features, thereby improving recognition accuracy. Weighted Rank Level Fusion is then applied to the extracted multimodal features, efficiently combining the biometric matching scores from the three modalities (face, fingerprint, and iris). Finally, a deep learning framework is presented to improve the recognition rate of the multimodal biometric system in the temporal domain. The proposed framework was compared with other multimodal methods, and the face, fingerprint, and iris fusion offers significant improvements in recognition rate.
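Weighted rank-level fusion itself is a small computation: each modality ranks the enrolled identities, and a weighted sum of ranks picks the winner. An illustrative sketch, with weights and rank lists invented for demonstration rather than taken from the paper:

```python
import numpy as np

# Rank assigned to each of 4 enrolled identities (lower = better) per modality.
ranks = {
    "face":        np.array([1, 3, 2, 4]),
    "fingerprint": np.array([2, 1, 4, 3]),
    "iris":        np.array([1, 2, 3, 4]),
}
weights = {"face": 0.3, "fingerprint": 0.3, "iris": 0.4}  # assumed weights

# Weighted sum of ranks; the identity with the smallest fused rank wins.
fused = sum(weights[m] * ranks[m] for m in ranks)
print("fused ranks:", fused, "-> identity", int(np.argmin(fused)))
```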
Implementation of multimodal biometric recognition via multi-feature deep learning networks and feature fusion
Although there is an abundance of research on facial recognition, it still faces significant challenges related to variations in factors such as aging, pose, occlusion, resolution, and appearance. In this paper, we propose a Multi-feature Deep Learning Network (MDLN) architecture that uses modalities from the facial and periocular regions, with the addition of texture descriptors, to improve recognition performance. Specifically, MDLN is designed as a feature-level fusion approach that correlates the multimodal biometric data with the texture descriptor, creating a new feature representation. The proposed MDLN model therefore provides more information via this feature representation to achieve better performance, while overcoming the limitations that persist in existing unimodal deep learning approaches. The proposed model has been evaluated on several public datasets, and our experiments show that MDLN improves biometric recognition performance under challenging conditions, including variations in illumination, appearance, and pose misalignment.
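Feature-level fusion of the kind MDLN performs amounts to concatenating the branch outputs into one representation before classification. A toy sketch, in which the embedding sizes and the LBP-style texture descriptor are assumptions:

```python
import numpy as np

face_embedding = np.random.rand(128)        # stand-in CNN branch output
periocular_embedding = np.random.rand(128)  # stand-in CNN branch output
texture_descriptor = np.random.rand(59)     # e.g. a uniform-LBP histogram

# Feature-level fusion: one joint representation fed to the classifier head.
fused = np.concatenate([face_embedding, periocular_embedding, texture_descriptor])
print(fused.shape)  # (315,)
```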
Monitoring and Analyzing Driver Physiological States Based on Automotive Electronic Identification and Multimodal Biometric Recognition Methods
In an intelligent driving environment, monitoring the physiological state of drivers is crucial for ensuring driving safety. This paper proposes a method for monitoring and analyzing driver physiological characteristics by combining electronic vehicle identification (EVI) with multimodal biometric recognition. The method aims to efficiently monitor the driver's heart rate, breathing frequency, emotional state, and fatigue level, providing real-time feedback to intelligent driving systems to enhance driving safety. First, considering the precision, adaptability, and real-time capabilities of current physiological signal monitoring devices, an intelligent cushion integrating MEMS (Micro-Electro-Mechanical Systems) and optical sensors is designed. This cushion collects heart rate and breathing frequency data in real time without disrupting the driver, while an electrodermal activity monitoring system captures electromyography data. The sensor layout is optimized to accommodate various driving postures, ensuring accurate data collection. The EVI system assigns a unique identifier to each vehicle, linking it to the physiological data of different drivers. By combining the driver physiological data with the vehicle's operational environment data, a comprehensive multi-source data fusion system is established for driving state evaluation. Second, a deep learning model combining Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks is employed to analyze the physiological signals. The CNN extracts spatial features from the input signals, while the LSTM processes the time-series data to capture temporal characteristics. This combined model effectively identifies and analyzes the driver's physiological state, enabling timely anomaly detection. The method was validated through real-vehicle tests involving multiple drivers, in which extensive physiological and driving behavior data were collected. Experimental results show that the proposed method significantly enhances the accuracy and real-time performance of physiological state monitoring. These findings highlight the effectiveness of combining EVI with multimodal biometric recognition, offering a reliable means of assessing driver states in intelligent driving systems. Furthermore, the results emphasize the importance of personalizing adjustments to individual driver differences for more effective monitoring.
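The CNN-LSTM pattern described here, spatial features from a 1-D convolution followed by temporal modeling, can be sketched compactly in PyTorch. Layer sizes, channel counts, and the three-class output are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CnnLstm(nn.Module):
    def __init__(self, channels=2, hidden=64, classes=3):
        super().__init__()
        # 1-D CNN extracts local (spatial) features from the raw signals.
        self.conv = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models the temporal structure of the CNN feature sequence.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):            # x: (batch, channels, time)
        f = self.conv(x)             # (batch, 32, time/2)
        f = f.transpose(1, 2)        # (batch, time/2, 32)
        _, (h, _) = self.lstm(f)
        return self.head(h[-1])      # (batch, classes)

model = CnnLstm()
out = model(torch.randn(4, 2, 256))  # e.g. heart-rate + respiration windows
print(out.shape)  # torch.Size([4, 3])
```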
Prognostic evaluation of multimodal biometric traits recognition based human face, finger print and iris images using ensembled SVM classifier
Biometric recognition is an effective method for establishing a person's identity. Multimodal biometric recognition employs multiple sources of information about a human for authentication. Recently, many research works have been designed for multimodal biometric recognition using classification techniques; however, the performance of conventional techniques has not been sufficient to achieve a high recognition rate. To overcome these limitations, an ensembled support vector machine based kernel mapping (ESVM-KM) technique is proposed for multimodal biometric recognition. The ESVM-KM technique is designed to improve the accuracy of multimodal biometric recognition with human face, fingerprint, and iris images. It initially performs pre-processing to remove noise and improve image quality for human recognition. After that, it carries out a Gabor wavelet transformation based feature extraction process in which features of the face, fingerprint, and iris images are efficiently extracted for classification. Finally, the ESVM-KM technique uses an ensembled SVM classifier to enhance the recognition rate of the multimodal biometric system. Simulations measure metrics such as computational time, recognition rate, and true positive rate, and the results demonstrate that ESVM-KM improves the recognition rate and reduces the computational time of the multimodal biometric recognition system when compared to state-of-the-art works. The results obtained through ESVM-KM are stored in a cloud environment for easy future access.
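The core of this pipeline, Gabor wavelet features fed to an SVM ensemble, can be approximated with scikit-image and scikit-learn. A sketch under assumed data and hyperparameters, using a bagging ensemble of SVCs as a generic stand-in for the paper's ESVM-KM classifier:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

def gabor_features(img, frequencies=(0.1, 0.2, 0.3)):
    # Mean and variance of Gabor responses at a few assumed frequencies.
    feats = []
    for f in frequencies:
        real, _ = gabor(img, frequency=f)
        feats += [real.mean(), real.var()]
    return np.array(feats)

rng = np.random.default_rng(0)
images = rng.random((40, 32, 32))      # stand-in biometric images
labels = rng.integers(0, 4, size=40)   # 4 stand-in identities

X = np.stack([gabor_features(im) for im in images])
clf = BaggingClassifier(SVC(kernel="rbf"), n_estimators=5).fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```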
Cascaded multimodal biometric recognition framework
A practically viable multi-biometric recognition system should not only be stable, robust and accurate but should also adhere to real-time processing speed and memory constraints. This study proposes a cascaded classifier-based framework for use in biometric recognition systems. The proposed framework utilises a set of weak classifiers to reduce the enrolled users' dataset to a small list of candidate users. This list is then used by a strong classifier set, the final stage of the cascade, to formulate the decision. At each stage, the candidate list is generated by a Mahalanobis distance-based match score quality measure. One of the key features of the authors' framework is that each classifier in the ensemble can be designed to use a different modality, thus providing the advantages of a truly multimodal biometric recognition system. In addition, it is one of the first truly multimodal cascaded classifier-based approaches for biometric recognition. The performance of the proposed system is evaluated with both single and multiple modalities to demonstrate the effectiveness of the approach.
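The first cascade stage, pruning the gallery with a Mahalanobis distance over match scores, might look like the following sketch; the score model, covariance estimate, and candidate-list size are assumptions for demonstration:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(1)
gallery_scores = rng.normal(size=(100, 3))  # per-user weak-classifier scores
probe_scores = rng.normal(size=3)           # scores for the probe sample

# Mahalanobis distance needs the inverse covariance of the score distribution.
cov_inv = np.linalg.inv(np.cov(gallery_scores, rowvar=False))
dists = np.array([mahalanobis(probe_scores, g, cov_inv) for g in gallery_scores])

# Keep the k closest users as the candidate list for the strong-classifier stage.
candidates = np.argsort(dists)[:10]
print("candidate users:", candidates)
```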
Analysis of Hand Vein Images Using Hybrid Techniques
Multimodal biometric recognition is one of the techniques for recognizing a person with enhanced security, and it utilizes more than one biometric trait. One such physiological trait is the vein pattern, an internal feature of the body that cannot easily be forged. This chapter proposes a new methodology for biometric authentication using the dorsal, palm, finger, and wrist veins of the hand. These vein modalities are analyzed in both the spatial domain and the frequency domain. In the spatial domain, a modified 2D Gabor filter is used for feature extraction, and the resulting features are fused at both the feature level and the score level for further analysis. Similarly, in the frequency domain, a contourlet transform is used for feature extraction, a multiresolution singular value decomposition technique fuses these features, and classification is then performed with a support vector machine classifier. Experimental results show that the proposed system achieves a higher accuracy of 96.66% with lower false acceptance and false rejection rates, demonstrating its efficiency.
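Score-level fusion of two vein matchers typically normalizes each matcher's scores and combines them with a simple rule. An illustrative sum-rule sketch with invented scores:

```python
import numpy as np

def min_max(s):
    # Bring each matcher's scores to a common [0, 1] range before fusing.
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-8)

palm_scores = min_max([0.62, 0.91, 0.40])   # similarity to 3 enrolled users
wrist_scores = min_max([0.55, 0.80, 0.35])

fused = palm_scores + wrist_scores          # simple sum rule
print("accepted identity:", int(np.argmax(fused)))
```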
Deep Learning Approach for Multimodal Biometric Recognition System Based on Fusion of Iris, Face, and Finger Vein Traits
With the increasing demand for information security and security regulations all over the world, biometric recognition technology has been widely adopted in everyday life. In this regard, multimodal biometrics technology has gained interest and become popular due to its ability to overcome a number of significant limitations of unimodal biometric systems. In this paper, a new multimodal biometric human identification system is proposed, based on a deep learning algorithm that recognizes humans using the biometric modalities of iris, face, and finger vein. The system is built on convolutional neural networks (CNNs), which extract features and classify images via a softmax classifier. Three CNN models were combined: one for iris, one for face, and one for finger vein. To build each CNN model, the well-known pretrained VGG-16 model was used, the Adam optimization method was applied, and categorical cross-entropy was used as the loss function. Techniques to avoid overfitting, such as image augmentation and dropout, were applied. To fuse the CNN models, feature-level and score-level fusion approaches were employed to explore the influence of the fusion approach on recognition performance. The performance of the proposed system was empirically evaluated through several experiments on the SDUMLA-HMT multimodal biometrics dataset. The results demonstrate that using three biometric traits in a biometric identification system obtains better results than using one or two traits. The results also show that the approach comfortably outperforms other state-of-the-art methods, achieving an accuracy of 99.39% with feature-level fusion and an accuracy of 100% with different methods of score-level fusion.
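The described architecture, one pretrained VGG-16 branch per modality with feature-level fusion and a softmax head, can be outlined in Keras. Input sizes, the dropout rate, and the 106-way output (assumed to match the SDUMLA-HMT subject count) are assumptions; renaming each backbone avoids Keras layer-name clashes:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def branch(name):
    # Pretrained ImageNet weights; global-average pooling replaces the FC top.
    base = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))
    base._name = name  # workaround: give each backbone a distinct name
    return base

iris_b, face_b, vein_b = branch("iris"), branch("face"), branch("vein")
inputs = [tf.keras.Input((224, 224, 3)) for _ in range(3)]
feats = [b(x) for b, x in zip((iris_b, face_b, vein_b), inputs)]

fused = layers.Concatenate()(feats)           # feature-level fusion
fused = layers.Dropout(0.5)(fused)            # overfitting countermeasure
out = layers.Dense(106, activation="softmax")(fused)  # assumed subject count

model = Model(inputs, out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```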