Catalogue Search | MBRL
176 result(s) for "Vehicle Face Recognition"
Learning an Evolutionary Embedding via Massive Knowledge Distillation
2020
Knowledge distillation methods aim to transfer knowledge from a large, powerful teacher network to a small, compact student network. These methods often focus on closed-set classification problems and on matching features between the teacher and student networks from a single sample. However, many real-world classification problems are open-set. This paper proposes an Evolutionary Embedding Learning (EEL) framework to learn a fast and accurate student network for open-set problems via massive knowledge distillation. First, we revisit the formulation of canonical knowledge distillation and adapt it to open-set problems with massive classes. Second, by introducing an angular constraint, a novel correlated embedding loss (CEL) is proposed to match the embedding spaces of the teacher and student networks from a global perspective. Lastly, we propose a simple yet effective paradigm for developing a fast and accurate student network via knowledge distillation. We show that an accelerated student network can be implemented without sacrificing accuracy compared with its teacher network. The experimental results are encouraging: EEL achieves better performance than other state-of-the-art methods on various large-scale open-set problems, including face recognition, vehicle re-identification, and person re-identification.
Journal Article
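The two ingredients named in the abstract above, canonical softened-logit distillation and an angular (cosine) constraint between teacher and student embeddings, can be sketched in a few lines. This is an illustrative sketch only, not the authors' EEL implementation; the function names and the temperature value are assumptions:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax used in canonical knowledge distillation."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between the softened teacher and student distributions."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_t, p_s))

def cosine_embedding_loss(student_emb, teacher_emb):
    """Angular-style constraint: penalise the angle between the two embeddings
    (0 when they point the same way, 1 when orthogonal)."""
    dot = sum(s * t for s, t in zip(student_emb, teacher_emb))
    norm_s = math.sqrt(sum(s * s for s in student_emb))
    norm_t = math.sqrt(sum(t * t for t in teacher_emb))
    return 1.0 - dot / (norm_s * norm_t)
```

In practice the two terms would be summed with a weighting factor; the paper's CEL additionally matches embeddings globally across many samples, which this per-pair sketch does not capture.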
What influences attitudes about artificial intelligence adoption: Evidence from U.S. local officials
2021
Rapid advances in machine learning and related techniques have increased optimism about self-driving cars, autonomous surgery, and other uses of artificial intelligence (AI). But adoption of these technologies is not simply a matter of breakthroughs in the design and training of algorithms. Regulators around the world will have to make a litany of choices about law and policy surrounding AI. To advance knowledge of how they will make these choices, we draw on a unique survey of 690 local officials in the United States, a representative sample of U.S. local officials. Given the decentralized structure of the United States, these officials will make many of the decisions about AI adoption, from government use to regulation. The results show greater support for autonomous vehicles than for autonomous surgery. Moreover, those who used ridesharing apps prior to the COVID-19 pandemic are significantly more supportive of autonomous vehicles. We also find that self-reported familiarity with AI is correlated with increased approval of AI uses in a variety of areas, including facial recognition, natural disaster impact planning, and even military surveillance. Relatedly, those who expressed greater opposition to AI adoption also appear more concerned about trade-offs between privacy and information, and about bias in algorithms. Finally, the explanatory logic used by respondents varies by gender and prior experience with AI, which we demonstrate with quantitative text analysis.
Journal Article
Advancing driver fatigue detection in diverse lighting conditions for assisted driving vehicles with enhanced facial recognition technologies
2024
Against the backdrop of increasingly mature intelligent driving assistance systems, effective monitoring of driver alertness during long-distance driving becomes especially crucial. This study introduces a novel method for driver fatigue detection aimed at enhancing the safety and reliability of intelligent driving assistance systems. The core of this method lies in the integration of advanced facial recognition technology using deep convolutional neural networks (CNN), particularly suited for varying lighting conditions in real-world scenarios, significantly improving the robustness of fatigue detection. Innovatively, the method incorporates emotion state analysis, providing a multi-dimensional perspective for assessing driver fatigue. It adeptly identifies subtle signs of fatigue in rapidly changing lighting and other complex environmental conditions, thereby strengthening traditional facial recognition techniques. Validation on two independent experimental datasets, specifically the Yawn and YawDDR datasets, reveals that our proposed method achieves a higher detection accuracy, with an impressive 95.3% on the YawDDR dataset, compared to 90.1% without the implementation of Algorithm 2. Additionally, our analysis highlights the method’s adaptability to varying brightness levels, improving detection accuracy by up to 0.05% in optimal lighting conditions. Such results underscore the effectiveness of our advanced data preprocessing and dynamic brightness adaptation techniques in enhancing the accuracy and computational efficiency of fatigue detection systems. These achievements not only showcase the potential application of advanced facial recognition technology combined with emotional analysis in autonomous driving systems but also pave new avenues for enhancing road safety and driver welfare.
Journal Article
On the use of Action Units and fuzzy explanatory models for facial expression recognition
by Peregrina-Barreto, Hayde; Morales-Vargas, E.; Reyes-García, C. A.
in Biology and Life Sciences; Computer applications; Decision making
2019
Facial expression recognition is the automatic identification of the affective states of a subject by computational means. It is used in many applications, such as security, human-computer interaction, driver safety, and health care. Although many works tackle facial expression recognition, and their discriminative power may be acceptable, current solutions have limited explicative power, which is insufficient for certain applications, such as facial rehabilitation. Our aim is to alleviate this limitation by exploiting explainable fuzzy models over sequences of frontal face images. The proposed model uses appearance features to describe facial expressions in terms of facial movements, giving a detailed explanation of which movements are present in the face and why the model is making a decision. The model architecture was selected to preserve the semantic meaning of the detected facial movements. The proposed model can discriminate between the seven basic facial expressions, obtaining an average accuracy of 90.8±14%, with a maximum value of 92.9±28%.
Journal Article
Artificial Intelligence in Pharmacovigilance: An Introduction to Terms, Concepts, Applications, and Limitations
The tools of artificial intelligence (AI) have enormous potential to enhance activities in pharmacovigilance. Pharmacovigilance experts need not be AI experts, but they should know enough about AI to explore the possibilities of collaboration with those who are. Modern concepts of AI date from Alan Turing's work, especially his paper on "the imitation game", in the late 1940s and early 1950s. Its scope today includes computational skills, including the formulation of mathematical proofs; visual perception, including facial recognition and virtual reality; decision making by expert systems; aspects of language, such as language processing, speech recognition, creative composition, and translation; and combinations of these, e.g. in self-driving vehicles. Machines can be programmed with the ability to learn, using neural networks that mimic cognitive actions of the human brain, leading to deep structural learning. Limitations of AI include difficulties with language, arising from the need to understand context and interpret ambiguities, which particularly affect translation, and inadequacies of databases, requiring careful preparation and curation. New techniques may cause unforeseen difficulties via unexpected malfunctioning. Relevant terms and concepts include different types of machine learning, neural networks, natural language processing, ontologies, and expert systems. Adoption of the tools of AI in pharmacovigilance has been slow. Machine learning, in conjunction with natural language processing and data mining, to study adverse drug reactions in databases such as those found in electronic health records, claims databases, and social media, has the potential to enhance the characterization of known adverse effects and reactions and detect new signals.
Journal Article
Deep learning-based face detection and recognition on drones
by Rostami, Mohsen; Parvin, Hashem; Farajollahi, Amirhamzeh
in Accuracy; Algorithms; Artificial Intelligence
2024
Unmanned aerial vehicles, also known as drones, are aircraft that can comfortably search locations that are excessively dangerous or difficult for humans and capture data from a bird's-eye view. Enabling unmanned aerial vehicles to detect and recognize humans on the ground is essential for various applications, such as remote monitoring, people search, and surveillance. Current face detection and recognition models can detect or recognize faces from unmanned aerial vehicles only within certain limits of height, angle, and distance, and struggle mainly where drones take images from high altitude or long distance. In the present paper, we propose a novel face detection and recognition model on drones to improve the performance of face recognition when query images are taken from high altitudes or long distances and therefore show little facial information. Moreover, we employ deep neural networks to perform these tasks and reach top performance. Experimental evaluation of the proposed framework against state-of-the-art models on the DroneFace dataset demonstrates that our method attains competitive accuracy on both the recognition and detection protocols.
Journal Article
Face mask identification with enhanced cuckoo optimization and deep learning-based faster regional neural network
by Pandey, Binay Kumar; Pandey, Digvijay; Lelisho, Mesfin Esayas
in 631/114/1314; 631/114/1564; 692/699/255
2024
A mask identification and social distance monitoring system using Unmanned Aerial Vehicles (UAVs) outdoors has been proposed for a health establishment. The approach performs surveillance of the surrounding area using cameras installed in UAVs together with Internet of Things technologies, and the captured images are useful for tracking the entire environment. However, raw images from unmanned aerial vehicles show variable visual quality in an uncontrolled environment, making face-mask detection and recognition harder. Each UAV picture is therefore first converted to grayscale and its contrast is amplified using Optimum Wavelet-Based Masking and the Enhanced Cuckoo Methodology (ECM). From the contrast-enhanced image, Gabor Transform (GT) and Stroke Width Transform (SWT) methods derive attributes that help categorise mask-wearers and non-mask-wearers. Using the retrieved attributes, a Weighted Naive Bayes Classification (WNBC) detects masks in the images. Additionally, a deep neural network, the Faster Region-Based Convolutional Neural Network (R-CNN) algorithm combined with Adaptive Galactic Swarm Optimization (AGSO), is used to identify appropriate and incorrect face-mask wear in images, as well as to monitor social distancing among individuals in crowded areas. When the system recognises unmasked individuals, it sends their information to the doctor and the nearby police station, and the unmanned aerial vehicle's automated system alerts people via speakers to ensure social spacing. The data, drawn from GitHub and Kaggle, cover a large proportion of appropriate and incorrect face-mask wear, with a training repository of 16,000 images and a testing set of 12,751 images. To enhance the model's learning, 10-fold cross-validation is used. Precision, recall, F1-score, and speed are then measured to determine the efficacy of the suggested approach.
Journal Article
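The pipeline above begins with grayscale conversion followed by contrast amplification. The sketch below shows only that generic first stage, using standard luma weights and a plain linear contrast stretch as a stand-in; the paper's actual wavelet-based masking and cuckoo-optimised enhancement are not reproduced here, and both function names are assumptions:

```python
def to_grayscale(rgb_pixels):
    """Luma conversion (ITU-R BT.601 weights), the usual first step
    before contrast enhancement."""
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]

def stretch_contrast(gray, lo=0.0, hi=255.0):
    """Linear contrast stretch: map the observed min/max onto [lo, hi].
    A simple stand-in for the paper's ECM-based enhancement."""
    g_min, g_max = min(gray), max(gray)
    if g_max == g_min:
        return [lo for _ in gray]  # flat image: nothing to stretch
    scale = (hi - lo) / (g_max - g_min)
    return [lo + (g - g_min) * scale for g in gray]
```

Feature extraction (GT, SWT) and the WNBC classifier would operate on the output of `stretch_contrast`.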
Driver Distraction Identification with an Ensemble of Convolutional Neural Networks
by Eraqi, Hesham M.; Moustafa, Mohamed N.; Saad, Mohamed H.
in Accidents; Artificial neural networks; Cellular telephones
2019
The World Health Organization (WHO) reported 1.25 million deaths yearly due to road traffic accidents worldwide, and the number has been continuously increasing over the last few years. Nearly a fifth of these accidents are caused by distracted drivers. Existing work on distracted-driver detection is concerned with a small set of distractions (mostly cell phone usage), and unreliable ad hoc methods are often used. In this paper, we present the first publicly available dataset for driver distraction identification with more distraction postures than existing alternatives. In addition, we propose a reliable deep learning-based solution that achieves 90% accuracy. The system consists of a genetically weighted ensemble of convolutional neural networks; we show that weighting an ensemble of classifiers with a genetic algorithm yields better classification confidence. We also study the effect of different visual elements in distraction detection by means of face and hand localization and skin segmentation. Finally, we present a thinned version of our ensemble that achieves 84.64% classification accuracy and operates in a real-time environment.
Journal Article
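The abstract above describes weighting an ensemble of classifiers with a genetic algorithm. A toy version of that idea, evolving non-negative weights over per-classifier class probabilities, can be sketched as follows. This is a minimal illustration of the general technique, not the authors' system; all names and hyperparameters are assumptions:

```python
import random

def ensemble_predict(prob_lists, weights):
    """Weighted sum of per-classifier class probabilities; argmax wins."""
    n_classes = len(prob_lists[0])
    scores = [sum(w * p[c] for w, p in zip(weights, prob_lists))
              for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: scores[c])

def fitness(weights, predictions, labels):
    """Fraction of samples the weighted ensemble classifies correctly."""
    correct = sum(
        ensemble_predict(sample_probs, weights) == label
        for sample_probs, label in zip(predictions, labels))
    return correct / len(labels)

def evolve_weights(predictions, labels, n_classifiers,
                   generations=30, pop_size=20, seed=0):
    """Tiny genetic search: elitist selection, averaging crossover,
    Gaussian mutation, weights clipped to stay non-negative."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_classifiers)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, predictions, labels), reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
            children.append([max(0.0, w) for w in child])
        pop = elite + children
    return max(pop, key=lambda w: fitness(w, predictions, labels))
```

With a reliable and an unreliable classifier, the search quickly learns to down-weight the unreliable one; the paper applies the same principle to an ensemble of convolutional networks.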
Car expertise does not compete with face expertise during ensemble coding
2021
When objects from two categories of expertise (e.g., faces and cars in dual car/face experts) are processed simultaneously, competition occurs across a variety of tasks. Here, we investigate whether competition between face and car processing also occurs during ensemble coding. The relationship between single object recognition and ensemble coding is debated, but if ensemble coding relies on the same ability as object recognition, we expect cars to interfere with ensemble coding of faces as a function of car expertise. We measured the ability to judge the variability in identity of arrays of faces, in the presence of task-irrelevant distractors (cars or novel objects). On each trial, participants viewed two sequential arrays containing four faces and four distractors, judging which array was the more diverse in terms of face identity. We measured participants’ car expertise, object recognition ability, and face recognition ability. Using Bayesian statistics, we found evidence against competition as a function of car expertise during ensemble coding of faces. Face recognition ability predicted ensemble judgments for faces, regardless of the category of task-irrelevant distractors. The result suggests that ensemble coding is not susceptible to competition between different domains of similar expertise, unlike single-object recognition.
Journal Article