Catalogue Search | MBRL
Explore the vast range of titles available.
126,525 result(s) for "COMPUTERS / Computer Vision"
Introduction to EEG- and speech-based emotion recognition
by Mehrotra, Suresh C.; Gawali, Bharti W.; Abhang, Priyanka A.
in Brain-computer interfaces; Electroencephalography; Emotions
2016
Introduction to EEG- and Speech-Based Emotion Recognition Methods examines the background, methods, and utility of using electroencephalograms (EEGs) to detect and recognize different emotions. By incorporating these methods into brain-computer interfaces (BCIs), we can achieve more natural, efficient communication between humans and computers.
Human-AI Teaming
by National Academies of Sciences, Engineering, and Medicine; Committee on Human-System Integration Research Topics for the 711th Human Performance Wing of the Air Force Research Laboratory; Board on Human-Systems Integration
in Artificial intelligence; Human-computer interaction; Technology
2022
Although artificial intelligence (AI) has many potential benefits, it has also been shown to face a number of challenges to successful performance in complex real-world environments such as military operations, including brittleness, perceptual limitations, hidden biases, and the lack of a model of causation important for understanding and predicting future events. These limitations mean that AI will remain inadequate for operating on its own in many complex and novel situations for the foreseeable future, and that AI will need to be carefully managed by humans to achieve its desired utility.
Human-AI Teaming: State-of-the-Art and Research Needs examines the factors that are relevant to the design and implementation of AI systems with respect to human operations. This report provides an overview of the state of research on human-AI teaming to determine gaps and future research priorities and explores critical human-systems integration issues for achieving optimal performance.
Artificial Intelligence in Radiation Therapy
2023
This textbook covers the mathematical foundations of artificial intelligence algorithms as well as the clinical adaptation and contribution of AI in radiotherapy. More experienced practitioners, researchers, and members of medical physics communities, such as AAPM, ASTRO, and ESTRO, will find this book extremely useful.
Going Deeper than Tracking: A Survey of Computer-Vision Based Recognition of Animal Pain and Emotions
by Carreira Lencioni, Gabriel; Kjellström, Hedvig; Salah, Albert Ali
in Affect (Psychology); Animal behavior; Animal welfare
2023
Advances in animal motion tracking and pose recognition have been a game changer in the study of animal behavior. Recently, an increasing number of works go ‘deeper’ than tracking and address automated recognition of animals’ internal states, such as emotions and pain, with the aim of improving animal welfare, making this a timely moment for a systematization of the field. This paper provides a comprehensive survey of computer vision-based research on recognition of pain and emotional states in animals, addressing both facial and bodily behavior analysis. We summarize the efforts presented so far within this topic, classifying them across different dimensions, highlight challenges and research gaps, and provide best-practice recommendations and future directions for advancing the field.
Journal Article
The MVTec Anomaly Detection Dataset: A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection
by Steger, Carsten; Fauser, Michael; Batzner, Kilian
in Annotations; Anomalies; Artificial neural networks
2021
The detection of anomalous structures in natural image data is of utmost importance for numerous tasks in the field of computer vision. The development of methods for unsupervised anomaly detection requires data on which to train and evaluate new approaches and ideas. We introduce the MVTec anomaly detection dataset containing 5354 high-resolution color images of different object and texture categories. It contains normal, i.e., defect-free images intended for training and images with anomalies intended for testing. The anomalies manifest themselves in the form of over 70 different types of defects such as scratches, dents, contaminations, and various structural changes. In addition, we provide pixel-precise ground truth annotations for all anomalies. We conduct a thorough evaluation of current state-of-the-art unsupervised anomaly detection methods based on deep architectures such as convolutional autoencoders, generative adversarial networks, and feature descriptors using pretrained convolutional neural networks, as well as classical computer vision methods. We highlight the advantages and disadvantages of multiple performance metrics as well as threshold estimation techniques. This benchmark indicates that methods that leverage descriptors of pretrained networks outperform all other approaches and deep-learning-based generative models show considerable room for improvement.
Journal Article
The Atlas of AI
by Crawford, Kate
in Artificial intelligence; Artificial intelligence -- Moral and ethical aspects; Artificial intelligence -- Political aspects
2021
The hidden costs of artificial intelligence, from natural resources and labor to privacy and freedom. What happens when artificial intelligence saturates political life and depletes the planet? How is AI shaping our understanding of ourselves and our societies? In this book Kate Crawford reveals how this planetary network is fueling a shift toward undemocratic governance and increased inequality. Drawing on more than a decade of research, Crawford reveals how AI is a technology of extraction: from the energy and minerals needed to build and sustain its infrastructure, to the exploited workers behind "automated" services, to the data AI collects from us. Rather than taking a narrow focus on code and algorithms, Crawford offers us a political and a material perspective on what it takes to make artificial intelligence and where it goes wrong. While technical systems present a veneer of objectivity, they are always systems of power. This is an urgent account of what is at stake as technology companies use artificial intelligence to reshape the world.
Feature extraction & image processing for computer vision
by Aguado, Alberto S.; Nixon, Mark S.
in Computer vision; Computer vision -- Mathematics; Digital techniques
2012
Feature Extraction and Image Processing for Computer Vision is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated.
Scene Text Detection and Recognition: The Deep Learning Era
2021
With the rise and development of deep learning, computer vision has been tremendously transformed and reshaped. As an important research area in computer vision, scene text detection and recognition has inevitably been influenced by this wave of revolution, consequently entering the era of deep learning. In recent years, the community has witnessed substantial advancements in mindset, methodology and performance. This survey aims to summarize and analyze the major changes and significant progress in scene text detection and recognition in the deep learning era. Through this article, we aim to: (1) introduce new insights and ideas; (2) highlight recent techniques and benchmarks; and (3) look ahead into future trends. Specifically, we will emphasize the dramatic differences brought by deep learning and the remaining grand challenges. We expect that this review paper will serve as a reference for researchers in this field. Related resources are also collected in our GitHub repository (https://github.com/Jyouhou/SceneTextPapers).
Journal Article
Rain Rendering for Evaluating and Improving Robustness to Bad Weather
by Tremblay, Maxime; de Charette, Raoul; Lalonde, Jean-François
in Algorithms; Atmospheric models; Computer vision
2021
Rain fills the atmosphere with water particles, which breaks the common assumption that light travels unaltered from the scene to the camera. While it is well known that rain affects computer vision algorithms, quantifying its impact is difficult. In this context, we present a rain rendering pipeline that enables the systematic evaluation of common computer vision algorithms under controlled amounts of rain. We present three different ways to add synthetic rain to existing image datasets: completely physics-based, completely data-driven, and a combination of both. The physics-based rain augmentation combines a physical particle simulator and accurate rain photometric modeling. We validate our rendering methods with a user study, demonstrating that our rain is judged as much as 73% more realistic than the state of the art. Using our generated rain-augmented KITTI, Cityscapes, and nuScenes datasets, we conduct a thorough evaluation of object detection, semantic segmentation, and depth estimation algorithms and show that their performance degrades in bad weather: on the order of 15% for object detection and 60% for semantic segmentation, with a 6-fold increase in depth estimation error. Fine-tuning on our augmented synthetic data results in improvements of 21% on object detection, 37% on semantic segmentation, and 8% on depth estimation.
Journal Article
VPR-Bench: An Open-Source Visual Place Recognition Evaluation Framework with Quantifiable Viewpoint and Appearance Change
by Milford, Michael; Zaffar, Mubariz; Kooij, Julian
in Autonomous navigation; Computer vision; Critical components
2021
Visual place recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance conditions and viewpoint changes and with computational constraints. VPR is related to the concepts of localisation, loop closure and image retrieval, and is a critical component of many autonomous navigation systems, ranging from autonomous vehicles to drones and computer vision systems. While the concept of place recognition has been around for many years, VPR research has grown rapidly as a field over the past decade, due to improving camera hardware and the potential of deep learning-based techniques, and has become a widely studied topic in both the computer vision and robotics communities. This growth, however, has led to fragmentation and a lack of standardisation in the field, especially concerning performance evaluation. Moreover, the notion of viewpoint and illumination invariance of VPR techniques has largely been assessed qualitatively, and hence ambiguously, in the past. In this paper, we address these gaps through a new comprehensive open-source framework for assessing the performance of VPR techniques, dubbed “VPR-Bench”. VPR-Bench (open-sourced at: https://github.com/MubarizZaffar/VPR-Bench) introduces two much-needed capabilities for VPR researchers: firstly, it contains a benchmark of 12 fully-integrated datasets and 10 VPR techniques, and secondly, it integrates a comprehensive variation-quantified dataset for quantifying viewpoint and illumination invariance. We apply and analyse popular evaluation metrics for VPR from both the computer vision and robotics communities, and discuss how these different metrics complement and/or replace each other, depending upon the underlying applications and system requirements.
Our analysis reveals that no universal SOTA VPR technique exists, since: (a) state-of-the-art (SOTA) performance is achieved by 8 of the 10 techniques on at least one dataset, and (b) the SOTA technique in one community does not necessarily yield SOTA performance in the other, given the differences in datasets and metrics. Furthermore, we identify key open challenges, since: (c) all 10 techniques suffer greatly in perceptually-aliased and less-structured environments, (d) all techniques suffer from viewpoint variance, where lateral change has less effect than 3D change, and (e) directional illumination change has more adverse effects on matching confidence than uniform illumination change. We also present detailed meta-analyses regarding the roles of varying ground truths, platforms, application requirements and technique parameters. Finally, VPR-Bench provides a unified implementation to deploy these VPR techniques, metrics and datasets, and is extensible through templates.
Journal Article