Catalogue Search | MBRL
Explore the vast range of titles available.
1,455 result(s) for "Data structures (Computer science) Graphic methods."
gpuRIR: A python library for room impulse response simulation with GPU acceleration
2021
The Image Source Method (ISM) is one of the most widely employed techniques for calculating acoustic Room Impulse Responses (RIRs); however, its computational complexity grows quickly with the reverberation time of the room, and its computation time can be prohibitive for applications where a huge number of RIRs are needed. In this paper, we present a new implementation that dramatically improves the computation speed of the ISM by using Graphics Processing Units (GPUs) to parallelize both the simulation of multiple RIRs and the computation of the images inside each RIR. Additional speedups were achieved by exploiting the mixed-precision capabilities of newer GPUs and by using lookup tables. We provide a Python library under GNU license that can be easily used without any knowledge of GPU programming, and we show that it is about 100 times faster than other state-of-the-art CPU libraries. It may become a powerful tool for many applications that need to perform a large number of acoustic simulations, such as training machine learning systems for audio signal processing, or for real-time room acoustics simulations for immersive multimedia systems, such as augmented or virtual reality.
Journal Article
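A minimal usage sketch of the library described above is shown below; the helper names (beta_SabineEstimation, att2t_SabineEstimator, t2n, simulateRIR) follow gpuRIR's published examples, but the exact signatures, defaults, and output shape should be checked against the project's documentation.

```python
# Minimal sketch of simulating RIRs with gpuRIR (names per the library's examples;
# check the project's documentation for exact signatures).
import numpy as np
import gpuRIR

room_sz = [4.0, 5.0, 2.8]                      # room dimensions in metres
pos_src = np.array([[1.0, 2.0, 1.5]])          # one source
pos_rcv = np.array([[2.5, 2.0, 1.5],
                    [3.0, 3.5, 1.5]])          # two receivers
T60 = 0.6                                      # target reverberation time [s]
fs = 16000                                     # sampling frequency [Hz]

beta = gpuRIR.beta_SabineEstimation(room_sz, T60)      # wall reflection coefficients
Tdiff = gpuRIR.att2t_SabineEstimator(15.0, T60)        # switch to diffuse model at 15 dB decay
Tmax = gpuRIR.att2t_SabineEstimator(60.0, T60)         # stop simulation at 60 dB decay
nb_img = gpuRIR.t2n(Tdiff, room_sz)                    # image sources per dimension

# Expected result shape: (n_sources, n_receivers, n_samples), computed on the GPU.
rirs = gpuRIR.simulateRIR(room_sz, beta, pos_src, pos_rcv,
                          nb_img, Tmax, fs, Tdiff=Tdiff)
```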
A survey of depth and inertial sensor fusion for human action recognition
2017
A number of review or survey articles have previously appeared on human action recognition where either vision sensors or inertial sensors are used individually. Considering that each sensor modality has its own limitations, a number of previously published papers have shown that the fusion of vision and inertial sensor data improves the accuracy of recognition. This survey article provides an overview of the recent investigations where both vision and inertial sensors are used together and simultaneously to perform human action recognition more effectively. The thrust of this survey is on the utilization of depth cameras and inertial sensors, as these two types of sensors are cost-effective, commercially available, and, more significantly, both provide 3D human action data. An overview of the components necessary to achieve fusion of data from depth and inertial sensors is provided. In addition, a review of the publicly available datasets that include depth and inertial data simultaneously captured by both sensor types is presented.
Journal Article
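As a toy illustration of the fusion idea surveyed above (not any specific scheme from the article), the sketch below performs decision-level fusion: one classifier is trained on depth features, another on inertial features, and their class probabilities are averaged. The function name and feature inputs are hypothetical.

```python
# Hypothetical sketch of decision-level fusion of depth and inertial modalities:
# train one classifier per modality and average their per-class probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse_depth_inertial(X_depth, X_inertial, y, X_depth_test, X_inertial_test):
    """Late (decision-level) fusion: average the class scores of two models."""
    clf_depth = RandomForestClassifier(n_estimators=100).fit(X_depth, y)
    clf_inertial = RandomForestClassifier(n_estimators=100).fit(X_inertial, y)
    proba = 0.5 * clf_depth.predict_proba(X_depth_test) \
          + 0.5 * clf_inertial.predict_proba(X_inertial_test)
    return clf_depth.classes_[np.argmax(proba, axis=1)]   # fused action labels
```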
Robust Visual Tracking via Structured Multi-Task Sparse Learning
by Liu, Si; Ahuja, Narendra; Zhang, Tianzhu
in Algorithmics. Computability. Computer arithmetics; Algorithms; Analysis
2013
In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing ℓ_{p,q} mixed norms (p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular ℓ1 tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intell 33(11):2259–2272, 2011) is a special case of our MTT formulation (denoted as the ℓ11 tracker) when p = q = 1.
Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework as S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers.
Journal Article
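A simplified sketch of the joint-sparsity idea behind MTT-style trackers is given below; it solves an ℓ_{2,1}-regularized representation problem with plain proximal gradient rather than the paper's Accelerated Proximal Gradient, and all variable names are illustrative.

```python
# Sketch of the joint-sparse representation step: all particle representations X
# are learned together under an l_{2,1} mixed norm,
#   min_X  0.5 * ||D X - Y||_F^2 + lam * ||X||_{2,1},
# solved here with plain proximal gradient (the paper's APG momentum step is omitted).
import numpy as np

def prox_l21(X, t):
    """Row-wise soft-thresholding: proximal operator of t * ||X||_{2,1}."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return scale * X

def mtt_representation(D, Y, lam=0.1, n_iter=200):
    """D: dictionary of templates (d x k), Y: particle observations (d x n).
    Returns the jointly sparse codes X (k x n)."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ X - Y)               # gradient of the smooth data term
        X = prox_l21(X - step * grad, step * lam)
    return X
```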
Hand gesture recognition with jointly calibrated Leap Motion and depth sensor
2016
Novel 3D acquisition devices like depth cameras and the Leap Motion have recently reached the market. Depth cameras make it possible to obtain a complete 3D description of the framed scene, while the Leap Motion sensor is a device explicitly targeted at hand gesture recognition and provides only a limited set of relevant points. This paper shows how to jointly exploit the two types of sensors for accurate gesture recognition. An ad-hoc solution for the joint calibration of the two devices is presented first. Then a set of novel feature descriptors is introduced both for the Leap Motion and for depth data. Various schemes based on the distances of the hand samples from the centroid, on the curvature of the hand contour, and on the convex hull of the hand shape are employed, and the use of Leap Motion data to aid feature extraction is also considered. The proposed feature sets are fed to two different classifiers, one based on multi-class SVMs and one exploiting Random Forests. Different feature selection algorithms have also been tested in order to reduce the complexity of the approach. Experimental results show that very high accuracy can be obtained with the proposed method. The current implementation is also able to run in real time.
Journal Article
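The sketch below illustrates one of the feature families mentioned above, distances of hand-contour samples from the centroid, fed to a multi-class SVM via scikit-learn; the resampling length and classifier settings are assumptions, not values from the paper.

```python
# Hypothetical sketch of a centroid-distance feature for hand gesture recognition:
# sample the hand contour, measure distances from the centroid, and resample to a
# fixed-length, scale-normalised profile suitable for an SVM.
import numpy as np
from sklearn.svm import SVC

def centroid_distance_features(contour_xy, n_bins=64):
    """contour_xy: (N, 2) array of hand-contour points. Returns a fixed-length profile."""
    centroid = contour_xy.mean(axis=0)
    d = np.linalg.norm(contour_xy - centroid, axis=1)
    idx = np.linspace(0, len(d) - 1, n_bins).astype(int)   # resample to n_bins samples
    d = d[idx]
    return d / (d.max() + 1e-12)                            # normalise for scale invariance

# Training on precomputed features (X: feature matrix, y: gesture labels):
# clf = SVC(kernel="rbf", C=10.0).fit(X, y); predictions = clf.predict(X_test)
```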
Principles of model checking
by Katoen, Joost-Pieter; Baier, Christel
in Computer software; Computer software -- Verification; Computer systems
2008
A comprehensive introduction to the foundations of model checking, a fully automated technique for finding flaws in hardware and software; with extensive examples and both practical and theoretical exercises.
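As a toy illustration of what fully automated flaw finding means in practice, the sketch below exhaustively explores a small transition system and returns a counterexample path if an invariant can be violated; it is a textbook-style reachability check, not code from the book.

```python
# Toy illustration of explicit-state model checking: breadth-first exploration of a
# transition system, reporting a counterexample trace if a "bad" state is reachable.
from collections import deque

def check_invariant(initial, transitions, is_bad):
    """transitions: dict mapping a state to an iterable of successor states."""
    parent, frontier = {initial: None}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if is_bad(s):                          # invariant violated: rebuild the trace
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))        # counterexample from initial state
        for t in transitions.get(s, ()):
            if t not in parent:
                parent[t] = s
                frontier.append(t)
    return None                                 # invariant holds on all reachable states

# Example: a counter that must never reach state 3.
# check_invariant(0, {0: [1], 1: [2], 2: [3], 3: [0]}, lambda s: s == 3)
```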
Towards a better understanding of annotation tools for medical imaging: a survey
by AlGhamdi, Manal; Collado-Mesa, Fernando; Abdel-Mottaleb, Mohamed
in Algorithms; Annotations; Computer Communication Networks
2022
Medical imaging refers to several different technologies that are used to view the human body to diagnose, monitor, or treat medical conditions. It requires significant expertise to efficiently and correctly interpret the images generated by each of these technologies, which among others include radiography, ultrasound, and magnetic resonance imaging. Deep learning and machine learning techniques provide different solutions for medical image interpretation, including those associated with detection and diagnosis. Despite the huge success of deep learning algorithms in image analysis, training algorithms to reach human-level performance in these tasks depends on the availability of large amounts of high-quality training data, including high-quality annotations to serve as ground truth. Different annotation tools have been developed to assist with the annotation process. In this survey, we present the currently available annotation tools for medical imaging, including descriptions of their graphical user interfaces (GUIs) and supporting instruments. The main contribution of this study is to provide an intensive review of the popular annotation tools and show their successful usage in annotating medical imaging datasets, to guide researchers in this area.
Journal Article
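For orientation, the sketch below shows the kind of ground-truth record an annotation tool might export for a medical image, a labeled bounding box serialized to JSON; the field names are purely illustrative and not taken from any tool covered in the survey.

```python
# Hypothetical example of a ground-truth annotation record for a medical image,
# serialised to JSON. Field names are illustrative only.
import json

annotation = {
    "image_id": "case_0042_slice_17",
    "modality": "MRI",
    "annotations": [
        {"label": "lesion", "bbox_xywh": [128, 96, 40, 32], "annotator": "reader_1"},
    ],
}

with open("case_0042_slice_17.json", "w") as f:
    json.dump(annotation, f, indent=2)          # one record per image/slice
```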
A literature review of online handwriting analysis to detect Parkinson’s disease at an early stage
by Aouraghe, Ibtissame; Mrabti, Mostafa; Khaissidi, Ghizlane
in Brain; Cost analysis; Data acquisition
2023
Parkinson’s disease (PD) affects millions of people worldwide and dramatically alters the structure and functions of brain areas, causing a progressive decline of cognitive, functional, and behavioral abilities. These changes in the brain result in the degradation of motor skills. Handwriting is a daily task combining cognitive, kinesthetic, and perceptual-motor abilities; thus, any change in the affected brain areas directly impacts aspects of handwriting. For this purpose, many researchers have studied the possibility of using the handwriting alterations caused by PD as diagnostic signs, in order to develop an autonomous and reliable diagnosis aid system that could robustly detect this pathology at an early stage. Such an intelligent system could help in assessing and controlling the evolution of PD and, consequently, in improving patients’ quality of life. This paper presents a literature review of the most relevant studies conducted in the area of online handwriting analysis to support PD diagnosis, starting with the typical procedure followed, which consists of handwriting data acquisition, the material used, the proposed tasks, feature extraction, and finally data analysis. According to all the investigated studies, dynamic handwriting analysis is a powerful, noninvasive, and low-cost tool to effectively diagnose PD. In conclusion, future directions and open issues are highlighted.
Journal Article
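The sketch below illustrates the feature-extraction step of the typical procedure described above: simple kinematic descriptors computed from a pen trajectory recorded by a digitizing tablet. The specific features and names are assumptions for illustration, not those of any particular study.

```python
# Hypothetical kinematic feature extraction from one online handwriting task:
# velocity, acceleration, and in-air ratio from tablet samples (x, y, timestamp, pressure).
import numpy as np

def kinematic_features(x, y, t, pressure):
    """x, y, t, pressure: 1-D arrays of equal length from one handwriting task."""
    dt = np.diff(t)
    vx, vy = np.diff(x) / dt, np.diff(y) / dt
    speed = np.hypot(vx, vy)                     # instantaneous pen speed
    accel = np.diff(speed) / dt[1:]              # change of speed between samples
    return {
        "mean_speed": float(speed.mean()),
        "speed_variability": float(speed.std()),
        "mean_abs_accel": float(np.abs(accel).mean()),
        "in_air_ratio": float((pressure == 0).mean()),   # fraction of in-air movement
    }
```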
Presentation of a recommender system with ensemble learning and graph embedding: a case on MovieLens
by Rostami, Mehrdad; Forouzandeh, Saman; Berahmand, Kamal
in Classification; Decision making; Decision trees
2021
Information technology has spread widely, and extraordinarily large amounts of data have been made accessible to users, which has made it challenging to select data that accord with user needs. To resolve this issue, recommender systems have emerged, which help users through the process of decision-making and selecting relevant data. A recommender system predicts users’ behavior in order to detect their interests and needs, and it often uses the classification technique for this purpose. Employing a single classifier may not be sufficiently accurate, since not all cases can be examined, which makes the method inappropriate for certain problems. In this research, group classification and the ensemble learning technique were used to increase prediction accuracy in recommender systems. Another issue raised here concerns user analysis. Given the large size of the data and the large number of users, the process of analyzing and predicting user needs (in most cases using a graph representing the relations between users and their selected items) is complicated and cumbersome in recommender systems. Graph embedding was also proposed to resolve this issue: all or part of user behavior can be simulated through the generation of several vectors, resolving the problem of user behavior analysis to a large extent while maintaining high efficiency. In this research, the individuals most similar to the target user were classified using ensemble learning, fuzzy rules, and decision trees, and relevant recommendations were then made to each user with a heterogeneous knowledge graph and embedding vectors. This study was performed on the MovieLens datasets, and the obtained results indicated the high efficiency of the presented method.
Journal Article
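As a minimal stand-in for the embedding idea described above (not the paper's heterogeneous knowledge graph or ensemble pipeline), the sketch below embeds users and items by factorizing the user-item interaction matrix and recommends the unseen items with the highest predicted affinity.

```python
# Minimal embedding-based recommender sketch: truncated SVD of the user-item matrix
# yields user and item vectors; unseen items are ranked by predicted affinity.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def recommend(R, user, k=32, top_n=10):
    """R: (n_users x n_items) NumPy array of 0/1 interactions.
    k must be smaller than both matrix dimensions."""
    U, s, Vt = svds(csr_matrix(R, dtype=float), k=k)   # low-rank user/item embeddings
    scores = (U[user] * s) @ Vt                        # predicted affinity to every item
    scores[R[user] > 0] = -np.inf                      # hide items the user already rated
    return np.argsort(-scores)[:top_n]                 # indices of recommended items
```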
DRCDN: learning deep residual convolutional dehazing networks
by He, Fazhi; Zhang, Shengdong
in Artificial Intelligence; Artificial neural networks; Computer Graphics
2020
Single image dehazing, which is the process of removing haze from a single input image, is an important task in computer vision. This task is extremely challenging because it is massively ill-posed. In this paper, we propose a novel end-to-end deep residual convolutional dehazing network (DRCDN) based on convolutional neural networks for single image dehazing, which consists of two subnetworks: one network is used for recovering a coarse clear image, and the other network is used to refine the result. The DRCDN firstly predicts the coarse clear image via a context aggregation subnetwork, which can capture global structure information. Subsequently, it adopts a novel hierarchical convolutional neural network to further refine the details of the clean image by integrating the local context information. The DRCDN is directly trained using complete images and the corresponding ground-truth haze-free images. Experimental results on synthetic datasets and natural hazy images demonstrate that the proposed method performs favorably against the state-of-the-art methods.
Journal Article
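The sketch below mirrors the two-stage structure described above in PyTorch, a coarse subnetwork with dilated convolutions for context aggregation followed by a residual refinement subnetwork; it is a minimal illustration, not the authors' DRCDN architecture.

```python
# Minimal coarse-to-fine dehazing sketch (illustrative, not the published DRCDN):
# a dilated-convolution subnetwork predicts a coarse clear image, and a refinement
# subnetwork adds a residual correction computed from the hazy input and the coarse result.
import torch
import torch.nn as nn

class CoarseToFineDehazer(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.coarse = nn.Sequential(              # context aggregation via dilated convs
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
        self.refine = nn.Sequential(              # local refinement of the coarse output
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, hazy):
        coarse = self.coarse(hazy)
        residual = self.refine(torch.cat([hazy, coarse], dim=1))
        return coarse + residual                  # refined dehazed image

# Training would minimise, e.g., an L1/L2 loss against the ground-truth haze-free image.
```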