171 result(s) for "Geometry Graphic methods Data processing."
TI-84 Plus graphing calculator for dummies
That TI-84 in your hand is one amazing device. This book will help you unlock all the magic, so that you can graph scatter plots, analyze statistical data, share calculator files with your PC... and much more!
Robust Visual Tracking via Structured Multi-Task Sparse Learning
In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing mixed norms, we regularize the representation problem to enforce joint sparsity and learn the particle representations together. Compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves both tracking performance and overall computational efficiency. Interestingly, we show that the popular ℓ1 tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intell 33(11):2259–2272, 2011) is a special case of our MTT formulation for a particular choice of the mixed norm. Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework S-MTT. The regularized sparse representation problems in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT outperforms MTT, and that both methods consistently outperform state-of-the-art trackers.
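As a concrete illustration of the kind of closed-form update an APG scheme iterates, here is a minimal Python sketch of the proximal operator of the ℓ2,1 mixed norm, a common joint-sparsity regularizer of the type the abstract mentions. The function name and toy matrix are illustrative assumptions, not code from the paper.

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau * ||W||_{2,1} (sum of row norms):
    shrinks each row of W toward zero, zeroing whole rows at once.
    Zeroing entire rows is what enforces *joint* sparsity across the
    per-particle representation tasks (the columns)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * scale

# Toy check: a row with small norm is zeroed entirely,
# a row with large norm is only shrunk toward zero.
W = np.array([[0.1, 0.1],
              [3.0, 4.0]])
P = prox_l21(W, tau=1.0)
```

Inside APG, this prox would alternate with a gradient step on the data-fidelity term, which is what makes each iteration closed-form and cheap.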
Image generator for tabular data based on non-Euclidean metrics for CNN-based classification
Tabular data is the predominant format for statistical analysis and machine learning across domains such as finance, biomedicine, and environmental sciences. However, conventional methods often face challenges when dealing with high dimensionality and complex nonlinear relationships. In contrast, deep learning models, particularly Convolutional Neural Networks (CNNs), are well-suited for automatic feature extraction and achieve high predictive accuracy, but are primarily designed for image-based inputs. This study presents a comparative evaluation of non-Euclidean distance metrics within the Image Generator for Tabular Data (IGTD) framework, which transforms tabular data into image representations for CNN-based classification. While the original IGTD relies on Euclidean distance, we extend the framework to adopt alternative metrics, including one minus correlation, Geodesic distance, Jensen-Shannon distance, Wasserstein distance, and Tropical distance. These metrics are designed to better capture complex, nonlinear relationships among features. Through systematic experiments on both simulated and real-world genomics datasets, we compare the performance of each distance metric in terms of classification accuracy and structural fidelity of the generated images. The results demonstrate that non-Euclidean metrics can significantly improve the effectiveness of CNN-based classification on tabular data. By enabling a more accurate encoding of feature relationships, this approach broadens the applicability of CNNs and offers a flexible, interpretable solution for high-dimensional, structured data across disciplines.
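To make the metric-swapping idea concrete, here is a small Python sketch of two of the listed alternatives, one-minus-correlation and Jensen-Shannon distance, computed between the columns (features) of a data matrix. The function names and toy data are illustrative assumptions; this is not the IGTD implementation.

```python
import numpy as np

def one_minus_corr(X):
    """Distance between features (columns of X): 1 - Pearson correlation.
    Captures linear relationships that raw Euclidean distance misses."""
    C = np.corrcoef(X, rowvar=False)      # feature-by-feature correlation
    return 1.0 - C

def jensen_shannon(X, eps=1e-12):
    """Jensen-Shannon distance between features treated as histograms
    (columns are shifted and normalized to valid probability vectors)."""
    P = X - X.min(axis=0, keepdims=True) + eps
    P = P / P.sum(axis=0, keepdims=True)
    d = X.shape[1]
    D = np.zeros((d, d))
    kl = lambda p, q: np.sum(p * np.log(p / q))   # KL divergence
    for i in range(d):
        for j in range(d):
            m = 0.5 * (P[:, i] + P[:, j])
            D[i, j] = np.sqrt(0.5 * kl(P[:, i], m) + 0.5 * kl(P[:, j], m))
    return D

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
D_corr = one_minus_corr(X)
D_js = jensen_shannon(X)
```

An IGTD-style method would then assign features to pixel positions so that the feature-distance matrix matches a pixel-distance matrix as closely as possible.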
Creating Large-Scale City Models from 3D-Point Clouds: A Robust Approach with Hybrid Representation
We present a novel and robust method for modeling cities from 3D-point data. Our algorithm provides a more complete description than existing approaches by simultaneously reconstructing buildings, trees, and topologically complex ground. A major contribution of our work is the original way of modeling buildings, which guarantees a high generalization level while producing semantized, compact representations. Geometric 3D primitives such as planes, cylinders, spheres, or cones describe regular roof sections, and are combined with mesh patches that represent irregular roof components. The various urban components interact through a non-convex energy minimization problem in which they are propagated under arrangement constraints over a planimetric map. Our approach is experimentally validated on complex buildings and large urban scenes of millions of points, and is compared to state-of-the-art methods.
Graph convolutional networks: a comprehensive review
Graphs naturally appear in numerous application domains, ranging from social analysis and bioinformatics to computer vision. The unique capability of graphs to capture structural relations among data allows us to harvest more insights than analyzing data in isolation. However, learning problems on graphs are often very challenging, because (1) many types of data, such as images and text, are not originally structured as graphs, and (2) for graph-structured data, the underlying connectivity patterns are often complex and diverse. On the other hand, representation learning has achieved great success in many areas. Thereby, a potential solution is to learn the representation of graphs in a low-dimensional Euclidean space such that the graph properties can be preserved. Although tremendous efforts have been made to address the graph representation learning problem, many of them still suffer from shallow learning mechanisms. Deep learning models on graphs (e.g., graph neural networks) have recently emerged in machine learning and related areas, and have demonstrated superior performance on various problems. In this survey, despite the many types of graph neural networks, we conduct a comprehensive review specifically of the emerging field of graph convolutional networks, one of the most prominent graph deep learning models. First, we group existing graph convolutional network models into two categories based on the types of convolutions and highlight some models in detail. Then, we categorize different graph convolutional networks according to their application areas. Finally, we present several open challenges in this area and discuss potential directions for future research.
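For readers new to the area, a single graph-convolution layer in the widely used spectral form (symmetric degree normalization with self-loops) can be sketched in a few lines of Python; the toy graph, features, and weights below are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer in the common spectral form
    H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W): each node averages its
    degree-normalized neighbors' features (plus its own, via the
    self-loop), then applies a shared linear map and a nonlinearity."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # augmented degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Tiny triangle graph, 2-dim node features, 3 output channels.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
H = np.eye(3)[:, :2]
W = np.ones((2, 3))
out = gcn_layer(A, H, W)
```

Stacking a few such layers, each with its own weight matrix, gives the basic deep model the survey's first category covers.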
Rain or Snow Detection in Image Sequences Through Use of a Histogram of Orientation of Streaks
The detection of bad weather conditions is crucial for meteorological centers, especially with the demand from air, sea, and ground traffic management. In this article, a computer-vision-based system is presented that detects the presence of rain or snow. To separate the foreground from the background in image sequences, a classical Gaussian mixture model is used. The foreground model serves to detect rain and snow, since these are dynamic weather phenomena. Selection rules based on photometry and size are proposed in order to select potential rain streaks. Then a Histogram of Orientations of rain or snow Streaks (HOS), estimated with the method of geometric moments, is computed, which is assumed to follow a Gaussian-uniform mixture model. The Gaussian distribution represents the orientation of the rain or the snow, whereas the uniform distribution represents the orientation of the noise. An expectation-maximization algorithm is used to separate these two distributions. Following a goodness-of-fit test, the Gaussian distribution is temporally smoothed, and its amplitude allows deciding on the presence of rain or snow. When rain or snow is detected, the HOS makes it possible to detect the rain or snow pixels in the foreground images and to estimate the intensity of the precipitation. The applications of the method are numerous and include the detection of critical weather conditions, weather observation, improved reliability of video-surveillance systems, and rain rendering.
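The Gaussian-uniform mixture step can be sketched with a plain EM loop in Python. The initialization, constants, and synthetic orientations below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def em_gauss_uniform(theta, iters=50):
    """Fit a Gaussian + uniform mixture to streak orientations in
    [0, pi): the Gaussian captures the dominant rain/snow direction,
    the uniform component absorbs randomly oriented noise streaks."""
    mu, sigma, w = np.median(theta), np.std(theta) + 1e-3, 0.5
    u = 1.0 / np.pi                                   # uniform density
    for _ in range(iters):
        g = np.exp(-0.5 * ((theta - mu) / sigma) ** 2) \
            / (sigma * np.sqrt(2 * np.pi))
        r = w * g / (w * g + (1 - w) * u)             # E-step
        w = r.mean()                                  # M-step
        mu = (r * theta).sum() / r.sum()
        sigma = np.sqrt((r * (theta - mu) ** 2).sum() / r.sum()) + 1e-6
    return mu, sigma, w

# Synthetic orientations: a tight cluster (streaks) plus uniform noise.
rng = np.random.default_rng(1)
theta = np.concatenate([rng.normal(1.2, 0.05, 200),
                        rng.uniform(0.0, np.pi, 50)])
mu, sigma, w = em_gauss_uniform(theta)
```

The fitted weight `w` plays the role of the Gaussian amplitude the abstract mentions: a large `w` signals a dominant common orientation, hence likely precipitation.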
A survey of graph neural networks in various learning paradigms: methods, applications, and challenges
In the last decade, deep learning has reinvigorated the machine learning field. It has solved many problems in computer vision, speech recognition, natural language processing, and other domains with state-of-the-art performances. In these domains, the data is generally represented in the Euclidean space. Various other domains conform to non-Euclidean space, for which a graph is an ideal representation. Graphs are suitable for representing the dependencies and inter-relationships between various entities. Traditionally, handcrafted features for graphs are incapable of providing the necessary inference for various tasks from this complex data representation. Recently, there has been an emergence of employing various advances in deep learning for graph-based tasks (called Graph Neural Networks (GNNs)). This article introduces preliminary knowledge regarding GNNs and comprehensively surveys GNNs in different learning paradigms—supervised, unsupervised, semi-supervised, self-supervised, and few-shot or meta-learning. The taxonomy of each graph-based learning setting is provided with logical divisions of methods falling in the given learning setting. The approaches for each learning task are analyzed from theoretical and empirical standpoints. Further, we provide general architecture design guidelines for building GNN models. Various applications and benchmark datasets are also provided, along with open challenges still plaguing the general applicability of GNNs.
1-Point-RANSAC Structure from Motion for Vehicle-Mounted Cameras by Exploiting Non-holonomic Constraints
This paper presents a new method to estimate the relative motion of a vehicle from images of a single camera. The computational cost of the algorithm is limited only by the feature extraction and matching process, as the outlier removal and motion estimation steps take a fraction of a millisecond on a normal laptop computer. The biggest problem in visual motion estimation is data association; matched points contain many outliers that must be detected and removed for the motion to be accurately estimated. In the last few years, a well-established method for removing outliers has been the "5-point RANSAC" algorithm, which needs a minimum of 5 point correspondences to estimate the model hypotheses. Because of this, however, it can require up to several hundred iterations to find a set of points free of outliers. In this paper, we show that by exploiting the non-holonomic constraints of wheeled vehicles it is possible to use a restrictive motion model which allows us to parameterize the motion with only 1 point correspondence. Using a single feature correspondence for motion estimation is the lowest model parameterization possible and results in the two most efficient algorithms for removing outliers: 1-point RANSAC and histogram voting. To support our method we run many experiments on both synthetic and real data and compare the performance with a state-of-the-art approach. Finally, we show an application of our method to visual odometry by recovering a 3 km trajectory in a cluttered urban environment, in real time.
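The histogram-voting idea, possible only because the restrictive motion model leaves a single unknown per correspondence, can be sketched generically in Python. The bin width and synthetic per-correspondence estimates are illustrative assumptions.

```python
import numpy as np

def histogram_vote(thetas, bin_width=0.01):
    """Given one motion-parameter estimate per feature correspondence
    (possible because the non-holonomic model has a single unknown),
    pick the histogram mode and keep estimates near it as inliers.
    This replaces iterative RANSAC sampling with a single pass."""
    edges = np.arange(thetas.min(), thetas.max() + bin_width, bin_width)
    hist, edges = np.histogram(thetas, bins=edges)
    k = np.argmax(hist)                         # mode bin
    center = 0.5 * (edges[k] + edges[k + 1])
    inliers = np.abs(thetas - center) < bin_width
    return center, inliers

# 80 consistent estimates near 0.03 rad plus 20 gross outliers.
rng = np.random.default_rng(3)
thetas = np.concatenate([rng.normal(0.03, 0.002, 80),
                         rng.uniform(-0.5, 0.5, 20)])
center, inliers = histogram_vote(thetas)
```

With 5-point RANSAC the analogous step requires repeatedly sampling minimal sets and scoring hypotheses; here one histogram over scalar estimates suffices.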
A Database and Evaluation Methodology for Optical Flow
The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by Barron et al., we compute the absolute flow endpoint error, measures of frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web at http://vision.middlebury.edu/flow/. Subsequently, a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them.
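The two headline error measures are easy to state in code. Below is a Python sketch of the average angular error (the Barron et al. measure, the angle between the 3-D vectors (u, v, 1)) and the average endpoint error; the array names are illustrative.

```python
import numpy as np

def flow_errors(u, v, u_gt, v_gt):
    """Average angular error (AAE, in degrees) between the vectors
    (u, v, 1) and (u_gt, v_gt, 1), and average endpoint error (EPE),
    the Euclidean distance between estimated and true flow vectors."""
    num = u * u_gt + v * v_gt + 1.0
    den = np.sqrt((u**2 + v**2 + 1.0) * (u_gt**2 + v_gt**2 + 1.0))
    aae = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean()
    epe = np.sqrt((u - u_gt)**2 + (v - v_gt)**2).mean()
    return aae, epe

# Per-pixel flows flattened to 1-D arrays for the toy check.
u_gt = np.array([1.0, 0.0, 2.0]); v_gt = np.array([0.0, 1.0, 2.0])
aae0, epe0 = flow_errors(u_gt, v_gt, u_gt, v_gt)        # perfect estimate
aae1, epe1 = flow_errors(u_gt + 1.0, v_gt, u_gt, v_gt)  # 1-pixel x error
```

The appended 1 in the angular measure keeps the error bounded for very small flows, which is why EPE is often reported alongside it: EPE penalizes large-flow errors more directly.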
Hybrid Linear Modeling via Local Best-Fit Flats
We present a simple and fast geometric method for modeling data by a union of affine subspaces. The method begins by forming a collection of local best-fit affine subspaces, i.e., subspaces approximating the data in local neighborhoods. The correct sizes of the local neighborhoods are determined automatically by the Jones β₂ numbers (we prove under certain geometric conditions that our method finds the optimal local neighborhoods). The collection of subspaces is further processed by a greedy selection procedure or a spectral method to generate the final model. We discuss applications to tracking-based motion segmentation and clustering of faces under different illumination conditions. We give extensive experimental evidence demonstrating the state-of-the-art accuracy and speed of the suggested algorithms on these problems, as well as on synthetic hybrid linear data and the MNIST handwritten digits data, and we demonstrate how to use our algorithms for fast determination of the number of affine subspaces.
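The core building block, a best-fit affine flat for one local neighborhood, reduces to PCA and can be sketched in Python. Here the neighborhood size k is fixed rather than chosen automatically via the Jones β₂ numbers, and all names and the toy plane are illustrative.

```python
import numpy as np

def local_best_fit_flat(X, center_idx, k, dim):
    """Fit a best-fit affine `dim`-flat to the k nearest neighbors of
    one point via PCA: the flat passes through the neighborhood mean
    and is spanned by the top `dim` principal directions of the
    centered neighborhood (the leading right singular vectors)."""
    d2 = ((X - X[center_idx]) ** 2).sum(axis=1)
    nbrs = X[np.argsort(d2)[:k]]
    mean = nbrs.mean(axis=0)
    _, _, Vt = np.linalg.svd(nbrs - mean, full_matrices=False)
    return mean, Vt[:dim]          # point on the flat, orthonormal basis

# Toy data lying exactly on a 2-flat (a plane) in R^3, offset from 0.
rng = np.random.default_rng(2)
B_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 1.0]])
X = rng.normal(size=(30, 2)) @ B_true + np.array([0.0, 0.0, 5.0])
mean, B = local_best_fit_flat(X, center_idx=0, k=15, dim=2)
```

The full method would compute such flats for many neighborhoods and then select or cluster them into the final union of subspaces.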