3,913 results for "localisation performance"
Machine-learning-based system for multi-sensor 3D localisation of stationary objects
Localisation of objects and people in indoor environments has been widely studied, both for security reasons and for the benefits a localisation system can provide. Indoor positioning systems (IPSs) based on more than one technology can improve localisation performance by leveraging the advantages of distinct technologies. This study proposes a multi-sensor IPS able to estimate the three-dimensional (3D) location of stationary objects using off-the-shelf equipment. Using radio-frequency identification (RFID) technology, machine-learning models based on support vector regression (SVR) and artificial neural networks (ANNs) are proposed, and a k-means technique is applied to further improve accuracy. A computer vision (CV) subsystem detects visual markers in the scenario to enhance RFID localisation, and a fusion method based on the region of interest combines the RFID and CV subsystems. We implemented the system and evaluated it in real experiments. In two-dimensional scenarios, the localisation error is between 9 and 29 cm at ranges of 1 to 2.2 m. In a comparison of the machine-learning approaches, the ANN performed 31% better than the SVR. In 3D scenarios, localisation errors in dense environments are 80.7 and 73.7 cm for the ANN and SVR models, respectively.
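The region-of-interest fusion described in this abstract can be sketched roughly as follows; the spherical ROI, its radius, and the fall-back to the RFID estimate are illustrative assumptions, not the authors' exact method.

```python
import math

def fuse_roi(rfid_xyz, cv_xyz, roi_radius=0.5):
    """Fuse RFID and computer-vision 3D position estimates (metres).

    If the CV estimate falls inside a spherical region of interest
    centred on the RFID estimate, average the two; otherwise keep
    the RFID estimate alone (hypothetical fall-back rule).
    """
    dist = math.dist(rfid_xyz, cv_xyz)
    if dist <= roi_radius:
        return tuple((a + b) / 2 for a, b in zip(rfid_xyz, cv_xyz))
    return rfid_xyz

# CV estimate within a 1.5 m ROI of the RFID estimate -> average the two
print(fuse_roi((1.0, 2.0, 0.0), (2.0, 2.0, 0.0), roi_radius=1.5))  # → (1.5, 2.0, 0.0)
```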
Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique
The increased potential and effectiveness of Real-Time Locating Systems (RTLSs) have substantially broadened their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistics and security applications. This research develops an analytical method to customize UWB-based RTLSs in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established in a working environment, and a suitable angular-based 3D localization algorithm is introduced. The paper investigates the influence of the proposed correction vector on localization accuracy, and the impact of the system's configuration and the location sensors' relative deployment on the localization precision distribution map. The advantages of the method are verified by comparison with a reference commercial RTLS localization engine. The results of simulations and physical experiments confirm the value of the proposed customization method: the analytical uncertainty model is a valid representation of an RTLS's localization uncertainty in terms of accuracy and precision, and can be used to improve its performance. The research shows that AoA localization in a 3D indoor space, using the simple angular-based localization algorithm and correction vector, improves localization accuracy and precision to the point that the system rivals the reference hardware's advanced localization engine. Moreover, the research guides the deployment of location sensors to enhance localization precision.
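The core geometric step of angular 3D localization can be illustrated with a minimal sketch: a single anchor reporting range, azimuth, and elevation yields a Cartesian position, to which an additive correction vector is applied. The spherical-coordinate convention and the correction vector below are placeholders, not the paper's analytically derived model.

```python
import math

def aoa_position(anchor, r, azimuth, elevation, correction=(0.0, 0.0, 0.0)):
    """3D position from a range/azimuth/elevation measurement.

    anchor     -- (x, y, z) of the sensing anchor, metres
    r          -- measured range, metres
    azimuth    -- angle in the x-y plane from the x-axis, radians
    elevation  -- angle above the x-y plane, radians
    correction -- additive correction vector (placeholder for the
                  analytically derived one in the paper)
    """
    dx = r * math.cos(elevation) * math.cos(azimuth)
    dy = r * math.cos(elevation) * math.sin(azimuth)
    dz = r * math.sin(elevation)
    return tuple(a + d + c for a, d, c in zip(anchor, (dx, dy, dz), correction))

# Target 2 m from the origin along the x-axis:
print(aoa_position((0.0, 0.0, 0.0), 2.0, 0.0, 0.0))  # → (2.0, 0.0, 0.0)
```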
The Performance Evaluation of Hybrid Localization Algorithm in Wireless Sensor Networks
As one of the key techniques in wireless sensor networks (WSNs), localization has been a hot research topic and an indispensable function in most wireless applications. To improve localization accuracy and efficiency, many localization algorithms with different performance characteristics and computational complexities have been proposed. This paper discusses the drawbacks of some typical localization algorithms and proposes a hybrid algorithm that integrates approximate point in triangle (APIT) and distance vector-hop (DV-HOP). To address positioning accuracy and coverage rate, the objectives of this paper are threefold: first, angle detection is adopted to determine the exact direction of unknown nodes. Then, the APIT algorithm is applied to all unknown nodes within the triangle, reducing its localization error from 14.7215 m in conventional APIT to 3.2348 m in the considered scenario. Finally, the DV-HOP algorithm is applied with different weights for the nodes within the minimum hops, localizing the remaining unknown nodes in the WSN with localization accuracy increased by 49%.
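The DV-HOP idea in this abstract can be summarised as: each anchor estimates an average per-hop distance from its known straight-line distances to the other anchors, and an unknown node converts its hop count to an anchor into a distance estimate. A minimal unweighted sketch (the paper's weighting scheme is not reproduced here):

```python
import math

def avg_hop_size(anchor, other_anchors, hop_counts):
    """Average hop distance as seen by one anchor.

    other_anchors -- list of (x, y) positions of the remaining anchors
    hop_counts    -- minimum hop count from `anchor` to each of them
    """
    total_dist = sum(math.dist(anchor, a) for a in other_anchors)
    return total_dist / sum(hop_counts)

def dvhop_distance(hops_to_anchor, hop_size):
    """Estimated distance from an unknown node to an anchor."""
    return hops_to_anchor * hop_size

# Anchor at the origin, two other anchors each 10 m away and 5 hops distant:
size = avg_hop_size((0.0, 0.0), [(10.0, 0.0), (0.0, 10.0)], [5, 5])
print(size)                     # → 2.0 (metres per hop)
print(dvhop_distance(3, size))  # → 6.0 (metres)
```

The distance estimates from several anchors would then feed a multilateration step to produce the final coordinates.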
An ideal-observer model of human sound localization
In recent years, a great deal of research within the field of sound localization has been aimed at finding the acoustic cues that human listeners use to localize sounds and understanding the mechanisms by which they process these cues. In this paper, we propose a complementary approach by constructing an ideal-observer model, by which we mean a model that performs optimal information processing within a Bayesian context. The model considers all available spatial information contained within the acoustic signals encoded by each ear. Parameters for the optimal Bayesian model are determined based on psychoacoustic discrimination experiments on interaural time difference and sound intensity. Without regard as to how the human auditory system actually processes information, we examine the best possible localization performance that could be achieved based only on analysis of the input information, given the constraints of the normal auditory system. We show that the model performance is generally in good agreement with the actual human localization performance, as assessed in a meta-analysis of many localization experiments (Best et al. in Principles and applications of spatial hearing, pp 14–23. World Scientific Publishing, Singapore, 2011 ). We believe this approach can shed new light on the optimality (or otherwise) of human sound localization, especially with regard to the level of uncertainty in the input information. Moreover, the proposed model allows one to study the relative importance of various (combinations of) acoustic cues for spatial localization and enables a prediction of which cues are most informative and therefore likely to be used by humans in various circumstances.
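The ideal-observer idea — optimal Bayesian inference of direction from noisy binaural cues — can be illustrated with a one-cue toy sketch: a Gaussian likelihood of the observed interaural time difference (ITD) around each candidate azimuth's expected ITD, combined with a flat prior. The Woodworth-style ITD approximation and the noise level below are illustrative assumptions, not the paper's fitted model.

```python
import math

def map_azimuth(observed_itd_us, candidates_deg, sigma_us=20.0):
    """MAP azimuth estimate from a single noisy ITD observation.

    Assumes itd ≈ k * sin(azimuth) with k = 660 µs (illustrative),
    a flat prior over the candidate azimuths, and Gaussian
    observation noise of standard deviation sigma_us.
    """
    def expected_itd(az_deg):
        return 660.0 * math.sin(math.radians(az_deg))

    def log_likelihood(az_deg):
        err = observed_itd_us - expected_itd(az_deg)
        return -err * err / (2.0 * sigma_us ** 2)

    # Flat prior: the MAP estimate is the maximum-likelihood candidate.
    return max(candidates_deg, key=log_likelihood)

# An ITD of 330 µs corresponds to sin(az) = 0.5, i.e. 30° to one side:
print(map_azimuth(330.0, range(-90, 91)))  # → 30
```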
Evaluation of a sectorised antenna in an indoor localisation system
Sectorised antennas (SAs) integrated with low-power technologies have led to significant improvements in localisation system (LS) performance. One SA specially designed for localisation purposes is the Hive5, a platonic pentagonal patch-excited SA. This study presents the firmware and management application developed for an LS integrated with the Hive5. The performance of this LS is then compared with that of a typical wireless sensor network (WSN) LS based on four nodes. Both solutions are analysed in the same localisation environment and with the same supporting fingerprinting algorithm, an artificial neural network. Results show that the LS integrated with the Hive5 presents clear benefits over the four-node WSN in terms of resolution and a marked reduction in the number of required reference units.
Robust individual pig tracking
The locations of pigs in group housing enable activity monitoring and improve animal welfare. Vision-based methods for tracking individual pigs are noninvasive but have low tracking accuracy owing to long-term pig occlusion. In this study, we developed a vision-based method that accurately tracks individual pigs in group housing. We prepared and labeled datasets taken from an actual pig farm, trained a faster region-based convolutional neural network to recognize pigs’ bodies and heads, and tracked individual pigs across video frames. To quantify the tracking performance, we compared the proposed method with the global optimization (GO) method with the cost function and the simple online and real-time tracking (SORT) method on four additional test datasets that we prepared, labeled, and made publicly available. The predictive model detects pigs’ bodies accurately, with F1-scores of 0.75 to 1.00, on the four test datasets. The proposed method achieves the highest multi-object tracking accuracy (MOTA) values, at 0.75, 0.98, and 1.00, on three test datasets; on the remaining dataset it has the second-highest MOTA, at 0.73. The proposed tracking method is robust to long-term occlusion, outperforms the competitive baselines on most datasets, and has practical utility in helping to track individual pigs accurately.
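MOTA, the tracking metric reported in this abstract, aggregates misses (false negatives), false positives, and identity switches over all frames, relative to the total number of ground-truth objects (the standard CLEAR MOT definition):

```python
def mota(false_negatives, false_positives, id_switches, num_ground_truth):
    """Multi-object tracking accuracy (CLEAR MOT definition).

    All arguments are totals summed over every frame; a perfect
    tracker scores 1.0, and the metric can go negative when errors
    outnumber ground-truth objects.
    """
    errors = false_negatives + false_positives + id_switches
    return 1.0 - errors / num_ground_truth

# 5 misses, 3 false positives, 2 identity switches over 100 objects:
print(mota(5, 3, 2, 100))  # → 0.9
```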
A Robust and Accurate Indoor Localization Using Learning-Based Fusion of Wi-Fi RTT and RSSI
Great attention has been paid to indoor localization due to its wide range of associated applications and services. Fingerprinting and time-based localization techniques are among the most popular approaches in the field due to their promising performance. However, fingerprinting techniques usually suffer from signal fluctuations and interference, which yields unstable localization performance. On the other hand, the accuracy of time-based techniques is highly affected by multipath propagation errors and non-line-of-sight transmissions. To combat these challenges, this paper presents a hybrid deep-learning-based indoor localization system called RRLoc, which fuses fingerprinting and time-based techniques with a view to combining their advantages. RRLoc leverages a novel approach for fusing received signal strength indication (RSSI) and round-trip time (RTT) measurements and extracting high-level features using deep canonical correlation analysis. The extracted features are then used to train a localization model that facilitates the location estimation process. Different modules are incorporated to improve the deep model’s generalization against overtraining and noise. The experimental results obtained in two different indoor environments show that RRLoc improves localization accuracy by at least 267% and 496% compared to the state-of-the-art fingerprinting and ranging-based-multilateration techniques, respectively.
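As a concrete note on the two fused cues: RTT gives an absolute distance from the round-trip propagation time, while RSSI gives one through a log-distance path-loss model. A minimal sketch of both conversions (the path-loss calibration values are illustrative, not RRLoc's):

```python
def rtt_distance_m(rtt_ns):
    """Distance from a round-trip propagation time in nanoseconds."""
    c = 0.299792458  # speed of light, metres per nanosecond
    return c * rtt_ns / 2.0  # halve: signal travels out and back

def rssi_distance_m(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model.

    tx_power_dbm  -- RSSI expected at 1 m (illustrative calibration)
    path_loss_exp -- environment-dependent exponent (2.0 = free space)
    """
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

print(round(rtt_distance_m(66.7), 2))  # → 10.0 (metres)
print(rssi_distance_m(-60.0))          # → 10.0 (metres)
```

In RRLoc these raw measurements are not used directly for multilateration; they feed the learned fusion model instead.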
Reference Pose Generation for Long-term Visual Localization via Learned Features and View Synthesis
Visual Localization is one of the key enabling technologies for autonomous driving and augmented reality. High quality datasets with accurate 6 Degree-of-Freedom (DoF) reference poses are the foundation for benchmarking and improving existing methods. Traditionally, reference poses have been obtained via Structure-from-Motion (SfM). However, SfM itself relies on local features which are prone to fail when images were taken under different conditions, e.g., day/night changes. At the same time, manually annotating feature correspondences is not scalable and potentially inaccurate. In this work, we propose a semi-automated approach to generate reference poses based on feature matching between renderings of a 3D model and real images via learned features. Given an initial pose estimate, our approach iteratively refines the pose based on feature matches against a rendering of the model from the current pose estimate. We significantly improve the nighttime reference poses of the popular Aachen Day–Night dataset, showing that state-of-the-art visual localization methods perform better (up to 47%) than predicted by the original reference poses. We extend the dataset with new nighttime test images, provide uncertainty estimates for our new reference poses, and introduce a new evaluation criterion. We will make our reference poses and our framework publicly available upon publication.
Memristor-based analogue computing for brain-inspired sound localization with in situ training
The human nervous system senses the physical world in an analogue but efficient way. As a crucial ability of the human brain, sound localization is a representative analogue computing task and is often employed in virtual auditory systems. Unlike well-demonstrated classification applications, in localization tasks all output neurons contribute to the predicted direction, posing much greater challenges for hardware demonstration with memristor arrays. In this work, with the proposed multi-threshold-update scheme, we experimentally demonstrate the in-situ learning ability of the sound localization function in a 1K analogue memristor array. The experimental and evaluation results reveal that the scheme improves the training accuracy by ∼45.7% compared to the existing method and reduces the energy consumption by a factor of ∼184 relative to previous work. This work represents a significant advance towards a memristor-based auditory localization system with low energy consumption and high performance. Sound localization is one of the many learning tasks accomplished by the brain based on the binaural signals of the ears. Here, Wu et al. demonstrate in-situ learning of the sound localization function using a memristor array, with dramatic improvements in energy efficiency.