Search Results
947 result(s) for "indoor navigation system"
INSUS: Indoor Navigation System Using Unity and Smartphone for User Ambulation Assistance
Currently, outdoor navigation systems are widely used around the world on smartphones. They rely on GPS (Global Positioning System). However, indoor navigation systems are still under development due to the complex structure of indoor environments, including multiple floors, many rooms, steps, and elevators. In this paper, we present the design and implementation of the Indoor Navigation System using Unity and Smartphone (INSUS). INSUS shows an arrow indicating the moving direction on the camera view based on a smartphone's augmented reality (AR) technology. To trace the user location, it utilizes the Simultaneous Localization and Mapping (SLAM) technique with a gyroscope and a camera in a smartphone to track users' movements inside a building after initializing the current location with a QR code. Unity is introduced to obtain the 3D information of the target indoor environment for Visual SLAM. The data are stored in an IoT application server called SEMAR for visualization. We implemented a prototype system of INSUS inside buildings at two universities. We found that scanning QR codes with the smartphone held at an angle between 60° and 100° achieves the highest QR code detection accuracy. We also found that the phone's tilt angle influences the navigation success rate, with tilt angles of 90° to 100° giving better navigation success than lower tilt angles. INSUS also proved to be a robust navigation system, evidenced by near-identical navigation success rates in scenarios with and without disturbance. Furthermore, based on the questionnaire responses, INSUS generally received positive feedback, and respondents supported further improvement of the system.
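The tracking pipeline described above (initialize the pose from a QR code, then track movement with inertial sensors) can be illustrated with a toy dead-reckoning sketch. Everything here is an assumption for illustration: the function names, the fixed step length, and the sample period are invented, and INSUS itself layers Visual SLAM on top of this kind of inertial tracking.

```python
import math

def dead_reckon(qr_position, qr_heading, gyro_yaw_rates, step_length=0.7, dt=0.1):
    """Toy dead reckoning: start from a pose decoded from a QR code, then
    integrate gyroscope yaw rate to update heading and advance one
    step-length per sample. Parameters are illustrative, not INSUS's."""
    x, y = qr_position
    heading = qr_heading                 # radians, from the QR code's known pose
    track = [(x, y)]
    for yaw_rate in gyro_yaw_rates:
        heading += yaw_rate * dt         # integrate angular velocity
        x += step_length * math.cos(heading)
        y += step_length * math.sin(heading)
        track.append((x, y))
    return track

# A straight 5-step walk (zero yaw rate) ends near (3.5, 0.0).
path = dead_reckon((0.0, 0.0), 0.0, [0.0] * 5)
print(path[-1])
```

Pure dead reckoning drifts as gyroscope errors accumulate, which is exactly why the paper resets the location whenever a QR code is scanned.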
ARBIN: Augmented Reality Based Indoor Navigation System
Due to the popularity of indoor positioning technology, indoor navigation applications have been deployed in large buildings, such as hospitals, airports, and train stations, to guide visitors to their destinations. A commonly used user interface, shown on smartphones, is a 2D floor map with a route to the destination. Navigation instructions, such as turn left, turn right, and go straight, pop up on the screen when users come to an intersection. However, owing to the restrictions of a 2D navigation map, users may face mental pressure and get confused while making a connection between the real environment and the 2D navigation map before moving forward. For this reason, we developed ARBIN, an augmented reality-based navigation system, which posts navigation instructions on the screen over real-world environments for ease of use. Thus, there is no need for users to make a connection between the navigation instructions and the real-world environment. In order to evaluate the applicability of ARBIN, a series of experiments was conducted in the outpatient area of the National Taiwan University Hospital YunLin Branch, which is nearly 1800 m², with 35 destinations and points of interest, such as a cardiovascular clinic, X-ray examination room, and pharmacy. Four different types of smartphones were adopted for evaluation. Our results show that ARBIN can achieve 3 to 5 m accuracy and provide users with correct instructions on their way to their destinations. ARBIN proved to be a practical solution for indoor navigation, especially for large buildings.
Toward Accurate Position Estimation Using Learning to Prediction Algorithm in Indoor Navigation
The Internet of Things is advancing, and the augmented role of smart navigation in automating processes is at its vanguard. Smart navigation and location tracking systems are finding increasing use in mission-critical indoor scenarios, logistics, medicine, and security. A demanding emerging area is indoor localization, due to the increased fascination with location-based services. Numerous inertial measurement unit-based indoor localization mechanisms have been suggested in this regard. However, these methods have many shortcomings pertaining to accuracy and consistency. In this study, we propose a novel position estimation system based on a learning-to-prediction model to address the above challenges. The designed system consists of two modules: a learning-to-prediction module and position estimation using sensor fusion in an indoor environment. The prediction algorithm is attached to the learning module. Moreover, the learning module continuously controls, observes, and enhances the efficiency of the prediction algorithm by evaluating its output and taking into account the exogenous factors that may have an impact on its outcome. On top of that, we consider a situation where the prediction algorithm is applied to anticipate accurate gyroscope and accelerometer readings from the noisy sensor readings. In the designed system, we consider a scenario where a learning module based on an artificial neural network and a Kalman filter are used as the prediction algorithm to predict the actual accelerometer and gyroscope readings from the noisy sensor readings. Moreover, to acquire data, we use a next-generation inertial measurement unit, which provides 3-axis accelerometer and gyroscope data.
Finally, to assess the performance and accuracy of the proposed system, we carried out a number of experiments, and we observed that the proposed Kalman filter with the learning module performed better than the traditional Kalman filter algorithm in terms of the root mean square error metric.
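The abstract above pairs a Kalman filter with an ANN "learning module" that adapts its parameters to denoise IMU readings. As a rough sketch of the underlying prediction step only: the fixed `q` (process noise) and `r` (measurement noise) values below are illustrative stand-ins for what such a learning module would tune, not constants from the paper.

```python
import random

def kalman_1d(readings, q=1e-3, r=0.25):
    """Minimal 1-D Kalman filter over a stream of noisy sensor readings.
    q and r are illustrative values, not tuned constants from the paper."""
    x, p = readings[0], 1.0          # state estimate and its variance
    filtered = []
    for z in readings:
        p += q                       # predict: uncertainty grows over time
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update toward the measurement
        p *= 1.0 - k
        filtered.append(x)
    return filtered

def rmse(est, truth):
    return (sum((e - t) ** 2 for e, t in zip(est, truth)) / len(est)) ** 0.5

# Simulated stationary accelerometer axis with Gaussian noise.
random.seed(42)
truth = [0.0] * 200
noisy = [t + random.gauss(0, 0.5) for t in truth]
smoothed = kalman_1d(noisy)
print(rmse(smoothed, truth) < rmse(noisy, truth))  # filtering should lower RMSE
```

The paper's contribution is precisely that learning `q`/`r`-style parameters online beats leaving them fixed as they are here.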
Indoor navigation systems based on data mining techniques in internet of things: a survey
The Internet of Things (IoT) is turning into an essential part of daily life, and numerous IoT-based scenarios will be seen in the future of modern cities, ranging from small indoor situations to huge outdoor environments. In this era, navigation continues to be a crucial element in both outdoor and indoor environments, and many solutions have been provided in both cases. On the other hand, recent smart objects have produced a substantial amount of varied data, which demands sophisticated data mining solutions to cope with it. This paper presents a detailed review of previous studies on using data mining techniques in indoor navigation systems for IoT scenarios. We aim to understand what types of navigation problems exist in different IoT scenarios, with a focus on indoor environments, and then we investigate how data mining solutions can address those challenges.
Improving Accuracy of the Alpha–Beta Filter Algorithm Using an ANN-Based Learning Mechanism in Indoor Navigation System
Navigation systems have been around for the last several years. Recently, the emergence of miniaturized sensors has made it easy to navigate objects in an indoor environment. These sensors give away a great deal of information about the user (location, posture, communication patterns, etc.), which helps in capturing the user's context. Such information can be utilized to create smarter apps from which the user can benefit. A challenging new area that is receiving a lot of attention is indoor localization, and interest in location-based services is also rising. While numerous inertial measurement unit-based indoor localization techniques have been proposed, these techniques have many shortcomings related to accuracy and consistency. In this article, we present a novel solution for improving the accuracy of indoor navigation using a learning-to-prediction model. The designed system tracks the location of an object in an indoor environment where the global positioning system and other satellites will not work properly. Moreover, in order to improve the accuracy of indoor navigation, we propose a learning-to-prediction model based on an artificial neural network to improve the accuracy of the prediction algorithm. For experimental analysis, we use the next-generation inertial measurement unit (IMU) to acquire sensing data. The next-generation IMU is a compact IMU and data acquisition platform that combines onboard triple-axis sensors, including accelerometers, gyroscopes, and magnetometers. Furthermore, we consider a scenario where the prediction algorithm is used to predict the actual sensor reading from the noisy sensor reading. Additionally, we have developed an artificial neural network-based learning module to tune the alpha and beta parameters of the alpha–beta filter algorithm to minimize the error in the current sensor readings.
In order to evaluate the accuracy of the system, we carried out a number of experiments, through which we observed that the alpha–beta filter with a learning module performed better than the traditional alpha–beta filter algorithm in terms of RMSE.
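The alpha–beta filter this abstract builds on is a fixed-gain tracker of a value and its rate of change. A minimal sketch follows; the gains `alpha` and `beta` are hand-picked illustrative values standing in for the ones the paper's ANN module learns, and the simulated signal is invented.

```python
import random

def alpha_beta_filter(readings, dt=1.0, alpha=0.3, beta=0.005):
    """Classic fixed-gain alpha-beta filter: tracks a value and its rate
    of change from noisy samples. Gains here are illustrative only."""
    x, v = readings[0], 0.0              # state: value and rate of change
    estimates = []
    for z in readings:
        x_pred = x + v * dt              # predict from the current rate
        resid = z - x_pred               # innovation: measurement - prediction
        x = x_pred + alpha * resid       # correct the value estimate
        v = v + (beta / dt) * resid      # correct the rate estimate
        estimates.append(x)
    return estimates

def rmse(est, truth):
    return (sum((e - t) ** 2 for e, t in zip(est, truth)) / len(est)) ** 0.5

# A steady sensor axis corrupted by Gaussian noise.
random.seed(7)
truth = [5.0] * 300
noisy = [t + random.gauss(0, 0.5) for t in truth]
smoothed = alpha_beta_filter(noisy)
print(rmse(smoothed, truth) < rmse(noisy, truth))  # smoothing should lower RMSE
```

With fixed gains, one alpha/beta pair cannot suit every motion regime; the paper's learning module retunes them from observed error, which is what the comparison in the abstract measures.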
Autonomous Smart White Cane Navigation System for Indoor Usage
According to statistics provided by the World Health Organization, the number of people suffering from visual impairment is approximately 1.3 billion. The number of blind and visually impaired people is expected to increase over the coming years and is estimated to triple by the end of 2050, which is quite alarming. Keeping the needs and problems faced by visually impaired people in mind, we have come up with a technological solution, a "Smart Cane device," that can help people with sight impairment navigate with ease and avoid the risk factors surrounding them. Currently, the three main options available for blind people are using a white cane, technological tools, and guide dogs. The solution proposed in this article uses various technological tools to provide a smart solution to the problem and facilitate users' lives. The designed system mainly aims to facilitate indoor navigation using cloud computing and Internet of Things (IoT) wireless scanners. The goal of developing the Smart Cane can be achieved by integrating various hardware and software systems. The proposed Smart Cane device aims to provide smooth displacement for visually impaired people from one place to another and to provide them with a tool that can help them communicate with their surrounding environment.
The Effectiveness of UWB-Based Indoor Positioning Systems for the Navigation of Visually Impaired Individuals
UWB has been in existence for several years, but only a few years ago did it transition from a specialized niche to more mainstream applications. Recent market data indicate a rapid increase in the popularity of UWB in consumer products, such as smartphones and smart home devices, as well as in automotive and industrial real-time location systems. The challenge of achieving accurate positioning in indoor environments arises from various factors, such as distance, location, beacon density, dynamic surroundings, and the density and type of obstacles. This research used MFi-certified UWB beacon chipsets and integrated them with a mobile application dedicated to iOS by implementing the Nearby Interaction accessory protocol. The analysis covers both static and dynamic cases. Based on the acquired measurements, two main candidates for indoor localization infrastructure were analyzed and compared in terms of accuracy, namely UWB and LIDAR, with the latter used as a reference system. The problem of achieving accurate positioning in various applications and environments was analyzed, and future solutions were proposed. The results show that the achieved accuracy is sufficient for tracking individuals and may serve as a guideline for achievable accuracy or provide a basis for further research into a complex sensor fusion-based navigation system. This research provides several findings. Firstly, in dynamic conditions, LIDAR measurements showed higher accuracy than UWB beacons. Secondly, integrating data from multiple sensors could enhance localization accuracy in non-line-of-sight scenarios. Lastly, advancements in UWB technology may expand the availability of competitive hardware, facilitating a thorough evaluation of its accuracy and effectiveness in practical systems. These insights may be particularly useful in designing navigation systems for blind individuals in buildings.
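UWB positioning of the kind evaluated above typically derives a position from ranges to fixed beacons. As a rough illustration of the geometry only (not the paper's method or any vendor SDK), here is a linearized least-squares trilateration in 2D; the beacon layout and target point are invented.

```python
import math

def trilaterate(beacons, dists):
    """Estimate a 2-D position from >= 3 beacon ranges via linearized
    least squares. Illustrative only: real UWB systems add NLOS
    mitigation, filtering, and often sensor fusion on top."""
    (x1, y1), d1 = beacons[0], dists[0]
    # Subtracting the first range equation from the others gives a linear
    # system; accumulate its normal equations (A^T A) p = A^T b.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), di in zip(beacons[1:], dists[1:]):
        ax, ay = 2 * (xi - x1), 2 * (yi - y1)
        bi = d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2
        a11 += ax * ax; a12 += ax * ay; a22 += ay * ay
        b1 += ax * bi;  b2 += ay * bi
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Hypothetical anchor layout; noise-free ranges to the point (3, 4).
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist(b, (3.0, 4.0)) for b in beacons]
print(trilaterate(beacons, dists))  # recovers approximately (3.0, 4.0)
```

With noisy or NLOS-biased ranges the same solver degrades, which is why the study compares against LIDAR and points toward sensor fusion.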
Evaluation of an Accessible, Real-Time, and Infrastructure-Free Indoor Navigation System by Users Who Are Blind in the Mall of America
Introduction: This article describes an evaluation of MagNav, a speech-based, infrastructure-free indoor navigation system. The research was conducted in the Mall of America, the largest shopping mall in the United States, to empirically investigate the impact of memory load on route-guidance performance. Method: Twelve participants who are blind and 12 age-matched sighted controls participated in the study. Comparisons are made for route-guidance performance between use of updated, real-time route instructions (system-aided condition) and a system-unaided (memory-based condition) where the same instructions were only provided in advance of route travel. The sighted controls (who navigated under typical visual perception but used the system for route guidance) represent a best case comparison benchmark with the blind participants who used the system. Results: Results across all three test measures provide compelling behavioral evidence that blind navigators receiving real-time verbal information from the MagNav system performed route travel faster (navigation time), more accurately (fewer errors in reaching the destination), and more confidently (fewer requests for bystander assistance) compared to conditions where the same route information was only available to them in advance of travel. In addition, no statistically reliable differences were observed for any measure in the system-aided conditions between the blind and sighted participants. Posttest survey results corroborate the empirical findings, further supporting the efficacy of the MagNav system. Discussion: This research provides compelling quantitative and qualitative evidence showing the utility of an infrastructure-free, low-memory demand navigation system for supporting route guidance through complex indoor environments and supports the theory that functionally equivalent navigation performance is possible when access to real-time environmental information is available, irrespective of visual status. 
Implications for designers and practitioners: The findings highlight the importance for developers of accessible navigation systems of employing interfaces that minimize memory demands.
A User Location Reset Method through Object Recognition in Indoor Navigation System Using Unity and a Smartphone (INSUS)
To enhance user experiences of reaching destinations in large, complex buildings, we have developed an indoor navigation system using Unity and a smartphone called INSUS. It can reset the user location using a quick response (QR) code to reduce the user's loss of direction during navigation. However, this approach needs a number of QR code sheets to be prepared in the field, causing extra implementation load. In this paper, we propose another reset method that reduces this load by recognizing information on naturally installed signs in the field using object detection and Optical Character Recognition (OCR) technologies. Many signs exist in a building, containing text such as room numbers, room names, and floor numbers. In the proposal, the sign image is taken with a smartphone, the sign is detected by YOLOv8, the text inside the sign is recognized by PaddleOCR, and it is compared with each record in the Room Database using Levenshtein distance. For evaluation, we applied the proposal in two buildings at Okayama University, Japan. The results show that YOLOv8 achieved an mAP@0.5 of 0.995 and an mAP@0.5:0.95 of 0.978, and PaddleOCR could extract the text in the sign image accurately, with an average CER lower than 10%. The combination of YOLOv8 and PaddleOCR decreases the execution time by 6.71 s compared to the previous method. These results confirm the effectiveness of the proposal.
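The final matching step above compares imperfect OCR output against Room Database records by Levenshtein distance. A minimal sketch of that matching (the room names are invented, and the real system feeds this from YOLOv8 detection and PaddleOCR recognition):

```python
def levenshtein(a, b):
    """Edit distance between strings a and b via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def match_room(ocr_text, room_db):
    """Return the database record closest to the (possibly noisy) OCR text."""
    return min(room_db, key=lambda rec: levenshtein(ocr_text.lower(), rec.lower()))

rooms = ["Room 101", "Room 102", "Lecture Hall A"]   # hypothetical records
print(match_room("Room 1O2", rooms))  # matches "Room 102" despite O/0 confusion
```

Nearest-record matching like this is what lets a sub-10% character error rate from the OCR stage still resolve to the correct room.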