Catalogue Search | MBRL
11 result(s) for "AR-HUD"
Design and Evaluation of Ecological Interface of Driving Warning System Based on AR-HUD
2024
As the global traffic environment becomes increasingly complex, driving safety issues have grown more prominent, making manual-response driving warning systems (DWSs) essential. Augmented reality head-up display (AR-HUD) technology can project information directly into the driver's field of view, enhancing driver attention; however, improper design may increase cognitive load and compromise safety. The design of AR-HUD driving warning interfaces must therefore focus on improving attention while reducing cognitive load. Systematic research on AR-HUD DWS interfaces remains relatively scarce. This paper proposes an ecological interface cognitive balance design strategy for AR-HUD DWSs based on cognitive load theory and ecological interface design theory. The research includes developing design models, an integrative framework, and experimental validation suited to warning scenarios. The results indicate that the proposed design effectively reduces cognitive load and significantly decreases driver response and comprehension times, outperforming existing interfaces. The strategy and framework are broadly applicable, providing theoretical references and methodological guidance for AR-HUD warning interface research.
Journal Article
Enhancing spatial learning during driving: the role of 3D navigation interface visualization in AR-HUD
by Xu, Xun; Li, Yajun; Yang, Jing
in Augmented Reality Head-Up Display (AR-HUD); Geovisualization style; human-machine interface
2025
The style of geovisualization influences drivers’ cognition, decision-making, and spatial learning abilities. With the advancement of in-vehicle navigation technologies, Augmented Reality Head-Up Displays (AR-HUDs) have been widely applied in driving contexts. However, whether AR-HUDs impair drivers’ spatial learning and lead to over-reliance on navigation tools remains unclear. This study evaluates the impact of 2D and 3D Arrow Navigation Interfaces (ANIs) within AR-HUD systems on drivers’ spatial learning, using continuous and objective physiological measures, including Electroencephalography (EEG), Electrodermal Activity (EDA), Heart Rate (HR), and Heart Rate Variability (HRV). A pilot experiment conducted under real-road conditions indicates that the 3D-ANI enhances drivers’ spatial memory and reduces cognitive load. Notably, the depth perception and landmark cues provided by the 3D-ANI facilitate spatial memory encoding under turn-by-turn navigation, mitigating the typical limitations of such systems in supporting spatial knowledge acquisition. These findings offer critical insights into spatial cognitive mechanisms and provide valuable guidance for optimizing navigation interface design.
Journal Article
The impact of AR-HUD intelligent driving on the allocation of cognitive resources under the breakthrough of 5G technology
by He, Jingjing; Hong, Zhicong; Huang, Junhong
in AR-HUD interface design; cognitive resources; Head-up displays
2021
This paper establishes an AR-HUD assisted-driving test system based on a VR platform, which offers high safety and immersion, repeatable experiments, and support for eye-movement analysis. Vehicle driving safety icons are first defined and designed according to human-computer interaction principles and engineering psychology, and the AR-HUD interface is designed in PS with mental load and other factors taken into account. 3ds Max is then used to build the 3D models required for the driving scene; the driving environment and various driving emergencies are constructed in Unity using technologies such as multi-channel rendering and global illumination; and the system is combined with an HTC VIVE Pro Eye head-mounted display. Thirty drivers were then tested on a distraction task. Analysis of the subjects’ eye-movement data revealed that the AR-HUD system improved drivers’ cognitive efficiency compared with the traditional driving method while allocating cognitive resources to the central driving area, speed module, navigation information, and hazard warnings in a balanced manner, thus improving the ability to react to unexpected driving events.
Journal Article
Augmented Reality Head-Up Display Navigation Design in Extreme Weather Conditions: Enhancing Driving Experience in Rain and Fog
2025
This study investigates the impact of extreme weather conditions (specifically heavy rain and fog) on drivers’ situational awareness by analyzing variations in illumination levels. The primary objective is to identify optimal color wavelengths for low-light environments, thereby providing a theoretical foundation for the design of augmented reality head-up displays (AR-HUDs) in adverse weather conditions. A within-subjects experimental design was employed with 26 participants in a simulated driving environment. Participants were exposed to different illumination levels and AR-HUD colors. Eye-tracking metrics, including fixation duration, visit duration, and fixation count, were recorded alongside situational awareness ratings to assess cognitive load and information processing efficiency. The results revealed that the yellow AR-HUD significantly enhanced situational awareness and reduced cognitive load in foggy conditions. While subjective assessments indicated no substantial effect of lighting conditions, objective measurements demonstrated the superior effectiveness of the yellow AR-HUD in foggy weather. These findings suggest that yellow AR-HUD navigation icons are more suitable for extreme weather environments, offering potential improvements in driving performance and overall road safety.
Journal Article
An Empirical Study of the Factors Influencing Users’ Intention to Use Automotive AR-HUD
by Lin, Xiaowu; Liu, Tingting; Xia, Tiansheng
in Access to information; Attitudes; Augmented Reality
2023
An automotive augmented reality head-up display (AR-HUD) can provide an immersive experience for users and is anticipated to become one of the ultimate terminals for human–machine interaction in future intelligent vehicles within the context of smart cities. However, the majority of current research on AR-HUD focuses on technological implementation and interaction interface design, and relatively few studies examine the psychological factors that may influence the public’s willingness to use this technology. Based on the theory of reasoned action (TRA) and the unified theory of acceptance and use of technology (UTAUT), this study constructs a model of users’ willingness to use automotive AR-HUD involving both cognitive and social factors. The study recruited 377 participants and collected data on users’ effort expectation, performance expectation, social influence, perceived trust, personal innovation, and AR-HUD usage intention through a questionnaire. It was found that users’ effort expectation influenced their intention to use AR-HUD through the mediating role of performance expectation. Social influence affected users’ AR-HUD usage intention through the mediating role of perceived trust, and personal innovation moderated the strength of the effect of social influence on perceived trust.
Journal Article
Spatial Plane Positioning of AR-HUD Graphics: Implications for Driver Inattentional Blindness in Navigation and Collision Warning Scenarios
2025
In-vehicle Augmented Reality Head-Up Displays (AR-HUDs) enhance driving performance and experience by presenting critical information such as navigation cues and collision warnings. Although many studies have investigated the efficacy of AR-HUD navigation and collision warning interface designs, existing research has overlooked the critical interplay between graphic spatial positioning and safety risks arising from inattentional blindness. This study employed a single-factor within-subjects design, with Experiment 1 and Experiment 2 separately examining the impact of the spatial planar position (horizontal planar position, vertical planar position, mixed planar position) of AR-HUD navigation graphics and collision warning graphics on drivers’ inattentional blindness. The results revealed that the spatial planar position of AR-HUD navigation graphics has no significant effect on inattentional blindness behavior or reaction time. However, the horizontal planar position yielded the best user experience with low workload, followed by the mixed planar position. For AR-HUD collision warning graphics, spatial planar position does not significantly influence the frequency of inattentional blindness. From the perspectives of workload and user experience, the vertical planar position of collision warning graphics provides the best experience with the lowest workload, while the mixed planar position demonstrates superior hedonic qualities. Overall, this study offers design guidelines for in-vehicle AR-HUD interfaces.
Journal Article
The Influence of Information Redundancy on Driving Behavior and Psychological Responses Under Different Fog and Risk Conditions: An Analysis of AR-HUD Interface Designs
2025
Adverse road conditions, particularly foggy weather, significantly impair drivers’ abilities to gather information and make judgments in response to unexpected events. To investigate the impact of different Augmented Reality-Head-Up Display (AR-HUD) interfaces (words-only, symbols-only, and words + symbols) on driving behavior, this study simulated driving scenarios under varying visibility and risk levels in foggy conditions, measuring reaction time (RT), time-to-collision (TTC), the maximum lateral acceleration, the maximum longitudinal acceleration, and subjective data. The results indicated that risk levels significantly affected drivers’ RT, TTC, and maximum longitudinal and lateral accelerations. The three interfaces significantly differed in RT and TTC across different risk levels in heavy fog. In light fog, words-only and redundant interfaces significantly affected RT across different risk levels; words-only and symbols-only interfaces significantly affected TTC across different risk levels. In addition, participants responded faster when using text-related interfaces in the subject’s native language. After analyzing data on perceived usability across the three interfaces, the results indicated that under high-risk conditions, both in light fog and heavy fog, participants rated the redundant interface as having higher usability and preferred the redundant interfaces. Based on these findings, this paper proposes the following design strategies for AR-HUD visual interfaces: (1) Under low-risk foggy driving conditions, all three interface types are effective and applicable. (2) Under high-risk foggy driving conditions, redundant interface design is recommended. Although it may not significantly improve driving performance, this interface type was subjectively perceived as more useful and preferred by the subjects. 
The findings of this study provide support for the design of AR-HUD interfaces, contributing to enhanced driving safety and human–machine interaction experience under complex meteorological conditions. This offers practical implications for the development and optimization of intelligent vehicle systems.
Journal Article
Exploring Enhancement of AR-HUD Visual Interaction Design Through Application of Intelligent Algorithms
2023
This study aims to optimize the visual interaction design of AR-HUD and reduce cognitive load in complex driving situations. An immersive driving simulation incorporating eye-tracking technology was utilized to analyze objective physiological indices and measure subjective cognitive load using the NASA-TLX. Additionally, a visual cognitive load index was integrated into a BP-GA neural network model for load prediction, enabling the derivation of an optimal solution for AR-HUD design. The optimized AR-HUD interface demonstrated a significant reduction in cognitive load compared to the previous prototype. The experimental group achieved a mean total score of 25.63 on the WP scale, whereas the control group scored 43.53, indicating a remarkable improvement of 41.1%. This study presents an innovative approach to optimizing AR-HUD design, effectively reducing cognitive load in complex driving situations. The findings demonstrate the potential of the proposed algorithm to enhance user experience and performance.
Journal Article
AR-HUD Optical System Design and Its Multiple Configurations Analysis
2023
The use of augmented reality head-up displays (AR-HUDs) in automobile safety driving has drawn increasing interest in recent years. An AR-HUD display system should be developed to fit the vehicle and the complicated traffic environment in order to increase the driver’s driving concentration and improve man–vehicle synchronization. In this article, we propose an AR-HUD display system with dual-layer virtual-image displays for the near field and far field, and further research and design the adjustment system for multi-depth displays of far-field images. We also examine the EYEBOX horizontal adjustment margin of the dual light path. The analysis results show that the EYEBOX measures 120 × 60 mm², the modulation transfer function (MTF) of the near-field light path is > 0.2 @ 6.7 lp/mm, and the MTF of the far-field optical path is > 0.4 @ 6.7 lp/mm. The distortion of the near-field optical path is less than 0.86%, and that of the far-field optical path is less than 2.2%. By modifying the folding mirror, the far-field optical path creates an 8 m to 24 m multi-depth virtual image display. Image quality is maintained when the near-field and far-field optical paths are moved horizontally by 25 mm and 100 mm, respectively. This study offers guidelines for the multi-depth display, EYEBOX horizontal adjustment, and optical layout of augmented reality head-up displays.
Journal Article