Catalogue Search | MBRL
Explore the vast range of titles available.
5,174 result(s) for "rendering"
How does light impact on non-uniform surfaces?
2025
Light is necessary to acquire images. In this paper, we focus on the impact of this light on non-uniform surfaces. We present two scenes under different light settings (reference illuminants such as D65 or illuminant A, and some deliberately poorly designed LED lights). After comparing the quality of these lights, we examine their impact on the rendering of the scene. The poorly designed lights clearly illustrate how light affects the rendering complexity of images.
Journal Article
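The paper compares reference illuminants (D65, illuminant A) against poorly designed LEDs. As a rough, hypothetical illustration of how such a comparison can be set up spectrally (not the authors' code; all spectra below are flat placeholders), one can multiply a surface reflectance by each illuminant's spectral power distribution and integrate against colour-matching functions:

```python
import numpy as np

# Placeholder spectra; real data would be measured reflectances, illuminant SPDs
# (D65, illuminant A, an LED) and the CIE 1931 colour-matching functions,
# all sampled on the same wavelength grid.
wavelengths = np.arange(400, 701, 10)            # 400-700 nm, 10 nm steps
reflectance = np.full(wavelengths.shape, 0.5)    # surface reflectance
illuminant_d65 = np.ones(wavelengths.shape)      # reference illuminant SPD
illuminant_led = np.ones(wavelengths.shape)      # poorly designed LED SPD
cmf_xyz = np.ones((wavelengths.size, 3))         # colour-matching functions (x, y, z)

def rendered_xyz(reflectance, illuminant, cmf):
    """XYZ tristimulus of a surface under an illuminant (uniform wavelength grid)."""
    reflected = reflectance * illuminant                 # spectrum leaving the surface
    xyz = (reflected[:, None] * cmf).sum(axis=0)         # integrate against x, y, z
    norm = (illuminant * cmf[:, 1]).sum()                # normalise by the illuminant's Y
    return xyz / norm

print(rendered_xyz(reflectance, illuminant_d65, cmf_xyz))
print(rendered_xyz(reflectance, illuminant_led, cmf_xyz))
```

Comparing the two outputs for the same surface shows how a spiky LED spectrum shifts rendered colours relative to a reference illuminant.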
Improved Rendering of Car Paint Sparkle with Random Tiling Approach
2025
Car paint suppliers and automobile manufacturers show great interest in virtually assessing and designing new automotive paints. Car paint has a complex visual appearance in which the sparkling of the effect pigments plays a big role. Previously, bi-directional texture functions (BTFs) were proposed to represent a car paint with sparkling. Rendering requires the interpolation of the recorded texture images that are part of the BTF. Conventional approaches for blending the images use linear interpolation, which reduces contrast and intensity and makes the dynamics of the glittering static. Here, we propose a new random tiling approach for texture image interpolation in which each pixel is interpolated over a short, random interval, independently of the others. Results show that sparkling contrast and intensity are preserved and that the dynamics of the sparkling are more realistic.
Journal Article
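A minimal sketch of the per-pixel random-interval idea described in the abstract above, under my own assumptions about how the interval is placed (an illustration, not the authors' implementation): each pixel gets its own short, randomly positioned transition window, so sparkle pixels switch between the two recorded textures instead of fading uniformly.

```python
import numpy as np

def random_tiling_blend(tex_a, tex_b, w, interval=0.1, rng=None):
    """Blend two texture images per pixel.

    w        : global interpolation parameter in [0, 1]
    interval : assumed length of each pixel's transition window
    Each pixel transitions from tex_a to tex_b inside its own randomly placed
    window, independently of its neighbours.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Random window start per pixel; keep it fixed across frames for a stable animation.
    start = rng.uniform(0.0, 1.0 - interval, size=tex_a.shape[:2])
    local_w = np.clip((w - start) / interval, 0.0, 1.0)[..., None]
    return (1.0 - local_w) * tex_a + local_w * tex_b

# Usage with dummy textures:
a = np.random.rand(64, 64, 3)
b = np.random.rand(64, 64, 3)
frame = random_tiling_blend(a, b, w=0.5)
```

Compared with a single global blend weight, most pixels sit at the full intensity of one texture or the other at any given moment, which is what preserves sparkle contrast.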
Enhancing View Synthesis with Depth-Guided Neural Radiance Fields and Improved Depth Completion
2024
Neural radiance fields (NeRFs) leverage a neural representation to encode scenes, obtaining photorealistic rendering of novel views. However, NeRF has notable limitations. A significant drawback is that it does not capture surface geometry and only renders the object surface colors. Furthermore, the training of NeRF is exceedingly time-consuming. We propose Depth-NeRF as a solution to these issues. Specifically, our approach employs a fast depth completion algorithm to denoise and complete the depth maps generated by RGB-D cameras. These improved depth maps guide the sampling points of NeRF to be distributed closer to the scene’s surface, benefiting from dense depth information. Furthermore, we have optimized the network structure of NeRF and integrated depth information to constrain the optimization process, ensuring that the termination distribution of the ray is consistent with the scene’s geometry. Compared to NeRF, our method accelerates the training speed by 18%, and the rendered images achieve a higher PSNR than those obtained by mainstream methods. Additionally, there is a significant reduction in RMSE between the rendered scene depth and the ground truth depth, which indicates that our method can better capture the geometric information of the scene. With these improvements, we can train the NeRF model more efficiently and achieve more accurate rendering results.
Journal Article
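The depth-guided sampling idea can be illustrated with a small sketch (an assumption-laden illustration of the general principle, not the paper's network or training code): draw the sample distances along each ray from a narrow distribution centred on the completed depth, rather than uniformly between the near and far planes.

```python
import numpy as np

def depth_guided_samples(depth, n_samples=32, near=0.1, far=6.0, sigma=0.05, rng=None):
    """Sample distances along each ray, concentrated near a depth prior.

    depth : (N,) completed depth per ray (assumed to come from a depth-completion step)
    sigma : assumed standard deviation of the depth prior
    Returns (N, n_samples) sorted sample distances clipped to [near, far].
    """
    rng = np.random.default_rng() if rng is None else rng
    t = rng.normal(loc=depth[:, None], scale=sigma, size=(depth.shape[0], n_samples))
    t = np.clip(t, near, far)
    return np.sort(t, axis=1)

# Usage: 4 rays with depth priors around 2 m.
print(depth_guided_samples(np.array([2.0, 2.1, 1.9, 2.05]), n_samples=8))
```

Concentrating samples near the surface is what lets a depth-guided NeRF spend its capacity where the scene geometry actually is.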
Auralization based on multi-perspective ambisonic room impulse responses
2020
Most often, virtual acoustic rendering employs real-time updated room acoustic simulations to accomplish auralization for a variable listener perspective. As an alternative, we propose and test a technique to interpolate room impulse responses, specifically Ambisonic room impulse responses (ARIRs) available at a grid of spatially distributed receiver perspectives, measured or simulated in a desired acoustic environment. In particular, we extrapolate a triplet of neighboring ARIRs to the variable listener perspective, preceding their linear interpolation. The extrapolation is achieved by decomposing each ARIR into localized sound events and re-assigning their direction, time, and level to what could be observed at the listener perspective, with as much temporal, directional, and perspective context as possible. We propose to undertake this decomposition in two levels: Peaks in the early ARIRs are decomposed into jointly localized sound events, based on time differences of arrival observed in either an ARIR triplet, or all ARIRs observing the direct sound. Sound events that could not be jointly localized are treated as residuals whose less precise localization utilizes direction-of-arrival detection and the estimated time of arrival. For the interpolated rendering, suitable parameter settings are found by evaluating the proposed method in a listening experiment, using both measured and simulated ARIR data sets, under static and time-varying conditions.
Journal Article
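The last stage described above, linear interpolation of a triplet of ARIRs for the listener perspective, can be sketched with barycentric weights over the receiver triangle. The extrapolation and sound-event decomposition steps are omitted here, and the array shapes and first-order (4-channel) format are assumptions.

```python
import numpy as np

def barycentric_weights(p, tri):
    """Barycentric weights of 2-D listener position p inside triangle tri (3x2)."""
    a, b, c = tri
    m = np.column_stack((b - a, c - a))      # 2x2 edge matrix
    w_bc = np.linalg.solve(m, p - a)         # weights for vertices b and c
    return np.array([1.0 - w_bc.sum(), *w_bc])

def interpolate_arirs(arirs, weights):
    """Weighted sum of three ARIRs, each shaped (channels, samples)."""
    return np.tensordot(weights, arirs, axes=1)

# Usage with dummy first-order (4-channel) ARIRs at three receiver positions.
tri = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
arirs = np.random.randn(3, 4, 48000)
w = barycentric_weights(np.array([0.5, 0.5]), tri)
h = interpolate_arirs(arirs, w)              # interpolated ARIR at the listener position
```

In the paper this interpolation is only applied after each ARIR has been extrapolated to the listener perspective, which is what keeps early reflections aligned in time and direction.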
Rain Rendering for Evaluating and Improving Robustness to Bad Weather
by Tremblay Maxime, de Charette Raoul, Lalonde Jean-François
in Algorithms, Atmospheric models, Computer vision
2021
Rain fills the atmosphere with water particles, which breaks the common assumption that light travels unaltered from the scene to the camera. While it is well known that rain affects computer vision algorithms, quantifying its impact is difficult. In this context, we present a rain rendering pipeline that enables the systematic evaluation of common computer vision algorithms under controlled amounts of rain. We present three different ways to add synthetic rain to existing image datasets: completely physics-based, completely data-driven, and a combination of both. The physics-based rain augmentation combines a physical particle simulator and accurate rain photometric modeling. We validate our rendering methods with a user study, demonstrating that our rain is judged as much as 73% more realistic than the state of the art. Using our generated rain-augmented KITTI, Cityscapes, and nuScenes datasets, we conduct a thorough evaluation of object detection, semantic segmentation, and depth estimation algorithms and show that their performance decreases in degraded weather: on the order of 15% for object detection, 60% for semantic segmentation, and a 6-fold increase in depth estimation error. Finetuning on our augmented synthetic data yields improvements of 21% on object detection, 37% on semantic segmentation, and 8% on depth estimation.
Journal Article
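As a toy illustration of the data-driven flavour of rain augmentation mentioned in the abstract (the blending model, constants, and streak layer below are assumptions for illustration, not the authors' physics-based pipeline), one can composite a streak layer over a clean image together with a fog-like veil that lowers contrast:

```python
import numpy as np

def add_rain(image, rain_layer, veil_strength=0.15, streak_gain=0.8):
    """Composite a rain streak layer onto an image (values in [0, 1]).

    image        : (H, W, 3) clean image
    rain_layer   : (H, W) pre-rendered or captured streak intensity map (assumed input)
    veil_strength: assumed global attenuation toward a grey atmospheric veil
    """
    veil = veil_strength * np.ones_like(image) * 0.8        # grey-ish veil colour
    attenuated = (1.0 - veil_strength) * image + veil       # fog-like contrast loss
    out = attenuated + streak_gain * rain_layer[..., None]  # additive bright streaks
    return np.clip(out, 0.0, 1.0)

# Usage with random placeholders for the image and the streak layer.
img = np.random.rand(256, 256, 3)
streaks = (np.random.rand(256, 256) > 0.995).astype(float)  # sparse bright pixels as fake streaks
rainy = add_rain(img, streaks)
```

The physics-based variant described in the paper instead simulates individual raindrops and their photometry before compositing, which is what makes the augmentation controllable in terms of rainfall rate.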