Catalogue Search | MBRL
Explore the vast range of titles available.
7,399 result(s) for "range dynamics"
Analysing and mapping species range dynamics using occupancy models
by Lahoz-Monfort, José J.; Guillera-Arroita, Gurutzeta; Kéry, Marc
in Animal and plant ecology; Animal, plant and microbial ecology; Annual variations
2013
Aim: Our aims are: (1) to highlight the power of dynamic occupancy models for analysing species range dynamics while accounting for imperfect detection; (2) to emphasize the flexibility to model effects of environmental covariates on the dynamics parameters (extinction and colonization probability); and (3) to illustrate the development of predictive maps of range dynamics by projecting estimated probabilities of occupancy, local extinction and colonization. Location: Switzerland. Methods: We used data from the Swiss breeding bird survey to model the Swiss range dynamics of the European crossbill (Loxia curvirostra) from 2000 to 2007. Within-season replicate surveys at each 1 km² sample unit allowed us to fit dynamic occupancy models that account for imperfect detection, and thus estimate the following processes underlying the observed range dynamics: local extinction, colonization and detection. For comparison, we also fitted a model variant where detection was assumed to be perfect. Results: All model parameters were affected by elevation, forest cover and elevation-by-forest-cover interactions, and exhibited substantial annual variation. Detection probability varied seasonally and among years, highlighting the need for its estimation. Projecting parameter estimates in environmental or geographical space is a powerful means of understanding what the model tells us about covariate relationships. Geographical maps differed substantially between the model where detection was estimated and the one where it was assumed perfect, emphasizing the importance of accounting for imperfect detection in studies of range dynamics, even for high-quality data. Main conclusions: The study of species range dynamics is among the most exciting avenues for species distribution modelling. Dynamic occupancy models offer a robust framework for doing so, by accounting for imperfect detection and directly modelling the effects of covariates on the parameters that govern distributional change.
Mapping parameter estimates modelled by spatially indexed covariates is an under-used way to gain insights into dynamic species distributions.
Journal Article
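The dynamics this abstract describes (occupancy driven by colonization, extinction and imperfect detection across replicate surveys) can be sketched with a toy simulation. This is not the authors' crossbill model; all parameter values below are illustrative assumptions, and it only shows why naive (detection-ignoring) occupancy underestimates the true range:

```python
import random

random.seed(42)

def simulate_occupancy(n_sites=1000, n_seasons=8, psi1=0.4,
                       gamma=0.1, eps=0.2, p=0.5, n_surveys=3):
    """Toy dynamic (multi-season) occupancy model: initial occupancy psi1,
    colonization gamma, extinction eps, per-survey detection probability p."""
    z = [random.random() < psi1 for _ in range(n_sites)]  # true occupancy states
    true_occ, naive_occ = [], []
    for t in range(n_seasons):
        # replicate surveys: a site is 'observed occupied' if detected at least once
        detected = [zi and any(random.random() < p for _ in range(n_surveys))
                    for zi in z]
        true_occ.append(sum(z) / n_sites)
        naive_occ.append(sum(detected) / n_sites)
        # season-to-season transition: occupied sites persist with 1 - eps,
        # empty sites are colonized with probability gamma
        z = [(random.random() < 1 - eps) if zi else (random.random() < gamma)
             for zi in z]
    return true_occ, naive_occ

true_occ, naive_occ = simulate_occupancy()
for t, (tr, nv) in enumerate(zip(true_occ, naive_occ)):
    print(f"season {t}: true={tr:.2f} observed={nv:.2f}")
```

Because a site can only be observed occupied if it truly is, the observed proportion never exceeds the true one; fitting a dynamic occupancy model is what recovers the gap.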
An Adaptive Method to Recover High Dynamic Range Images from Multi-camera Systems in Back-Lighting Scenario
2024
The reconstruction of multi-view high dynamic range images faces several challenges, including holes and artifacts in the disparity map caused by non-linear and large baselines between the involved cameras. These challenges intensify when a back-lighting scenario is involved, which is increasingly common with current industrial growth. To dispense with disparity map estimation and its related challenges, and to generate back-lighting multi-view high dynamic range images, we propose a method that relies on accurately detected and matched features between adjacent viewpoints instead of estimating the disparity map, thus providing flexibility in the camera baseline. Next, we estimate the exposure gain between the matched features. We then restore the multi-view low dynamic range images based on the estimated gain and generate a final high dynamic range image per view. Experimental results demonstrate superior performance of the proposed method over state-of-the-art methods in both objective and subjective comparisons. These results suggest that our method is suitable for improving the visual quality of multi-view low dynamic range images captured in back-lighting conditions by commercial cameras sparsely located relative to each other in any general public infrastructure.
Journal Article
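The exposure-gain step this abstract mentions can be illustrated with a minimal least-squares sketch: given intensities sampled at matched feature points in two views, fit a single global gain. The global-gain model and the sample values below are illustrative assumptions, not the paper's actual estimator:

```python
def estimate_exposure_gain(ref_vals, other_vals):
    """Least-squares estimate of a global exposure gain g with other ≈ g * ref,
    from intensities sampled at matched feature points in two views."""
    num = sum(r * o for r, o in zip(ref_vals, other_vals))
    den = sum(r * r for r in ref_vals)
    return num / den

# matched feature intensities from two views; the second view is
# roughly twice as exposed as the first
ref   = [10.0, 40.0, 90.0, 120.0, 200.0]
other = [21.0, 79.0, 182.0, 239.0, 401.0]
g = estimate_exposure_gain(ref, other)
print(round(g, 3))  # close to 2.0
```

With the gain known, intensities from one view can be brought onto the other's exposure scale without ever estimating a disparity map.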
Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair
by Sung-Jea Ko; Won Jae Park; Seok Kang
in Cameras; Chemical technology; high dynamic range imaging
2017
In this paper, a high dynamic range (HDR) imaging method based on the stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, the radiance values in under/over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped using the disparity estimated between the initial stereo HDR images, and then effective hole-filling is applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using a weight map. The experimental results demonstrate objectively and subjectively that the proposed stereo HDR imaging method provides better performance than the conventional method.
Journal Article
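The first step this abstract describes (converting LDR pixels to radiance via an inverse camera response, then merging with a weight map) is the classic weighted HDR merge. A minimal sketch, assuming a simple gamma-curve response rather than the response function the paper estimates from the image pair:

```python
def inv_crf(z, gamma=2.2):
    """Assumed inverse camera response: pixel value in [0,1] -> relative radiance."""
    return z ** gamma

def hat_weight(z):
    """Triangular weight favoring well-exposed (mid-range) pixels."""
    return 1.0 - abs(2.0 * z - 1.0)

def merge_hdr(pixels_exposures):
    """Merge differently exposed measurements of one scene point.
    pixels_exposures: list of (pixel value in [0,1], exposure time)."""
    num = den = 0.0
    for z, dt in pixels_exposures:
        w = hat_weight(z)
        num += w * inv_crf(z) / dt   # radiance estimate from this exposure
        den += w
    return num / den if den > 0 else 0.0

# one scene point captured at a short and a long exposure
radiance = merge_hdr([(0.25, 1 / 30), (0.7, 1 / 8)])
print(radiance)
```

Each exposure votes for a radiance estimate, and the weight map down-weights clipped pixels so the under/over-exposed regions are filled from the better-exposed source.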
Dual-Attention-Guided Network for Ghost-Free High Dynamic Range Imaging
2022
Ghosting artifacts caused by moving objects and misalignments are a key challenge in constructing high dynamic range (HDR) images. Current methods first register the input low dynamic range (LDR) images using optical flow before merging them. This process is error-prone, and often causes ghosting in the resulting merged image. We propose a novel dual-attention-guided end-to-end deep neural network, called DAHDRNet, which produces high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use dual-attention modules to guide the merging according to the reference image. DAHDRNet thus exploits both spatial attention and feature channel attention to achieve ghost-free merging. The spatial attention modules automatically suppress undesired components caused by misalignments and saturation, and enhance the fine details in the non-reference images. The channel attention modules adaptively rescale channel-wise features by considering the inter-dependencies between channels. The dual-attention approach is applied recurrently to further improve feature representation, and thus alignment. A dilated residual dense block is devised to make full use of the hierarchical features and increase the receptive field when hallucinating missing details. We employ a hybrid loss function, which consists of a perceptual loss, a total variation loss, and a content loss to recover photo-realistic images. Although DAHDRNet is not flow-based, it can be applied to flow-based registration to reduce artifacts caused by optical-flow estimation errors. Experiments on different datasets show that the proposed DAHDRNet achieves state-of-the-art quantitative and qualitative results.
Journal Article
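The spatial-attention idea above (suppress non-reference content that disagrees with the reference before merging) can be sketched with a hand-set per-position gate. DAHDRNet learns its attention maps with convolutions, so the threshold and slope below are purely illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def spatial_attention(ref, nonref, k=20.0, tau=0.1):
    """Toy spatial attention map: positions where the non-reference image
    disagrees with the reference by more than tau (likely ghosting or
    misalignment) get near-zero attention and are suppressed."""
    return [sigmoid(k * (tau - abs(r - n))) for r, n in zip(ref, nonref)]

ref    = [0.50, 0.50, 0.50, 0.50]
nonref = [0.50, 0.52, 0.90, 0.48]   # position 2 is misaligned
attn = spatial_attention(ref, nonref)
gated = [a * n for a, n in zip(attn, nonref)]  # attention-rescaled features
print([round(a, 3) for a in attn])
```

The misaligned position receives an attention weight near zero, so it contributes almost nothing to the merge, which is exactly the ghost-suppression behavior the network learns end to end.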
Real-time high dynamic range laser scanning microscopy
2016
In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples of different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging.
Confocal and multiphoton fluorescence microscopy often suffers from low dynamic range. Here the authors develop a high dynamic range, laser scanning fluorescence technique by simultaneously recording different light intensity ranges. The method can be adapted to commercial systems.
Journal Article
Multi-Frame Content-Aware Mapping Network for Standard-Dynamic-Range to High-Dynamic-Range Television Artifact Removal
2024
Recently, advancements in image sensor technology have paved the way for the proliferation of high-dynamic-range television (HDRTV). Consequently, there has been a surge in demand for the conversion of standard-dynamic-range television (SDRTV) to HDRTV, especially due to the dearth of native HDRTV content. However, since SDRTV often comes with video encoding artifacts, SDRTV to HDRTV conversion often amplifies these encoding artifacts, thereby reducing the visual quality of the output video. To solve this problem, this paper proposes a multi-frame content-aware mapping network (MCMN), aiming to improve the performance of conversion from low-quality SDRTV to high-quality HDRTV. Specifically, we utilize the temporal spatial characteristics of videos to design a content-aware temporal spatial alignment module for the initial alignment of video features. In the feature prior extraction stage, we innovatively propose a hybrid prior extraction module, including cross-temporal priors, local spatial priors, and global spatial prior extraction. Finally, we design a temporal spatial transformation module to generate an improved tone mapping result. From time to space, from local to global, our method makes full use of multi-frame information to perform inverse tone mapping of single-frame images, while it is also able to better repair coding artifacts.
Journal Article
A dynamic range adjustable inverse tone mapping operator based on human visual system
2023
A conventional inverse tone mapping operator (iTMO) reconstructs high dynamic range (HDR) images with a fixed dynamic range and tends to produce distortion in both high- and low-exposure regions of the HDR images. This paper proposes a dynamic range adjustable inverse tone mapping algorithm based on a single LDR image, which combines photoreceptor response and light adaptation. First, the linearized image is converted to the retinal response in the LDR environment. Then, it is expanded to obtain the retinal response corresponding to HDR scenes; this expansion can also be adjusted according to the target dynamic range. Owing to the different states of light adaptation, the expanded retinal response is converted into a set of HDR images with different background exposure intensities. Processing the different exposure regions separately effectively reduces the distortion in high- and low-exposure regions. Finally, this group of HDR images is synthesized based on the corresponding weight maps. The efficiency and high visual quality of the proposed algorithm are validated on open-source datasets, and its superior performance is also demonstrated by objective evaluations on different types of LDR images.
Journal Article
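The photoreceptor-response expansion this abstract describes is commonly built on the Naka-Rushton response curve. A minimal sketch, assuming that standard model and illustrative adaptation levels; the paper's exact formulation and its handling of separate exposure regions may differ:

```python
def naka_rushton(I, sigma, n=0.74):
    """Photoreceptor (Naka-Rushton) response in [0, 1); sigma is the
    semi-saturation (adaptation) level of the viewing environment."""
    return I**n / (I**n + sigma**n)

def inverse_naka_rushton(R, sigma, n=0.74):
    """Invert the response back to a luminance under adaptation level sigma."""
    return sigma * (R / (1.0 - R)) ** (1.0 / n)

def expand_luminance(ldr_lum, sigma_ldr, sigma_hdr, n=0.74):
    """Compute the retinal response under LDR adaptation, then invert it under
    a brighter HDR adaptation level; sigma_hdr controls the target range."""
    R = naka_rushton(ldr_lum, sigma_ldr, n)
    return inverse_naka_rushton(R, sigma_hdr, n)

print(round(expand_luminance(0.5, sigma_ldr=0.2, sigma_hdr=50.0), 1))
```

Raising sigma_hdr stretches the output range while preserving the relative response ordering, which is what makes the expansion "dynamic range adjustable".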
A high-dynamic-range visual sensing method for feature extraction of welding pool based on adaptive image fusion
by Cui, Yanxin; Shi, Yonghua; Zhang, Baori
in Advanced manufacturing technologies; Arc welding; CAE) and Design
2021
The high dynamic range present in arc welding with high energy density challenges most industrial cameras, causing badly exposed pixels in the captured images and making feature detection from the internal weld pool difficult. This paper proposes a novel monitoring method called adaptive image fusion, which increases the amount of information contained in the welding images and can be realized on a common industrial camera at low cost. It combines original images captured rapidly by the camera into one fused image, and the settings of these images are based on real-time analysis of the realistic scene irradiance during the welding process. Experiments are carried out to find the operating window for the adaptive image fusion method, providing rules for obtaining a fused image with as much information as possible. A comparison between imaging with and without the proposed method shows that the fused image has a wider dynamic range and includes more useful features from the weld pool. The improvement is also verified by extracting both the internal and external features of the weld pool within the same fused image using the proposed method. The results show that the proposed method can adaptively expand the dynamic range of a visual monitoring system at low cost, which benefits feature extraction from the internal weld pool.
Journal Article
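The fusion step (combining rapidly captured frames into one image that keeps well-exposed content from each) can be sketched with a standard well-exposedness weight. The Gaussian weight and sample values below are illustrative, not the authors' adaptive scheme:

```python
import math

def well_exposedness(z, sigma=0.2):
    """Weight peaking at mid-gray; clipped dark/bright pixels get near-zero weight."""
    return math.exp(-((z - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_pixels(exposures):
    """Fuse one pixel captured at several exposures into a single value
    using normalized well-exposedness weights (exposure-fusion style)."""
    weights = [well_exposedness(z) for z in exposures]
    total = sum(weights)
    return sum(w * z for w, z in zip(weights, exposures)) / total

# one weld-pool pixel as seen in three frames:
# clipped dark, well exposed, clipped bright
print(round(fuse_pixels([0.02, 0.55, 0.98]), 3))
```

The clipped measurements receive almost no weight, so the fused pixel is dominated by the well-exposed frame; applying this per pixel yields an image with an effectively wider dynamic range.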
Dual Frequency Transformer for Efficient SDR-to-HDR Translation
2024
The SDR-to-HDR translation technique can convert the abundant standard-dynamic-range (SDR) media resources to high-dynamic-range (HDR) ones, which can represent high-contrast scenes, providing more realistic visual experiences. While recent vision Transformers have achieved promising performance in many low-level vision tasks, there are few works attempting to leverage Transformers for SDR-to-HDR translation. In this paper, we are among the first to investigate the performance of Transformers for SDR-to-HDR translation. We find that directly using the self-attention mechanism may involve artifacts in the results due to the inappropriate way to model long-range dependencies between the low-frequency and high-frequency components. Taking this into account, we advance the self-attention mechanism and present a dual frequency attention (DFA), which leverages the self-attention mechanism to separately encode the low-frequency structural information and high-frequency detail information. Based on the proposed DFA, we further design a multi-scale feature fusion network, named dual frequency Transformer (DFT), for efficient SDR-to-HDR translation. Extensive experiments on the HDRTV1K dataset demonstrate that our DFT can achieve better quantitative and qualitative performance than the recent state-of-the-art methods. The code of our DFT is made publicly available at https://github.com/CS-GangXu/DFT.
Journal Article
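The low-/high-frequency separation underlying the proposed dual frequency attention can be sketched on a 1-D signal. DFA operates on learned feature maps with self-attention, so the moving-average filter below only illustrates the decomposition idea, under the assumption that a simple low-pass captures "structure" and the residual captures "detail":

```python
def low_pass(signal, radius=2):
    """Simple moving-average low-pass filter (window clamped at the edges)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def split_frequencies(signal, radius=2):
    """Split a signal into low-frequency structure and high-frequency detail;
    by construction the two parts sum back to the original."""
    low = low_pass(signal, radius)
    high = [s - l for s, l in zip(signal, low)]
    return low, high

signal = [0.1, 0.2, 0.8, 0.9, 0.3, 0.2, 0.7]
low, high = split_frequencies(signal)
assert all(abs((l + h) - s) < 1e-9 for l, h, s in zip(low, high, signal))
```

Encoding the two components separately, as DFA does, avoids letting high-frequency detail and low-frequency structure compete inside a single attention map.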
Attention-Edge-Assisted Neural HDRI Based on Registered Extreme-Exposure-Ratio Images
2025
In order to improve image visual quality in high dynamic range (HDR) scenes while avoiding motion ghosting artifacts caused by exposure time differences, innovative image sensors capture two registered extreme-exposure-ratio (EER) image pairs with complementary and symmetric exposure configurations for high dynamic range imaging (HDRI). However, existing multi-exposure fusion (MEF) algorithms suffer from luminance inversion artifacts in overexposed and underexposed regions when directly combining such EER image pairs. This paper proposes a neural network framework for HDRI, built on attention mechanisms and edge assistance, to recover missing luminance information. The framework derives local luminance representations from a convolution kernel perspective, and subsequently refines the global luminance order in the fused image using a Transformer-based residual group. To support this two-stage process, multi-scale channel features are extracted by a double-attention mechanism, while edge cues are incorporated to enhance detail preservation in both highlight and shadow regions. The experimental results validate that the proposed framework can alleviate luminance inversion in HDRI when the inputs are two EER images, and maintain fine structural details in complex HDR scenes.
Journal Article