Catalogue Search | MBRL
Explore the vast range of titles available.
7,749 result(s) for "infrared image"
An infrared and visible image fusion algorithm based on ResNet-152
by Du, Ping; Zhang, Liming; Li, Heng
in 1212: Deep Learning Techniques for Infrared Image/Video Understanding; Algorithms; Computer Communication Networks
2022
The fusion of infrared and visible images yields a combined image with the hidden targets of the infrared source and the rich details of the visible source. To improve the details of the fused image while reducing artifacts and noise, an infrared and visible image fusion algorithm based on ResNet-152 is proposed. First, the source images are decomposed into a low-frequency part and a high-frequency part. The low-frequency part is processed with an average weighting strategy. Second, multi-layer features are extracted from the high-frequency part using the ResNet-152 network; L1 regularization, convolution operations, bilinear-interpolation upsampling, and a maximum selection strategy are applied to the feature layers to obtain the maximum weight layer. The maximum weight layer is multiplied with the high-frequency part to form the new high-frequency part. Finally, the fused image is reconstructed from the low-frequency and high-frequency parts. Experiments show that the proposed method retains the significant features of the source images and recovers more texture detail, and it effectively reduces artifacts and noise. Its consistency in objective evaluation and visual observation is superior to the comparative algorithms.
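The decompose/fuse/reconstruct pipeline the abstract outlines can be sketched as follows. This is a simplified stand-in, not the paper's method: the per-pixel maximum rule below replaces the ResNet-152 weight map, and the mean filter and its size are assumptions.

```python
import numpy as np

def box_blur(img, k=15):
    """Simple mean filter, used here as the low-pass decomposition."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(ir, vis, k=15):
    # Decompose each source into low- and high-frequency parts.
    ir_low, vis_low = box_blur(ir, k), box_blur(vis, k)
    ir_high, vis_high = ir - ir_low, vis - vis_low
    # Low-frequency parts: the average weighting strategy from the abstract.
    fused_low = 0.5 * (ir_low + vis_low)
    # High-frequency parts: keep, per pixel, the stronger response
    # (a crude stand-in for the ResNet-152-derived maximum weight layer).
    fused_high = np.where(np.abs(ir_high) >= np.abs(vis_high), ir_high, vis_high)
    # Reconstruct the fused image from the two parts.
    return fused_low + fused_high
```

Fusing an image with itself returns the image unchanged, a quick sanity check that decomposition and reconstruction are consistent.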
Journal Article
Characteristics of Infrared Radiation of Coal Specimens Under Uniaxial Loading
by
Guo, Jinshuai
,
Zhang, Yao
,
Ma, Liqiang
in
Civil Engineering
,
Coal
,
Earth and Environmental Science
2016
An object in its natural state at temperatures greater than -273.15 C generates electromagnetic waves, including infrared radiation (Wu et al. 2000). Coal and rock under loading also produce detectable electromagnetic radiation anomalies including wavelengths in the infrared band (Brady and Rowell 1986; Luong 1987). Through studying the infrared radiation characteristics on the surface of coal and rock specimens under loading conditions, one can derive the relationship between infrared radiation and the mechanical parameters of the specimens under dynamic stress. This information can be used to predict dynamic phenomena including ground pressure, coal bursting, and rock bursting.
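The premise that any body above absolute zero radiates can be quantified with the Stefan-Boltzmann law. The helper below is illustrative background, not part of the paper; the emissivity parameter and its default are assumptions.

```python
# Stefan-Boltzmann law: radiant exitance M = epsilon * sigma * T^4, T in kelvin.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(t_celsius, emissivity=1.0):
    """Power emitted per unit surface area (W/m^2) of a grey body."""
    t_kelvin = t_celsius + 273.15
    return emissivity * SIGMA * t_kelvin ** 4
```

At room temperature (20 °C) an ideal blackbody emits roughly 419 W/m², which is why even unloaded specimens show up in a thermal camera.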
Journal Article
Dead Laying Hens Detection Using TIR-NIR-Depth Images and Deep Learning on a Commercial Farm
2023
In large-scale laying hen farming, timely detection of dead chickens helps prevent cross-infection, disease transmission, and economic loss. Dead chicken detection is still performed manually and is one of the major labor costs on commercial farms. This study proposed a new method for dead chicken detection using multi-source images and deep learning and evaluated the detection performance with different source images. We first introduced a pixel-level image registration method that used depth information to project the near-infrared (NIR) and depth images into the coordinate frame of the thermal infrared (TIR) image, resulting in registered images. Then, the registered single-source (TIR, NIR, depth), dual-source (TIR-NIR, TIR-depth, NIR-depth), and multi-source (TIR-NIR-depth) images were separately used to train dead chicken detection models with object detection networks, including YOLOv8n, Deformable DETR, Cascade R-CNN, and TOOD. The results showed that, at an IoU (Intersection over Union) threshold of 0.5, the performance of these models was not entirely the same. Among them, the model using the NIR-depth image and Deformable DETR achieved the best performance, with an average precision (AP) of 99.7% (IoU = 0.5) and a recall of 99.0% (IoU = 0.5). As the IoU threshold increased, we found the following: The model with the NIR image achieved the best performance among models with single-source images, with an AP of 74.4% (IoU = 0.5:0.95) in Deformable DETR. The performance with dual-source images was higher than that with single-source images. The model with the TIR-NIR or NIR-depth image outperformed the model with the TIR-depth image, achieving an AP of 76.3% (IoU = 0.5:0.95) and 75.9% (IoU = 0.5:0.95) in Deformable DETR, respectively. The model with the multi-source image also achieved higher performance than that with single-source images.
However, there was no significant improvement compared to the model with the TIR-NIR or NIR-depth image, and the AP of the model with multi-source image was 76.7% (IoU = 0.5:0.95) in Deformable DETR. By analyzing the detection performance with different source images, this study provided a reference for selecting and using multi-source images for detecting dead laying hens on commercial farms.
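The depth-based projection of a NIR pixel into TIR coordinates that the registration step describes can be sketched with a standard pinhole camera model. All names here (intrinsic matrices `K_nir` and `K_tir`, extrinsics `R`, `t`) are generic conventions, not the paper's notation, and the real pipeline would operate on whole images rather than single pixels.

```python
import numpy as np

def project_to_tir(u, v, depth, K_nir, K_tir, R, t):
    """Map a NIR pixel (u, v) with known depth (meters) into TIR pixel coords."""
    # Back-project the pixel to a 3D point in the NIR camera frame.
    xyz_nir = depth * (np.linalg.inv(K_nir) @ np.array([u, v, 1.0]))
    # Rigid transform into the TIR camera frame.
    xyz_tir = R @ xyz_nir + t
    # Project with the TIR intrinsics and dehomogenize.
    uvw = K_tir @ xyz_tir
    return uvw[:2] / uvw[2]
```

With identical cameras and an identity transform, a pixel maps to itself regardless of depth, which is a useful sanity check on the conventions.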
Journal Article
FCDFusion: A Fast, Low Color Deviation Method for Fusing Visible and Infrared Image Pairs
2025
Visible and infrared image fusion (VIF) aims to combine information from visible and infrared images into a single fused image. Previous VIF methods usually employ a color space transformation to keep the hue and saturation from the original visible image. However, for fast VIF methods, this operation accounts for the majority of the calculation and is the bottleneck preventing faster processing. In this paper, we propose a fast fusion method, FCDFusion, with little color deviation. It preserves color information without color space transformations, by directly operating in RGB color space. It incorporates gamma correction at little extra cost, allowing color and contrast to be rapidly improved. We regard the fusion process as a scaling operation on 3D color vectors, greatly simplifying the calculations. A theoretical analysis and experiments show that our method can achieve satisfactory results in only 7 FLOPs per pixel. Compared to state-of-the-art fast, color-preserving methods using HSV color space, our method provides higher contrast at only half of the computational cost. We further propose a new metric, color deviation, to measure the ability of a VIF method to preserve color. It is specifically designed for VIF tasks with color visible-light images, and overcomes deficiencies of existing VIF metrics used for this purpose. Our code is available at https://github.com/HeasonLee/FCDFusion.
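The idea of treating fusion as a scaling operation on 3D color vectors in RGB space can be illustrated as below. This is not the FCDFusion algorithm itself: the Rec. 601 luminance weights and the simple averaging rule for the target intensity are assumptions chosen for the sketch. Because all three channels are multiplied by the same factor, channel ratios (and hence hue) are preserved without any color space transformation.

```python
import numpy as np

def scale_fuse(vis_rgb, ir, eps=1e-6):
    """Fuse a visible RGB image (HxWx3, in [0,1]) with an IR intensity
    image (HxW, in [0,1]) by scaling each RGB vector."""
    # Luminance of each visible pixel (Rec. 601 weights, an assumption).
    y = 0.299 * vis_rgb[..., 0] + 0.587 * vis_rgb[..., 1] + 0.114 * vis_rgb[..., 2]
    # Target intensity: a simple average of visible luminance and IR.
    target = 0.5 * (y + ir)
    # One scalar per pixel scales all three channels alike.
    scale = target / (y + eps)
    return np.clip(vis_rgb * scale[..., None], 0.0, 1.0)
```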
Journal Article
Discriminator guided visible-to-infrared image translation
2025
This paper proposes a discriminator-guided visible-to-infrared image translation algorithm based on a generative adversarial network and designs a multi-scale fusion generative network. The generative network enhances the perception of the image's fine-grained features by fusing features of different scales along the channel dimension. Meanwhile, the discriminator performs an infrared image reconstruction task, which provides additional infrared information for training the generator, and soft-label guidance produced by knowledge distillation improves the convergence efficiency of generator training. The experimental results show that, compared to existing typical infrared image generation algorithms, the proposed method generates higher-quality infrared images, achieves better performance in both subjective visual assessment and objective metric evaluation, and performs better in the downstream template matching and image fusion tasks.
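Soft-label guidance in the knowledge-distillation sense usually means blending hard targets with a teacher's predicted scores; a minimal sketch of that blending follows. The mixing weight `alpha` and the linear-interpolation form are assumptions, not details taken from this paper.

```python
import numpy as np

def soft_labels(hard, teacher, alpha=0.7):
    """Blend hard labels (0/1) with teacher-predicted probabilities.
    alpha = 1 recovers the hard labels; alpha = 0 uses only the teacher."""
    hard = np.asarray(hard, dtype=float)
    teacher = np.asarray(teacher, dtype=float)
    return alpha * hard + (1.0 - alpha) * teacher
```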
Journal Article
Deep Learning-Based Outdoor Object Detection Using Visible and Near-Infrared Spectrum
by Das, Nibaran; Kuiry, Somenath; Das, Alaka
in 1212: Deep Learning Techniques for Infrared Image/Video Understanding; Accuracy; Algorithms
2022
Object detection is one of the essential branches of computer vision. However, detecting objects in natural scenes is challenging for various reasons, for example, differing object sizes, overlap, and similarities in the colour and texture of different objects. The visible spectrum is not suited to standard computer vision tasks in many real-life scenarios. In low-visibility settings, moving outside the visible range, such as to the thermal spectrum or near-infrared (NIR) imaging, is significantly more beneficial. For the object detection task in this study, we used photos from both the RGB and NIR spectra. The purpose of this paper is to examine whether the multimodal information offered by the near-infrared (NIR) spectrum can be exploited in conjunction with the visible band for object detection. For example, because near-infrared wavelengths are less prone to haze and distortion, some objects that are visually indistinguishable in the RGB spectrum can be spotted in the NIR image. We gathered a well-organized dataset of outdoor scenes in three spectra, visible (RGB), near-infrared (NIR), and thermal, to train such a multispectral object recognition system. For the experiments, we use the YOLOv3 algorithm to train and evaluate our object detection models for NIR and RGB images separately, then train the model with four-channel input (three channels from RGB images and one channel from NIR images) and the corresponding annotations to see whether the model's performance improves even further in detecting the underlying objects. To determine the effectiveness of our approach, we conducted trials on YOLOv4 and SSD models and compared our results with existing related state-of-the-art models.
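The four-channel input the experiments describe is just an RGB image with the NIR image appended as a fourth channel; a minimal sketch, assuming the two images are already registered to the same pixel grid:

```python
import numpy as np

def make_four_channel(rgb, nir):
    """Stack an HxWx3 RGB image and an HxW NIR image into an HxWx4 input."""
    assert rgb.shape[:2] == nir.shape[:2], "images must be registered first"
    return np.concatenate([rgb, nir[..., None]], axis=-1)
```

The detector's first convolution layer then simply takes 4 input channels instead of 3.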
Journal Article
Spacecraft and Asteroid Thermal Image Generation for Proximity Navigation and Detection Scenarios
by Lavagna, Michèle Roberta; Quirino, Matteo
in Algorithms; asteroid thermal infrared image; Asteroids
2024
On-orbit autonomous relative navigation performance depends strongly on both the sensor suite and the state reconstruction selection. Whenever that suite relies on image-based sensors working in the visible spectral band, the illumination conditions strongly affect the accuracy and robustness of the state reconstruction outputs. To cope with that limitation, we investigate the effectiveness of exploiting image sensors active in the IR spectral band, which are not limited by lighting conditions. Effective and comprehensive testing and validation campaigns on navigation algorithms require a large dataset of images, which is available or easy to obtain in the visible band but not trivial to produce and not readily accessible for the thermal band. The paper presents an open-source tool that exploits accurate finite-volume thermal models of celestial objects and artificial satellites to create thermal images based on the camera dynamics. The thermal model relies on open CFD code (OpenFOAM), pushed to catch the finest details of the terrain or target geometries; the temperature field is then processed to compute the view factors between the camera and each face of the mesh, from which the radiative flux emitted by each face is extracted. Such data feed the rendering engine (Blender) that, together with the camera position and attitude, outputs the thermal image. The complete pipeline, fed by the orbiting target and the imaging sensor kinematics, outputs a proper synthetic thermal image dataset, exploitable by a relative navigation block or any other line of research. Furthermore, within the same framework, the article proposes two different thermal sensor models, though any sensor model can be applied, providing full customization of the output. The tool's performance is critically discussed and demonstrated for two typical proximity scenarios, an asteroid and an artificial satellite; for both cases, the challenges and capabilities of the implemented tool for synthetic thermal images are highlighted. Finally, the tool is applied in an ESA-sponsored phase B mission design and in related research works; the results for these cases are reported in the article.
Journal Article
Fault diagnosis of the bushing infrared images based on mask R‐CNN and improved PCNN joint algorithm
by Lu, Yuncai; Yang, Xiaoping; Li, Jiansheng
in Algorithms; Artificial neural networks; bushing frame
2021
Bushings serve as an important component of power transformers, so it is of great significance to keep them in good insulation condition. Infrared images of the bushing are used to diagnose faults through a combination of image segmentation and deep learning, covering object detection, fault region extraction, and fault diagnosis. By building an object detection system based on the Mask Region-based convolutional neural network (Mask R-CNN) framework, the bushing frame can be precisely extracted. To distinguish the fault region of the bushing from the background, a simple linear iterative clustering-based pulse coupled neural network is proposed to improve fault region segmentation. Then, two infrared image feature parameters, relative position and area, are used to classify the fault type effectively with the K-means clustering technique. With the proposed joint algorithm on bushing infrared images, the accuracy reaches 98%, compared with 44% for the conventional CNN classification method. The integrated algorithm provides a feasible and advantageous solution for field application of bushing image-based diagnosis.
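The final classification step clusters two-dimensional feature vectors (relative position, area) with K-means; a minimal self-contained sketch of that clustering follows. The feature definitions and parameters here are illustrative, not the paper's exact ones.

```python
import numpy as np

def kmeans(features, k=2, iters=50, seed=0):
    """Basic K-means over (relative position, area) feature vectors.
    Returns a cluster label per sample and the final cluster centers."""
    features = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct samples.
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center (Euclidean distance).
        d = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        # Recompute each center; keep the old one if its cluster empties.
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```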
Journal Article
An interactive deep model combined with Retinex for low-light visible and infrared image fusion
by Nie, Rencan; Zang, Yongsheng; Zhou, Dongming
in Algorithms; Artificial Intelligence; Computational Biology/Bioinformatics
2023
Directly fusing low-light visible and infrared images rarely yields results with rich structural details and critical infrared targets, owing to the limitations of extreme illumination; the resulting fusions typically neither describe the scene accurately nor suit machine perception. Consequently, a novel image fusion framework combined with Retinex theory, termed LLVIFusion, is designed for low-light visible and infrared image fusion. Specifically, LLVIFusion is trained via a two-stage strategy. First, a new interactive fusion network is trained to generate initial fusion results with more informative features. Within the interactive fusion network, features are continuously reused in the same branch and feature information from different branches constantly interacts through the designed fusion blocks, which allows the fusion network both to avoid losing information and to strengthen the information for subsequent processing. Further, an adaptive weight-based loss function is proposed to guide the fusion network during training. Next, a refinement network incorporating Retinex theory is introduced to optimize the visibility of the initial fusion results and obtain fusions of high visual quality. Compared with 14 state-of-the-art methods, LLVIFusion achieves the best values for all six objective measures on the LLVIP and MF-Net datasets, and obtains two best and two second-best values on the TNO dataset. These experimental results show that LLVIFusion successfully performs low-light visible and infrared image fusion and produces good fusion results under normal illumination. The code of LLVIFusion is available at https://github.com/govenda/LLVIFusion.
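Retinex theory models an image as the product of reflectance and illumination, I = R * L; the classic single-scale Retinex estimates L with a Gaussian blur and recovers log-reflectance as log(I) - log(L). The sketch below shows that decomposition only; it is illustrative background, not the paper's refinement network, and the blur scale is an assumption.

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def single_scale_retinex(img, sigma=15.0, eps=1e-6):
    """Estimate illumination with a separable Gaussian blur and return
    the log-reflectance log(I) - log(L)."""
    k = gaussian_kernel1d(sigma)
    # Separable blur: filter columns, then rows.
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, blurred)
    return np.log(img + eps) - np.log(blurred + eps)
```

On a uniform image the illumination estimate equals the image away from the borders, so the interior log-reflectance is zero.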
Journal Article
A CRDNet‐Based Watermarking Algorithm for Fused Visible–Infrared Images
2026
Infrared images often involve trade secrets; to protect their information security, a robust watermarking algorithm for infrared images is a fitting solution. Recently, many effective visible and infrared image fusion (VIF) algorithms have been proposed in medicine, biology, and geology. Robust watermarking algorithms can resist mild conventional attacks, and with technological progress, robustness against novel attacks such as screen shooting and print shooting has also become a research hotspot. However, VIF-based image watermarking algorithms are still scarce, so it is important to investigate a robust watermarking algorithm that can resist VIF attacks. Herein, an autoencoder against infrared fusion attacks, CRDNet (Convolutional Residual Dense Network), is proposed, built from several subnetworks: encoders and decoders based on residual and dense structures, a fusion network robust to 12 VIF algorithms, and predictors for predicting watermarked infrared images. The encoder and decoder also incorporate preprocessing steps, attention mechanisms, and activation functions suited to infrared images, and the fusion network improves the algorithm's robustness against infrared fusion attacks. The experimental results demonstrate that the bit error rate of CRDNet is reduced by at least about 4% compared to common autoencoders, and the peak signal-to-noise ratio of watermarked images is almost always greater than 38 dB.
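The two metrics the abstract reports, bit error rate and peak signal-to-noise ratio, have standard definitions that can be computed in a few lines; these helpers are generic implementations, not code from the paper.

```python
import numpy as np

def bit_error_rate(sent_bits, decoded_bits):
    """Fraction of watermark bits recovered incorrectly."""
    sent = np.asarray(sent_bits)
    decoded = np.asarray(decoded_bits)
    return float(np.mean(sent != decoded))

def psnr(original, watermarked, peak=255.0):
    """Peak signal-to-noise ratio (dB) between host and watermarked image."""
    diff = np.asarray(original, dtype=float) - np.asarray(watermarked, dtype=float)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

As a reference point, a uniform per-pixel error of 1 gray level on an 8-bit image gives a PSNR of about 48 dB, so the reported 38 dB corresponds to a somewhat larger but still mild distortion.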
Journal Article