28,611 result(s) for "Infrared imagery"
Type-printable photodetector arrays for multichannel meta-infrared imaging
Multichannel meta-imaging, inspired by the parallel-processing capability of neuromorphic computing, offers considerable advancements in resolution enhancement and edge discrimination in imaging systems, extending even into the mid- to far-infrared spectrum. Typical multichannel infrared imaging systems currently rely on separate optical gratings or merged multi-camera setups, which require complex circuit design and heavy power consumption, hindering the implementation of advanced human-eye-like imagers. Here, we present printable graphene plasmonic photodetector arrays driven by a ferroelectric superdomain for multichannel meta-infrared imaging with enhanced edge discrimination. The fabricated photodetectors exhibited multiple spectral responses under zero-bias operation by directly rescaling the ferroelectric superdomain instead of reconstructing separate gratings. We also demonstrated enhanced and faster shape classification (98.1%) and edge detection (98.2%) using our multichannel infrared images compared with single-channel detectors. Our proof-of-concept photodetector arrays simplify multichannel infrared imaging systems and offer potential solutions for efficient edge detection in human-brain-type machine vision. Here, the authors report the realization of a multichannel mid-infrared imaging system based on zero-bias type-printed graphene plasmonic photodetector arrays on ferroelectric substrates, showing enhanced infrared image recognition and edge detection accuracy.
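The edge-discrimination claim above rests on combining per-channel spectral responses. As a rough illustration only (not the authors' pipeline), a minimal sketch of computing Sobel edge maps per channel and max-fusing them across channels; `sobel_edges` and `fuse_channels` are hypothetical helper names:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel kernels (edge-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def fuse_channels(channels):
    """Max-fuse per-pixel edge responses across spectral channels."""
    return np.maximum.reduce([sobel_edges(c) for c in channels])
```

Max-fusion keeps an edge if any channel sees it, which is one simple way multichannel imagery can sharpen edge discrimination relative to a single channel.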
Combination of near-infrared and thermal imaging techniques for the remote and simultaneous measurements of breathing and heart rates under sleep situation
To achieve simultaneous and unobtrusive breathing rate (BR) and heart rate (HR) measurements during nighttime, we leverage a far-infrared imager and an infrared camera equipped with an IR-Cut lens and an infrared lighting array to develop a dual-camera imaging system. A custom-built cascade face classifier, combining the conventional Adaboost model with a fully convolutional network trained on 32K images, was used to detect the face region in registered infrared images. The region of interest (ROI) covering the mouth and nose regions was then confirmed by discriminative regression and coordinate conversion of three selected landmarks. Subsequently, a tracking algorithm based on spatio-temporal context learning was applied to follow the ROI in the thermal video, and the raw signal was synchronously extracted. Finally, a custom-made time-domain signal analysis approach was developed to determine BR and HR. A dual-mode sleep video database, including videos obtained under illumination intensities ranging from 0 to 3 lux, was constructed to evaluate the effectiveness of the proposed system and algorithms. In linear regression analysis, a determination coefficient (R²) of 0.831 was observed between the measured and reference BR, and 0.933 for the HR measurement. In addition, the Bland-Altman plots of BR and HR showed that almost all data points lay within their respective 95% limits of agreement. Consequently, the overall performance of the proposed technique is acceptable for BR and HR estimation during nighttime.
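The 95% limits of agreement referenced above follow the standard Bland-Altman definition (bias ± 1.96·SD of the paired differences); a minimal sketch, independent of the paper's code:

```python
import numpy as np

def bland_altman_limits(measured, reference):
    """Bias and 95% limits of agreement: bias +/- 1.96 * SD of differences."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A measurement passes this kind of agreement check when roughly 95% of paired differences fall between the returned lower and upper limits.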
Pseudo-Sample Generation and Self-Supervised Framework for Infrared Dim and Small Target Detection
Infrared dim and small target detection is crucial for long-range sensing. However, its deep representation learning is severely constrained by the scarcity of accurately annotated real data, and related research remains underdeveloped. Existing data generation methods based on patch synthesis or geometric transformations fail to incorporate the physical degradation mechanisms of infrared imaging systems and reasonable environmental constraints, leading to significant discrepancies between synthetic data and real-world scenarios. To address this issue, this paper proposes a novel pseudo-sample generation paradigm based on physics-informed degradation modeling and high-order constraints. First, we construct an infrared image degradation model that decouples the degradation processes of targets and backgrounds at the signal level, achieving accurate modeling of real infrared imaging while ensuring the reliability of the degradation process through information fidelity optimization. Second, an online grid-based high-order constraint strategy is designed, which synergistically integrates global semantic, local structural, and grayscale constraints based on statistical distribution consistency to generate a high-fidelity infrared simulation dataset. Finally, we build a complete self-supervised detection framework incorporating classical neural networks, customized loss functions, and two-dimensional information evaluation metrics. Extensive experiments demonstrate that the synthetic data generated by our method significantly outperforms existing simulated datasets on authenticity metrics. It also effectively enhances the generalization performance of various detectors in real-world scenarios, achieving detection accuracy superior to baseline models trained on traditional simulated data.
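The paper's degradation model is not reproduced here, but the general idea behind pseudo-sample generation for infrared small targets, injecting a plausible Gaussian blob into a real background at a chosen local signal-to-noise ratio, can be sketched as follows; `sigma`, `snr`, and the 11×11 local patch are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def inject_target(background, center, sigma=1.5, snr=5.0):
    """Add a Gaussian pseudo-target whose amplitude is snr times the
    local background standard deviation (a common simulation convention)."""
    h, w = background.shape
    cy, cx = center
    y, x = np.mgrid[0:h, 0:w]
    blob = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    patch = background[max(cy - 5, 0):cy + 6, max(cx - 5, 0):cx + 6]
    amp = snr * (patch.std() + 1e-8)  # scale target to local clutter level
    return background + amp * blob
```

Tying the target amplitude to local background statistics is what keeps such synthetic samples from being trivially separable, one of the gaps in naive patch-synthesis methods that the abstract criticizes.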
Research and analysis of infrared image enhancement algorithm based on fractional differentiation
Due to the inherent defects of infrared imaging systems and the influence of complex external environments, infrared images suffer from low contrast, blurred edge details, a low signal-to-noise ratio, and poor visual quality compared with visible images. This severely affects subsequent feature extraction, detection, identification, and target tracking, and fails to meet requirements in military, medical, and civilian fields. Current hardware-level deficiencies of infrared imaging devices cannot fundamentally solve these problems, so enhancing infrared images algorithmically is particularly necessary. We propose an improved fractional differentiation algorithm that enhances the contrast of infrared images, with the degree of contrast controlled by the fractional order. Experiments and analysis show that the proposed method performs well in enhancing the contrast of dark images and can effectively enhance edge and detail information.
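Fractional differentiation for image enhancement is commonly discretized with Grünwald-Letnikov coefficients; a minimal row-wise sketch under that assumption (the paper's exact operator, mask size, and boundary handling may differ):

```python
import numpy as np

def gl_coeffs(v, n):
    """First n Grünwald-Letnikov coefficients c_k = (-1)^k * C(v, k),
    via the recurrence c_k = c_{k-1} * (k - 1 - v) / k."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 - v) / k)
    return np.array(c)

def fractional_diff_rows(img, v=0.5, n=3):
    """Order-v fractional differential along rows, edge-padded on the left."""
    c = gl_coeffs(v, n)
    p = np.pad(np.asarray(img, float), ((0, 0), (n - 1, 0)), mode="edge")
    w = img.shape[1]
    out = np.zeros(img.shape, float)
    for k in range(n):
        out += c[k] * p[:, n - 1 - k:n - 1 - k + w]  # c_k * f(x - k)
    return out
```

At v = 1 this reduces to an ordinary first difference; a non-integer order between 0 and 1 weakens the high-pass response, which is why the fractional order can act as a contrast-control knob as the abstract describes.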
An infrared night vision image enhancement algorithm based on cross-level feature fusion
Infrared night vision images suffer from color overflow and coloring discontinuity due to insufficient light at night, which leads to larger halo areas and lower PSNR values after enhancement by single-feature-fusion methods. To address this, an infrared night vision image enhancement algorithm based on cross-level feature fusion is proposed. The method first denoises infrared night vision images based on smooth wavelet decomposition: by labeling image edges and noise and applying a neighborhood-based wavelet-coefficient shrinkage algorithm, noise interference is effectively reduced. The denoised image is then preliminarily enhanced using a Retinex algorithm combined with bilateral filtering to estimate illuminance, and a sigmoid function is used to enhance the reflection component, improving the overall visual effect of the image. Based on the principle of cross-level adaptive feature fusion, a cross-level feature fusion network is constructed to further enhance the feature information of the infrared night vision image through multi-level feature extraction, feature reconstruction, and adaptive cross-level feature fusion; the model output is optimized with a joint loss function, achieving high-quality enhancement. The experimental results show that the method achieves a PSNR above 30 dB and an SSIM above 0.73, demonstrating good enhancement and high performance.
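The sigmoid step on the reflection component can be illustrated generically: a normalized logistic stretch on a reflectance map in [0, 1]; `gain` and `midpoint` are illustrative parameters, not the paper's settings:

```python
import numpy as np

def sigmoid_stretch(r, gain=8.0, midpoint=0.5):
    """Normalized logistic contrast stretch mapping [0, 1] onto [0, 1]."""
    s = 1.0 / (1.0 + np.exp(-gain * (r - midpoint)))
    # Rescale so the curve's values at r = 0 and r = 1 land exactly on 0 and 1
    lo = 1.0 / (1.0 + np.exp(gain * midpoint))
    hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - midpoint)))
    return (s - lo) / (hi - lo)
```

The S-shaped curve expands mid-tone contrast while compressing the extremes, which is the usual motivation for applying it to the Retinex reflection component.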
Real-Time Ground Vehicle Detection in Aerial Infrared Imagery Based on Convolutional Neural Network
Infrared sensors are commonly used imaging devices, and unmanned aerial vehicles (UAVs) are among the most promising moving platforms, yet the two are seldom combined in automatic ground vehicle detection tasks. How to make full use of them, especially for ground vehicle detection in aerial imagery, has therefore aroused wide academic concern. Due to the low resolution of aerial imagery and the complexity of vehicle detection, extracting distinctive features while handling pose variations, view changes, and surrounding radiation remains a challenge. The abstract features extracted by convolutional neural networks are more recognizable than engineered features, and such complex conditions can be learned and memorized beforehand. In this paper, a novel approach to ground vehicle detection in aerial infrared images based on a convolutional neural network is proposed. The UAV and infrared sensor used in this application are first introduced. Then, a novel aerial moving platform is built and an aerial infrared vehicle dataset is constructed. We publicly release this dataset (NPU_CS_UAV_IR_DATA) for subsequent research in this field. Next, an end-to-end convolutional neural network is built. With large amounts of recognized features iteratively learned, a real-time ground vehicle detection model is constructed that can detect both stationary and moving vehicles in real urban environments. We evaluate the proposed algorithm on low-resolution aerial infrared images. Experiments on the NPU_CS_UAV_IR_DATA dataset demonstrate that the proposed method recognizes ground vehicles effectively and efficiently, accomplishing the task in real time while achieving superior miss and false-alarm ratios.
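Miss and false-alarm ratios of the kind reported above are typically computed by IoU matching between detected and ground-truth boxes; a minimal sketch under a greedy-matching assumption (not the paper's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def detection_rates(dets, gts, thr=0.5):
    """Miss rate and false-alarm rate under greedy one-to-one IoU matching."""
    matched, false_alarms = set(), 0
    for d in dets:
        best, bi = 0.0, -1
        for i, g in enumerate(gts):
            if i not in matched and iou(d, g) > best:
                best, bi = iou(d, g), i
        if best >= thr:
            matched.add(bi)  # detection explains this ground-truth box
        else:
            false_alarms += 1
    miss = (len(gts) - len(matched)) / max(len(gts), 1)
    return miss, false_alarms / max(len(dets), 1)
```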
YOLO-IRS: Infrared Ship Detection Algorithm Based on Self-Attention Mechanism and KAN in Complex Marine Background
Infrared ship detection technology plays a crucial role in ensuring maritime transportation and navigation safety. However, infrared ship targets at sea exhibit characteristics such as multi-scale, arbitrary orientation, and dense arrangements, with imaging often influenced by complex sea–sky backgrounds. These factors pose significant challenges for the fast and accurate detection of infrared ships. In this paper, we propose a new infrared ship target detection algorithm, YOLO-IRS (YOLO for infrared ship target), based on YOLOv10, which improves detection accuracy while maintaining detection speed. The model introduces the following optimizations: First, to address the difficulty of detecting weak and small targets, the Swin Transformer is introduced to extract features from infrared ship images. By utilizing a shifted window multi-head self-attention mechanism, the window field of view is expanded, enhancing the model’s ability to focus on global features during feature extraction, thereby improving small target detection. Second, the C3KAN module is designed to improve detection accuracy while also addressing issues of false positives and missed detections in complex backgrounds and dense occlusion scenarios. Finally, extensive experiments were conducted on an infrared ship dataset: compared to the baseline model YOLOv10, YOLO-IRS improves precision by 1.3%, mAP50 by 0.5%, and mAP50–95 by 1.7%. Compared to mainstream detection algorithms, YOLO-IRS achieves higher detection accuracy while requiring relatively fewer computational resources, verifying the superiority of the proposed algorithm and enhancing the detection performance of infrared ship targets.
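The shifted-window mechanism borrowed from the Swin Transformer partitions a feature map into non-overlapping windows, within which self-attention is computed, and cyclically shifts the map between consecutive blocks so that information crosses window borders. A minimal NumPy sketch of those two layout operations (conventions are illustrative; attention itself is omitted):

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) map into non-overlapping ws x ws windows;
    H and W are assumed divisible by ws."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws, ws, C)

def cyclic_shift(x, ws):
    """Cyclic shift by ws // 2, applied between consecutive blocks."""
    return np.roll(x, (-(ws // 2), -(ws // 2)), axis=(0, 1))
```

Because each window attends within itself, the shift is what effectively expands the field of view across blocks, the property the abstract credits for better small-target focus.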
MWR-Net: An Edge-Oriented Lightweight Framework for Image Restoration in Single-Lens Infrared Computational Imaging
Infrared video imaging is a cornerstone technology for environmental perception, particularly in drone-based remote sensing applications such as disaster assessment and infrastructure inspection. Conventional systems, however, rely on bulky optical architectures that limit deployment on lightweight aerial platforms. Computational imaging offers a promising alternative by integrating optical encoding with algorithmic reconstruction, enabling compact hardware while maintaining imaging performance comparable to sophisticated multi-lens systems. Nonetheless, achieving real-time video-rate computational image restoration on resource-constrained unmanned aerial vehicles (UAVs) remains a critical challenge. To address this, we propose Mobile Wavelet Restoration-Net (MWR-Net), a lightweight deep learning framework tailored for real-time infrared image restoration. Built on a MobileNetV4 backbone, MWR-Net leverages depthwise separable convolutions and an optimized downsampling scheme to minimize parameters and computational overhead. A novel wavelet-domain loss enhances high-frequency detail recovery, while the modulation transfer function (MTF) is adopted as an optics-aware evaluation metric. With only 666.37 K parameters and 6.17 G MACs, MWR-Net achieves a PSNR of 37.10 dB and an SSIM of 0.964 on a custom dataset, outperforming a pruned U-Net baseline. Deployed on an RK3588 chip, it runs at 42 FPS. These results demonstrate MWR-Net’s potential as an efficient and practical solution for UAV-based infrared sensing applications.
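The PSNR figure quoted above follows the standard definition; a minimal sketch, assuming intensities normalized to a known peak value:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Because the metric is logarithmic, the gap between, say, a 37 dB restoration and a 30 dB one corresponds to a fivefold reduction in mean squared error.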
Understanding the Formation of Heartwood in Larch Using Synchrotron Infrared Imaging Combined With Multivariate Analysis and Atomic Force Microscope Infrared Spectroscopy
Formation of extractive-rich heartwood is a process in live trees that makes them, and the wood obtained from them, more resistant to fungal degradation. Despite the importance of this natural mechanism, little is known about the deposition pathways and cellular-level distribution of extractives. Here we follow heartwood formation in larch by use of synchrotron infrared images analyzed with the unmixing method Multivariate Curve Resolution - Alternating Least Squares (MCR-ALS). A subset of the specimens was also analyzed using atomic force microscopy infrared spectroscopy. The main spectral changes observed in the transition zone when going from sapwood to heartwood were a decrease in the intensity of a peak at approximately 1660 cm⁻¹ and an increase in a peak at approximately 1640 cm⁻¹. There are several possible interpretations of this observation. One possibility, supported by the MCR-ALS unmixing, is that heartwood formation in larch is a type II or -type heartwood formation, in which phenolic precursors to extractives accumulate in the sapwood rays. They are then oxidized and/or condensed in the transition zone and spread to the neighboring cells in the heartwood.
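MCR-ALS alternately solves least-squares problems for concentration profiles and spectra under a non-negativity constraint; a deliberately minimal sketch (real MCR-ALS implementations add normalization, closure constraints, and convergence checks):

```python
import numpy as np

def mcr_als(D, n_components=2, n_iter=100, seed=0):
    """Factor D (pixels x wavenumbers) as D ~= C @ S with C, S >= 0,
    by alternating unconstrained least squares followed by clipping."""
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], n_components))
    S = None
    for _ in range(n_iter):
        # Solve C @ S ~= D for the spectra S, then for the concentrations C
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0.0, None)
        C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)
    return C, S
```

Each row of S plays the role of a pure-component spectrum and each column of C its spatial abundance map, which is how unmixing separates, e.g., an extractive signature from the bulk cell-wall signal in hyperspectral infrared images.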
EGDM-IRSR: Edge-Guided Diffusion Model with State-Space UNet for Super-Resolution Infrared Images
Super-resolution infrared imagery is crucial for enhancing the visual perception of thermal imaging systems, yet existing methods struggle to recover sharp edges and texture details. In this study, we therefore address the following issues: over-smoothed edges, distorted radiometric contrast in diffusion-based approaches, and scanning artifacts introduced by efficient state-space models such as Mamba. We propose a novel edge-guided diffusion framework named EGDM-IRSR. Its core methodology integrates a multi-modal scanning mechanism employing complementary scan paths with content-aware modulation to mitigate directional artifacts, along with an edge-guidance branch with learnable direction-aware convolutions, complemented by an edge-frequency composite loss. Extensive experiments conducted on public benchmarks demonstrate that our method significantly outperforms state-of-the-art alternatives in quantitative metrics and exhibits superior visual fidelity by effectively preserving edges and fine structures. Ablation studies validate the effectiveness of each proposed component. We conclude that EGDM-IRSR provides a more robust and detail-enriched solution for acquiring super-resolution infrared images by synergistically integrating edge guidance with enhanced sequential modeling.
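The edge component of an edge-frequency composite loss can be illustrated with a simple gradient-difference term; this is a generic stand-in, not the paper's actual loss:

```python
import numpy as np

def edge_loss(pred, target):
    """Mean L1 difference between finite-difference gradients of two images,
    penalizing edge mismatch while ignoring constant intensity offsets."""
    loss = 0.0
    for axis in (0, 1):  # horizontal and vertical gradients
        loss += np.abs(np.diff(pred, axis=axis) - np.diff(target, axis=axis)).mean()
    return loss
```

Because it compares gradients rather than intensities, such a term pushes a restoration network toward reproducing edge structure, the failure mode (over-smoothing) that the abstract targets.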