Catalogue Search | MBRL
212 result(s) for "Ngo, Dat"
Stability-Enhanced Pseudo-Multiview Learning via Multiscale Grid Feature Extraction
2026
Pseudo-multiview learning improves classification by integrating complementary feature representations, but its performance degrades as the number of pseudo-views increases due to model collapse and ineffective feature scaling. This paper introduces a multiscale grid architecture that extracts structured, scale-adaptive features to stabilize evidence aggregation in pseudo-multiview learning. The proposed design enables efficient handling of difficult classification scenarios by enforcing balanced multiscale representation and reducing redundancy across pseudo-views. Extensive experiments on challenging real-world datasets, including BreakHis (40×, 100×, 200×, 400×), Oxford-IIIT Pet, and Chest X-ray, demonstrate consistent gains in accuracy and stability over the original pseudo-multiview framework and other baseline models. The results confirm that grid-based multiscale feature extraction provides a reliable means of enhancing pseudo-multiview learning, particularly in settings where prior methods struggle to generalize.
Journal Article
Pseudo-Multiview Learning Using Subjective Logic for Enhanced Classification Accuracy
2025
Deep learning has significantly advanced image classification by leveraging hierarchical feature representations. A key factor in enhancing classification accuracy is feature concatenation, which integrates diverse feature sets to provide a richer representation of input data. However, this fusion strategy has inherent limitations, including increased computational complexity, susceptibility to redundant or irrelevant features, and challenges in optimally weighting different feature contributions. To address these challenges, this paper presents a pseudo-multiview learning method that dynamically combines different views at the evidence level using a belief-based model known as subjective logic. This approach adaptively assigns confidence levels to each view, ensuring more effective integration of complementary information while mitigating the impact of noisy or less relevant features. Experimental evaluations on datasets from various domains demonstrate that the proposed method enhances classification accuracy and robustness compared with conventional classification techniques.
Journal Article
VinDr-CXR: An open dataset of chest X-rays with radiologist’s annotations
2022
Most of the existing chest X-ray datasets include labels from a list of findings without specifying their locations on the radiographs. This limits the development of machine learning algorithms for the detection and localization of chest abnormalities. In this work, we describe a dataset of more than 100,000 chest X-ray scans that were retrospectively collected from two major hospitals in Vietnam. Out of this raw data, we release 18,000 images that were manually annotated by a total of 17 experienced radiologists with 22 local labels of rectangles surrounding abnormalities and 6 global labels of suspected diseases. The released dataset is divided into a training set of 15,000 and a test set of 3,000. Each scan in the training set was independently labeled by 3 radiologists, while each scan in the test set was labeled by the consensus of 5 radiologists. We designed and built a labeling platform for DICOM images to facilitate these annotation procedures. All images are made publicly available in DICOM format along with the labels of both the training set and the test set.
Measurement(s): diseases and abnormal findings from chest X-ray scans
Technology Type(s): AI is used to detect diseases and abnormal findings
Sample Characteristic - Location: Vietnam
Journal Article
Single Image Haze Removal from Image Enhancement Perspective for Real-Time Vision-Based Systems
by Kang, Bongsoon; Lee, Gi-Dong; Ngo, Tri Minh
in adaptive tone remapping; Algorithms; Atmospheric aerosols
2020
Vision-based systems operating outdoors are significantly affected by weather conditions, notably those related to atmospheric turbidity. Accordingly, haze removal algorithms, actively being researched over the last decade, have come into use as a pre-processing step. Although numerous approaches have existed previously, an efficient method coupled with fast implementation is still in great demand. This paper proposes a single image haze removal algorithm with a corresponding hardware implementation for facilitating real-time processing. Contrary to methods that invert the physical model describing the formation of hazy images, the proposed approach mainly exploits computationally efficient image processing techniques such as detail enhancement, multiple-exposure image fusion, and adaptive tone remapping. Therefore, it possesses low computational complexity while achieving good performance compared to other state-of-the-art methods. Moreover, the low computational cost also brings about a compact hardware implementation capable of handling high-quality videos at an acceptable rate, that is, greater than 25 frames per second, as verified with a Field Programmable Gate Array chip. The software source code and datasets are available online for public use.
Journal Article
Robust Single-Image Haze Removal Using Optimal Transmission Map and Adaptive Atmospheric Light
by Kang, Bongsoon; Lee, Seungmin; Ngo, Dat
in adaptive atmospheric light; Algorithms; Decomposition
2020
Haze removal is an ill-posed problem that has attracted much scientific interest due to its various practical applications. Existing methods are usually founded upon various priors; consequently, they demonstrate poor performance in circumstances in which the priors do not hold. By examining hazy and haze-free images, we determined that haze density is highly correlated with image features such as contrast energy, entropy, and sharpness. Then, we proposed an iterative algorithm to accurately estimate the extinction coefficient of the transmission medium via direct optimization of the objective function taking into account all of the features. Furthermore, to address the heterogeneity of the lightness, we devised adaptive atmospheric light to replace the homogeneous light generally used in haze removal. A comparative evaluation against other state-of-the-art approaches demonstrated the superiority of the proposed method. The source code and data sets used in this paper are made publicly available to facilitate further research.
Journal Article
VBI-Accelerated FPGA Implementation of Autonomous Image Dehazing: Leveraging the Vertical Blanking Interval for Haze-Aware Local Image Blending
by Son, Jeonghyeon; Kang, Bongsoon; Ngo, Dat
in Adaptability; Algorithms; autonomous image dehazing
2025
Real-time image dehazing is crucial for remote sensing systems, particularly in applications requiring immediate and reliable visual data. By restoring contrast and fidelity as images are captured, real-time dehazing enhances image quality on the fly. Existing dehazing algorithms often prioritize visual quality and color restoration but rely on computationally intensive methods, making them unsuitable for real-time processing. Moreover, these methods typically perform well under moderate to dense haze conditions but lack adaptability to varying haze levels, limiting their general applicability. To address these challenges, this paper presents an autonomous image dehazing method and its corresponding FPGA-based accelerator, which effectively balance image quality and computational efficiency for real-time processing. Autonomous dehazing is achieved by fusing the input image with its dehazed counterpart, where fusion weights are dynamically determined based on the local haziness degree. The FPGA accelerator performs computations with strict timing requirements during the vertical blanking interval, ensuring smooth and flicker-free processing of input data streams. Experimental results validate the effectiveness of the proposed method, and hardware implementation results demonstrate that the FPGA accelerator achieves a processing rate of 45.34 frames per second at DCI 4K resolution while maintaining efficient utilization of hardware resources.
Journal Article
Haziness Degree Evaluator: A Knowledge-Driven Approach for Haze Density Estimation
2021
Haze is a term widely used in image processing to refer to natural and human-activity-emitted aerosols. It causes light scattering and absorption, which reduce the visibility of captured images. This reduction hinders the proper operation of many photographic and computer-vision applications, such as object recognition/localization. Accordingly, haze removal, also known as image dehazing or defogging, is an apposite solution. However, existing dehazing algorithms remove haze unconditionally, even when it occurs only occasionally, so an approach for haze density estimation is in high demand. This paper therefore proposes a model known as the haziness degree evaluator to predict haze density from a single image without reference to a corresponding haze-free image, an existing georeferenced digital terrain model, or training on a significant amount of data. The proposed model quantifies haze density by optimizing an objective function comprising three haze-relevant features identified through correlation and computation analysis. This objective function is formulated to maximize the image's saturation, brightness, and sharpness while minimizing the dark channel. Additionally, this study describes three applications of the proposed model: hazy/haze-free image classification, dehazing performance assessment, and single image dehazing. Extensive experiments on both real and synthetic datasets demonstrate its efficacy in these applications.
Journal Article
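The haziness degree evaluator abstract above describes a score built from saturation, brightness, sharpness, and the dark channel. The sketch below is a toy per-image formulation of that idea; the equal-ish feature weights, the coarse block-based dark channel, and the score itself are illustrative assumptions, not the paper's actual objective function:

```python
import numpy as np

def dark_channel(rgb, patch=15):
    # Per-pixel minimum over colour channels, followed by a local minimum
    # filter approximated here by a coarse block minimum for simplicity.
    m = rgb.min(axis=2)
    h, w = m.shape
    out = np.empty_like(m)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[i:i + patch, j:j + patch] = m[i:i + patch, j:j + patch].min()
    return out

def haze_density_score(rgb):
    """Toy haze score: higher means hazier. A bright dark channel (typical of
    haze) raises the score; saturation, sharpness, and brightness lower it.
    The 0.5 brightness weight is arbitrary and purely illustrative."""
    rgb = rgb.astype(np.float64)
    brightness = rgb.mean()
    saturation = (rgb.max(axis=2) - rgb.min(axis=2)).mean()
    gy, gx = np.gradient(rgb.mean(axis=2))
    sharpness = np.hypot(gx, gy).mean()
    dark = dark_channel(rgb).mean()
    return dark - (saturation + sharpness + 0.5 * brightness)
```

A flat, bright, colourless image (haze-like) scores higher than a vivid, saturated one, which is the qualitative behaviour the abstract describes.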
Edge Intelligence: A Review of Deep Neural Network Inference in Resource-Limited Environments
by Park, Hyun-Cheol; Kang, Bongsoon; Ngo, Dat
in Algorithms; Artificial intelligence; Artificial neural networks
2025
Deploying deep neural networks (DNNs) in resource-limited environments—such as smartwatches, IoT nodes, and intelligent sensors—poses significant challenges due to constraints in memory, computing power, and energy budgets. This paper presents a comprehensive review of recent advances in accelerating DNN inference on edge platforms, with a focus on model compression, compiler optimizations, and hardware–software co-design. We analyze the trade-offs between latency, energy, and accuracy across various techniques, highlighting practical deployment strategies on real-world devices. In particular, we categorize existing frameworks based on their architectural targets and adaptation mechanisms and discuss open challenges such as runtime adaptability and hardware-aware scheduling. This review aims to guide the development of efficient and scalable edge intelligence solutions.
Journal Article
Autonomous Single-Image Dehazing: Enhancing Local Texture with Haze Density-Aware Image Blending
by Han, Siyeon; Kang, Bongsoon; Choi, Yeonggyu
in Algorithms; Artificial intelligence; autonomous dehazing
2024
Single-image dehazing is an ill-posed problem that has attracted a myriad of research efforts. However, virtually all methods proposed thus far assume that input images are already affected by haze; little effort has been spent on autonomous single-image dehazing. Even deep learning dehazing models, despite their widely claimed generalizability, do not exhibit satisfactory performance on images with varying haze conditions. In this paper, we present a novel approach for autonomous single-image dehazing. Our approach consists of four major steps: sharpness enhancement, adaptive dehazing, image blending, and adaptive tone remapping. A global haze density weight drives the adaptive dehazing and tone remapping to handle images with various haze conditions, including those that are haze-free or affected by mild, moderate, or dense haze. Meanwhile, the proposed approach adopts patch-based haze density weights to guide the image blending, resulting in enhanced local texture. Comparative performance analysis with state-of-the-art methods demonstrates the efficacy of our proposed approach.
Journal Article
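The abstract above describes blending an input image with its dehazed counterpart using patch-based haze density weights. A minimal sketch of that blending step is given below; the weight heuristic (treating patch brightness as a haziness proxy) and all function names are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def haze_aware_blend(hazy, dehazed, patch=16):
    """Blend a hazy image with its dehazed counterpart using per-patch
    weights. The weight heuristic (normalised patch brightness as a haziness
    proxy) is purely illustrative; the paper derives its weights differently."""
    hazy = hazy.astype(np.float64)
    dehazed = dehazed.astype(np.float64)
    out = np.empty_like(hazy)
    h, w, _ = hazy.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = hazy[i:i + patch, j:j + patch]
            # Brighter patch -> assumed hazier -> lean on the dehazed result.
            weight = np.clip(block.mean() / 255.0, 0.0, 1.0)
            out[i:i + patch, j:j + patch] = (
                weight * dehazed[i:i + patch, j:j + patch]
                + (1.0 - weight) * block
            )
    return out
```

Because the weight varies per patch, regions judged haze-free keep their original local texture while hazier regions take on the dehazed output, mirroring the local-texture goal described in the abstract.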
Design of an FPGA-Based High-Quality Real-Time Autonomous Dehazing System
2022
Image dehazing, as a common solution to weather-related degradation, holds great promise for photography, computer vision, and remote sensing applications. Diverse approaches have been proposed throughout decades of development, and deep-learning-based methods are currently predominant. Despite their excellent performance, such computationally intensive methods amount to overkill, because image dehazing is solely a preprocessing step. In this paper, we analyze a non-deep autonomous image dehazing algorithm and then present a corresponding FPGA design for high-quality real-time vision systems. We also conduct extensive experiments to verify the efficacy of the proposed design across different facets. Finally, we introduce a method for synthesizing cloudy images (loosely referred to as hazy images) to facilitate future aerial surveillance research.
Journal Article