Catalogue Search | MBRL
Explore the vast range of titles available.
2,305 result(s) for "MAP estimation"
Genomics-aided structure prediction
by Weigt, Martin; Onuchic, José N; Sułkowska, Joanna I
in Amino Acid Sequence; Amino acids; Atoms
2012
We introduce a theoretical framework that exploits the ever-increasing genomic sequence information for protein structure prediction. Structure-based models are modified to incorporate constraints by a large number of non-local contacts estimated from direct coupling analysis (DCA) of co-evolving genomic sequences. A simple hybrid method, called DCA-fold, integrating DCA contacts with an accurate knowledge of local information (e.g., the local secondary structure) is sufficient to fold proteins in the range of 1–3 Å resolution.
Journal Article
Enhancing the Ground Truth Disparity by MAP Estimation for Developing a Neural-Net Based Stereoscopic Camera
2024
This paper presents a novel method to enhance ground truth disparity maps generated by Semi-Global Matching (SGM) using Maximum a Posteriori (MAP) estimation. SGM, while not producing visually appealing outputs like neural networks, offers high disparity accuracy in valid regions and avoids the generalization issues often encountered with neural network-based disparity estimation. However, SGM struggles with occlusions and textureless areas, leading to invalid disparity values. Our approach, though relatively simple, mitigates these issues by interpolating invalid pixels using surrounding disparity information and Bayesian inference, improving both the visual quality of disparity maps and their usability for training neural network-based commercial depth-sensing devices. Experimental results validate that our enhanced disparity maps preserve SGM’s accuracy in valid regions while improving the overall performance of neural networks on both synthetic and real-world datasets. This method provides a robust framework for advanced stereoscopic camera systems, particularly in autonomous applications.
Journal Article
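The Bayesian interpolation of invalid disparities described in this abstract can be sketched in a toy form (not the paper's actual method): if each valid 4-neighbour of an invalid pixel sends an independent Gaussian message about the missing disparity, the MAP estimate is the precision-weighted mean of those messages. The function name `map_fill` and the single shared variance `sigma` are illustrative assumptions.

```python
import numpy as np

def map_fill(disparity, valid, sigma=1.0):
    """Fill invalid disparity pixels with a MAP estimate under independent
    Gaussian 'messages' from the valid 4-neighbours. With equal variances
    the precision-weighted mean reduces to the plain neighbour mean."""
    out = disparity.copy().astype(float)
    h, w = disparity.shape
    for y in range(h):
        for x in range(w):
            if valid[y, x]:
                continue
            vals, precisions = [], []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and valid[ny, nx]:
                    vals.append(disparity[ny, nx])
                    precisions.append(1.0 / sigma**2)
            if vals:
                # MAP of a product of Gaussians: precision-weighted mean
                out[y, x] = np.dot(precisions, vals) / np.sum(precisions)
    return out
```

With unequal per-neighbour variances (e.g., down-weighting neighbours that lie across an intensity edge) the same formula becomes an edge-aware fill, which is closer in spirit to interpolating from surrounding disparity information selectively.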
A Power Spectrum Maps Estimation Algorithm Based on Generative Adversarial Networks for Underlay Cognitive Radio Networks
2020
In underlay cognitive radio networks, the main challenge in detecting idle radio resources is estimating the power spectrum maps (PSMs), since the radio propagation characteristics are hard to obtain. For this reason, we propose a novel PSMs estimation algorithm based on generative adversarial networks (GANs). First, we construct the PSMs estimation model as a regression model in deep learning. Then, we convert the estimation task into an image reconstruction task via image color mapping. We fulfill this task by designing an image generator and an image discriminator in the proposed maps estimation GANs (MEGANs). The generator is trained to extract the radio propagation characteristics and generate the PSMs images, while the discriminator is trained to identify the generated images and thereby help improve the generator's performance. As training proceeds, the abilities of the generator and the discriminator are enhanced continually until they reach a balance, at which point high-accuracy PSMs estimation is achieved. The proposed MEGANs algorithm learns accurate radio propagation features from the training process rather than making imprecise or biased propagation assumptions as in traditional methods. Simulation results demonstrate that the MEGANs algorithm provides more accurate estimation than conventional methods.
Journal Article
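The step of recasting PSM estimation as image reconstruction via "image color mapping" can be sketched as a reversible mapping between power values and pixel colors. The dB range and the linear blue-to-red ramp below are illustrative assumptions, not the authors' actual mapping.

```python
import numpy as np

def psm_to_image(psm_db, vmin=-120.0, vmax=-40.0):
    """Map a power spectrum map (dBm values on a grid) to an RGB image.
    The normalization range and the blue->red ramp are illustrative."""
    t = np.clip((psm_db - vmin) / (vmax - vmin), 0.0, 1.0)
    rgb = np.stack([t, np.zeros_like(t), 1.0 - t], axis=-1)  # red=strong, blue=weak
    return (rgb * 255).astype(np.uint8)

def image_to_psm(img, vmin=-120.0, vmax=-40.0):
    """Invert the color mapping to recover power values from a generated image."""
    t = img[..., 0].astype(float) / 255.0
    return vmin + t * (vmax - vmin)
```

A GAN trained on such images reconstructs colors, and the inverse mapping turns the reconstruction back into power values; the 8-bit quantization bounds the reconstruction error of the mapping itself to under one dB for this range.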
Content-based image retrieval through fusion of deep features extracted from segmented neutrosophic using depth map
by Beheshtifard, Ziaeddin; Taheri, Fatemeh; Rahbar, Kambiz
in Algorithms; Artificial Intelligence; Artificial neural networks
2024
The main challenge of content-based image retrieval systems is the gap between how algorithms describe images and how humans understand their semantic content. To overcome this challenge, many image retrieval methods focus on the important regions of an image; however, these approaches can still lose part of an image's semantic features. This article therefore introduces an image retrieval method that fuses deep features extracted from a neutrosophic segmentation guided by the image's depth map. By transferring the original image to the neutrosophic domain, the image is decomposed into three levels: true, false, and indeterminate. The true and false images provide different representations of image brightness, while the indeterminate image represents the boundary between them and captures the edges in the image. Because the convolutional layers of deep neural networks are sensitive to changes in image brightness when extracting feature maps, the features extracted from the true and false images differ and can be considered complementary. In the second step, the image depth map is estimated using a vision transformer and then binarized with a predefined threshold. Applying the binarized depth map in the neutrosophic domain classifies objects into near and far regions. Effective features of each region are extracted using a pre-trained deep neural network, VGG-16, and the important features from each group of images are selected with the Boruta-Shap algorithm. Finally, to reduce redundancy and unify the extracted features, feature fusion is performed in two stages, yielding the final feature vector for each image. Experimental results confirm that extracting semantic and content features from different regions of an image with the proposed method improves retrieval results and reduces the semantic gap.
Journal Article
Rainfall Map from Attenuation Data Fusion of Satellite Broadcast and Commercial Microwave Links
by Giannetti, Filippo; Lottici, Vincenzo; Saggese, Fabio
in Algorithms; Analysis; commercial microwave link
2022
Demand for accurate rainfall rate maps continues to grow. This paper proposes a novel algorithm to estimate the rainfall rate map from attenuation measurements coming from both broadcast satellite links (BSLs) and commercial microwave links (CMLs). Our approach is based on an iterative procedure that extends the well-known GMZ algorithm to fuse the attenuation data coming from different links in a three-dimensional scenario, while also accounting for the virga phenomenon via a rain vertical attenuation model. We experimentally prove the convergence of the procedure, showing how the estimation error decreases at every iteration. The numerical results show that adding BSLs to a pre-existing CML network boosts the accuracy of the estimated rainfall map, improving the correlation metrics by up to 50%. Moreover, our algorithm is shown to be robust to errors in the virga parametrization, demonstrating that good estimation performance can be obtained without precise, real-time estimation of the virga parameters.
Journal Article
Counting animals in aerial images with a density map estimation model
by Qian, Yifei; Trathan, Philip N.; Lowther, Andrew
in abundance estimation; Accuracy; Aerial photography
2023
Animal abundance estimation is increasingly based on drone or aerial survey photography. Manual postprocessing has been used extensively; however, volumes of such data are increasing, necessitating some level of automation, either for complete counting or as a labour-saving tool. Automated processing can be challenging for species that nest in close formation, such as Pygoscelis penguins. We present a customized CNN-based density map estimation method, adapted from state-of-the-art crowd-counting methodologies, for counting penguins (average size about 5 × 5 pixels) in low-resolution aerial photography. Our model, an indirect regression algorithm, performed significantly better in counting accuracy and computational efficiency than a standard detection algorithm (Faster-RCNN) when counting small objects in low-resolution images, with an error rate of only 0.8 percent. Density map estimation methods such as this can vastly improve our ability to count animals in tight aggregations and demonstrably improve monitoring efforts from aerial imagery.
Journal Article
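The core mechanic of density map counting — each annotated animal contributes a normalized Gaussian blob, so the predicted count is simply the integral of the map — can be sketched as follows. The kernel width, map size, and point coordinates are illustrative values, not those used in the paper.

```python
import numpy as np

def gaussian_density_map(points, shape, sigma=1.5):
    """Ground-truth density map: one normalized Gaussian per annotated
    animal, so the whole map integrates to the number of animals."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dmap = np.zeros(shape, dtype=float)
    for (py, px) in points:
        g = np.exp(-((ys - py)**2 + (xs - px)**2) / (2 * sigma**2))
        dmap += g / g.sum()          # each blob sums to exactly 1
    return dmap

points = [(10, 12), (11, 13), (30, 40)]  # two birds nearly touching, one apart
dmap = gaussian_density_map(points, (64, 64))
count = dmap.sum()                       # integral of the map = 3 animals
```

A counting CNN is trained to regress `dmap` from the image; at inference the count is the sum of the predicted map, which is why overlapping blobs from tightly packed animals are handled gracefully where box detectors fail.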
Lightweight Explicit 3D Human Digitization via Normal Integration
by Yu, Han; Song, Liang; Wu, Jingyi
in a skinned multi-person linear model; Algorithms; Computer vision
2025
In recent years, generating 3D human models from images has gained significant attention in 3D human reconstruction. However, deploying large neural network models in practical applications remains challenging, particularly on resource-constrained edge devices, primarily because such models require significantly more computational power, imposing greater demands on hardware and inference time. To address this issue, we optimize the network architecture to reduce the number of model parameters, thereby alleviating the heavy reliance on hardware resources. We propose a lightweight and efficient 3D human reconstruction model that balances reconstruction accuracy and computational cost. Specifically, our model integrates dilated convolutions and the cross-covariance attention mechanism into its architecture to construct a lightweight generative network. This design effectively captures multi-scale information while significantly reducing model complexity. Additionally, we introduce an innovative loss function tailored to the geometric properties of normal maps, which provides a more accurate measure of surface reconstruction quality and enhances overall reconstruction performance. Experimental results show that, compared with existing methods, our approach reduces the number of training parameters by approximately 80% while maintaining the quality of the generated model.
Journal Article
Unpaired Underwater Image Synthesis with a Disentangled Representation for Underwater Depth Map Prediction
2021
As one of the key requirements for underwater exploration, underwater depth map estimation is of great importance in underwater vision research. Although significant progress has been achieved in image-to-image translation and depth map estimation, a gap remains between depth map estimation in normal conditions and underwater. It is also a great challenge to build a mapping function that converts a single underwater image into an underwater depth map, due to the lack of paired data, and the ever-changing underwater environment further intensifies the difficulty of finding an optimal mapping solution. To eliminate these bottlenecks, we developed a novel image-to-image framework for underwater image synthesis and depth map estimation. To address the lack of paired data, we translated hazy in-air images (with depth maps) into underwater images, obtaining a paired dataset of underwater images and corresponding depth maps. To enrich our synthesized underwater dataset, we further translated hazy in-air images into a series of continuously changing underwater images with a specified style. For depth map estimation, we included a coarse-to-fine network to provide precise depth maps. We evaluated the efficiency of our framework on a real underwater RGB-D dataset. The experimental results show that our method provides a diversity of underwater images and the best depth map estimation precision.
Journal Article
Computational Large Field-of-View RGB-D Integral Imaging System
by Yoon, Sang Min; Won, Yong-Yuk; Jung, Geunho
in Aperture; Arrays; computational integral imaging
2021
The integral imaging system has received considerable research attention because it can be applied to real-time three-dimensional image displays with a continuous viewing angle and no supplementary devices. Most previous approaches place a physical micro-lens array in front of the image, where each lens looks different depending on the viewing angle. Computational integral imaging systems with virtual micro-lens arrays have been proposed to give users the flexibility to change the micro-lens arrays and focal length while reducing distortions due to physical mismatches with the lens arrays. However, when dealing with large-scale images, computational integral imaging methods represent only part of the image, because the virtual lens arrays are much smaller than the image itself. As a result, previous approaches produce sub-aperture images with a small field of view and need additional devices to obtain the depth information required by integral imaging pickup systems. In this paper, we present a single-image-based computational RGB-D integral imaging pickup system with a large field of view in real time. The proposed system comprises three steps: deep-learning-based automatic depth map estimation from an RGB input image without any additional device, a hierarchical integral imaging system for a large field of view in real time, and post-processing that uses inpainting for optimized visualization of failed pickup areas. Quantitative and qualitative experimental results verify the robustness of the proposed approach.
Journal Article
A Bayesian Framework for Accurate Determination of the Nighttime Ionospheric Parameters from the ICON FUV Observations
by Kamalabadi, Farzad; Liu, Hang; Qin, Jianqi
in Aerospace Technology and Astronautics; Astrophysics and Astroparticles; Bayesian analysis
2024
Accurate determination of the ionospheric parameters is one of the important objectives of the Ionospheric Connection Explorer (ICON) mission. Recent analyses of the current ICON Level 2.5 (L2.5) data product have shown that the ionospheric parameters (e.g., the peak electron density, nmF2, and the peak height, hmF2) retrieved from the nighttime OI 135.6 nm emission observed by ICON's Far Ultraviolet (FUV) imager exhibit a systematic bias when compared to external radio measurements. In this study, we demonstrate that the bias was introduced by the Tikhonov regularization used in the FUV Level 1 data inversion to generate the L2.5 data product. To address the bias, we develop a Bayesian framework for accurate determination of the nighttime ionospheric parameters through Maximum A Posteriori (MAP) estimation. We show through analysis of synthetic observations that the key to an accurate MAP estimation is to construct a series of prior distributions associated with different hmF2 using climatological empirical models. Applying the MAP estimation with this series of prior distributions to the ICON FUV observations, and comparing the ionospheric retrievals with external radio measurements, verifies that the Bayesian method can reduce the systematic bias to a negligible level of ∼1% in the retrieved nmF2 and ∼1 km in the retrieved hmF2. Our study provides a novel method for FUV remote sensing data analysis and an improved data set for ionospheric research.
Journal Article
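The bias mechanism this abstract describes can be illustrated on a toy linear inversion (not the ICON retrieval itself): Tikhonov regularization shrinks the solution toward zero, systematically underestimating parameters whose true values are far from zero, whereas MAP estimation with a Gaussian prior centered at a climatological mean removes most of that bias. The operator, noise level, and prior mean below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n)) / np.sqrt(n)        # toy forward operator
x_true = 10.0 + rng.normal(size=n)              # true parameters far from zero
y = A @ x_true + 0.01 * rng.normal(size=n)      # noisy observations

lam = 1.0
M = A.T @ A + lam * np.eye(n)

# Tikhonov: shrinks the solution toward zero -> low bias on large-valued parameters
x_tik = np.linalg.solve(M, A.T @ y)

# MAP with a Gaussian prior centered at a "climatological" mean x0
x0 = np.full(n, 10.0)
x_map = np.linalg.solve(M, A.T @ y + lam * x0)

bias_tik = abs(x_tik.mean() - x_true.mean())    # substantial systematic bias
bias_map = abs(x_map.mean() - x_true.mean())    # near zero
```

In this Gaussian setting Tikhonov regularization is exactly MAP with a zero-mean prior, so the only difference between the two solves is the prior-mean term `lam * x0`; moving the prior mean from zero to the climatological value is what removes the bias.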