Multi-View Stereo Using Perspective-Aware Features and Metadata to Improve Cost Volume
by Mo, Fan; Zhou, Yu; Li, Yuanxiang; Zuo, Zongcheng
Journal Article, 2025
Subjects: 3D reconstruction / Artificial intelligence / Comparative analysis / Computer vision / deep learning / drone remote sensing / Embedded systems / feature matching / Image processing / Innovations / Machine learning / Metadata / Methods / multi-view stereo / MVSNet / Neural networks / Performance evaluation
Overview
Feature matching is pivotal when using multi-view stereo (MVS) to reconstruct dense 3D models from calibrated images. This paper proposes PAC-MVSNet, which integrates perspective-aware convolution (PAC) and metadata-enhanced cost volumes to address the challenges posed by reflective and texture-less regions. PAC dynamically aligns convolutional kernels with scene perspective lines, while metadata (e.g., camera pose distance) enables geometric reasoning during cost aggregation. In PAC-MVSNet, we introduce feature matching with long-range tracking, which integrates extensive contextual information both within individual images and across multiple images. To further strengthen this matching, the perspective-aware convolution module steers the convolutional kernel to capture features along perspective lines, allowing it to extract perspective-aware features and thereby improve matching quality. Finally, we design a dedicated 2D CNN that fuses image priors, integrating keyframe and geometric metadata into the cost volume for evaluating depth planes. To our knowledge, our method is the first to embed existing physical-model knowledge into a network for MVS tasks, and it achieves top performance on multiple benchmark datasets.
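The metadata-enhanced cost volume described above can be illustrated with a short sketch. The following is a minimal PyTorch example of the general idea, not the authors' implementation: it aggregates multi-view features into an MVSNet-style variance cost volume, broadcasts a scalar pose-distance channel into each depth slice, and fuses both with a small 2D CNN. The tensor shapes, the `pose_distance` input, and the network sizes are illustrative assumptions, and the warping of source features onto the depth planes is assumed to have been done elsewhere.

```python
# Minimal sketch of a metadata-enhanced cost volume in the spirit of the
# abstract above. NOT the authors' implementation: the variance aggregation
# follows the common MVSNet recipe, and the pose_distance channel, shapes,
# and network sizes are illustrative assumptions.
import torch
import torch.nn as nn


class MetadataCostVolume(nn.Module):
    """Scores each depth plane from multi-view feature variance plus a
    broadcast geometric-metadata channel (e.g., reference-to-source pose
    distance), fused by a small 2D CNN."""

    def __init__(self, feat_channels: int = 32):
        super().__init__()
        # 2D CNN applied per depth plane: feature variance + 1 metadata channel.
        self.fuse = nn.Sequential(
            nn.Conv2d(feat_channels + 1, 16, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, ref_feat, warped_src_feats, pose_distance):
        # ref_feat:         (B, C, H, W) reference-view features
        # warped_src_feats: (B, V, D, C, H, W) source features already warped
        #                   onto D fronto-parallel depth planes (assumed given)
        # pose_distance:    (B,) scalar metadata per batch item (assumption)
        B, V, D, C, H, W = warped_src_feats.shape
        ref = ref_feat.unsqueeze(1).unsqueeze(2)              # (B, 1, 1, C, H, W)
        all_feats = torch.cat(
            [ref.expand(B, 1, D, C, H, W), warped_src_feats], dim=1
        )                                                      # (B, V+1, D, C, H, W)
        # Variance over views: the standard MVSNet-style matching cost.
        cost = all_feats.var(dim=1, unbiased=False)            # (B, D, C, H, W)
        # Broadcast the metadata scalar to a spatial channel per depth plane.
        meta = pose_distance.view(B, 1, 1, 1, 1).expand(B, D, 1, H, W)
        cost = torch.cat([cost, meta], dim=2)                  # (B, D, C+1, H, W)
        # Fuse each depth slice with the 2D CNN, then softmax over depth.
        scores = self.fuse(cost.flatten(0, 1)).view(B, D, H, W)
        return torch.softmax(scores, dim=1)                    # per-pixel depth probabilities


# Toy usage with random tensors.
if __name__ == "__main__":
    B, V, D, C, H, W = 1, 2, 8, 32, 16, 20
    model = MetadataCostVolume(feat_channels=C)
    prob = model(torch.randn(B, C, H, W),
                 torch.randn(B, V, D, C, H, W),
                 torch.rand(B))
    print(prob.shape)  # torch.Size([1, 8, 16, 20])
```

The design choice mirrored here is simply that metadata enters as an extra channel in every depth slice, so the 2D CNN can weigh photometric consistency against geometric plausibility when scoring depth hypotheses; how PAC-MVSNet actually encodes and injects its metadata is described in the paper itself.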