A Structurally Flexible Occupancy Network for 3-D Target Reconstruction Using 2-D SAR Images
Driven by deep learning, three-dimensional (3-D) target reconstruction from two-dimensional (2-D) synthetic aperture radar (SAR) images has advanced considerably, yet there is still room to improve reconstruction quality. In this paper, we propose a structurally flexible occupancy network (SFONet) for high-quality reconstruction of a 3-D target from one or more 2-D SAR images. The SFONet consists of a basic network and a pluggable module that allows it to switch between two input modes: a single azimuthal image and multiple azimuthal images. The pluggable module comprises a complex-valued (CV) long short-term memory (LSTM) submodule, which extracts structural features of the target from multiple azimuthal SAR images, and a CV attention submodule, which fuses these features. To support both input modes, we also propose a two-stage training strategy. In the first stage, the basic network is trained using a single azimuthal SAR image as input. In the second stage, the basic network trained in the first stage is frozen, and only the pluggable module is trained using multiple azimuthal SAR images as input. Finally, we construct an experimental dataset containing 2-D SAR images and 3-D ground truth by utilizing the publicly available Gotcha echo dataset. Experimental results show that, once trained, the SFONet reconstructs a 3-D target from one or more azimuthal images with higher quality than other deep-learning-based 3-D reconstruction methods. Moreover, when the composition of each training sample is chosen appropriately, the number of samples required to train the SFONet can be reduced.
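The two-stage strategy described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the classes and function names below (Component, train_stage1, train_stage2) are hypothetical stand-ins that show only the control flow — stage 1 updates the basic network alone, stage 2 freezes it and updates only the pluggable module.

```python
class Component:
    """Hypothetical stand-in for a trainable network component."""

    def __init__(self, name):
        self.name = name
        self.weights = [0.0, 0.0]  # toy parameter vector
        self.frozen = False

    def update(self, grad, lr=0.1):
        # A frozen component ignores gradient updates.
        if not self.frozen:
            self.weights = [w - lr * g for w, g in zip(self.weights, grad)]


def train_stage1(basic, single_view_batches):
    # Stage 1: train only the basic network, one azimuthal image per sample.
    for grad in single_view_batches:
        basic.update(grad)


def train_stage2(basic, pluggable, multi_view_batches):
    # Stage 2: freeze the stage-1 basic network; train only the pluggable
    # module (CV-LSTM + CV attention) on multiple azimuthal images.
    basic.frozen = True
    for grad in multi_view_batches:
        basic.update(grad)      # no-op while frozen
        pluggable.update(grad)


if __name__ == "__main__":
    basic = Component("basic")
    plug = Component("pluggable")
    train_stage1(basic, [[1.0, 1.0]])
    train_stage2(basic, plug, [[1.0, 1.0]])
    print(basic.weights, plug.weights)
```

In a real implementation the freezing step would correspond to excluding the basic network's parameters from the optimizer (e.g. disabling gradient tracking), so that stage 2 can reuse the stage-1 weights unchanged while the pluggable module adapts to multi-view input.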