12 result(s) for "terrain visual features"
Highly Accurate Visual Method of Mars Terrain Classification for Rovers Based on Novel Image Features
It is important for Mars exploration rovers to achieve autonomous and safe mobility over rough terrain. Terrain classification can help rovers select safe terrain to traverse and avoid sinking and/or damaging the vehicle. Mars terrains are often classified using visual methods. However, the accuracy of terrain classification has been less than 90% in real operations. A high-accuracy vision-based method for Mars terrain classification is presented in this paper. By analyzing Mars terrain characteristics, novel image features, including multiscale gray gradient-grade features, multiscale edge strength-grade features, multiscale frequency-domain mean amplitude features, multiscale spectrum symmetry features, and multiscale spectrum amplitude-moment features, are proposed that are specifically targeted at terrain classification. Three classifiers, K-nearest neighbor (KNN), support vector machine (SVM), and random forest (RF), are adopted to classify the terrain using the proposed features. The Mars image dataset MSLNet, collected by the Mars Science Laboratory (MSL, Curiosity) rover, is used to conduct terrain classification experiments. The resolution of the Mars images in the dataset is 256 × 256. Experimental results indicate that the RF classifier achieves the highest accuracy, 94.66%.
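As a toy illustration of the feature-plus-classifier pipeline described above, the sketch below computes a simplified multiscale gradient feature and classifies it with a plain KNN vote. It is numpy-only, and the feature definition is a stand-in of my own, not the paper's exact formulation:

```python
import numpy as np

def grad_grade_features(img, scales=(1, 2, 4)):
    """Simplified multiscale gray-gradient feature (stand-in for the
    paper's gradient-grade features): mean gradient magnitude per scale."""
    feats = []
    for s in scales:
        sub = img[::s, ::s].astype(float)   # coarser scale by subsampling
        gy, gx = np.gradient(sub)
        feats.append(np.mean(np.hypot(gx, gy)))
    return np.array(feats)

def knn_classify(x, X_train, y_train, k=3):
    """Plain K-nearest-neighbour majority vote, one of the three
    classifiers the paper adopts."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

In practice the paper's five feature families would all be concatenated into one vector per 256 × 256 image before classification; the sketch keeps only one family to stay short.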
D2FLS-Net: Dual-Stage DEM-guided Fusion Transformer for landslide segmentation
Landslide segmentation from remote sensing imagery is crucial for rapid disaster assessment and risk mitigation. Owing to the pronounced heterogeneity of landslide scales and the subtle visual contrast between some landslide bodies and their background, this task remains highly challenging. Although Transformers surpass convolutional neural networks in modeling long-range contextual dependencies, channel-level or feature-level fusion strategies provide only intermittent terrain cues, leading models to underutilize digital elevation model (DEM) information and to lack fine-grained adaptability to terrain variability. To address this, we propose a Swin-Transformer-based framework, the Dual-Stage DEM-guided Fusion Transformer for landslide segmentation (D2FLS-Net), which embeds terrain features via two modules: (1) the Dual-Stage DEM-Guided Fusion (DSDF) module, which injects DEM cues twice: the early stage emphasizes DEM-related discontinuities before feature extraction, and the late stage coordinates high-level RGB and DEM semantics through a cross-attention mechanism; (2) the Terrain-aware Pixel-wise Adaptive Context Enhancement (T-PACE) module, which optimizes intermediate features using a DEM-gated, pixel-adaptive hybrid of multi-dilation atrous convolutions, enabling broader context aggregation within homogeneous landslide interiors and more precise discrimination at boundaries. We evaluate D2FLS-Net on the Bijie and Landslide4Sense 2022 datasets. On Bijie, the mean Intersection over Union (mIoU) reaches 88.77%, Recall 95.27%, and Precision 94.60%, exceeding the best competing model, SegFormer, by 7.96%, 7.90%, and 4.05%, respectively. On Landslide4Sense 2022, mIoU reaches 72.86%, Recall 82.55%, and Precision 93.30%, surpassing SegFormer by 7.06%, 6.56%, and 5.02%, respectively. Ablation studies indicate that DSDF primarily reduces missed detections of landslide traces, whereas T-PACE refines pixel-level context selection.
Injecting DEM at the Swin-1 and Swin-4 stages consistently outperforms other stage combinations. In summary, the model shows good detection performance and is suitable for fusing DEM and remote sensing imagery for landslide recognition.
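The late-stage cross-attention fusion that DSDF is described as using can be sketched generically in numpy. The single-head form below, with assumed projection matrices `Wq`, `Wk`, `Wv`, is textbook cross-attention, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(rgb_feat, dem_feat, Wq, Wk, Wv):
    """Single-head cross-attention: RGB tokens query DEM tokens, so the
    output is an RGB feature map conditioned on terrain semantics
    (generic sketch of the mechanism named in the abstract)."""
    Q = rgb_feat @ Wq                       # (N, d) queries from RGB
    K = dem_feat @ Wk                       # (M, d) keys from DEM
    V = dem_feat @ Wv                       # (M, d) values from DEM
    A = softmax(Q @ K.T / np.sqrt(Q.shape[1]), axis=-1)
    return A @ V                            # (N, d) DEM-conditioned output
```

In the real network the tokens would be flattened patch features from the Swin stages; here they are just generic row vectors.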
FTT: A Frequency-Aware Texture Matching Transformer for Digital Bathymetry Model Super-Resolution
Deep learning has shown significant advantages over traditional spatial interpolation methods in single image super-resolution (SISR). Recently, many studies have applied super-resolution (SR) methods to generate high-resolution (HR) digital bathymetry models (DBMs), but substantial differences between DBMs and natural images have been ignored, leading to serious distortions and inaccuracies. Given the critical role of HR DBMs in marine resource exploitation, economic development, and scientific innovation, we propose a frequency-aware texture matching transformer (FTT) for DBM SR, incorporating global terrain feature extraction (GTFE), high-frequency feature extraction (HFFE), and a terrain matching block (TMB). GTFE can perceive spatial heterogeneity and spatial locations, allowing it to accurately capture large-scale terrain features. HFFE can explicitly extract high-frequency priors beneficial for DBM SR and implicitly refine the representation of high-frequency information in the global terrain feature. TMB improves the fidelity of the generated HR DBM by generating position offsets to restore warped textures in deep features. Experimental results demonstrate that the proposed FTT has superior performance in terms of elevation, slope, aspect, and fidelity of the generated HR DBM. Notably, the root mean square error (RMSE) of elevation in steep terrain is reduced by 4.89 m, a significant improvement in the accuracy and precision of the reconstruction. This research holds significant implications for improving the accuracy of DBM SR methods and the usefulness of HR bathymetry products for future marine research.
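A steep-terrain elevation RMSE of the kind cited above can be computed in a few lines; the slope threshold, cell size, and function name below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def steep_rmse(dem_true, dem_pred, cellsize=1.0, slope_thresh_deg=15.0):
    """Elevation RMSE restricted to steep cells: derive slope from the
    reference DEM, mask cells steeper than the threshold, and compare
    predicted vs. true elevation only there (threshold assumed)."""
    gy, gx = np.gradient(dem_true, cellsize)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    mask = slope > slope_thresh_deg
    err = dem_true[mask] - dem_pred[mask]
    return float(np.sqrt(np.mean(err ** 2)))
```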
Spatial Semantic Expression of Terrain Viewshed: A Data Mining Method
With the rapid development of geographic information technology, the expression of topographical spatial semantic relationships has become a research hotspot in the field of intelligent geographic information systems. Geographical spatial semantic relationships refer to the spatial relationships and inherent meanings between geographical entities, including topological relationships, metric relationships, etc. This study proposes a novel method of viewshed analysis, which overcomes the limitation of treating the viewshed as a unified unit in traditional viewshed analysis by decomposing the viewshed into multiple viewsheds and quantifying their spatial semantic relationships. The method uses a DBSCAN clustering algorithm with terrain adaptability to divide a viewshed into spatially distinct viewsheds and characterizes these viewsheds through a systematic measurement framework, including azimuth, area, and sparsity. The method was applied to a case study of Purple Mountain in Nanjing. The experiment used 12.5 m resolution topographic data of Purple Mountain, and two observation points were selected. For the first observation point, near the mountain park, the number of clusters and the number of noise points during the DBSCAN partition of the viewshed were compared to determine a neighborhood radius of 18 m and a minimum sample point number of 4. Five viewsheds were successfully generated, the largest with 468 visible points and the smallest with only 16, located at different positions relative to the observer, reflecting the spatial variability of terrain features. All viewsheds lie broadly to the north of the observer; two of them also share the same northeast 87° bearing from the observer, in a straight-line distribution but at different distances. In three-dimensional space, the distance between these two viewsheds is 317.298 m. Azimuth angle verification showed significant aggregation in the northeast direction.
The second observation point is near the ridgeline, where one viewshed accounts for 87.52% of the total viewshed, showing a significant visual effect. Another viewshed is 3121.113 m away from the observer, has only 113 visible points, and is not located at a low altitude, making it suitable for long-distance, fixed-point intermittent observation. The experimental results for the two observation points reveal the directional dominance and distance stratification of viewshed spatial relationships. This paper proposes a model to express topographical viewshed spatial relationships. The model analyzes and describes the spatial features of the viewshed through quantitative and qualitative methods. These metric features provide a basis for constructing spatial topological relationships between observation points and viewsheds, helping optimize viewpoint selection and enhance landscape planning. Compared with traditional methods, the proposed method significantly improves the resolution of spatial semantic relationship expression and has practical application value in fields such as archaeology, tourism planning, and urban design.
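The DBSCAN partition step, with the neighborhood radius of 18 m and minimum sample count of 4 chosen in the case study, can be sketched as a minimal density-based clustering over visible-point coordinates. This is a brute-force teaching version (full pairwise distance matrix, no spatial index), not the paper's terrain-adaptive variant:

```python
import numpy as np

def dbscan(points, eps=18.0, min_pts=4):
    """Minimal DBSCAN: grow clusters from core points (>= min_pts
    neighbors within eps); return a label per point, -1 for noise.
    eps and min_pts default to the case-study values."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbors = [np.nonzero(d[i] <= eps)[0] for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                       # already labeled, or not a core point
        labels[i] = cluster
        stack = list(neighbors[i])
        while stack:                       # expand the cluster
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    stack.extend(neighbors[j])
        cluster += 1
    return labels
```

Each resulting cluster corresponds to one sub-viewshed, to which azimuth, area, and sparsity metrics would then be attached.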
Implicit Extended Kalman Filter for Optical Terrain Relative Navigation Using Delayed Measurements
The exploration of celestial bodies such as the Moon, Mars, or even smaller ones such as comets and asteroids is the next frontier of space exploration. From a scientific point of view, one of the most attractive goals in this field is the capability of a spacecraft to land on such bodies. Monocular cameras are widely adopted to perform this task due to their low cost and system complexity. Nevertheless, image-based algorithms for motion estimation range across different scales of complexity and computational load. In this paper, a method to perform relative (or local) terrain navigation using frame-to-frame feature correspondences and altimeter measurements is presented. The proposed image-based approach relies on the implementation of the implicit extended Kalman filter, which works using nonlinear dynamic models and corrections from measurements that are implicit functions of the state variables. In particular, the epipolar constraint, a geometric relationship between the feature point position vectors and the camera translation vector, is employed as the implicit measurement fused with altimeter updates. In realistic applications, the image processing routines require a certain amount of time to execute. For this reason, the presented navigation system entails a fast cycle using altimeter measurements and a slow cycle with image-based updates. Moreover, the intrinsic delay of the feature matching execution is taken into account using a modified extrapolation method.
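The epipolar constraint used as the implicit measurement can be written down directly. The sketch below assumes calibrated bearing vectors and a known relative rotation R and translation t; frame conventions are my assumption, not the paper's:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x such that skew(v) @ u == cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def epipolar_residual(p_prev, p_curr, R, t):
    """Implicit measurement h = p_curr^T [t]_x R p_prev.  It is zero when
    the two bearing vectors and the camera translation are coplanar,
    i.e. when the correspondence is geometrically consistent."""
    E = skew(t) @ R                 # essential matrix for calibrated cameras
    return float(p_curr @ E @ p_prev)
```

In the filter, this scalar (one per feature correspondence) is driven toward zero by the implicit EKF update rather than being compared against an explicit predicted measurement.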
Identification and Mapping of Soil Erosion Processes Using the Visual Interpretation of LiDAR Imagery
Soil erosion processes are a type of geological hazard. They cause soil loss and sediment production, landscape dissection, and economic damage, which can, in the long term, result in land abandonment. Thus, identification of soil erosion processes is necessary for sustainable land management in an area. This study presents the potential of visual interpretation of high-resolution LiDAR (light detection and ranging) imagery for direct and unambiguous identification and mapping of soil erosion processes, tested in the study area of the Vinodol Valley (64.57 km²), Croatia. Eight LiDAR images were derived from the 1 m airborne LiDAR DTM (Digital Terrain Model) and were used to identify and map gully erosion, sheet erosion, and the combined effect of rill and sheet erosion, with the ultimate purpose of creating a historical erosion inventory. A two-step procedure of visual interpretation of the LiDAR imagery was performed: preliminary and detailed. In the preliminary step, possibilities and limitations for unambiguous identification of the soil erosion processes were determined for representative portions of the study area, and exclusive criteria for the accurate and precise manual delineation of different types of erosion phenomena were established. In the detailed step, the findings from the preliminary step were used to map the soil erosion phenomena across the entire study area. The results showed the highest potential for direct identification and mapping of gully erosion phenomena. A total of 236 gullies were identified and precisely delineated, most of them previously unknown due to the lack of prior investigations of soil erosion processes in the study area. On the other hand, the method proved inapplicable for direct identification and accurate mapping of sheet erosion.
Sheet erosion could, however, be indirectly identified on certain LiDAR imagery, based on recognition of colluvial deposits accumulated at the foot of the eroded slopes. Furthermore, the findings of this study show which of the LiDAR images, and which of their features, are most effective for identifying and mapping different types of erosion processes.
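One standard LiDAR-derived visualization used in this kind of interpretation is a hillshade image. The abstract does not list the eight derived images, so the sketch below is simply the common GIS hillshade formula applied to a DTM, not necessarily one of the paper's products:

```python
import numpy as np

def hillshade(dtm, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Hillshade from a DTM using the widely used slope/aspect formula:
    illumination from a sun at the given azimuth and altitude."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # map azimuth to math angle
    alt = np.radians(altitude_deg)
    gy, gx = np.gradient(dtm, cellsize)
    slope = np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)              # clamp shadowed cells to 0
```

Gullies show up on such images as sharply shaded linear incisions, which is what makes manual delineation feasible.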
Monocular Visual Mapping for Obstacle Avoidance on UAVs
An unmanned aerial vehicle requires adequate knowledge of its surroundings in order to operate in close proximity to obstacles. UAVs also have strict payload and power constraints, which limit the number and variety of sensors available to gather this information. It is therefore desirable to enable a UAV to gather information about potential obstacles or interesting landmarks using common and lightweight sensor systems. This paper presents a method of fast terrain mapping with a monocular camera. Features are extracted from camera images and used to update a sequential extended Kalman filter. The feature locations are parameterized in inverse depth to enable fast depth convergence. Converged features are added to a persistent terrain map which can be used for obstacle avoidance and additional vehicle guidance. Simulation results, results from recorded flight test data, and flight test results are presented to validate the algorithm.
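The inverse-depth parameterization mentioned above stores a feature as the camera position at first observation, the bearing of that observation, and an inverse depth ρ; conversion back to a Cartesian landmark is a one-liner. Variable names here are mine:

```python
import numpy as np

def inverse_depth_to_xyz(anchor, theta, phi, rho):
    """Convert an inverse-depth feature to a Cartesian landmark:
    anchor   - camera position at first observation (3-vector)
    theta    - azimuth of the first-observation ray
    phi      - elevation of the first-observation ray
    rho      - inverse depth (1/distance along the ray)"""
    m = np.array([np.cos(phi) * np.cos(theta),
                  np.cos(phi) * np.sin(theta),
                  np.sin(phi)])            # unit ray of the first observation
    return anchor + m / rho
```

The attraction for EKF mapping is that distant features (ρ near 0) remain well-behaved and nearly Gaussian, so depth converges quickly once parallax accumulates.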
A Robust Method for Stereo Visual Odometry Based on Multiple Euclidean Distance Constraint and RANSAC Algorithm
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion frame by frame using stereo images. Feature point extraction and matching is one of the key steps in robotic motion estimation and largely influences its precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features, considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on the Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find correspondences between the two images for the space intersection. The EDC and RANSAC algorithms are then carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when feature points in the next left image are matched against the current left image, EDC and RANSAC are performed iteratively. Since some mismatched points may still remain after these steps, RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (interior orientation and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
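The Euclidean Distance Constraint exploits the fact that rigid motion preserves pairwise 3D distances, so a mismatched point's distances to the other matched points change between frames. A majority-vote sketch of this test (threshold value and voting rule are my assumptions):

```python
import numpy as np

def edc_inliers(pts_prev, pts_curr, thresh=0.05):
    """Euclidean Distance Constraint outlier test: compare the pairwise
    distance matrices of matched 3D points before and after motion; a
    point whose distances agree with a majority of the others is kept."""
    d_prev = np.linalg.norm(pts_prev[:, None] - pts_prev[None, :], axis=2)
    d_curr = np.linalg.norm(pts_curr[:, None] - pts_curr[None, :], axis=2)
    consistent = np.abs(d_prev - d_curr) < thresh
    # inlier if its distances agree with more than half of the matches
    return consistent.sum(axis=1) > pts_prev.shape[0] / 2
```

In the full pipeline this pre-filter would run before RANSAC, cheaply discarding gross mismatches so RANSAC converges on a cleaner set.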
Visual Odometry for Planetary Exploration Rovers in Sandy Terrains
Visual odometry provides planetary exploration rovers with accurate knowledge of their position and orientation, which requires effective feature tracking, especially in barren sandy terrains. In this paper, a stereovision-based odometry algorithm composed of corner extraction, feature tracking, and motion estimation is proposed for a lunar rover. First, a morphology-based image enhancement method is studied to guarantee that enough corners are extracted. Second, a Random Sample Consensus (RANSAC) algorithm is proposed to make a robust estimation of the fundamental matrix, which is the basic and critical part of feature matching and tracking. Then, the 6-degree-of-freedom rover position and orientation are estimated by the RANSAC algorithm. Finally, experiments performed in a simulated lunar surface environment using a prototype rover have confirmed the feasibility and effectiveness of the proposed method.
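A morphology-based contrast enhancement of the kind the corner-extraction stage relies on can be sketched with grayscale top-hat and bottom-hat transforms. This is a common recipe for boosting small-scale detail in low-contrast sandy imagery; the paper's exact operator is not given, so treat this as an illustration:

```python
import numpy as np

def _windows(img, k):
    padded = np.pad(img, k // 2, mode="edge")
    return np.lib.stride_tricks.sliding_window_view(padded, (k, k))

def gray_erode(img, k=3):
    return _windows(img, k).min(axis=(2, 3))   # local minimum filter

def gray_dilate(img, k=3):
    return _windows(img, k).max(axis=(2, 3))   # local maximum filter

def morph_enhance(img, k=3):
    """Add bright small-scale detail (top-hat) and subtract dark detail
    (bottom-hat) to raise local contrast before corner detection."""
    opening = gray_dilate(gray_erode(img, k), k)
    closing = gray_erode(gray_dilate(img, k), k)
    tophat = img - opening
    bottomhat = closing - img
    return img + tophat - bottomhat
```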
Transition Texture Synthesis
Synthesis of transition textures is essential for displaying visually acceptable appearances on a terrain. This investigation presents a modified method for synthesizing the transition texture to be tiled on a terrain. All transition pattern types are recognized for a number of input textures. The proposed modified patch-based sampling texture synthesis approach, which uses an extra feature map of the input source and target textures for patch matching, can synthesize any transition texture on a succession pattern by initializing the output texture with a portion of the source texture enclosed in a transition cut. The transition boundary is further enhanced to improve the visual effect by tracing out the integral texture elements. Either the Game of Life model or the Wang tiles method is exploited to present a natural-looking profile of successions on a terrain when tiling transition textures. Experimental results indicate that the proposed method requires few input textures yet synthesizes numerous tileable transition textures, which are useful for obtaining a vivid appearance of a terrain.
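The Game of Life model mentioned for laying out succession profiles is the standard cellular-automaton update; interpreting live cells as one terrain type and dead cells as another yields organically shaped transition boundaries. A one-step numpy implementation of the standard rules (the paper's exact seeding and interpretation are not specified here):

```python
import numpy as np

def life_step(grid):
    """One Game of Life update on a toroidal grid: a cell is alive next
    step if it has exactly 3 live neighbours, or is alive with 2."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)
```

Iterating a few steps from a seeded pattern gives an irregular binary mask along which the synthesized transition textures can then be tiled.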