27 results for "Starek, Michael"
Review and Evaluation of Deep Learning Architectures for Efficient Land Cover Mapping with UAS Hyper-Spatial Imagery: A Case Study Over a Wetland
Deep learning has already proven to be a powerful state-of-the-art technique for many image understanding tasks in computer vision and other applications, including remote sensing (RS) image analysis. Unmanned aircraft systems (UASs) offer a viable and economical alternative to conventional sensors and platforms for acquiring high spatial and high temporal resolution data with high operational flexibility. Coastal wetlands are among the most challenging and complex ecosystems for land cover prediction and mapping tasks because land cover targets often show high intra-class and low inter-class variance. In recent years, several deep convolutional neural network (CNN) architectures have been proposed for pixel-wise image labeling, commonly called semantic image segmentation. In this paper, some of the more recent deep CNN architectures proposed for semantic image segmentation are reviewed, and each model’s training efficiency and classification performance are evaluated by training it on a limited labeled image set. Training samples are drawn from hyper-spatial resolution UAS imagery over a wetland area, and the required ground truth images are prepared by manual image labeling. Experimental results demonstrate that deep CNNs have great potential for accurate land cover prediction using UAS hyper-spatial resolution images. Some simple deep learning architectures perform comparably to, or even better than, complex and very deep architectures with remarkably fewer training epochs. This is especially valuable when limited training samples are available, which is a common situation in most RS applications.
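To make the pixel-wise labeling setup concrete, here is a minimal, self-contained sketch of a simple encoder-decoder CNN in PyTorch; the layer sizes, class count, and input resolution are illustrative assumptions, not any of the reviewed architectures.

```python
# A minimal encoder-decoder sketch for semantic segmentation, in the
# spirit of the "simple architectures" discussed above. All sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, n_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))       # per-pixel class logits

model = TinySegNet()
logits = model(torch.randn(1, 3, 256, 256))        # (1, 5, 256, 256)
pred = logits.argmax(dim=1)                        # per-pixel labels
```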
Assessing Lodging Severity over an Experimental Maize (Zea mays L.) Field Using UAS Images
Lodging has been recognized as one of the major destructive factors for crop quality and yield, creating an increasing need for cost-efficient and accurate methods to detect crop lodging in a routine manner. Using structure-from-motion (SfM) and novel geospatial computing algorithms, this study investigated the potential of high-resolution imaging with unmanned aircraft system (UAS) technology for detecting and assessing lodging severity over an experimental maize field at the Texas A&M AgriLife Research and Extension Center in Corpus Christi, Texas, during the 2016 growing season. The proposed method not only detects the occurrence of lodging at the field scale but also quantitatively estimates the number of lodged plants and the lodging rate within individual rows. Nadir-view images of the field trial were taken by multiple UAS platforms equipped with consumer-grade red, green, and blue (RGB) and near-infrared (NIR) cameras on a routine basis, enabling timely observation of plant growth until harvest. Models of canopy structure were reconstructed via an SfM photogrammetric workflow. UAS-estimated maize height was characterized by polygons developed and expanded from individual row centerlines, and it showed reliable accuracy when compared against field measures of height obtained on multiple dates. The proposed method then segmented the individual maize rows into multiple grid cells and determined lodging severity by comparing height percentiles against preset thresholds within each grid cell. The UAS-based lodging results derived from this method were generally comparable in accuracy to those measured by a human data collector on the ground, for both the number of lodged plants (R² = 0.48) and the lodging rate (R² = 0.50) on a per-row basis. The results also showed a negative relationship between ground-measured yield and both UAS-estimated and ground-measured lodging rates.
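The per-cell lodging test lends itself to a short illustration. The NumPy sketch below flags a grid cell as lodged when a low height percentile falls under a threshold; the percentile, threshold, and cell count are hypothetical values, not the study's parameters.

```python
# Hedged sketch of a per-cell, percentile-vs-threshold lodging test.
# All parameter values are illustrative assumptions.
import numpy as np

def lodging_rate(row_heights, cells=20, pct=90, height_thresh=1.0):
    """row_heights: 1D array of UAS-estimated canopy heights (m) along
    one maize row. A cell is flagged as lodged when even its pct-th
    height percentile sits below height_thresh. Returns the fraction
    of lodged cells."""
    cell_chunks = np.array_split(row_heights, cells)
    lodged = [np.percentile(c, pct) < height_thresh for c in cell_chunks]
    return sum(lodged) / len(lodged)

heights = np.random.uniform(0.3, 2.5, size=400)   # synthetic row
print(f"lodging rate: {lodging_rate(heights):.2f}")
```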
Deep Learning-Based Single Image Super-Resolution: An Investigation for Dense Scene Reconstruction with UAS Photogrammetry
The deep convolutional neural network (DCNN) has recently been applied to the highly challenging and ill-posed problem of single image super-resolution (SISR), which aims to predict high-resolution (HR) images from their corresponding low-resolution (LR) images. In many remote sensing (RS) applications, the spatial resolution of aerial or satellite imagery has a great impact on the accuracy and reliability of the information extracted from the images. In this study, the potential of a DCNN-based SISR model, called enhanced super-resolution generative adversarial network (ESRGAN), to predict the spatial information degraded or lost in a hyper-spatial resolution unmanned aircraft system (UAS) RGB image set is investigated. The ESRGAN model is trained on a limited number of original HR images (50 out of 450) and virtually generated LR UAS images produced by downsampling the original HR images with a bicubic kernel by a factor of 4. Quantitative and qualitative assessments of the super-resolved images using standard image quality measures (IQMs) confirm that the DCNN-based SISR approach can be successfully applied to LR UAS imagery for spatial resolution enhancement. The performance of the DCNN-based SISR approach on the UAS image set closely approximates performances reported on standard SISR image sets, with mean peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index values of around 28 dB and 0.85, respectively. Furthermore, by exploiting the rigorous structure-from-motion (SfM) photogrammetry procedure, an accurate task-based IQM for evaluating the quality of the super-resolved images is carried out. Results verify that the interior and exterior imaging geometry, which are extremely important for extracting highly accurate spatial information from UAS imagery in photogrammetric applications, can be accurately retrieved from a super-resolved image set. The numbers of corresponding keypoints and dense points generated by the SfM photogrammetry process are about 6 and 17 times greater, respectively, than those extracted from the corresponding LR image set.
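The LR-generation and IQM steps can be sketched in a few lines. The example below uses Pillow for the bicubic ×4 downsampling and scikit-image for PSNR/SSIM; the file names are placeholders, and the super-resolved image is assumed to already exist at HR size.

```python
# Sketch of virtual LR generation (bicubic x4) and PSNR/SSIM scoring.
# Paths are placeholders; "sr_output.png" stands in for a model output.
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = Image.open("hr_uas_image.png")                          # placeholder
lr = hr.resize((hr.width // 4, hr.height // 4), Image.BICUBIC)
lr.save("lr_uas_image.png")                                  # training input

hr_arr = np.asarray(hr)
sr_arr = np.asarray(Image.open("sr_output.png"))             # SR result (HR size)

psnr = peak_signal_noise_ratio(hr_arr, sr_arr)
ssim = structural_similarity(hr_arr, sr_arr, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")              # SSIM is unitless
```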
DisCountNet: Discriminating and Counting Network for Real-Time Counting and Localization of Sparse Objects in High-Resolution UAV Imagery
Recent deep-learning counting techniques revolve around two distinct regimes of data: sparse data, which favors detection networks, and dense data, where density-map networks are used. Both techniques fail to address a third scenario, where dense objects are sparsely located. Raw aerial images represent sparse distributions of data in most situations. To address this issue, we propose a novel and exceedingly portable end-to-end model, DisCountNet, and an example dataset on which to test it. DisCountNet is a two-stage network that draws on theories from both detection and heat-map networks to provide a simple yet powerful design. The first stage, DiscNet, operates on the theory of coarse detection, but does so by converting a rich, high-resolution image into a sparse representation in which only important information is encoded. Following this, CountNet operates on the dense regions of the sparse matrix to generate a density map, which provides fine locations and count predictions for densities of objects. Comparing the proposed network to current state-of-the-art networks, we find that we can maintain competitive performance while using a fraction of the computational complexity, resulting in a real-time solution.
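A rough structural sketch of the two-stage idea (not the authors' implementation) follows: a cheap patch classifier discards empty regions, and a density-map regressor counts only in the patches it keeps. The patch size, threshold, and layer widths are assumptions.

```python
# Conceptual two-stage counting sketch: coarse patch gating, then
# density-map regression on retained patches. Untrained weights, so the
# printed count is structural illustration only.
import torch
import torch.nn as nn

disc = nn.Sequential(                  # stage 1: "is this patch important?"
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid())
count = nn.Sequential(                 # stage 2: density-map regressor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1))

image = torch.randn(3, 512, 512)       # stand-in for a UAV frame
total = 0.0
for py in range(0, 512, 128):          # tile into 128x128 patches
    for px in range(0, 512, 128):
        patch = image[:, py:py+128, px:px+128].unsqueeze(0)
        if disc(patch).item() > 0.5:   # only dense patches reach stage 2
            total += count(patch).sum().item()  # density integrates to count
print(f"estimated count: {total:.1f}")
```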
Land Subsidence in the Texas Coastal Bend: Locations, Rates, Triggers, and Consequences
Land subsidence and sea level rise are well-known, ongoing problems that are negatively impacting the entire Texas coast. Although ground-based monitoring techniques using long-term global navigation satellite system (GNSS) records provide accurate subsidence rates, they are labor-intensive, expensive, time-consuming, and spatially limited. In this study, interferometric synthetic aperture radar (InSAR) data and techniques were used to map the locations and quantify the rates of land subsidence in the Texas Coastal Bend region from October 2016 to July 2019. InSAR-derived land subsidence rates were then validated and calibrated against GNSS-derived rates. The factors controlling the observed land subsidence rates and locations were investigated, and the consequences of spatial variability in land subsidence rates in the Coastal Bend were also examined. The results indicated that: (1) land subsidence rates in the Texas Coastal Bend exhibited spatial variability, (2) InSAR-derived land subsidence rates were consistent with GNSS-derived deformation rates, (3) land subsidence in the Texas Coastal Bend could be attributed mainly to hydrocarbon and groundwater extraction as well as vertical movements along growth faults, and (4) land subsidence increased both flood frequency and severity in the Texas Coastal Bend. Our results provide valuable information not only on land deformation rates in the Texas Coastal Bend region, but also on the effectiveness of interferometric techniques for other coastal rural areas around the globe.
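As a toy illustration of the validation/calibration step, the sketch below fits a linear correction from InSAR rates to collocated GNSS rates; the arrays are synthetic placeholders, not the study's measurements.

```python
# Illustrative linear calibration of InSAR rates against GNSS rates.
# Values are synthetic placeholders.
import numpy as np

gnss_rates = np.array([-2.1, -4.8, -1.0, -6.3, -3.2])    # mm/yr, synthetic
insar_rates = np.array([-1.8, -4.5, -0.7, -5.9, -3.0])   # mm/yr, synthetic

slope, intercept = np.polyfit(insar_rates, gnss_rates, 1)
calibrated = slope * insar_rates + intercept
rmse = np.sqrt(np.mean((calibrated - gnss_rates) ** 2))
print(f"calibration: y = {slope:.2f}x + {intercept:.2f}, "
      f"RMSE = {rmse:.2f} mm/yr")
```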
A Deep Learning Based Method to Delineate the Wet/Dry Shoreline and Compute Its Elevation Using High-Resolution UAS Imagery
Automatically detecting the wet/dry shoreline from remote sensing imagery has many benefits for beach management in coastal areas, enabling managers to take measures to protect wildlife during high water events. This paper proposes a modified HED (Holistically-Nested Edge Detection) architecture to create a model for automatic identification of the wet/dry shoreline and to compute its elevation from the associated DSM (Digital Surface Model). The model generalizes to several beaches in Texas and Florida. The data from the multiple beaches were collected using UAS (Uncrewed Aircraft Systems). UAS allow for the collection of high-resolution imagery and the creation of the DSMs that are essential for computing the elevations of the wet/dry shorelines. Another advantage of using UAS is the flexibility to choose locations and metocean conditions, which allows the collection of the varied dataset necessary to calibrate a general model. To evaluate the performance and generalization of the AI model, we trained the model on data from eight flights over four locations, tested it on the data from a ninth flight, and repeated this for all possible combinations. The AP (average precision) and F1-scores obtained show the success of the model’s prediction for the majority of cases, but the limitations of a pure computer vision assessment are discussed in the context of this coastal application. The method was also assessed more directly by comparing the average elevations of the labeled and AI-predicted wet/dry shorelines. The absolute differences between the two elevations were, on average, 2.1 cm, while the absolute difference of the elevations’ standard deviations for each wet/dry shoreline was 2.2 cm. The proposed method results in a generalizable model able to delineate the wet/dry shoreline in beach imagery for multiple flights at several locations in Texas and Florida and for a range of metocean conditions.
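The elevation comparison can be sketched directly: sample the DSM at the pixels each shoreline (predicted and labeled) occupies, then compare means and standard deviations. The arrays below are synthetic placeholders.

```python
# Sketch of comparing predicted vs. labeled shoreline elevations on a
# DSM grid. Masks and DSM are synthetic stand-ins.
import numpy as np

dsm = np.random.uniform(0.0, 2.0, size=(1000, 1000))   # synthetic DSM (m)
pred_mask = np.zeros_like(dsm, dtype=bool)             # model shoreline pixels
label_mask = np.zeros_like(dsm, dtype=bool)            # labeled shoreline pixels
pred_mask[500, :] = True                               # toy shoreline rows
label_mask[502, :] = True

pred_z, label_z = dsm[pred_mask], dsm[label_mask]
print(f"mean elevation diff: {abs(pred_z.mean() - label_z.mean())*100:.1f} cm")
print(f"std  elevation diff: {abs(pred_z.std() - label_z.std())*100:.1f} cm")
```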
Modeling Wind and Obstacle Disturbances for Effective Performance Observations and Analysis of Resilience in UAV Swarms
UAV swarms have multiple real-world applications but operate in dynamic environments where disruptions can impede performance or halt mission progress. Ideally, a UAV swarm should be resilient to disruptions, maintaining the desired performance and producing consistent outputs. Resilience is a system’s capability to withstand disruptions while maintaining acceptable performance levels. Researchers propose novel methods for integrating resilience into UAV swarms and test them in simulation scenarios to gauge performance and observe the system response. However, current studies lack a comprehensive set of modeled disruptions with which to monitor performance accurately, and existing, compartmentalized approaches prevent thorough coverage of the disruptions needed to test resilient responses. Truly resilient systems require robustness in multiple components. The challenge begins with recognizing, classifying, and implementing accurate disruption models in simulation scenarios. This calls for a dedicated study to outline, categorize, and model interferences that can be included in current simulation software, which is provided herein. Wind and in-path obstacles are the two primary disruptions, particularly for aerial vehicles. This study begins a multi-step process to implement these disruptions accurately in simulation. Wind and obstacles are modeled using multiple methods and implemented in simulation scenarios; their presence in simulations is demonstrated, and suggested scenarios and targeted observations are recommended. The study concludes that introducing previously absent, accurately modeled disruptions, such as wind and obstacles, into simulation scenarios can significantly change how resilience in swarm deployments is recorded and presented. A dedicated section on future work includes suggestions for implementing other disruptions, such as component failure and network intrusion.
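As one concrete example of a modeled wind disruption, the sketch below combines a steady wind with a "1-cosine" discrete gust, a common textbook gust shape; the speeds and timings are illustrative assumptions, and the paper's own wind models may differ.

```python
# Minimal wind-disruption model: steady wind plus a 1-cosine discrete
# gust. All parameter values are illustrative assumptions.
import numpy as np

def wind_at(t, steady=5.0, gust_amp=3.0, gust_start=10.0, gust_len=4.0):
    """Horizontal wind speed (m/s) at simulation time t (s)."""
    if gust_start <= t <= gust_start + gust_len:
        phase = (t - gust_start) / gust_len          # 0..1 through the gust
        return steady + 0.5 * gust_amp * (1 - np.cos(2 * np.pi * phase))
    return steady

ts = np.arange(0.0, 20.0, 0.1)
speeds = [wind_at(t) for t in ts]
print(f"steady: {wind_at(0):.1f} m/s, peak in gust: {max(speeds):.1f} m/s")
```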
Simulation and Characterization of Wind Impacts on sUAS Flight Performance for Crash Scene Reconstruction
Small unmanned aircraft systems (sUASs) have emerged as promising platforms for crash scene reconstruction through structure-from-motion (SfM) photogrammetry. However, auto crashes tend to occur under adverse weather conditions that pose increased risks for sUAS operation. Wind is a typical environmental factor in adverse weather, and sUAS responses to various wind conditions have been understudied. To bridge this gap, commercial and open-source sUAS flight simulation software is employed in this study to analyze the impacts of wind speed, direction, and turbulence on the ability of an sUAS to track a pre-planned path and on the endurance of the flight mission. The simulation uses typical flight capabilities of quadcopter sUAS platforms that are increasingly used for traffic incident management. Wind speed, direction, and turbulence are increased incrementally. Average 3D error, standard deviation, battery use, and flight time are used as statistical metrics to characterize the wind impacts on flight stability and endurance, and both statistical and visual analytics are performed. Simulation results suggest operating the simulated quadcopter type when wind speed is less than 11 m/s under light to moderate turbulence levels for optimal flight performance in crash scene reconstruction missions, measured in terms of positional accuracy, required flight time, and battery use. Major lessons learned for real-world quadcopter sUAS flight design in windy conditions for crash scene mapping are also documented.
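The tracking-error metrics named above are straightforward to compute once the planned and flown paths are sampled at matching timestamps; the sketch below uses synthetic placeholder paths.

```python
# Sketch of average 3D tracking error and its standard deviation
# between a planned path and a simulated flown path. Paths are
# synthetic placeholders sampled at matching timestamps.
import numpy as np

planned = np.random.rand(500, 3) * 100                 # (N, 3) positions, m
flown = planned + np.random.normal(0, 0.5, planned.shape)

errors = np.linalg.norm(flown - planned, axis=1)       # per-sample 3D error
print(f"mean 3D error: {errors.mean():.2f} m, std: {errors.std():.2f} m")
```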
Application of MLS and UAS-SfM for Beach Management at the North Padre Island Seawall
What are the main findings? Mobile lidar scanning (MLS)-derived digital elevation models (DEMs) were used to monitor beach geomorphology, finding significant seasonal and post-nourishment changes in beach slope and width, shoreline position, and beach volume. A comparative and operational analysis assessed MLS and uncrewed aircraft system (UAS) structure-from-motion (SfM)/multi-view stereo (MVS) photogrammetry for beach management, finding that DEM RMSE differences were similar, averaging up to 3 cm and leading to volume differences of up to 3%. What is the implication of the main findings? MLS and UAS-SfM offer efficient, scalable tools for routine beach monitoring, each with unique operational considerations that can inform coastal policy and management, and highest astronomical tide (HAT) shoreline position monitoring helped identify optimal seasonal bollard placement to restrict vehicular access, increasing pedestrian safety. Collecting accurate and reliable beach morphology data is essential for informed coastal management. The beach adjacent to the seawall on North Padre Island, Texas, USA has experienced increased erosion and disrupted natural processes. A city ordinance mandates the placement of bollards to restrict vehicular traffic when the beach width from the seawall toe to mean high water (MHW) is less than 45.7 m. To aid the City of Corpus Christi’s understanding of seasonal beach changes, MLS surveys with a mapping-grade system were conducted in February, June, September, and November 2023, and post-nourishment in March 2024. Concurrent UAS photogrammetry surveys were performed in February and November 2023 and March 2024 to support the beach monitoring analysis and for comparative assessment against the MLS data. MLS-derived DEMs were used to evaluate seasonal geomorphology, including beach slope, width, shoreline position, and volume change. Because MHW was submerged during all surveys, HAT was used for the shoreline analyses. HAT-based results indicated that bollards should be placed approximately 390 to 560 m from the northern end of the seawall, varying seasonally. The March 2024 post-nourishment survey showed that 102,462 m³ of sand was placed on the beach, extending the shoreline by more than 40 m in some locations. UAS photogrammetry-derived DEMs were compared to the MLS-derived DEMs, revealing mean HAT position differences of 0.02 m in February 2023 and 0.98 m in November 2023. Elevation and volume assessments showed variability between the MLS and UAS-SfM DEMs, with neither indicating consistently higher or lower values.
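The DEM comparison and volume computation can be sketched as a simple grid difference; the grids, noise level, and cell size below are synthetic placeholders, not the survey data.

```python
# Sketch of differencing two co-registered DEMs (MLS vs. UAS-SfM),
# reporting RMSE and converting elevation change to volume via the
# cell area. All values are synthetic placeholders.
import numpy as np

cell = 0.5                                       # grid spacing (m), assumed
dem_mls = np.random.uniform(0, 3, (800, 600))    # MLS-derived DEM (m)
dem_uas = dem_mls + np.random.normal(0, 0.03, dem_mls.shape)

diff = dem_uas - dem_mls
rmse = np.sqrt(np.mean(diff ** 2))
volume_diff = diff.sum() * cell * cell           # net volume change (m³)
print(f"RMSE: {rmse*100:.1f} cm, net volume difference: {volume_diff:.1f} m³")
```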
Partial Scene Reconstruction for Close Range Photogrammetry Using Deep Learning Pipeline for Region Masking
3D reconstruction is a beneficial technique for generating the 3D geometry of scenes or objects for various applications, such as computer graphics, industrial construction, and civil engineering. Among the several techniques for obtaining the 3D geometry of an object, close-range photogrammetry is an inexpensive, accessible approach to high-quality object reconstruction. However, state-of-the-art software systems need a stationary scene or a controlled environment (often a turntable setup with a black background), which can be a limiting factor for object scanning. This work presents a method that reduces the need for a controlled environment and allows the capture of multiple objects with independent motion. We achieve this by creating a preprocessing pipeline that uses deep learning to transform a complex scene from an uncontrolled environment into multiple stationary scenes with a black background, which are then fed into existing software systems for reconstruction. Our pipeline achieves this by using deep learning models to detect and track objects through the scene. The detection and tracking pipeline uses semantic-based detection and tracking and supports available pretrained or custom networks. We develop a correction mechanism to overcome some detection and tracking shortcomings, namely object re-identification and multiple detections of the same object. We show that detection and tracking are effective techniques for addressing scenes with multiple motion systems and that objects can be reconstructed with limited or no knowledge of the camera or the environment.
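The masking step that turns an uncontrolled scene into a black-background input can be sketched with OpenCV: keep only the pixels under one tracked object's mask in each frame. The paths and the mask source are placeholder assumptions; in the pipeline described above, the mask would come from the detection and tracking models.

```python
# Sketch of per-frame object masking: zero out everything outside one
# tracked object's binary mask so the frame resembles a stationary
# scene on a black background. Paths are placeholders.
import cv2

frame = cv2.imread("frame_0001.png")                        # placeholder path
mask = cv2.imread("mask_0001.png", cv2.IMREAD_GRAYSCALE)    # 0/255 object mask

masked = cv2.bitwise_and(frame, frame, mask=mask)           # black outside object
cv2.imwrite("masked_0001.png", masked)                      # input to SfM tool
```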