110 result(s) for "Fu, Haohuan"
Deep Learning Based Oil Palm Tree Detection and Counting for High-Resolution Remote Sensing Images
Oil palm trees are important economic crops in Malaysia and other tropical areas. The number of oil palm trees in a plantation is important information for predicting palm oil yield, monitoring the growth of the trees, and maximizing their productivity. In this paper, we propose a deep learning-based framework for oil palm tree detection and counting using high-resolution remote sensing images of Malaysia. Unlike in previous palm tree detection studies, the trees in our study area are more crowded and their crowns often overlap. We use a number of manually interpreted samples to train and optimize a convolutional neural network (CNN), and predict labels for all samples in an image dataset collected through the sliding-window technique. We then merge the predicted palm coordinates corresponding to the same palm tree into one coordinate to obtain the final detection results. With the proposed method, more than 96% of the oil palm trees in our study area are detected correctly when compared with the manually interpreted ground truth, which is higher than the accuracies of the other three tree detection methods evaluated in this study.
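The merging step described in the abstract (collapsing multiple sliding-window predictions of the same tree into one coordinate) can be sketched as a greedy distance-based merge. This is an illustrative stand-in, not the authors' implementation; the `min_dist` threshold is an assumption:

```python
import math

def merge_detections(coords, min_dist=5.0):
    """Greedily merge predicted coordinates closer than min_dist
    into a single detection, kept as the running centroid."""
    merged = []  # entries are (x, y, count)
    for x, y in coords:
        for i, (mx, my, n) in enumerate(merged):
            if math.hypot(x - mx, y - my) < min_dist:
                # fold the new prediction into the cluster centroid
                merged[i] = ((mx * n + x) / (n + 1), (my * n + y) / (n + 1), n + 1)
                break
        else:
            merged.append((x, y, 1))
    return [(x, y) for x, y, _ in merged]
```

A minimum-distance post-filter like this is a common way to deduplicate overlapping sliding-window hits when crowns touch.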
Semantic Segmentation-Based Building Footprint Extraction Using Very High-Resolution Satellite Images and Multi-Source GIS Data
Automatic extraction of building footprints from high-resolution satellite imagery has become an important and challenging research issue that is receiving increasing attention. Many recent studies have explored deep learning-based semantic segmentation methods for improving the accuracy of building extraction. Although they record substantial land cover and land use information (e.g., buildings, roads, and water), public geographic information system (GIS) map datasets have rarely been used to improve building extraction results in existing studies. In this research, we propose a U-Net-based semantic segmentation method for extracting building footprints from high-resolution multispectral satellite images, using the SpaceNet building dataset provided in the DeepGlobe Satellite Challenge of the IEEE Conference on Computer Vision and Pattern Recognition 2018 (CVPR 2018). We explore the potential of multiple public GIS map datasets (OpenStreetMap, Google Maps, and MapWorld) through integration with WorldView-3 satellite data in four cities (Las Vegas, Paris, Shanghai, and Khartoum). Several strategies are designed and combined with the U-Net-based semantic segmentation model, including data augmentation, post-processing, and integration of the GIS map data and satellite images. The proposed method achieves a total F1-score of 0.704, an improvement of 1.1% to 12.5% over the top three solutions in the SpaceNet Building Detection Competition and of 3.0% to 9.2% over the standard U-Net-based method. Moreover, the effect of each proposed strategy and the possible reasons for the building footprint extraction results are analyzed in detail, considering the actual situation of the four cities.
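One common way to integrate GIS map data with satellite imagery, as the abstract describes, is to rasterize the map layers and stack them as extra input channels for the segmentation network. A minimal sketch; the min-max normalization and the choice of layers are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def stack_inputs(ms_image, gis_layers):
    """Stack multispectral bands (H, W, B) with rasterized GIS layers
    (each H, W, e.g. OSM building or road masks) along the channel axis."""
    bands = ms_image.astype(np.float32)
    # min-max normalize the spectral bands to [0, 1]
    bands = (bands - bands.min()) / (bands.max() - bands.min() + 1e-8)
    layers = [np.asarray(layer, dtype=np.float32)[..., None] for layer in gis_layers]
    return np.concatenate([bands] + layers, axis=-1)
```

The resulting (H, W, B + K) tensor can be fed to a U-Net whose first convolution accepts B + K input channels.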
The Sunway TaihuLight supercomputer: system and applications
The Sunway TaihuLight supercomputer is the world's first system with a peak performance greater than 100 PFlops. In this paper, we provide a detailed introduction to the TaihuLight system. In contrast with other existing heterogeneous supercomputers, which include both CPU processors and PCIe-connected many-core accelerators (NVIDIA GPU or Intel Xeon Phi), the computing power of TaihuLight is provided by a homegrown many-core SW26010 CPU that includes both the management processing elements (MPEs) and computing processing elements (CPEs) in one chip. With 260 processing elements in one CPU, a single SW26010 provides a peak performance of over three TFlops. To alleviate the memory bandwidth bottleneck in most applications, each CPE comes with a scratch pad memory, which serves as a user-controlled cache. To support the parallelization of programs on the new many-core architecture, in addition to the basic C/C++ and Fortran compilers, the system provides a customized Sunway OpenACC tool that supports the OpenACC 2.0 syntax. This paper also reports our preliminary efforts on developing and optimizing applications on the TaihuLight system, focusing on key application domains, such as earth system modeling, ocean surface wave modeling, atomistic simulation, and phase-field simulation.
A Prolonged Artificial Nighttime-light Dataset of China (1984-2020)
Nighttime light remote sensing has become an increasingly important proxy for human activities. Despite an urgent need for long-term products and pilot efforts to synthesize them, publicly available long-term products remain limited. We propose a Night-Time Light convolutional LSTM network and apply it to produce a 1-km annual Prolonged Artificial Nighttime-light DAtaset of China (PANDA-China) covering 1984 to 2020. Assessments between modeled and original images show that, on average, the RMSE reaches 0.73, the coefficient of determination (R²) reaches 0.95, and the linear slope is 0.99 at the pixel level, indicating high confidence in the quality of the generated data products. Quantitative and visual comparisons demonstrate PANDA-China's superiority over other NTL datasets in its significantly longer NTL dynamics, higher temporal consistency, and better correlations with socioeconomic indicators (built-up area, gross domestic product, and population) that characterize different development phases. The PANDA-China product provides an unprecedented opportunity to trace nighttime light dynamics over the past four decades.
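The pixel-level RMSE and R² used above to assess the modeled images are standard metrics; a minimal sketch of their computation:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def r_squared(pred, obs):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```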
Large-Scale Oil Palm Tree Detection from High-Resolution Satellite Images Using Two-Stage Convolutional Neural Networks
Oil palm is an important economic crop that contributes 35% of total vegetable oil consumption, so remote sensing-based quantitative detection of oil palm trees has long been a key research direction for both agricultural and environmental purposes. While existing methods already demonstrate satisfactory effectiveness for small regions, performing the detection over a large region with satisfactory accuracy remains challenging. In this study, we propose a two-stage convolutional neural network (TS-CNN)-based oil palm detection method using high-resolution satellite images (i.e., QuickBird) in a large-scale study area of Malaysia. The TS-CNN consists of one CNN for land cover classification and one CNN for object classification. The two CNNs were trained and optimized independently on 20,000 samples collected through human interpretation. For large-scale oil palm detection over an area of 55 km², we propose an effective workflow that consists of an overlapping partitioning method for large-scale image division, a multi-scale sliding window method for oil palm coordinate prediction, and a minimum distance filter for post-processing. Our approach achieves a much higher average F1-score of 94.99% in our study area than existing oil palm detection methods (87.95%, 81.80%, 80.61%, and 78.35% for the single-stage CNN, Support Vector Machine (SVM), Random Forest (RF), and Artificial Neural Network (ANN), respectively), with far fewer confusions with other vegetation and buildings in the whole-image detection results.
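The overlapping partitioning step of the workflow can be sketched as tiling the large image with a stride smaller than the tile size, so a palm crown cut by one tile border falls entirely inside a neighboring tile. An illustrative sketch; the tile size and overlap values are assumptions, not the paper's settings:

```python
def partition(width, height, tile, overlap):
    """Divide a width x height image into overlapping tiles.
    Returns (x0, y0, x1, y1) pixel boxes covering every pixel."""
    step = tile - overlap
    boxes = []
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            # clamp the tile to the image bounds at the right/bottom edges
            boxes.append((x, y, min(x + tile, width), min(y + tile, height)))
    return boxes
```

Detections from adjacent tiles are then deduplicated by the minimum distance filter mentioned in the abstract.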
A global map of planting years of plantations
Plantations are an important land use type that differs from natural forests and affects both the economy and the environment. Tree age is one of the key factors used to quantify the impact of plantations. However, no existing dataset explicitly documents the planting years of global plantations. Here we used the time-series Landsat archive from 1982 to 2020 and the LandTrendr algorithm to generate global maps of planting years based on global plantation extent products on the Google Earth Engine (GEE) platform. The datasets developed in this study are in GeoTIFF format at 30-meter spatial resolution, recording gridded species types and planting years of global plantations. The derived dataset can be used for yield prediction of tree crops and for social and ecological cost-benefit analysis of plantations. Measurement(s): planting years of plantations. Technology Type(s): remote sensing and the LandTrendr algorithm. Sample Characteristic (Organism): forest.
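LandTrendr segments each pixel's spectral time series to locate abrupt changes; a toy stand-in that simply picks the year with the largest year-over-year vegetation index increase conveys the idea. This is a deliberate simplification for illustration, not the LandTrendr algorithm:

```python
def planting_year(years, ndvi):
    """Toy change detection: return the year following the largest
    year-over-year NDVI jump, a crude proxy for the planting event."""
    jumps = [ndvi[i + 1] - ndvi[i] for i in range(len(ndvi) - 1)]
    return years[jumps.index(max(jumps)) + 1]
```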
Annual dynamic dataset of global cropping intensity from 2001 to 2019
Cropping intensity has received growing attention in agriculture, in applications such as harvested area research. Notwithstanding the significant amount of existing literature on local cropping intensities, research based on global datasets remains limited in spatial resolution and precision. In this paper, we present an annual dynamic global cropping intensity dataset covering the period from 2001 to 2019 at 250-m resolution, with an average overall accuracy of 89%, exceeding the accuracy of the current annual dynamic global cropping intensity data at 500-m resolution. We calculated cropping intensity from the enhanced vegetation index (EVI) of MOD13Q1 by fitting a sixth-order polynomial function. The global cropping intensity dataset is packaged in GeoTIFF format, with the quality control band in the same format. The dataset fills the vacancy of medium-resolution, global-scale annual cropping intensity data and provides an improved map for further global yield estimations and food security analyses. Measurement(s): cropping intensity. Technology Type(s): sixth-order polynomial function. Factor Type(s): temporal interval; geographic location. Sample Characteristic (Environment): cultivated environment. Sample Characteristic (Location): global. Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.15128076
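The sixth-order polynomial approach can be sketched as fitting one year's EVI samples and counting local maxima of the fitted curve as growing seasons. An illustrative sketch, not the paper's exact procedure (the peak-counting rule here is an assumption):

```python
import numpy as np

def cropping_intensity(evi, order=6):
    """Fit a sixth-order polynomial to one year's EVI samples and
    count interior local maxima of the fitted curve (growing seasons)."""
    t = np.linspace(0.0, 1.0, len(evi))
    coeffs = np.polyfit(t, evi, order)
    smooth = np.polyval(coeffs, np.linspace(0.0, 1.0, 365))
    # an interior point is a peak if it exceeds both neighbors
    peaks = (smooth[1:-1] > smooth[:-2]) & (smooth[1:-1] > smooth[2:])
    return int(peaks.sum())
```

In practice one would also threshold peak amplitude to suppress noise before counting.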
The role of sea surface temperature variability in changes to global surface air temperature related to two periods of warming slowdown since 1940
Over the last century, the global mean surface air temperature (SAT) has experienced two periods of warming slowdown (hiatuses), namely 1940–1975 and 1998–2012, as well as well-defined interdecadal oscillations. Previous studies have focused mainly on the most recent hiatus, and little is known about the period between 1940 and 1975. From the point of view of the sea surface temperature (SST), two aspects are of interest: the climatological SST and SST variability. In this paper, observational and modelling evidence is used to show that, compared with the climatological SST, SST variability has been the main cause of the slowdown in the rate of increase in SAT since 1940. In addition, the observational data and simulation results show that SST variability had a greater impact on the slowdown in the rate of increase in SAT from 1940 to 1975 (−1.2 × 10⁻³ °C/year) than from 1998 to 2012 (−5.7 × 10⁻³ °C/year). The SAT change over the period 1940–1975 (1.0 × 10⁻⁴ °C/year) was less affected by the climatological SST forcing experiment than that over the period 1998–2012 (−5.0 × 10⁻⁴ °C/year). Compared with 1940–1975, the SAT change over 1998–2012 was strongly affected by long-term global SAT warming. The distributions of wind stress and atmospheric pressure both indicate that, although the eastern Pacific Ocean played an important role in influencing the global SAT trend between 1998 and 2012, it made little contribution to changes in global SAT between 1940 and 1975. In addition, from the perspective of seasonality, the interdecadal variation of SAT over these two periods was a seasonally dependent phenomenon. Over the period 1940–1975, the annual SAT trend essentially followed the summer SAT trend, whereas between 1998 and 2012, winter was the dominant season of annual SAT change.
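Trend figures like those quoted above (e.g., −1.2 × 10⁻³ °C/year) are least-squares linear trends of a temperature time series; a minimal sketch of that computation:

```python
import numpy as np

def sat_trend(years, sat):
    """Least-squares linear trend of surface air temperature, in °C/year."""
    slope, _intercept = np.polyfit(years, sat, 1)
    return float(slope)
```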
Deep Learning‐Based Sea Surface Roughness Parameterization Scheme Improves Sea Surface Wind Forecast
Accurate offshore surface wind forecasting is crucial for navigation safety and disaster prevention. However, significant biases exist in forecasts of sea surface winds due to uncertainties in estimating sea surface roughness. In this study, we propose a deep learning-based scheme (DL2023) for estimating sea surface roughness and integrate it into a regionally coupled ocean-atmosphere-wave model. Single-point experiments demonstrate that DL2023 achieves a remarkable 50% reduction in Root Mean Square Error (RMSE) compared to four traditional schemes. During five typhoon cases in August 2020, compared to the four traditional schemes, the RMSEs of forecasted surface winds using DL2023 are reduced by 6.02%–14.75%, 11.17%–18.30%, and 11.91%–19.46% at lead times of 24, 48, and 72 hr, respectively. Thus, the DL2023 scheme, trained using data from the Atlantic Ocean, successfully improves the forecast of surface winds over the Northwest Pacific Ocean.
Plain Language Summary: A novel sea surface roughness scheme, the deep learning-based scheme (DL2023), has been developed using deep learning techniques to improve the accuracy of surface wind forecasts in the Northwest Pacific Ocean (NWPO). In this study, we implemented the DL2023 scheme in an ocean-atmosphere-wave coupled model of the NWPO. The results revealed a substantial improvement in the forecasted surface winds of the coupled model during the passage of five typhoons in August 2020. Notably, despite being trained on Atlantic Ocean observations, the DL2023 scheme demonstrated its capability to improve surface wind predictions in the NWPO, suggesting its potential applicability in coupled models of other regions.
Key Points:
  • A deep learning (DL)-based scheme for estimating sea surface roughness is proposed
  • With the DL-based scheme, the accuracy of a coupled model in forecasting surface winds is significantly improved
  • The deep neural network trained using data from the Atlantic Ocean can be transferred to the Northwest Pacific Ocean
Making Low-Resolution Satellite Images Reborn: A Deep Learning Approach for Super-Resolution Building Extraction
Existing methods for building extraction from remotely sensed images rely strongly on aerial or satellite images with very high resolution, which are usually limited in spatiotemporal accessibility and cost. In contrast, relatively low-resolution images have better spatial and temporal availability but cannot directly support fine- and/or high-resolution building extraction. In this paper, based on image super-resolution and segmentation techniques, we propose a two-stage framework (SRBuildingSeg) for super-resolution (SR) building extraction from relatively low-resolution remotely sensed images. SRBuildingSeg can fully utilize the inherent information of the given low-resolution images to achieve high-resolution building extraction. In contrast to existing building extraction methods, we first utilize an internal pairs generation module (IPG) to obtain SR training datasets from the given low-resolution images and an edge-aware super-resolution module (EASR) to improve the perceptual features, followed by a dual-encoder building segmentation module (DES). Both qualitative and quantitative experimental results demonstrate that our proposed approach achieves high-resolution (e.g., 0.5 m) building extraction results at 2×, 4×, and 8× SR. Our approach outperforms eight other methods in mean Intersection over Union (mIoU) by 9.38%, 8.20%, and 7.89% at SR ratio factors of 2, 4, and 8, respectively. The results indicate that the edges and borders reconstructed in super-resolved images play a pivotal role in subsequent building extraction and reveal the potential of the proposed approach for super-resolution building extraction.
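The mIoU metric used in the comparison above is computed per class and averaged; a minimal sketch for integer label maps:

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean Intersection over Union across classes for label maps."""
    pred, target = np.asarray(pred), np.asarray(target)
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```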