543 result(s) for "neural network data assimilation"
A Unified Neural Background‐Error Covariance Model for Midlatitude and Tropical Atmospheric Data Assimilation
Estimating and modeling background‐error covariances remains a core challenge in variational data assimilation (DA). Operational systems typically approximate these covariances by transformations that separate geostrophically balanced components from unbalanced inertia‐gravity modes—an approach well‐suited to the midlatitudes but less applicable in the tropics, where different physical balances prevail. This study estimates background‐error covariances in a reduced‐dimension latent space learned by a neural‐network autoencoder (AE). The AE was trained on 40 years of ERA5 reanalysis data, enabling it to capture flow‐dependent atmospheric balances from a diverse set of weather states. We demonstrate that performing DA in the latent space yields analysis increments that preserve multivariate horizontal and vertical physical balances in both the tropical and midlatitude atmosphere. Assimilating a single 500 hPa geopotential height observation in the midlatitudes produces increments consistent with geostrophic and thermal wind balance, while assimilating a total column water vapor observation with a positive departure in the nearly‐saturated tropical atmosphere generates an increment resembling the tropical response to (latent) heat‐induced perturbations. The resulting increments are localized, flow‐dependent, and shaped by orography and land‐sea contrasts. Forecasts initialized from these analyses exhibit realistic weather evolution, including the excitation of an eastward‐propagating Kelvin wave in the tropics. Finally, we explore the transition from using synthetic ensembles and a climatology‐based background‐error covariance matrix to an operational ensemble of data assimilations. Despite significant compression‐induced variance loss in some variables, latent‐space assimilation produces balanced, flow‐dependent increments—highlighting its potential for ensemble‐based latent‐space 4D‐Var.

Plain Language Summary: Accurately estimating the current state of the atmosphere is essential for reliable weather forecasting. This estimate, called the initial condition, is produced through data assimilation (DA)—a process that combines a previous short‐range forecast with new observations. An important part of this process involves describing how forecast errors relate across space and between atmospheric variables; this relationship determines how the influence of each new observation is spread in a physically consistent way. Traditional weather models rely on statistical or theoretical assumptions to describe these error relationships. While effective in the midlatitudes, these assumptions often fail in the tropics, where different physical processes dominate. In this study, we explore a new approach that learns a simplified, low‐dimensional representation of the atmosphere using a neural network trained on 40 years of reconstructed weather data. We show that performing DA of new observations in this learned "latent space" produces realistic updates that respect known atmospheric balances in both the tropics and midlatitudes and adapt to the current weather situation. It also works with forecast ensembles used at operational weather centers. These results suggest that DA in latent space could offer a more flexible and efficient way to improve weather forecasts.
Key Points:
  • The background‐error covariances in a machine learning‐based variational data assimilation framework are studied
  • The method captures both tropical and midlatitude atmospheric balances in the background‐error covariance model
  • The approach works with both climatological and ensemble‐based background‐error covariance matrices
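As a rough illustration of the latent-space assimilation idea described above, the sketch below minimizes a 3D-Var cost whose background term lives in a toy autoencoder's latent space while the observation term is evaluated through the decoder. The linear `decode`, the point-sampling observation operator, and all dimensions are illustrative stand-ins, not the authors' system.

```python
# Toy latent-space 3D-Var: background term in latent space, observation
# term through the decoder. `decode`, B, R, and the observations are all
# synthetic stand-ins for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_state, n_latent, n_obs = 200, 16, 5

W_dec = rng.standard_normal((n_state, n_latent)) / np.sqrt(n_latent)
decode = lambda z: W_dec @ z                      # latent vector -> full state

obs_idx = rng.choice(n_state, n_obs, replace=False)
B_inv = np.eye(n_latent)                          # inverse latent background cov (toy)
R_inv = np.eye(n_obs) / 0.1                       # inverse observation cov (toy)

z_b = rng.standard_normal(n_latent)               # encoded background state
y = decode(z_b + 0.5 * rng.standard_normal(n_latent))[obs_idx]  # synthetic obs

def cost(z):
    dz = z - z_b                                  # departure from background
    dy = y - decode(z)[obs_idx]                   # innovation through the decoder
    return 0.5 * dz @ B_inv @ dz + 0.5 * dy @ R_inv @ dy

z_a = minimize(cost, z_b, method="L-BFGS-B").x    # latent-space analysis
increment = decode(z_a) - decode(z_b)             # full-space analysis increment
```

With a nonlinear decoder in place of `W_dec`, the same cost function yields the balanced, flow-dependent increments the paper describes.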
Convcast: An embedded convolutional LSTM based architecture for precipitation nowcasting using satellite data
Nowcasting of precipitation is a difficult spatiotemporal task because of the non-uniform characterization of meteorological structures over time. Recently, convolutional LSTMs have been shown to be successful in solving various complex spatiotemporal problems. In this research, we propose a novel precipitation nowcasting architecture, 'Convcast', to predict various short-term precipitation events using satellite data. We train Convcast with ten consecutive NASA IMERG precipitation fields at 30-minute intervals and use the trained neural network model to predict the eleventh field from each ten-field sequence. Subsequently, the predicted precipitation fields are used iteratively for precipitation nowcasting up to a 150-minute lead time. Convcast achieves an overall accuracy of 0.93 with an RMSE of 0.805 mm/h for a 30-minute lead time, and an overall accuracy of 0.87 with an RMSE of 1.389 mm/h for a 150-minute lead time. Experiments on the test dataset demonstrate that Convcast consistently outperforms other state-of-the-art optical-flow-based nowcasting algorithms. Results from this research can be used for nowcasting of weather events from satellite data as well as for future on-board processing of precipitation data.
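The sketch below shows a minimal ConvLSTM cell of the kind Convcast builds on, rolled over ten half-hourly frames to predict the eleventh. Hyperparameters, tensor shapes, and the 1x1 output head are illustrative assumptions, not the paper's configuration.

```python
# Minimal ConvLSTM cell in PyTorch; a single convolution produces all four
# gate maps. Shapes and hyperparameters are illustrative, not Convcast's.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)       # convolutional LSTM update
        return h, c

cell, head = ConvLSTMCell(1, 16), nn.Conv2d(16, 1, 1)
frames = torch.randn(2, 10, 1, 64, 64)             # ten half-hourly input frames
h = torch.zeros(2, 16, 64, 64)
c = torch.zeros(2, 16, 64, 64)
for t in range(frames.shape[1]):                   # roll over the sequence
    h, c = cell(frames[:, t], (h, c))
pred = head(h)                                     # eleventh (next) frame
# Feeding `pred` back in as an input frame extends the lead time iteratively,
# which is how 150-minute nowcasts are reached from 30-minute steps.
```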
Development and Interpretation of a Neural-Network-Based Synthetic Radar Reflectivity Estimator Using GOES-R Satellite Observations
The objective of this research is to develop techniques for assimilating GOES-R series observations in precipitating scenes for the purpose of improving short-term convective-scale forecasts of high-impact weather hazards. Whereas one approach is radiance assimilation, the information content of GOES-R radiances from its Advanced Baseline Imager saturates in precipitating scenes, and radiance assimilation does not make use of lightning observations from the GOES Lightning Mapper. Here, a convolutional neural network (CNN) is developed to transform GOES-R radiances and lightning into synthetic radar reflectivity fields to make use of existing radar assimilation techniques. We find that the ability of CNNs to utilize spatial context is essential for this application and offers breakthrough improvement in skill compared to traditional pixel-by-pixel based approaches. To understand the improved performance, we use a novel analysis method that combines several techniques, each providing different insights into the network’s reasoning. Channel-withholding experiments and spatial information–withholding experiments are used to show that the CNN achieves skill at high reflectivity values from the information content in radiance gradients and the presence of lightning. The attribution method, layerwise relevance propagation, demonstrates that the CNN uses radiance and lightning information synergistically, where lightning helps the CNN focus on which neighboring locations are most important. Synthetic inputs are used to quantify the sensitivity to radiance gradients, showing that sharper gradients produce a stronger response in predicted reflectivity. Lightning observations are found to be uniquely valuable for their ability to pinpoint locations of strong radar echoes.
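As a hedged illustration of the paper's core design point, the sketch below is a small fully convolutional network mapping stacked ABI radiance channels plus a GLM lightning-density channel to a reflectivity field; the 3x3 kernels give each output pixel the spatial context the study found essential, in contrast to a purely pixel-by-pixel (1x1-only) regression. Channel counts and layer sizes are assumptions.

```python
# Small fully convolutional net: stacked ABI radiances plus a lightning
# channel in, synthetic reflectivity out. Channel counts and depths are
# assumed; only the 3x3-kernel spatial-context idea comes from the paper.
import torch
import torch.nn as nn

n_abi, n_glm = 4, 1                                # assumed input channel counts
net = nn.Sequential(
    nn.Conv2d(n_abi + n_glm, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),                           # per-pixel reflectivity (dBZ)
)
x = torch.randn(1, n_abi + n_glm, 128, 128)        # radiances + lightning density
refl = net(x)                                      # (1, 1, 128, 128) synthetic field
```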
Temperature Prediction Using the Missing Data Refinement Model Based on a Long Short-Term Memory Neural Network
In this paper, we propose a new temperature prediction model based on deep learning using real observed weather data. Training such a model requires a large amount of data, and these data must be free of defects. However, collecting weather data is inherently limited, since missed measurements cannot be taken retroactively; the collected data are therefore apt to be incomplete, with random or extended gaps. The proposed temperature prediction model therefore includes a refinement function that restores missing weather data. In addition, since temperature is seasonal, the proposed model utilizes a long short-term memory (LSTM) neural network, a kind of recurrent neural network known to be suitable for modeling time-series data. Furthermore, different configurations of LSTMs are investigated so that the proposed LSTM-based model can reflect the time-series traits of the temperature data. In particular, when part of the data is detected as missing, it is restored using the proposed model's refinement function. After all the missing data are refined, the LSTM-based model is retrained on the refined data. Finally, the proposed LSTM-based temperature prediction model predicts temperature at three lead times: 6, 12, and 24 h. The model is further extended to predict temperatures 7 and 14 days ahead. The performance of the proposed model is measured by its root-mean-squared error (RMSE) and compared with the RMSEs of a feedforward deep neural network, a conventional LSTM neural network without any refinement function, and a mathematical model currently used by the meteorological office in Korea. The proposed LSTM-based model employing LSTM refinement achieves the lowest RMSEs for 6, 12, and 24 h temperature prediction as well as for 7 and 14 day prediction, compared to other DNN-based and LSTM-based models with either no refinement or linear interpolation. Moreover, the prediction accuracy of the proposed model is higher than that of the Unified Model (UM) Local Data Assimilation and Prediction System (LDAPS) for 24 h temperature predictions.
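A minimal sketch of the refine-then-retrain loop, assuming an already trained single-variable LSTM: gaps in the series are filled with the model's own one-step predictions so the completed series can be used for retraining. The window length, gap rate, and all names are hypothetical.

```python
# Refine-then-retrain sketch: fill gaps with the LSTM's own predictions,
# then the gap-free series can be used to retrain. Names are hypothetical.
import torch
import torch.nn as nn

class TempLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, time, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])               # next-step temperature

model, window = TempLSTM(), 24
series = torch.randn(500)                          # hourly temperatures (toy)
missing = torch.rand(500) < 0.05                   # simulated random gaps

with torch.no_grad():                              # refinement pass
    for t in torch.nonzero(missing).flatten().tolist():
        if t >= window:
            ctx = series[t - window:t].view(1, window, 1)
            series[t] = model(ctx).item()          # model's own gap estimate
# `series` is now gap-free and can be used to retrain `model`.
```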
Towards hybrid modeling of the global hydrological cycle
State-of-the-art global hydrological models (GHMs) exhibit large uncertainties in hydrological simulations due to the complexity, diversity, and heterogeneity of land surface and subsurface processes, as well as the scale dependency of these processes and associated parameters. Recent progress in machine learning, fueled by relevant Earth observation data streams, may help overcome these challenges. But machine learning methods are not bound by physical laws, and their interpretability is limited by design. In this study, we exemplify a hybrid approach to global hydrological modeling that exploits the data adaptivity of neural networks for representing uncertain processes within a model structure based on physical principles (e.g., mass conservation) that form the basis of GHMs. This combination of machine learning and physical knowledge can potentially lead to data-driven, yet physically consistent and partially interpretable hybrid models. The hybrid hydrological model (H2M), extended from Kraft et al. (2020), simulates the dynamics of snow, soil moisture, and groundwater storage globally at 1° spatial resolution and a daily time step. Water fluxes are simulated by an embedded recurrent neural network. We trained the model simultaneously against observational products of terrestrial water storage variations (TWS), grid cell runoff (Q), evapotranspiration (ET), and snow water equivalent (SWE) with a multi-task learning approach. We find that H2M is capable of reproducing key patterns of global water cycle components, with model performance at least on par with four state-of-the-art GHMs, which provide a necessary benchmark for H2M. The neural-network-learned hydrological responses of evapotranspiration and grid cell runoff to antecedent soil moisture states are qualitatively consistent with our understanding and theory. The simulated contributions of groundwater, soil moisture, and snowpack variability to TWS variations are plausible and within the ranges of traditional GHMs. H2M identifies a somewhat stronger role of soil moisture for TWS variations in transitional and tropical regions compared to GHMs. We conclude that H2M provides a new data-driven perspective on modeling the global hydrological cycle and physical responses with machine-learned parameters that is consistent with and complementary to existing global modeling frameworks. Hybrid modeling approaches have large potential to better leverage ever-increasing Earth observation data streams to advance our understanding of the Earth system and our capabilities to monitor and model it.
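The sketch below illustrates the hybrid principle in one toy bucket: a neural network predicts bounded flux fractions while an explicit balance equation conserves mass, so storage can never go negative. The single-storage structure and variable names are simplifications for illustration, not H2M itself.

```python
# One toy storage bucket: the network predicts flux fractions in (0, 1),
# the balance equation conserves mass. Not H2M's actual structure.
import torch
import torch.nn as nn

flux_net = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 2))

def step(storage, precip, temp):
    frac = torch.sigmoid(flux_net(torch.stack([storage, precip, temp])))
    et = frac[0] * storage                         # evapotranspiration
    runoff = frac[1] * (storage - et)              # runoff from what remains
    return storage + precip - et - runoff, et, runoff  # mass-conserving update

s = torch.tensor(10.0)                             # initial storage (mm)
for p, t in [(2.0, 15.0), (0.0, 20.0)]:            # two daily forcing steps
    s, et, q = step(s, torch.tensor(p), torch.tensor(t))
# Training would compare simulated storage and fluxes against TWS, Q, ET,
# and SWE products in a multi-task loss, with gradients reaching flux_net.
```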
Evaluating the streamflow simulation capability of PERSIANN-CDR daily rainfall products in two river basins on the Tibetan Plateau
On the Tibetan Plateau, limited ground-based rainfall information owing to the harsh environment poses great challenges to hydrological studies. Satellite-based rainfall products, which offer better coverage than both the radar network and rain gauges on the Tibetan Plateau, are suitable alternatives for investigating hydrological processes and climate change. In this study, a newly developed daily satellite-based precipitation product, Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks – Climate Data Record (PERSIANN-CDR), is used as input to a hydrologic model to simulate streamflow in the upper Yellow and Yangtze River basins on the Tibetan Plateau. The results show that streamflows simulated using PERSIANN-CDR precipitation and Global Land Data Assimilation System (GLDAS) precipitation are closer to observations than those using interpolation of the limited gauge-based precipitation in the upper Yangtze River basin. The streamflow simulated using gauge-based precipitation is higher than the observed streamflow during the wet season. In the upper Yellow River basin, gauge-based, GLDAS, and PERSIANN-CDR precipitation perform similarly well in simulating streamflow. This evaluation of streamflow simulation capability indicates that the PERSIANN-CDR rainfall product has good potential to serve as a reliable dataset and an alternative to a limited gauge network for long-term hydrological and climate studies on the Tibetan Plateau.
Learning Regionalization Using Accurate Spatial Cost Gradients Within a Differentiable High-Resolution Hydrological Model: Application to the French Mediterranean Region
Estimating spatially distributed hydrological parameters in ungauged catchments poses a challenging regionalization problem and requires imposing spatial constraints given the sparsity of discharge data. A possible approach is to search for a transfer function that quantitatively relates physical descriptors to conceptual model parameters. This paper introduces a Hybrid Data Assimilation and Parameter Regionalization (HDA-PR) approach incorporating learnable regionalization mappings, based on either multi-linear regression or artificial neural networks (ANNs), into a differentiable hydrological model. This approach demonstrates how two differentiable codes can be linked and their gradients chained, enabling the exploitation of heterogeneous data sets across extensive spatio-temporal computational domains within a high-dimensional regionalization context, using accurate adjoint-based gradients. The inverse problem is tackled with a multi-gauge calibration cost function accounting for information from multiple observation sites. HDA-PR was tested on high-resolution, hourly and kilometric regional modeling of 126 flash-flood-prone catchments in the French Mediterranean region. The results highlight the strong regionalization performance of HDA-PR, especially in the most challenging upstream-to-downstream extrapolation scenario with an ANN, achieving median Nash-Sutcliffe efficiency (NSE) scores from 0.6 to 0.71 for spatial, temporal, and spatio-temporal validations, and improving NSE by up to 30% on average compared to the baseline model calibrated with lumped parameters. Multiple evaluation metrics based on flood-oriented hydrological signatures also indicate that an ANN leads to better performance than a multi-linear regression in a validation context. The ANN learns a non-linear descriptors-to-parameters mapping, which provides better model controllability than a linear mapping in complex calibration cases.
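A minimal sketch of the learnable regionalization mapping, under the assumption that the hydrological model is differentiable end to end: an ANN maps per-cell physical descriptors to bounded conceptual parameters, and the gradient of a multi-gauge cost flows back through the (here trivial) model into the ANN weights. All names and the toy `hydro_model` are placeholders for HDA-PR's actual coupled codes.

```python
# Learnable regionalization: an ANN maps per-cell descriptors to bounded
# parameters; gradients of a multi-gauge cost flow back into the ANN.
# `hydro_model` is a trivial differentiable placeholder.
import torch
import torch.nn as nn

n_cells, n_desc, n_params = 100, 5, 3
mapping = nn.Sequential(nn.Linear(n_desc, 32), nn.ReLU(),
                        nn.Linear(32, n_params))

descriptors = torch.randn(n_cells, n_desc)         # slope, soil, land cover, ...
params = torch.sigmoid(mapping(descriptors))       # conceptual parameters in (0, 1)

def hydro_model(theta):                            # stand-in for the real model
    return theta.sum(dim=1)                        # "discharge" per cell

q_obs = torch.rand(n_cells)
loss = ((hydro_model(params) - q_obs) ** 2).mean() # multi-gauge calibration cost
loss.backward()                                    # chained gradient reaches `mapping`
```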
Temperature Prediction Based on Bidirectional Long Short-Term Memory and Convolutional Neural Network Combining Observed and Numerical Forecast Data
Weather is affected by a complex interplay of factors, including topography, location, and time. Predicting temperature in Korea requires data from multiple regions. To this end, we investigate a deep-neural-network-based temperature prediction model that uses time-series weather data obtained from an automatic weather station together with image data from a regional data assimilation and prediction system (RDAPS). To accommodate these different types of data in a single model, a bidirectional long short-term memory (BLSTM) model and a convolutional neural network (CNN) are chosen to represent the features of the time-series observed data and the RDAPS image data, respectively. The two types of features are combined to produce temperature predictions for up to 14 days ahead. The performance of the proposed temperature prediction model is evaluated by objective measures, including the root mean squared error and mean bias error. The experiments demonstrate that the proposed model, combining both the observed and RDAPS image data, outperforms the BLSTM-based model using observed data alone and the CNN-BLSTM-based model using RDAPS image data alone in all performance measures for all prediction periods.
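A minimal sketch of the two-branch fusion described above: a bidirectional LSTM encodes station time series, a CNN encodes RDAPS-style image data, and the concatenated features feed a 14-day temperature head. All dimensions are illustrative assumptions.

```python
# Two-branch fusion: BLSTM for station time series, CNN for gridded image
# data, concatenated features drive the forecast head. Sizes are assumed.
import torch
import torch.nn as nn

blstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True,
                bidirectional=True)
cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(64 + 16, 14)                      # one value per forecast day

ts = torch.randn(4, 48, 8)                         # (batch, hours, variables)
img = torch.randn(4, 3, 32, 32)                    # (batch, channels, H, W)
seq_feat, _ = blstm(ts)                            # (4, 48, 64)
fused = torch.cat([seq_feat[:, -1], cnn(img)], dim=1)
pred = head(fused)                                 # (4, 14) daily temperatures
```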
Machine Learning for Online Sea Ice Bias Correction Within Global Ice‐Ocean Simulations
In this study, we perform online sea ice bias correction within a Geophysical Fluid Dynamics Laboratory global ice‐ocean model. For this, we use a convolutional neural network (CNN) developed in a previous study (Gregory et al., 2023, https://doi.org/10.1029/2023ms003757) for predicting sea ice concentration (SIC) data assimilation (DA) increments. An initial implementation of the CNN shows systematic improvements in SIC biases relative to the free‐running model; however, large summertime errors remain. We show that these residual errors can be significantly reduced with a novel sea ice data augmentation approach. This approach applies sequential CNN and DA corrections to a new simulation over the training period, which then provides a new training data set for refining the weights of the initial network. We propose that this machine‐learned correction scheme could be utilized for generating improved initial conditions, and also for real‐time sea ice bias correction within seasonal‐to‐subseasonal sea ice forecasts.

Plain Language Summary: Climate models contain errors which often lead to predictions that are consistently out of agreement with what we observe in reality. In some cases we know the origin of these errors, for example, predicting too much sea ice as a result of consistently cool ocean temperatures. In reality, however, there are typically numerous model errors interacting across the atmosphere, ocean, and sea ice, and manually parsing through large volumes of climate model data in an attempt to isolate these errors in time and space is highly impractical. Machine learning, on the other hand, is a framework well suited to this task. In this work we take a machine learning model which, at any given moment, ingests information about a climate model's atmosphere, ocean, and sea ice conditions and predicts how much error there is in the climate model's representation of sea ice, without seeing any actual sea ice observations. We use this to adjust the sea ice conditions in one particular climate model as it runs forward in time making predictions, and we find that this significantly reduces the model's sea ice errors globally.

Key Points:
  • We use a convolutional neural network (CNN) to perform online sea ice bias correction within global ice‐ocean simulations
  • The CNN systematically reduces the free‐running model bias in both the Arctic and Antarctic
  • The online performance can be improved by combining CNN and data assimilation corrections to iteratively augment the training data
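The sketch below illustrates the online-correction loop in the abstract, assuming a toy model stepper: at each interval the CNN predicts a SIC increment from the current model state, and the increment is applied before the model steps forward. `model_step`, the CNN architecture, and the channel layout are hypothetical stand-ins for the GFDL setup.

```python
# Online correction loop: the CNN predicts a SIC increment from the model
# state, applied before each forward step. `model_step`, the CNN, and the
# channel layout are toy stand-ins, not the GFDL configuration.
import torch
import torch.nn as nn

cnn = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))  # predicted SIC increment

def model_step(state):                             # toy free-running model
    return state + 0.01 * torch.randn_like(state)

state = torch.rand(1, 4, 90, 180)                  # SIC, SST, winds, ... (toy)
for day in range(5):
    with torch.no_grad():
        incr = cnn(state)                          # ML analogue of a DA increment
    state[:, :1] = (state[:, :1] + incr).clamp(0, 1)  # correct the SIC channel
    state = model_step(state)                      # model continues from corrected state
```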
A New Method for Reconstruction of Regional Three‐Dimensional Electron Density Distributions Using AI‐Based Data Assimilation Method and Incoherent Scatter Radar Measurements
The ionosphere's dynamic structure affects electromagnetic radiation by altering radio wave propagation, impacting daily communications. The state of the ionosphere is primarily characterized by electron density. This paper proposes a method to construct three‐dimensional (3‐D) electron density distributions with arbitrary spatiotemporal resolution in ISR observational regions. The method, termed Artificial Intelligence‐based data assimilation (AI‐Assim), integrates data assimilation directly into a neural network. It assimilates electron density from the IRI‐2020 model to fill ISR observation gaps. Experiments conducted using the Sanya Incoherent Scatter Radar (SYISR) in Hainan, China, successfully constructed a 3‐D electron density structure over the region with a 0.2° latitude/longitude resolution and 1 km height resolution. The method's effectiveness was validated by calculating the mean square error and comparing the results with digisonde measurements.

Plain Language Summary: This study leverages the most powerful ionospheric observation tool, the ISR, to construct 3‐D electron density distributions with arbitrary spatial resolution. Relying solely on empirical models often leads to accuracy issues, while 3‐D electron density models based purely on observational methods typically suffer from low resolution. All observational methods encounter difficulties in achieving continuous, high-spatial-resolution monitoring of the entire sky, and ISR is one of the most effective techniques available. However, even with interpolation methods, the coverage area of ISR remains limited. This study therefore explores a method that uses a neural network to assimilate electron density values from the IRI‐2020 model, aiming to fill the gaps in ISR detection. By assimilating International Reference Ionosphere values to approximate observed values, the accuracy of the 3‐D electron density results is enhanced. Multiple iterations of AI‐Assim enable the construction of 3‐D electron density distributions with arbitrary spatial resolution.

Key Points:
  • We developed a method for constructing 3‐D electron density distributions with arbitrary spatiotemporal resolution at ISR stations
  • The method, termed AI‐Assim, continuously assimilates electron density from the IRI‐2020 model to fill ISR observation gaps
  • Experiments using SYISR data achieved a 3‐D electron density model with 0.2° map resolution and 1 km height resolution
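A hedged sketch of the gap-filling idea: a coordinate-based network is fitted to ISR electron densities where they exist and relaxed toward IRI-2020 values in the gaps, after which the fitted field can be queried on an arbitrarily fine grid. The loss weighting, both data arrays, and the network shape are synthetic placeholders, not the AI-Assim implementation.

```python
# Coordinate-based gap filling: fit ISR densities where observed, relax
# toward IRI-2020 in the gaps, then query at any resolution. All data
# arrays and weights here are synthetic placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))              # (lat, lon, alt) -> density

coords = torch.rand(2048, 3)                       # normalized lat/lon/height
ne_iri = torch.rand(2048, 1)                       # IRI-2020 background (toy)
ne_isr = ne_iri + 0.1 * torch.randn(2048, 1)       # sparse "observations" (toy)
observed = torch.rand(2048) < 0.3                  # ISR coverage mask

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(200):
    pred = net(coords)
    # Observations dominate where available; the empirical model fills gaps.
    loss = ((pred - ne_isr)[observed] ** 2).mean() \
         + 0.1 * ((pred - ne_iri)[~observed] ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# The fitted `net` can now be evaluated on an arbitrarily fine 3-D grid.
```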