1,616 result(s) for "Space and time Statistical methods."
Robust Bayesian Inference via Coarsening
The standard approach to Bayesian inference is based on the assumption that the distribution of the data belongs to the chosen model class. However, even a small violation of this assumption can have a large impact on the outcome of a Bayesian procedure. We introduce a novel approach to Bayesian inference that improves robustness to small departures from the model: rather than conditioning on the event that the observed data are generated by the model, one conditions on the event that the model generates data close to the observed data, in a distributional sense. When closeness is defined in terms of relative entropy, the resulting "coarsened" posterior can be approximated by simply tempering the likelihood, that is, by raising the likelihood to a fractional power; thus, inference can usually be implemented via standard algorithms, and one can even obtain analytical solutions when using conjugate priors. Some theoretical properties are derived, and we illustrate the approach with real and simulated data using mixture models and autoregressive models of unknown order. Supplementary materials for this article are available online.
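The tempering trick described in the abstract above is easiest to see in a conjugate setting. The sketch below is illustrative only (not the authors' code; the fractional power `alpha` is treated as a free parameter here): raising a Bernoulli likelihood to a power alpha in (0, 1] keeps the Beta posterior in closed form, with the sufficient counts simply scaled by alpha.

```python
def tempered_beta_posterior(successes, failures, a, b, alpha):
    """Beta(a, b) prior with the Bernoulli likelihood raised to the
    power alpha in (0, 1]; the posterior stays Beta, with the
    observed counts discounted by alpha."""
    return a + alpha * successes, b + alpha * failures

# alpha = 1 recovers the standard posterior; alpha < 1 down-weights
# the data, which is the source of the robustness to misspecification.
std = tempered_beta_posterior(70, 30, 1, 1, 1.0)    # (71.0, 31.0)
coarse = tempered_beta_posterior(70, 30, 1, 1, 0.5) # (36.0, 16.0)
```

With alpha = 1 the standard posterior is recovered; alpha < 1 effectively discounts the sample size, so no small subset of the data can dominate the posterior.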
Global Increasing Trends in Annual Maximum Daily Precipitation
This study investigates the presence of trends in annual maximum daily precipitation time series obtained from a global dataset of 8326 high-quality land-based observing stations with more than 30 years of record over the period from 1900 to 2009. Two complementary statistical techniques were adopted to evaluate the possible nonstationary behavior of these precipitation data. The first was a Mann–Kendall nonparametric trend test, and it was used to evaluate the existence of monotonic trends. The second was a nonstationary generalized extreme value analysis, and it was used to determine the strength of association between the precipitation extremes and globally averaged near-surface temperature. The outcomes are that statistically significant increasing trends can be detected at the global scale, with close to two-thirds of stations showing increases. Furthermore, there is a statistically significant association with globally averaged near-surface temperature, with the median intensity of extreme precipitation changing in proportion with changes in global mean temperature at a rate of between 5.9% and 7.7% K⁻¹, depending on the method of analysis. This ratio was robust irrespective of record length or time period considered and was not strongly biased by the uneven global coverage of precipitation data. Finally, there is a distinct meridional variation, with the greatest sensitivity occurring in the tropics and higher latitudes and the minima around 13°S and 11°N. The greatest uncertainty was near the equator because of the limited number of sufficiently long precipitation records, and there remains an urgent need to improve data collection in this region to better constrain future changes in tropical precipitation.
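The Mann–Kendall test used above is straightforward to implement. A minimal pure-Python sketch (without the tie correction that production implementations apply):

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test, no tie correction: returns (S, z).
    S sums sign(x[j] - x[i]) over all pairs i < j; positive z
    indicates an increasing monotonic trend."""
    n = len(x)
    s = sum((xj > xi) - (xj < xi)
            for i, xi in enumerate(x) for xj in x[i + 1:])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

s, z = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8])  # strictly increasing: S = 28
```

For a strictly increasing series every pair contributes +1, so S equals the number of pairs n(n-1)/2 and z is well above the usual 5% critical value of 1.96.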
OpenMx 2.0: Extended Structural Equation and Statistical Modeling
The new software package OpenMx 2.0 for structural equation and other statistical modeling is introduced and its features are described. OpenMx is evolving in a modular direction and now allows a mix-and-match computational approach that separates model expectations from fit functions and optimizers. Major backend architectural improvements include a move to swappable open-source optimizers such as the newly written CSOLNP. Entire new methodologies such as item factor analysis and state space modeling have been implemented. New model expectation functions including support for the expression of models in LISREL syntax and a simplified multigroup expectation function are available. Ease-of-use improvements include helper functions to standardize model parameters and compute their Jacobian-based standard errors, access to model components through standard R $ mechanisms, and improved tab completion from within the R Graphical User Interface.
Unsupervised Anomaly Detection in Spatio‐Temporal Stream Network Sensor Data
The use of in‐situ digital sensors for water quality monitoring is becoming increasingly common worldwide. While these sensors provide near real‐time data for science, the data are prone to technical anomalies that can undermine the trustworthiness of the data and the accuracy of statistical inferences, particularly in spatial and temporal analyses. Here we propose a framework for detecting anomalies in sensor data recorded in stream networks, which takes advantage of spatial and temporal autocorrelation to improve detection rates. The proposed framework involves the implementation of effective data imputation to handle missing data, alignment of time‐series to address temporal disparities, and the identification of water quality events. We explore the effectiveness of a suite of state‐of‐the‐art statistical methods including posterior predictive distributions, finite mixtures, and Hidden Markov Models (HMM). We showcase the practical implementation of automated anomaly detection in near‐real time by employing a Bayesian recursive approach. This demonstration is conducted through a comprehensive simulation study and a practical application to a substantive case study situated in the Herbert River, located in Queensland, Australia, which flows into the Great Barrier Reef. We found that methods such as posterior predictive distributions and HMM produce the best performance in detecting multiple types of anomalies. Utilizing data from multiple sensors deployed relatively near one another enhances the ability to distinguish between water quality events and technical anomalies, thereby significantly improving the accuracy of anomaly detection. Thus, uncertainty and biases in water quality reporting, interpretation, and modeling are reduced, and the effectiveness of subsequent management actions improved.

Plain Language Summary
Digital sensors are commonly used to monitor water quality in rivers and streams, providing real‐time data for scientific purposes. However, these sensors are prone to technical anomalies that can affect data reliability and statistical analyses. We propose a framework for detecting anomalies in sensor data from stream networks by leveraging spatial and temporal relationships to improve detection rates. Our framework includes effective methods for handling missing data, aligning time‐series, and identifying water quality events. We evaluate advanced statistical methods demonstrating the practical implementation of automated anomaly detection in near‐real time. We validate our framework through simulations and a case study in the Herbert River, Queensland, Australia. Results show the effectiveness of the suggested methods in detecting various anomalies. This reduction in uncertainty and biases improves water quality reporting, interpretation, and management actions.

Key Points
  • Technical anomalies in water quality data from in‐situ sensors are common, posing challenges in their detection that often require substantial time and effort
  • We present a flexible anomaly detection framework used to automatically detect multiple anomaly types using empirical model residuals
  • To facilitate real‐time monitoring, we have developed a Bayesian recursive stream‐network model that effectively captures spatio‐temporal dependencies within the data
  • The timely identification of anomalies plays a crucial role in reducing uncertainties and biases, thereby enabling a more accurate understanding and management of water resources
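As a much-simplified illustration of residual-based anomaly flagging (a crude stand-in for the posterior predictive and HMM methods described above; the window size and threshold below are arbitrary choices, not values from the paper):

```python
import statistics

def flag_anomalies(series, window=10, k=4.0):
    """Flag index i when series[i] falls outside mean +/- k*stdev of
    the preceding `window` observations. A toy predictive-interval
    check; real methods model spatio-temporal dependence as well."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.fmean(hist)
        sd = statistics.stdev(hist)
        if abs(series[i] - mu) > k * sd:
            flags.append(i)
    return flags

y = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1,
     10.0, 25.0, 10.1]   # a technical spike at index 11
```

Note the weakness this toy shares with naive methods: once the spike enters the history window it inflates the estimated spread, which is one reason the paper's recursive Bayesian treatment is preferable.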
Approximate Bayesian computation with the Wasserstein distance
A growing number of generative statistical models do not permit the numerical evaluation of their likelihood functions. Approximate Bayesian computation has become a popular approach to overcome this issue, in which one simulates synthetic data sets given parameters and compares summaries of these data sets with the corresponding observed values. We propose to avoid the use of summaries and the ensuing loss of information by instead using the Wasserstein distance between the empirical distributions of the observed and synthetic data. This generalizes the well-known approach of using order statistics within approximate Bayesian computation to arbitrary dimensions. We describe how recently developed approximations of the Wasserstein distance allow the method to scale to realistic data sizes, and we propose a new distance based on the Hilbert space filling curve. We provide a theoretical study of the method proposed, describing consistency as the threshold goes to 0 while the observations are kept fixed, and concentration properties as the number of observations grows. Various extensions to time series data are discussed. The approach is illustrated on various examples, including univariate and multivariate g-and-k distributions, a toggle switch model from systems biology, a queuing model and a Lévy-driven stochastic volatility model.
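In one dimension the Wasserstein distance between two equal-size empirical distributions reduces to the mean absolute difference of their order statistics, which is the special case the article generalizes to arbitrary dimensions. A minimal sketch (illustrative only):

```python
def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between two equal-size empirical
    distributions: the mean absolute difference of the sorted
    samples (order statistics)."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# In an ABC loop this distance would compare observed data with
# synthetic data simulated from candidate parameters; a uniform
# shift of the samples moves the distance by exactly that shift.
d = wasserstein_1d([0.0, 1.0, 2.0], [3.0, 4.0, 5.0])  # 3.0
```

Because the comparison uses the full empirical distributions rather than hand-picked summaries, no information is discarded, which is the article's central point.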
Complex Systems Methods Characterizing Nonlinear Processes in the Near-Earth Electromagnetic Environment: Recent Advances and Open Challenges
Learning from successful applications of methods originating in statistical mechanics, complex systems science, or information theory in one scientific field (e.g., atmospheric physics or climatology) can provide important insights or conceptual ideas for other areas (e.g., space sciences) or even stimulate new research questions and approaches. For instance, quantification and attribution of dynamical complexity in output time series of nonlinear dynamical systems is a key challenge across scientific disciplines. Especially in the field of space physics, an early and accurate detection of characteristic dissimilarity between normal and abnormal states (e.g., pre-storm activity vs. magnetic storms) has the potential to vastly improve space weather diagnosis and, consequently, the mitigation of space weather hazards. This review provides a systematic overview on existing nonlinear dynamical systems-based methodologies along with key results of their previous applications in a space physics context, which particularly illustrates how complementary modern complex systems approaches have recently shaped our understanding of nonlinear magnetospheric variability. The rising number of corresponding studies demonstrates that the multiplicity of nonlinear time series analysis methods developed during the last decades offers great potentials for uncovering relevant yet complex processes interlinking different geospace subsystems, variables and spatiotemporal scales.
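One concrete example of the kind of dynamical-complexity quantification discussed above is permutation entropy (Bandt–Pompe), chosen here purely as an illustration; the review itself surveys a much broader toolbox of nonlinear time series methods:

```python
import math
from collections import Counter

def permutation_entropy(x, order=3):
    """Normalized permutation entropy: Shannon entropy of the ordinal
    patterns of length `order`, scaled to [0, 1]. Low values indicate
    regular dynamics; values near 1 indicate complex or noisy signals."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: x[i + k]))
        for i in range(len(x) - order + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))

# A monotone ramp exhibits a single ordinal pattern, hence zero entropy.
pe = permutation_entropy(list(range(50)))  # 0.0
```

Measures of this kind are what allow a "characteristic dissimilarity" between quiet-time and storm-time magnetospheric variability to be detected from output time series alone.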
Adaptive Bayesian Time-Frequency Analysis of Multivariate Time Series
This article introduces a nonparametric approach to multivariate time-varying power spectrum analysis. The procedure adaptively partitions a time series into an unknown number of approximately stationary segments, where some spectral components may remain unchanged across segments, allowing components to evolve differently over time. Local spectra within segments are fit through Whittle likelihood-based penalized spline models of modified Cholesky components, which provide flexible nonparametric estimates that preserve positive definite structures of spectral matrices. The approach is formulated in a Bayesian framework, in which the number and location of partitions are random, and relies on reversible jump Markov chain and Hamiltonian Monte Carlo methods that can adapt to the unknown number of segments and parameters. By averaging over the distribution of partitions, the approach can approximate both abrupt and slowly varying changes in spectral matrices. Empirical performance is evaluated in simulation studies and illustrated through analyses of electroencephalography during sleep and of the El Niño-Southern Oscillation. Supplementary materials for this article are available online.
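The Whittle likelihood at the core of the segment fits can be sketched in the univariate case. This is a toy only: the article works with full spectral matrices and spline-parameterized densities, and the naive O(n²) DFT below is for clarity, not speed:

```python
import math, cmath

def periodogram(x):
    """Naive periodogram I(w_j) = |DFT(x)_j|^2 / (2*pi*n) at the
    Fourier frequencies w_j = 2*pi*j/n, j = 1..n//2."""
    n = len(x)
    out = []
    for j in range(1, n // 2 + 1):
        d = sum(x[t] * cmath.exp(-2j * math.pi * j * t / n) for t in range(n))
        out.append(abs(d) ** 2 / (2 * math.pi * n))
    return out

def whittle_loglik(x, spec):
    """Whittle log-likelihood of candidate spectral density values
    `spec` at the Fourier frequencies (univariate case)."""
    pgram = periodogram(x)
    return -sum(math.log(f) + i / f for i, f in zip(pgram, spec))
```

Maximizing this over a parameterized family of spectral densities approximates Gaussian maximum likelihood within each stationary segment, which is what the penalized spline models above exploit.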
Forecasting Time Series With Complex Seasonal Patterns Using Exponential Smoothing
An innovations state space modeling framework is introduced for forecasting complex seasonal time series such as those with multiple seasonal periods, high-frequency seasonality, non-integer seasonality, and dual-calendar effects. The new framework incorporates Box-Cox transformations, Fourier representations with time varying coefficients, and ARMA error correction. Likelihood evaluation and analytical expressions for point forecasts and interval predictions under the assumption of Gaussian errors are derived, leading to a simple, comprehensive approach to forecasting complex seasonal time series. A key feature of the framework is that it relies on a new method that greatly reduces the computational burden in the maximum likelihood estimation. The modeling framework is useful for a broad range of applications, its versatility being illustrated in three empirical studies. In addition, the proposed trigonometric formulation is presented as a means of decomposing complex seasonal time series, and it is shown that this decomposition leads to the identification and extraction of seasonal components which are otherwise not apparent in the time series plot itself.
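The trigonometric (Fourier) seasonal terms referred to above are simple to construct; the sketch below is illustrative (the period, harmonic count, and coefficients are arbitrary choices, and in the article the coefficients are time-varying):

```python
import math

def fourier_terms(t, period, K):
    """Trigonometric seasonal regressors sin/cos(2*pi*k*t/period),
    k = 1..K. A non-integer period (e.g. 365.25) is handled
    naturally, which is one advantage of the representation."""
    return [f(2 * math.pi * k * t / period)
            for k in range(1, K + 1) for f in (math.sin, math.cos)]

# A seasonal pattern as a fixed linear combination of the regressors.
coef = [1.0, 0.0, 0.5, 0.0]   # weights for sin1, cos1, sin2, cos2
def season(t, period=7.0):
    return sum(c * x for c, x in zip(coef, fourier_terms(t, period, 2)))
```

Only 2K coefficients are needed per seasonal pattern regardless of the period's length, which is what makes high-frequency and multiple seasonalities tractable.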
Forecasting COVID-19 confirmed cases, deaths and recoveries: Revisiting established time series modeling through novel applications for the USA and Italy
The novel coronavirus (COVID-19) is an emergent disease that initially had no historical data to guide scientists on predicting/forecasting its global or national impact over time. The ability to predict the progress of this pandemic has been crucial for decision making aimed at fighting this pandemic and controlling its spread. In this work we considered four different statistical/time series models that are readily available from the ‘forecast’ package in R. We performed novel applications with these models, forecasting the number of infected cases (confirmed cases and similarly the number of deaths and recoveries) along with the corresponding 90% prediction interval to estimate uncertainty around pointwise forecasts. Since the future may not repeat the past for this pandemic, no prediction model is certain. However, any prediction tool with acceptable prediction performance (or prediction error) could still be very useful for public-health planning to handle the spread of the pandemic, could inform policy decision-making, and could facilitate the transition to normality. These four models were applied to publicly available data of the COVID-19 pandemic for both the USA and Italy. We observed that all models reasonably predicted the future numbers of confirmed cases, deaths, and recoveries of COVID-19. However, for the majority of the analyses, the time series model with autoregressive integrated moving average (ARIMA) and cubic smoothing spline models both had smaller prediction errors and narrower prediction intervals, compared to the Holt and Trigonometric Exponential smoothing state space model with Box-Cox transformation (TBATS) models. Therefore, the former two models were preferable to the latter models. Given similarities in performance of the models in the USA and Italy, the corresponding prediction tools can be applied to other countries grappling with the COVID-19 pandemic, and to any pandemics that can occur in future.
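The point-forecast-plus-prediction-interval output described above can be illustrated with a toy AR(1) fit. This is a stand-in only: the study used ARIMA, Holt, cubic-spline, and TBATS models from R's 'forecast' package, none of which are reproduced here, and the z-value below is the usual 90% normal quantile:

```python
import statistics

def ar1_forecast(y, level_z=1.645):
    """Fit y[t] = c + phi*y[t-1] + e by least squares and return
    (lower, point, upper) for the one-step forecast with an
    approximate 90% prediction interval (z = 1.645)."""
    x, t = y[:-1], y[1:]
    mx, mt = statistics.fmean(x), statistics.fmean(t)
    phi = (sum((a - mx) * (b - mt) for a, b in zip(x, t))
           / sum((a - mx) ** 2 for a in x))
    c = mt - phi * mx
    resid = [b - (c + phi * a) for a, b in zip(x, t)]
    sigma = statistics.pstdev(resid)
    point = c + phi * y[-1]
    return point - level_z * sigma, point, point + level_z * sigma

# A perfect linear trend is fit exactly, so the interval has zero width.
lo, point, hi = ar1_forecast([1.0, 2.0, 3.0, 4.0, 5.0])
```

The interval width grows with the residual spread, which is how the narrower intervals of the ARIMA and spline models translate into the "smaller prediction errors" reported above.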