11,449 result(s) for "Statistical functionals."
The Oxford handbook of functional data analysis
"As technology progresses, we are able to handle larger and larger datasets. At the same time, monitoring devices such as electronic equipment and sensors (for registering images, temperature, etc.) have become more and more sophisticated. This high-tech revolution offers the opportunity to observe phenomena in an increasingly accurate way, producing statistical units sampled over a finer and finer grid, with the measurement points so close that the data can be considered as observations varying over a continuum. Such continuous (or functional) data may occur in biomechanics (e.g. human movements), chemometrics (e.g. spectrometric curves), econometrics (e.g. the stock market index), geophysics (e.g. spatio-temporal events such as El Niño, or time series of satellite images), or medicine (electro-cardiograms/electro-encephalograms). It is well known that standard multivariate statistical analyses fail with functional data. However, the great potential for applications has encouraged new methodologies able to extract relevant information from functional datasets. This handbook presents a state-of-the-art exploration of this high-tech field, gathering together most of the major advances in the area. Leading international experts have contributed to this volume, with each chapter giving the key original ideas and comprehensive bibliographical information. The main statistical topics (classification, inference, factor-based analysis, regression modelling, resampling methods, time series, random processes) are covered in the setting of functional data. The twin challenges of the subject are the practical issues of implementing new methodologies and the theoretical techniques needed to expand the mathematical foundations and toolbox. The volume therefore mixes practical, methodological and theoretical aspects of the subject, sometimes within the same chapter. As a consequence, this book should appeal to a wide audience of engineers, practitioners and graduate students, as well as academic researchers, not only in statistics and probability but also in the numerous related application areas." -- Provided by publisher.
Modeling, measuring and managing risk
This book is the first on the market to treat single- and multi-period risk measures (risk functionals) in a thorough, comprehensive manner. It combines the treatment of the properties of risk measures with the related aspects of decision making under risk.
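As a minimal illustration of the single-period risk functionals the abstract refers to, the sketch below computes two standard examples empirically: Value-at-Risk (a quantile of the loss distribution) and Conditional Value-at-Risk (the mean loss beyond that quantile). The function names and the 95% level are illustrative assumptions, not taken from the book.

```python
import numpy as np

def value_at_risk(losses, alpha=0.95):
    """Empirical VaR: the alpha-quantile of the loss distribution."""
    return float(np.quantile(losses, alpha))

def conditional_value_at_risk(losses, alpha=0.95):
    """Empirical CVaR: the mean loss in the tail beyond VaR."""
    losses = np.asarray(losses, dtype=float)
    var = value_at_risk(losses, alpha)
    return float(losses[losses >= var].mean())

# For standard-normal losses, VaR(95%) is about 1.64 and CVaR about 2.06.
rng = np.random.default_rng(0)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)
print(value_at_risk(losses), conditional_value_at_risk(losses))
```

CVaR is always at least as large as VaR at the same level, since it averages only the losses beyond the VaR threshold.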
Making and Evaluating Point Forecasts
Typically, point forecasting methods are compared and assessed by means of an error measure or scoring function, with the absolute error and the squared error being key examples. The individual scores are averaged over forecast cases, to result in a summary measure of the predictive performance, such as the mean absolute error or the mean squared error. I demonstrate that this common practice can lead to grossly misguided inferences, unless the scoring function and the forecasting task are carefully matched. Effective point forecasting requires that the scoring function be specified ex ante, or that the forecaster receives a directive in the form of a statistical functional, such as the mean or a quantile of the predictive distribution. If the scoring function is specified ex ante, the forecaster can issue the optimal point forecast, namely, the Bayes rule. If the forecaster receives a directive in the form of a functional, it is critical that the scoring function be consistent for it, in the sense that the expected score is minimized when following the directive. A functional is elicitable if there exists a scoring function that is strictly consistent for it. Expectations, ratios of expectations and quantiles are elicitable. For example, a scoring function is consistent for the mean functional if and only if it is a Bregman function. It is consistent for a quantile if and only if it is generalized piecewise linear. Similar characterizations apply to ratios of expectations and to expectiles. Weighted scoring functions are consistent for functionals that adapt to the weighting in peculiar ways. Not all functionals are elicitable; for instance, conditional value-at-risk is not, despite its popularity in quantitative finance.
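The abstract's central point, that a scoring function must be consistent for the requested functional, can be demonstrated numerically: for a skewed distribution the mean and median differ, and each forecast wins only under the scoring function consistent for it (squared error for the mean, absolute error for the median). This is a small sketch, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Skewed sample, so the mean and the median are clearly different.
y = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)

mean_fc, median_fc = y.mean(), np.median(y)

def mean_score(forecast, obs):
    """Squared error: a Bregman function, consistent for the mean."""
    return np.mean((forecast - obs) ** 2)

def median_score(forecast, obs):
    """Absolute error: generalized piecewise linear, consistent for the median."""
    return np.mean(np.abs(forecast - obs))

# Each directive's optimal forecast wins under its own consistent score.
assert mean_score(mean_fc, y) < mean_score(median_fc, y)
assert median_score(median_fc, y) < median_score(mean_fc, y)
```

Evaluating the median forecast with squared error (or vice versa) would rank the forecasters incorrectly, which is exactly the "grossly misguided inference" the paper warns about.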
Theoretical foundations of functional data analysis, with an introduction to linear operators
Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators provides a uniquely broad compendium of the key mathematical concepts and results that are relevant for the theoretical development of functional data analysis (FDA). The self-contained treatment of selected topics of functional analysis and operator theory includes reproducing kernel Hilbert spaces, singular value decomposition of compact operators on Hilbert spaces and perturbation theory for both self-adjoint and non-self-adjoint operators. The probabilistic foundation for FDA is described from the perspective of random elements in Hilbert spaces as well as from the viewpoint of continuous time stochastic processes. Nonparametric estimation approaches including kernel and regularized smoothing are also introduced. These tools are then used to investigate the properties of estimators for the mean element, covariance operators, principal components, regression function and canonical correlations. A general treatment of canonical correlations in Hilbert spaces naturally leads to FDA formulations of factor analysis, regression, MANOVA and discriminant analysis. This book will provide a valuable reference for statisticians and other researchers interested in developing or understanding the mathematical aspects of FDA. It is also suitable for a graduate level special topics course.
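The estimators the abstract mentions for the mean element and covariance operator have a simple discretized form. The sketch below (assumed setup, not the book's code) simulates curves with two known orthogonal modes, estimates the mean curve, and recovers the modes as eigenvectors of the sample covariance matrix — the standard functional-principal-components recipe.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)            # common observation grid
n = 300                                  # number of observed curves

# Karhunen-Loeve style simulation: two orthogonal modes, decaying variances.
phi1 = np.sqrt(2) * np.sin(2 * np.pi * t)
phi2 = np.sqrt(2) * np.cos(2 * np.pi * t)
scores = rng.normal(size=(n, 2)) * np.array([2.0, 0.5])
X = np.sin(np.pi * t) + scores[:, [0]] * phi1 + scores[:, [1]] * phi2

mean_curve = X.mean(axis=0)              # estimate of the mean element
C = np.cov(X - mean_curve, rowvar=False) # discretized covariance operator
evals = np.linalg.eigh(C)[0][::-1]       # eigenvalues, largest first
# Two dominant eigenvalues correspond to the two simulated modes; the rest
# are numerically zero because the centered data has rank two.
print(evals[:3])
```

In the infinite-dimensional setting these eigenpairs are the eigenvalues and eigenfunctions of the covariance operator; on a grid they reduce to an ordinary symmetric eigenproblem.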
The group of statistical precursors of 7.3 and 6.6 magnitude earthquakes in the region of Indonesia
Based on a methodology for studying variations in the properties of small-scale probability density fluctuations in measurements of the X-component of the magnetic field, short-term precursors were identified for two earthquakes with magnitudes 7.3 and 6.6 that occurred in the Indonesia region on December 14, 2021, and January 14, 2022, respectively. In both cases, the applied approach made it possible to identify an ensemble of these precursors over intervals ranging from about one day to one hour before the indicated events. For the earthquakes under consideration, phenomena of simultaneous or quasi-simultaneous occurrence of groups of precursors were discovered, and time points were identified that may correspond to critical phenomena during the "final preparation" of the earthquake.
Estimating Age in Short Utterances Based on Multi-Class Classification Approach
Age estimation from short speech utterances has many applications in daily life, such as human-robot interaction, custom call routing, targeted marketing, and user profiling. Despite comprehensive studies on extracting descriptive features, estimation errors (in years) remain high. In this study, an automatic system is proposed to estimate age from short speech utterances independently of both the text and the speaker. First, four groups of features are extracted from each utterance frame using hybrid techniques and methods. Then, 10 statistical functionals are computed for each extracted feature dimension. The resulting feature dimensions are normalized with the Quantile method and reduced with the Linear Discriminant Analysis (LDA) method, respectively. Finally, the speaker's age is estimated via a multi-class classification approach using the Extreme Gradient Boosting (XGBoost) classifier. Experiments were carried out on the TIMIT dataset to measure the performance of the proposed system: the Mean Absolute Error (MAE) is 4.68 years for female speakers and 4.98 years for male speakers, with corresponding Root Mean Square Errors (RMSE) of 8.05 and 6.97. The results show a relative MAE improvement of up to 28% for female and 10% for male speakers compared to related works that used the TIMIT dataset.
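The functional-summarization step described above collapses variable-length frame sequences into one fixed-length vector per utterance by applying statistical functionals along the time axis. The abstract does not list which 10 functionals were used, so the set below (mean, spread, extremes, quartiles, moments, net change) is an illustrative assumption.

```python
import numpy as np

def summarize_utterance(frames):
    """frames: (n_frames, n_dims) array of frame-level features.
    Returns a (10 * n_dims,) vector of per-dimension functionals."""
    frames = np.asarray(frames, dtype=float)
    mu = frames.mean(axis=0)
    sd = frames.std(axis=0)
    z = (frames - mu) / np.where(sd > 0, sd, 1.0)   # guard against zero spread
    funcs = [
        mu,                                # mean
        sd,                                # standard deviation
        frames.min(axis=0),                # minimum
        frames.max(axis=0),                # maximum
        np.median(frames, axis=0),         # median
        np.percentile(frames, 25, axis=0), # lower quartile
        np.percentile(frames, 75, axis=0), # upper quartile
        (z ** 3).mean(axis=0),             # skewness
        (z ** 4).mean(axis=0) - 3.0,       # excess kurtosis
        frames[-1] - frames[0],            # net change across the utterance
    ]
    return np.concatenate(funcs)

rng = np.random.default_rng(3)
utterance = rng.normal(size=(120, 13))     # e.g. 120 frames of 13 coefficients
vec = summarize_utterance(utterance)
print(vec.shape)                           # prints (130,)
```

Whatever the frame count, the output dimension is fixed (10 functionals times the number of feature dimensions), which is what allows a fixed-input classifier such as XGBoost to be applied downstream.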
Statistical precursors of the 6.4 magnitude earthquake on December 29, 2020, near Petrinja town, Croatia
In this work, it is shown that early warning signals were recorded prior to the 6.4 magnitude earthquake that took place on December 29, 2020, near the Croatian city of Petrinja. The study relied on analyzing property changes in small-scale probability density fluctuations in three parameters of the Earth's magnetic field: X, Y and Z. The applied technique made it possible to identify a set of these precursors in intervals ranging from two and a half days down to less than one hour before the event. It was observed that the three magnetic variation stations, located at distances of approximately 300, 1000, and 1500 km from the epicenter, exhibit significant differences in the occurrence of early warning signals and critical phenomena during an impending earthquake. These differences are related to the intensity and frequency of the effects observed at each station.
Prediction of unfavorable outcome of acute decompensation of diabetes mellitus
The aim of the study: using the method of spectral-probability analysis, to evaluate the possibility of predicting an unfavorable outcome of acute decompensation of diabetes mellitus in patients hospitalized in the intensive care unit, using a mathematical model. In clinical practice, the proposed algorithm for mathematical processing of a set of test data also provides the physician with an additional significant criterion for assessing the probability of developing type 1 diabetes in healthy children under examination whose brothers or sisters suffer from this disease. Materials and methods: a retrospective analysis was conducted of 103 medical records of patients hospitalized in the intensive care unit for acute decompensation of diabetes mellitus. Results: for the set of analyses of patients with acute decompensation of diabetes mellitus, performed at the time of admission to hospital, a group of mathematical criteria has been defined that makes it possible to identify patients at high risk of an unfavorable course of the disease.
One-Sided Inference about Functionals of a Density
This paper discusses the possibility of truly nonparametric inference about functionals of an unknown density. Examples considered include discrete functionals, such as the number of modes of a density and the number of terms in the true model, and continuous functionals, such as the optimal bandwidth for kernel density estimates or the widths of confidence intervals for adaptive location estimators. For such functionals it is not generally possible to make two-sided nonparametric confidence statements. However, one-sided nonparametric confidence statements are possible: e.g., "I say with 95% confidence that the underlying distribution has at least three modes." Roughly, this is because the functionals of interest are semicontinuous with respect to the topology induced by a distribution-free metric, so a neighborhood procedure can be used. The procedure is to find the minimum value of the functional over a neighborhood of the empirical distribution in function space. If this neighborhood is a nonparametric 1 - α confidence region for the true distribution, the resulting minimum value bounds the true value from below with probability at least 1 - α. This lower bound has good asymptotic properties in the high-confidence setting of α close to 0.
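For a simple monotone functional the neighborhood procedure has a closed form, which makes it easy to sketch (this uses the tail probability T(F) = P(X > c) rather than the paper's harder mode-counting examples). Taking the neighborhood to be a one-sided DKW band of half-width eps = sqrt(log(1/α) / (2n)), a distribution-free 1 - α confidence region for the CDF, the minimum of T over the band is max(0, 1 - (F_n(c) + eps)).

```python
import numpy as np

def lower_bound_tail_prob(x, c, alpha=0.05):
    """One-sided nonparametric lower confidence bound for P(X > c),
    obtained by minimizing the functional over a DKW neighborhood of
    the empirical CDF."""
    x = np.asarray(x, dtype=float)
    n = x.size
    eps = np.sqrt(np.log(1.0 / alpha) / (2.0 * n))  # DKW band half-width
    Fn_c = np.mean(x <= c)                          # empirical CDF at c
    return max(0.0, 1.0 - (Fn_c + eps))

rng = np.random.default_rng(4)
x = rng.normal(size=2000)
# With 95% confidence, P(X > 0) is at least this value (true value is 0.5):
print(lower_bound_tail_prob(x, 0.0))
```

The bound is deliberately conservative: it can only understate the true tail probability, which is exactly the one-sided guarantee the paper describes.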
Generalised information criteria in model selection
The problem of evaluating the goodness of statistical models is investigated from an information-theoretic point of view. Information criteria are proposed for evaluating models constructed by various estimation procedures when the specified family of probability distributions does not contain the distribution generating the data. The proposed criteria are applied to the evaluation of models estimated by maximum likelihood, robust, penalised likelihood, Bayes procedures, etc. We also discuss the use of the bootstrap in model evaluation problems and present a variance reduction technique in the bootstrap simulation.
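As a minimal illustration of information-criterion model evaluation, the sketch below uses plain AIC (a stand-in for the generalized criteria discussed above, which reduce to AIC-like penalties in the well-specified maximum-likelihood case): polynomial degrees are compared by penalized fit quality rather than raw residual error, which would always favor the most complex model.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-1.0, 1.0, 200)
# Data generated from a quadratic trend plus Gaussian noise.
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + rng.normal(scale=0.3, size=x.size)

def aic(degree):
    """Gaussian-likelihood AIC for a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = degree + 2          # polynomial coefficients plus the noise variance
    return x.size * np.log(rss / x.size) + 2 * k

scores = {d: aic(d) for d in range(1, 6)}
best = min(scores, key=scores.get)
print(best)
```

The linear model is heavily penalized by its large residual sum of squares, while degrees above the true one improve the fit only marginally and pay the complexity penalty, so the criterion settles near the generating degree.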