Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
74,213 result(s) for "Estimation methods"
A Multi-Layer Techno-Economic-Environmental Energy Management Optimization in Cooperative Multi-Microgrids with Demand Response Program and Uncertainties Consideration
by Kamel, Salah; Megahed, Tamer F.; Abdelkader, Sobhy M.
in 2m + 1 point estimation method
2024
This paper presents a multi-layer, multi-objective (MLMO) optimization model for techno-economic-environmental energy management (EM) in cooperative multi-microgrids (MMGs) that incorporates a demand response program (DRP). The proposed MLMO approach simultaneously optimizes operating costs, MMG operator benefits, environmental emissions, and MMG dependency. The paper proposes a new hybrid ε-lexicography weighted-sum formulation that eliminates the need to normalize or scalarize objectives. The first layer of the model schedules MMG resources with the DRP to minimize operating costs (local generation and power transactions with the utility grid) and maximize MMG profit. The second layer achieves environmental operation of the MMG, while the third layer maximizes MMG reliability. The paper also proposes a new application of a recently developed enhanced equilibrium optimizer (EEO) for solving the three-layer EM problem. In addition, the uncertainties of solar power generation, wind power generation, load demand, and energy prices are modeled with the probabilistic 2m + 1 point estimation method (PEM). Three case studies verify the proposed MLMO approach on an MMG test system. In Case I, a deterministic EM problem is solved, simulating the MMG as a single layer that minimizes costs and maximizes benefits through the DRP, while Case II solves the MLMO optimization problem. Simulation results show that the proposed MLMO technique reduces environmental emissions by 2.45% in its own optimization layer and by 3.5% at the final layer. The independence index is likewise enhanced by 2.49% within its layer and by 4.8% in total. Case III is the probabilistic EM simulation; owing to the effect of the uncertain variables, the mean value in this case increases by about 2.6% relative to Case I.
Journal Article
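The 2m + 1 point estimation method that indexes the record above propagates input uncertainty by running a deterministic model at only 2m + 1 points. A minimal sketch for independent Gaussian inputs (the standard locations ±√3 and weights below are the textbook Gaussian special case of Hong's scheme, chosen here for illustration; they are not taken from this paper):

```python
import numpy as np

def pem_2m1(f, mu, sigma):
    """Approximate E[f(X)] for independent Gaussian X via a 2m+1 PEM.

    For Gaussian inputs (skewness 0, kurtosis 3) the standard locations
    are +/- sqrt(3) with weights 1/6 each, plus one central evaluation
    at the mean carrying the remaining weight 1 - m/3.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m = mu.size
    est = (1.0 - m / 3.0) * f(mu)            # central concentration
    for k in range(m):
        for xi, w in ((np.sqrt(3.0), 1 / 6), (-np.sqrt(3.0), 1 / 6)):
            x = mu.copy()
            x[k] = mu[k] + xi * sigma[k]     # shift only variable k
            est += w * f(x)
    return est

# Quadratic test function: E[sum x_i^2] = sum(mu_i^2 + sigma_i^2) exactly.
approx = pem_2m1(lambda x: np.sum(x**2), [1.0, 2.0], [0.5, 1.0])
```

Because the scheme matches the first four moments of each input, it is exact for quadratic functions of Gaussian inputs, which makes it easy to sanity-check.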
Nonparametric Methods for Inference in the Presence of Instrumental Variables
2005
We suggest two nonparametric approaches, based on kernel methods and orthogonal series, to estimating regression functions in the presence of instrumental variables. For the first time in this class of problems, we derive optimal convergence rates and show that they are attained by particular estimators. In the presence of instrumental variables, the relation that identifies the regression function also defines an ill-posed inverse problem, the "difficulty" of which depends on the eigenvalues of a certain integral operator determined by the joint density of the endogenous and instrumental variables. We delineate the role played by problem difficulty in determining both the optimal convergence rate and the appropriate choice of smoothing parameter.
Journal Article
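The inverse problem described in this abstract can be made concrete with a sieve (series) two-stage estimator, a close relative of the orthogonal-series approach: expand the regression function in a polynomial basis of the endogenous regressor, project that basis onto a basis of the instrument, and solve by least squares. An illustrative sketch on simulated data, not the authors' estimator; the basis choices and data-generating process are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
w = rng.standard_normal(n)                  # instrument
v = rng.standard_normal(n)
x = 0.8 * w + 0.6 * v                       # endogenous regressor
e = 0.6 * v + 0.5 * rng.standard_normal(n)  # error correlated with x via v
y = 2.0 * x + e                             # true g(x) = 2x

Psi = np.vander(x, 3, increasing=True)      # series basis for g: 1, x, x^2
B = np.vander(w, 4, increasing=True)        # instrument basis: 1, w, w^2, w^3
# Project the regressor basis onto the instrument space (first stage) ...
Psi_hat = B @ np.linalg.lstsq(B, Psi, rcond=None)[0]
# ... then regress y on the projected basis (second stage, i.e. 2SLS).
beta = np.linalg.lstsq(Psi_hat, y, rcond=None)[0]

# Naive OLS for comparison: biased because x and e share the component v.
ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
```

The sieve estimate recovers the (here linear) structural function, while OLS picks up the endogeneity bias cov(x, e)/var(x).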
Limit theorems for local polynomial estimation of regression for functional dependent data
2024
Local polynomial fitting exhibits numerous compelling statistical properties, particularly within the intricate realm of multivariate analysis. However, as functional data analysis gains prominence as a dynamic and pertinent field in data science, the need arises for a specialized theory tailored to local polynomial fitting. We explore the task of estimating the regression function operator and its partial derivatives for stationary mixing random processes $(Y_i, X_i)$ using local higher-order polynomial fitting. Our key contributions include establishing the joint asymptotic normality of the estimates of both the regression function and its partial derivatives, specifically in the context of strongly mixing processes. Additionally, we provide explicit expressions for the bias and the variance-covariance matrix of the asymptotic distribution. We establish uniform strong consistency over compact subsets, together with rates of convergence, for both the regression function and its partial derivatives. Importantly, these results rest on reasonably broad conditions on the underlying models. To demonstrate practical applicability, we use our results to compute pointwise confidence regions. Finally, we extend our ideas to the nonparametric conditional distribution and obtain its limiting distribution.
Journal Article
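Local polynomial fitting of the kind studied here estimates the regression function and its derivatives from a single weighted least-squares fit of a degree-p polynomial in (x − x0): the j-th coefficient estimates m⁽ʲ⁾(x0)/j!. A minimal scalar sketch with a Gaussian kernel (illustrative; the paper's setting is functional and dependent data):

```python
import numpy as np

def local_poly(x, y, x0, h, degree=2):
    """Weighted LS fit of a degree-p polynomial in (x - x0) around x0.

    beta[j] estimates m^(j)(x0) / j!, so beta[0] ~ m(x0), beta[1] ~ m'(x0).
    """
    X = np.vander(x - x0, degree + 1, increasing=True)
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)         # Gaussian kernel weights
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

x = np.linspace(0.0, 2.0, 201)
y = x ** 2                                         # noiseless m(x) = x^2
b = local_poly(x, y, x0=1.0, h=0.3)
# A local quadratic reproduces a quadratic target exactly: b[0] -> m(1) = 1
# and b[1] -> m'(1) = 2, up to floating-point error.
```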
Convergence rate in structural equation models – analysis of estimation methods and implications in the number of observations
2022
Structural Equation Modeling (SEM) is used to analyze causal relationships between observable and unobservable variables. Among the assumptions considered, though not essential, for applying SEM are multivariate normality of the data and a large number of observations, required to obtain the variances and covariances between the variables. It is not always possible to access a number of observations large enough to enable the calculation of parameters, and the convergence of the iterative algorithm is one of the difficulties in obtaining results. This work investigates the convergence of iterative algorithms that minimize the variation of parameters, through a stipulated convergence rate, using the Maximum Likelihood (ML) and Generalized Least Squares (GLS) estimation methods on structural equation models based on confirmatory factor analysis (CFA) and regression models. Convergence was evaluated in relation to the number of observations in order to obtain a minimum quantity sufficient for a convergence rate above 50%. The calculations were performed in the statistical environment R version 3.4.4, and the results showed a convergence rate above 50% for models estimated by GLS, even when the data lacked multivariate normality and showed accentuated kurtosis and asymmetry. It was thus possible to define a minimum number of observations necessary for adequate convergence of the iterative algorithms in obtaining the required parameters.
Journal Article
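The two discrepancy functions compared in this paper can be written down directly: F_ML(θ) = log|Σ(θ)| + tr(SΣ(θ)⁻¹) − log|S| − p and F_GLS(θ) = ½ tr{[(S − Σ(θ))S⁻¹]²}. A toy sketch for a one-factor model with equal loadings, using a grid search in place of the iterative algorithms whose convergence the paper studies (the model and numbers are assumptions for illustration):

```python
import numpy as np

p, lam_true, psi = 3, 0.8, 0.5
ones = np.ones((p, 1))
# Population covariance implied by a one-factor model with equal loadings:
S = lam_true**2 * (ones @ ones.T) + psi * np.eye(p)

def sigma(lam):
    return lam**2 * (ones @ ones.T) + psi * np.eye(p)

def f_ml(lam):
    Sg = sigma(lam)
    return (np.log(np.linalg.det(Sg)) + np.trace(S @ np.linalg.inv(Sg))
            - np.log(np.linalg.det(S)) - p)

def f_gls(lam):
    R = (S - sigma(lam)) @ np.linalg.inv(S)
    return 0.5 * np.trace(R @ R)

grid = np.linspace(0.0, 1.5, 1501)
lam_ml = grid[np.argmin([f_ml(l) for l in grid])]
lam_gls = grid[np.argmin([f_gls(l) for l in grid])]
```

With S constructed exactly from the model, both criteria attain their minimum (zero for F_ML) at the true loading, so the grid search recovers 0.8.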
Efficient Estimation of a Semiparametric Partially Linear Varying Coefficient Model
2005
In this paper we propose a general series method to estimate a semiparametric partially linear varying coefficient model. We establish the consistency and √n-normality of the estimator of the finite-dimensional parameters of the model. We further show that, when the error is conditionally homoskedastic, this estimator is semiparametrically efficient in the sense that the inverse of the asymptotic variance of the estimator of the finite-dimensional parameter reaches the semiparametric efficiency bound of this model. A small-scale simulation is reported to examine the finite sample performance of the proposed estimator, and an empirical application is presented to illustrate the usefulness of the proposed method in practice. We also discuss how to obtain an efficient estimation result when the error is conditionally heteroskedastic.
Journal Article
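The series idea in this paper replaces the unknown coefficient function with a basis expansion, which turns the semiparametric model into ordinary least squares. A noiseless toy sketch (polynomial basis and data-generating process are assumptions, not the authors' choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1, x2, z = (rng.standard_normal(n) for _ in range(3))
theta = 1.0 + z**2                 # varying coefficient theta(z)
y = 1.7 * x1 + x2 * theta          # noiseless model: beta = 1.7

# Series approximation theta(z) ~ g0 + g1*z + g2*z^2 makes the model
# linear in (beta, g0, g1, g2), so a single OLS fit estimates everything.
design = np.column_stack([x1, x2, x2 * z, x2 * z**2])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
beta_hat = coef[0]                 # finite-dimensional parameter
```

Since the true θ(z) lies in the span of the basis and there is no noise, the finite-dimensional parameter is recovered exactly.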
PROGRAM EVALUATION AND CAUSAL INFERENCE WITH HIGH-DIMENSIONAL DATA
2017
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized controlled trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions.
We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
Journal Article
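The orthogonal (doubly robust) moment idea at the heart of this abstract can be illustrated with a partialling-out estimator with cross-fitting: residualize both outcome and treatment on the controls using models fit on a held-out fold, then regress residual on residual. This sketch substitutes plain least squares for the paper's regularized/ML nuisance learners, so it is only a schematic of the orthogonalization step:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, theta = 2000, 10, 1.5
X = rng.standard_normal((n, k))                             # controls
d = X @ rng.uniform(0.2, 0.8, k) + rng.standard_normal(n)   # treatment
y = theta * d + X @ rng.uniform(-0.5, 0.5, k) + rng.standard_normal(n)

def fit_residual(A, b, train, test):
    """OLS nuisance fit on the training fold, residuals on the test fold."""
    coef, *_ = np.linalg.lstsq(A[train], b[train], rcond=None)
    return b[test] - A[test] @ coef

folds = np.array_split(rng.permutation(n), 2)
num = den = 0.0
for i in (0, 1):                       # cross-fitting: swap the two folds
    tr, te = folds[i], folds[1 - i]
    dy = fit_residual(X, y, tr, te)    # outcome purged of controls
    dd = fit_residual(X, d, tr, te)    # treatment purged of controls
    num += dd @ dy
    den += dd @ dd
theta_hat = num / den                  # orthogonalized effect estimate
```

Because the moment condition is orthogonal to the nuisance fits, small errors in the two first-stage regressions do not bias the final estimate to first order.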
Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models
2011
Recent work by Reiss and Ogden provides a theoretical basis for sometimes preferring restricted maximum likelihood (REML) to generalized cross-validation (GCV) for smoothing parameter selection in semiparametric regression. However, existing REML or marginal likelihood (ML) based methods for semiparametric generalized linear models (GLMs) use iterative REML or ML estimation of the smoothing parameters of working linear approximations to the GLM. Such indirect schemes need not converge, and fail to do so in a non-negligible proportion of practical analyses. By contrast, very reliable smoothing parameter selection methods are available that directly optimize GCV, or related prediction error criteria, for the GLM itself. Since such methods directly optimize properly defined functions of the smoothing parameters, they have much more reliable convergence properties. The paper develops the first such method for REML or ML estimation of smoothing parameters. A Laplace approximation is used to obtain an approximate REML or ML for any GLM, which is suitable for efficient direct optimization. This REML or ML criterion requires that Newton-Raphson iteration, rather than Fisher scoring, be used for GLM fitting, and a computationally stable approach to this is proposed. The REML or ML criterion itself is optimized by a Newton method, with the required derivatives obtained by a mixture of implicit differentiation and direct methods. The method copes with numerical rank deficiency in the fitted model and in fact slightly improves the numerical robustness of the earlier method of Wood for smoothness selection based on prediction error criteria. Simulation results suggest that the new REML and ML methods offer some improvement in mean-square error performance relative to GCV or Akaike's information criterion in most cases, without the small number of severe undersmoothing failures to which Akaike's information criterion and GCV are prone.
This is achieved at the same computational cost as GCV or Akaike's information criterion. The new approach also eliminates the convergence failures of previous REML- or ML-based approaches for penalized GLMs and usually has lower computational cost than these alternatives. Example applications are presented in adaptive smoothing, scalar on function regression and generalized additive model selection.
Journal Article
Handling Endogenous Regressors by Joint Estimation Using Copulas
2012
We propose a new statistical instrument-free method to tackle the endogeneity problem. The proposed method models the joint distribution of the endogenous regressor and the error term in the structural equation of interest (the structural error) using a copula method, and it makes inferences on the model parameters by maximizing the likelihood derived from the joint distribution. Similar to the "exclusion restriction" in instrumental variable methods, extant instrument-free methods require the assumption that the unobserved instruments are exogenous, a requirement that is difficult to meet. The proposed method does not require such an assumption. Other benefits of the proposed method are that it allows the modeling of discrete endogenous regressors and offers a new solution to the slope endogeneity problem. In addition to linear models, the method is applicable to the popular random coefficient logit model with either aggregate-level or individual-level data. We demonstrate the performance of the proposed method via a series of simulation studies and an empirical example.
Journal Article
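A common regression shortcut for the Gaussian-copula version of this idea adds the generated regressor Φ⁻¹(F̂(P)), the Gaussian score of the endogenous regressor, as a control. This is a simplified sketch of that control-function form, not the paper's full maximum-likelihood procedure; the data-generating process is an assumption:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
n = 2000
nu = rng.standard_normal(n)
P = np.exp(nu)                                 # endogenous, non-normal regressor
e = 0.8 * nu + 0.6 * rng.standard_normal(n)    # normal error, correlated with P
y = 1.0 + 2.0 * P + e

# Copula term: Phi^{-1}(empirical CDF of P) recovers the regressor's
# Gaussian score, which absorbs its correlation with the structural error.
ranks = np.argsort(np.argsort(P))
F = (ranks + 0.5) / n
p_star = np.array([NormalDist().inv_cdf(f) for f in F])

ols, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), P]), y, rcond=None)
cop, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), P, p_star]), y,
                          rcond=None)
```

Plain OLS absorbs the endogeneity bias into the slope on P, while the copula-augmented regression pulls the slope back toward the true value of 2. Note the approach relies on P being non-normal and the structural error normal.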
Estimation and Accuracy After Model Selection
2014
Classical statistical theory ignores model selection in assessing estimation accuracy. Here we consider bootstrap methods for computing standard errors and confidence intervals that take model selection into account. The methodology involves bagging, also known as bootstrap smoothing, to tame the erratic discontinuities of selection-based estimators. A useful new formula for the accuracy of bagging then provides standard errors for the smoothed estimators. Two examples, nonparametric and parametric, are carried through in detail: a regression model where the choice of degree (linear, quadratic, cubic, …) is determined by the C_p criterion and a Lasso-based estimation problem.
Journal Article
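The bagging recipe and the accompanying accuracy formula can be sketched end to end: choose a polynomial degree by C_p, bootstrap the entire selection-plus-fit pipeline, average the bootstrap replicates (the smoothed estimator), and apply the covariance-based formula for its standard error. Purely illustrative; the data, the C_p implementation, and the target statistic are assumptions, not the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(4)
n, B = 50, 400
x = np.linspace(0.0, 1.0, n)
y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(n)

def fit_predict(xs, ys, x0):
    """Pick polynomial degree 1-3 by C_p, return the prediction at x0."""
    sig2, best = None, (np.inf, None)
    for d in (3, 2, 1):                      # largest model first, for sigma^2
        c = np.polyfit(xs, ys, d)
        rss = np.sum((np.polyval(c, xs) - ys) ** 2)
        if sig2 is None:
            sig2 = rss / (len(xs) - d - 1)
        cp = rss + 2.0 * sig2 * (d + 1)
        if cp < best[0]:
            best = (cp, c)
    return np.polyval(best[1], x0)

t_star = np.empty(B)
N = np.zeros((B, n))                         # resampling counts per case
for b in range(B):
    idx = rng.integers(0, n, n)
    N[b] = np.bincount(idx, minlength=n)
    t_star[b] = fit_predict(x[idx], y[idx], 0.5)

t_smooth = t_star.mean()                     # bagged (smoothed) estimate
# Accuracy formula: sd = sqrt(sum_j cov(N_j, t*)^2) over the n cases.
cov_j = ((N - N.mean(0)) * (t_star - t_smooth)[:, None]).mean(0)
sd_smooth = np.sqrt(np.sum(cov_j ** 2))
```

Averaging over bootstrap refits smooths out the jumps caused by the discrete degree choice, and the covariance formula prices the remaining variability without a second level of bootstrapping.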
SPARSE MODELS AND METHODS FOR OPTIMAL INSTRUMENTS WITH AN APPLICATION TO EMINENT DOMAIN
2012
We develop results for the use of Lasso and post-Lasso methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post-Lasso in the first stage is root-n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well-approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic "beta-min" conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso-based IV estimator with a data-driven penalty performs well compared to recently advocated many-instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso-based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post-Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non-Gaussian, heteroscedastic disturbances that uses a data-weighted 𝓁₁-penalty function.
By innovatively using moderate deviation theory for self-normalized sums, we provide convergence rates for the resulting Lasso and post-Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^{1/3}). We also provide a data-driven method for choosing the penalty level that must be specified in obtaining Lasso and post-Lasso estimates and establish its asymptotic validity under non-Gaussian, heteroscedastic disturbances.
Journal Article
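The Lasso-first-stage pipeline described above can be sketched with a bare-bones coordinate-descent Lasso: select instruments in the first stage, refit the selected instruments by OLS (post-Lasso), then run 2SLS with the fitted values. A fixed penalty stands in for the paper's data-driven, heteroscedasticity-robust penalty, so this is only a schematic:

```python
import numpy as np

def lasso_cd(X, y, lam, iters=100):
    """Plain coordinate-descent Lasso (no intercept, columns ~ standardized)."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]            # partial residual
            rho = X[:, j] @ r / n
            b[j] = (np.sign(rho) * max(abs(rho) - lam, 0.0)
                    / (X[:, j] @ X[:, j] / n))        # soft-thresholding
    return b

rng = np.random.default_rng(5)
n, p = 500, 50
Z = rng.standard_normal((n, p))                  # many candidate instruments
v = rng.standard_normal(n)
x = 1.0 * Z[:, 0] + 0.7 * Z[:, 1] + v            # sparse first stage
e = 0.6 * v + 0.5 * rng.standard_normal(n)       # endogeneity through v
y = 2.0 * x + e

sel = np.flatnonzero(lasso_cd(Z, x, lam=0.15))   # Lasso instrument selection
# Post-Lasso: OLS refit of the first stage on the selected instruments.
x_hat = Z[:, sel] @ np.linalg.lstsq(Z[:, sel], x, rcond=None)[0]
beta_iv = (x_hat @ y) / (x_hat @ x)              # 2SLS with fitted first stage
beta_ols = (x @ y) / (x @ x)                     # biased benchmark
```

With only two relevant instruments among fifty, the soft-thresholding step screens out the irrelevant columns, and the post-Lasso IV estimate sits near the true coefficient while OLS remains biased.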