Catalogue Search | MBRL
Explore the vast range of titles available.
12,453 result(s) for "Linear approximation"
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION
2014
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this theoretical gap, which has remained open for over a decade, we provide a unified theory that shows explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by four classical sparse estimation problems: sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
Journal Article
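The one-step local linear approximation described in the abstract can be sketched in a few lines. The sketch below uses the simplest setting, a Gaussian sequence model, where each coordinate separates and the weighted lasso step has a closed-form soft-thresholding solution; the function names and toy data are illustrative, not from the paper.

```python
import math

def scad_derivative(t, lam, a=3.7):
    """Derivative p'_lam(t) of the SCAD penalty for t >= 0."""
    t = abs(t)
    if t <= lam:
        return lam
    return max(a * lam - t, 0.0) / (a - 1.0)

def soft_threshold(z, w):
    """Solution of min_b 0.5*(z - b)^2 + w*|b|."""
    return math.copysign(max(abs(z) - w, 0.0), z)

def one_step_lla(z, lam, a=3.7):
    """One local linear approximation step started from the lasso solution.

    The folded concave penalty is linearized at the initializer, giving a
    weighted lasso whose coordinates solve in closed form here."""
    beta0 = [soft_threshold(zj, lam) for zj in z]            # lasso initializer
    weights = [scad_derivative(bj, lam, a) for bj in beta0]  # linearized penalty
    return [soft_threshold(zj, wj) for zj, wj in zip(z, weights)]

z = [5.0, -4.0, 0.3, -0.2, 0.1]   # two strong signals, three noise coordinates
beta = one_step_lla(z, lam=1.0)
```

Coordinates whose lasso initializer exceeds aλ receive zero penalty weight, so the one-step estimator leaves strong signals unbiased while still zeroing out noise coordinates.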
Approximations and inference for envelopment estimators of production frontiers
2024
Nonparametric methods have been commonly used to assess the performance of both private and public organizations. Among them, the most popular ones are envelopment estimators such as Free Disposal Hull (FDH) or Data Envelopment Analysis (DEA), which estimate the attainable sets and their efficient boundaries by enveloping the cloud of observed units in the appropriate input-output space. However, these nonparametric envelopment techniques do not provide estimates of marginal products and other coefficients of economic interest. This paper presents a new approach that provides local estimates of all the desired partial derivatives and economic coefficients, which complement and complete the analysis based on nonparametric envelopment estimators. We improve nonparametric estimators by estimating nonparametrically smoothed efficient boundaries and providing derivatives and other coefficients without having to assume any parametric structure for the frontier and the inefficiency distribution. Our approach offers several advantages, such as a flexible nonparametric adjustment of the efficient frontier based on local linear models; a general multivariate efficiency model based on directional distances where one can choose the desired benchmark direction; the possibility of assessing the impact of external-environmental variables; a bootstrap-based statistical inference for deriving confidence intervals on the estimated coefficients for nonparametric and robust frontier approximations; the possibility of including factors aggregating inputs or outputs and recovering the estimated coefficients in the original units. To demonstrate the usefulness of the proposed approach, we provide an illustration in the field of education, where economic coefficients are important but the parametric assumptions have been questioned.
Journal Article
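The FDH estimator mentioned in the abstract is easy to illustrate. Below is a minimal sketch of input-oriented FDH efficiency for single-input, single-output units; the function name and toy data are illustrative. DEA would additionally allow convex combinations of observed units.

```python
def fdh_input_efficiency(units):
    """Input-oriented FDH efficiency for (input, output) pairs.

    A unit's score is the smallest input, among observed units producing
    at least as much output, divided by its own input; a score of 1.0
    means the unit lies on the FDH frontier."""
    scores = []
    for x_i, y_i in units:
        feasible = [x_j for x_j, y_j in units if y_j >= y_i]
        scores.append(min(feasible) / x_i)
    return scores

units = [(2.0, 4.0), (4.0, 4.0), (3.0, 6.0), (6.0, 5.0)]
scores = fdh_input_efficiency(units)
```

The second and fourth units are dominated (another unit produces at least as much output from less input), so their scores fall below 1.0.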
Warm‐start piecewise linear approximation‐based solution for load pick‐up problem in electrical distribution system
2020
As the core sub-problem of both network reconfiguration and service restoration of the electrical distribution system (EDS), the load pick-up problem (LPP) searches for the optimal configuration of the EDS, aiming to minimise the power loss or provide as much power as possible to load ends. The piecewise linearisation (PWL) approximation method can be used to tackle the non-linearity of the network power flow constraints in the LPP model, transforming it into a mixed-integer linear programming model (LPP-MILP model). The errors in the PWL approximation of the network power flow constraints may affect the feasibility of the solving results of the LPP-MILP model. Reducing the PWL approximation errors simply by increasing the number of discretisations in the PWL functions is not stable, and it severely sacrifices the solution efficiency of the LPP-MILP model. In this study, a warm-start PWL approximation-based solution for the LPP is proposed. The variable upper bounds in the PWL approximation functions are renewed dynamically in the warm-start solution procedure to reduce the PWL approximation errors with higher computational efficiency. A modified IEEE 33-bus radial distribution test system and a real, large 1066-bus distribution system are used to test and verify the effectiveness and robustness of the proposed methodology.
Journal Article
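The PWL idea the abstract relies on can be sketched independently of the power-flow model: replace a nonlinear term (power loss grows quadratically with current) by chords between breakpoints, and observe how the approximation error shrinks as segments are added, which is exactly the accuracy-versus-model-size trade-off the warm-start procedure manages. All names and numbers below are illustrative.

```python
def pwl_approx(f, lo, hi, n_segments):
    """Piecewise linear approximation of f on [lo, hi] using
    chords between equally spaced breakpoints."""
    xs = [lo + (hi - lo) * k / n_segments for k in range(n_segments + 1)]
    ys = [f(x) for x in xs]
    def approx(x):
        # locate the segment containing x and interpolate linearly
        k = min(int((x - lo) / (hi - lo) * n_segments), n_segments - 1)
        t = (x - xs[k]) / (xs[k + 1] - xs[k])
        return ys[k] + t * (ys[k + 1] - ys[k])
    return approx

loss = lambda i: i * i           # power loss grows quadratically with current
coarse = pwl_approx(loss, 0.0, 10.0, 2)
fine = pwl_approx(loss, 0.0, 10.0, 16)
grid = [x * 0.01 for x in range(1001)]
err = lambda g: max(abs(g(x) - loss(x)) for x in grid)
```

For a convex function the chord error scales with the square of the segment width, so refining from 2 to 16 segments cuts the worst-case error by roughly a factor of 64.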
Linear approximation filter strategy for collaborative optimization with combination of linear approximations
by Liu, Ji-Hong; Yang, Hai-Cheng; Meng, Xin-Jia
in Approximation; Collaboration; Computational Mathematics and Numerical Analysis
2016
An alternative formulation of collaborative optimization (CO) combined with linear approximations (CLA-CO) was recently developed to improve the computational efficiency of CO. However, for optimization problems with nonconvex constraints, conflicting linear approximations may be added into the system level in the CLA-CO iteration process. In this case, CLA-CO is inapplicable because the conflicting constraints lead to a problem that does not have any feasible region. In this paper, a linear approximation filter (LAF) strategy for CLA-CO is proposed to address this difficulty with nonconvex constraints. In the LAF strategy, whether a conflict exists is first identified by transforming the identification problem into the problem of whether a feasible region of a linear program exists; then, the conflicting linear approximations are coordinated by eliminating those with larger violations. Thereafter, the linear approximation with the minimum violation replaces the accumulated linear approximations as the system-level constraint. To evaluate the violation of a linear approximation, a quantification of the violation is introduced based on the CO process. By using the proposed LAF strategy, CLA-CO can solve optimization problems with nonconvex constraints. The application of CLA-CO with the LAF strategy to three optimization problems, a numerical test problem, a speed reducer design problem, and a compound cylinder design problem, illustrates the capabilities of the proposed strategy.
Journal Article
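The conflict-identification step, deciding whether the accumulated linear approximations still admit a feasible region, reduces to a linear-programming feasibility check. The sketch below shows the one-variable special case, where each linear approximation a·x ≤ b is an interval bound and feasibility is just an interval intersection; a general LP solver would be needed for more variables. The function name and constraints are illustrative.

```python
def feasible_interval(constraints):
    """Check feasibility of one-dimensional linear constraints a*x <= b.

    A constraint with a > 0 gives an upper bound b/a, one with a < 0 a
    lower bound; the system is feasible iff max(lower) <= min(upper).
    Returns the feasible interval, or None when the constraints conflict."""
    lo, hi = float("-inf"), float("inf")
    for a, b in constraints:
        if a > 0:
            hi = min(hi, b / a)
        elif a < 0:
            lo = max(lo, b / a)
        elif b < 0:          # 0*x <= b with b < 0 is infeasible outright
            return None
    return (lo, hi) if lo <= hi else None

ok = feasible_interval([(1.0, 4.0), (-1.0, -1.0)])        # 1 <= x <= 4
conflict = feasible_interval([(1.0, 1.0), (-1.0, -2.0)])  # x <= 1 and x >= 2
```

A `None` result is the signal that the filter strategy would then act on, removing the more severely violated approximations.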
Stochastic enzyme kinetics and the quasi-steady-state reductions: Application of the slow scale linear noise approximation à la Fenichel
by Schnell, Santiago; Eilertsen, Justin; Srivastava, Kashvi
in Approximation; Chemical reactions; Enzyme kinetics
2022
The linear noise approximation models the random fluctuations from the mean-field model of a chemical reaction that unfolds near the thermodynamic limit. Specifically, the fluctuations obey a linear Langevin equation up to order Ω^(−1/2), where Ω is the size of the chemical system (usually the volume). In the presence of disparate timescales, the linear noise approximation admits a quasi-steady-state reduction referred to as the slow scale linear noise approximation (ssLNA). Curiously, the ssLNAs reported in the literature are slightly different. The differences in the reported ssLNAs lie at the mathematical heart of the derivation. In this work, we derive the ssLNA directly from geometric singular perturbation theory and explain the origin of the different ssLNAs in the literature. Moreover, we discuss the loss of normal hyperbolicity and we extend the ssLNA derived from geometric singular perturbation theory to a non-classical singularly perturbed problem. In so doing, we disprove a commonly-accepted qualifier for the validity of the stochastic quasi-steady-state approximation of the Michaelis–Menten reaction mechanism.
Journal Article
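The linear Langevin equation at the heart of the linear noise approximation can be illustrated with the simplest case, a one-dimensional Ornstein-Uhlenbeck process whose stationary variance σ²/(2γ) is known in closed form. The sketch below is illustrative, not the ssLNA derivation; all parameter values are arbitrary.

```python
import math
import random

def simulate_ou_endpoints(gamma, sigma, t_end, dt, n_paths, seed=0):
    """Euler-Maruyama endpoints of the linear Langevin equation
    d(xi) = -gamma*xi dt + sigma dW, started at xi(0) = 0.

    Under the linear noise approximation, fluctuations around the
    mean-field trajectory obey exactly this kind of linear SDE."""
    rng = random.Random(seed)
    out = []
    steps = int(t_end / dt)
    for _ in range(n_paths):
        xi = 0.0
        for _ in range(steps):
            xi += -gamma * xi * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        out.append(xi)
    return out

# stationary variance of the OU process is sigma^2 / (2*gamma) = 1 here
samples = simulate_ou_endpoints(gamma=1.0, sigma=math.sqrt(2.0),
                                t_end=5.0, dt=0.01, n_paths=1000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
```

With 1000 independent trajectories run well past the relaxation time 1/γ, the sample variance of the endpoints should land close to the analytic value of 1.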
Log-robust portfolio management with parameter ambiguity
2017
We present a robust optimization approach to portfolio management under uncertainty when randomness is modeled using uncertainty sets for the continuously compounded rates of return, which empirical research argues are the true drivers of uncertainty, but the parameters needed to define the uncertainty sets, such as the drift and standard deviation, are not known precisely. Instead, a finite set of scenarios is available for the input data, obtained either using different time horizons or assumptions in the estimation process. Our objective is to maximize the worst-case portfolio value (over a set of allowable deviations of the uncertain parameters from their nominal values, using the worst-case nominal values among the possible scenarios) at the end of the time horizon in a one-period setting. Short sales are not allowed. We consider both the independent and correlated assets models. For the independent assets case, we derive a convex reformulation, albeit involving functions with singular Hessians. Because this slows computation times, we also provide lower and upper linear approximation problems and devise an algorithm that gives the decision maker a solution within a desired tolerance from optimality. For the correlated assets case, we suggest a tractable heuristic that uses insights derived in the independent assets case.
Journal Article
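The lower and upper linear approximation problems mentioned above rest on a basic fact about concave functions such as the logarithm (which drives continuously compounded returns): a chord underestimates and a tangent overestimates. A minimal sketch, with an illustrative interval and tangent point:

```python
import math

def chord(f, a, b):
    """Secant line through (a, f(a)) and (b, f(b)); a lower bound
    for a concave f on [a, b]."""
    fa, fb = f(a), f(b)
    return lambda x: fa + (fb - fa) * (x - a) / (b - a)

def tangent(f, df, x0):
    """Tangent line at x0; an upper bound for a concave f everywhere."""
    f0, d0 = f(x0), df(x0)
    return lambda x: f0 + d0 * (x - x0)

a, b = 0.5, 2.0
lower = chord(math.log, a, b)
upper = tangent(math.log, lambda x: 1.0 / x, 1.0)
grid = [a + (b - a) * k / 200 for k in range(201)]
gap = max(upper(x) - lower(x) for x in grid)
```

Sandwiching the objective between such linear bounds is what makes it possible to certify a solution within a stated tolerance of optimality, as the abstract describes.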
Quasisymmetry and rectifiability of quasispheres
2014
We obtain Dini conditions that guarantee that an asymptotically conformal quasisphere is rectifiable. In particular, we show that for any ε > 0, integrability of (ess sup_{1−t<|x|<1+t} K_f(x) − 1)^(2−ε) dt/t implies that the image of the unit sphere under a global quasiconformal homeomorphism f is rectifiable. We also establish estimates for the weak quasisymmetry constant of a global K-quasiconformal map in neighborhoods with maximal dilatation close to 1.
Journal Article
A Novel Hybrid Sequential Design Strategy for Global Surrogate Modeling of Computer Experiments
2011
Many complex real-world systems can be accurately modeled by simulations. However, high-fidelity simulations may take hours or even days to compute. Because this can be impractical, a surrogate model is often used to approximate the dynamic behavior of the original simulator. This model can then be used as a cheap, drop-in replacement for the simulator. Because simulations can be very expensive, the data points, which are required to build the model, must be chosen as optimally as possible. Sequential design strategies offer a huge advantage over one-shot experimental designs because they can use information gathered from previous data points in order to determine the location of new data points. Each sequential design strategy must perform a trade-off between exploration and exploitation, where the former involves selecting data points in unexplored regions of the design space, while the latter suggests adding data points in regions which were previously identified to be interesting (for example, highly nonlinear regions). In this paper, a novel hybrid sequential design strategy is proposed which uses a Monte Carlo-based approximation of a Voronoi tessellation for exploration and local linear approximations of the simulator for exploitation. The advantage of this method over other sequential design methods is that it is independent of the model type, and can therefore be used in heterogeneous modeling environments, where multiple model types are used at the same time. The new method is demonstrated on a number of test problems, showing that it is a robust, competitive, and efficient sequential design strategy.
Journal Article
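The exploration half of the proposed strategy, a Monte Carlo approximation of a Voronoi tessellation, can be sketched directly; this is an illustrative reading of the idea, not the authors' implementation, and all names and sample sizes are arbitrary.

```python
import random

def exploration_point(design, n_samples, dim, seed=0):
    """Monte Carlo approximation of the Voronoi tessellation of the
    current design on [0, 1]^dim: random candidates are assigned to
    their nearest design point, so cell sizes are estimated by counts,
    and a candidate from the largest (most undersampled) cell is
    proposed as the next sample location."""
    rng = random.Random(seed)
    cells = {i: [] for i in range(len(design))}
    for _ in range(n_samples):
        cand = [rng.random() for _ in range(dim)]
        nearest = min(range(len(design)),
                      key=lambda i: sum((c - d) ** 2
                                        for c, d in zip(cand, design[i])))
        cells[nearest].append(cand)
    biggest = max(cells, key=lambda i: len(cells[i]))
    # pick the candidate in the biggest cell farthest from its design point
    return max(cells[biggest],
               key=lambda c: sum((x - d) ** 2
                                 for x, d in zip(c, design[biggest])))

design = [[0.1, 0.1], [0.2, 0.15], [0.9, 0.9]]
new_point = exploration_point(design, n_samples=2000, dim=2, seed=42)
```

Because only distances to design points are used, this exploration criterion is independent of the surrogate model type, which is the property the paper emphasizes.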
Progressive Bounded Error Piecewise Linear Approximation with Resolution Reduction for Time Series Data Compression
2025
Today, huge amounts of time series data are sensed continuously by AIoT devices, transmitted to edge nodes, and to data centers. It costs a lot of energy to transmit these data, store them, and process them. Data compression technologies are commonly used to reduce the data size and thus save energy. When a certain level of data accuracy is sacrificed, lossy compression technologies can achieve better compression ratios. However, different applications may have different requirements for data accuracy. Instead of keeping multiple compressed versions of a time series w.r.t. different error bounds, HIRE hierarchically maintains a tree, where the root records a constant function to approximate the whole time series, and each other node records a constant function to approximate a part of the residual function of its parent for a particular time period. To retrieve data w.r.t. a specific error bound, it traverses the tree from the root down to certain levels according to the requested error bound and aggregates the constant functions on the visited nodes to generate a new bounded error compressed version dynamically. However, the number of nodes to be visited is unknown before the tree traversal completes, and so is the data size of the new version. In this paper, a time series is progressively decomposed into multiple piecewise linear functions. The first function is an approximation of the original time series w.r.t. the largest error bound. The second function is an approximation of the residual function between the original time series and the first function w.r.t. the second largest error bound, and so forth. The sum of the first, second, …, and m-th functions is an approximation of the original time series w.r.t. the m-th error bound. For each iteration, Swing-RR is used to generate a Bounded Error Piecewise Linear Approximation (BEPLA). Resolution Reduction (RR) plays an important role. Eight real-world datasets are used to evaluate the proposed method.
For each dataset, approximations w.r.t. three typical error bounds, 5%, 1%, and 0.5%, are requested. Three BEPLAs are generated accordingly, which can be summed up to form three approximations w.r.t. the three error bounds. For all datasets, the total data size of the three BEPLAs is almost the same with the size used to store just one version w.r.t. the smallest error bound and significantly smaller than the size used to keep three independent versions. The experiment result shows that the proposed method, referred to as PBEPLA-RR, can achieve very good compression ratios and provide multiple approximations w.r.t. different error bounds.
Journal Article
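The building block of the proposed pipeline, a bounded-error piecewise linear approximation, can be sketched with a greedy chord-extension pass (a simple stand-in for Swing-RR, which the paper actually uses; names and data below are illustrative). The progressive scheme then reapplies the same routine to the residual series with the next, smaller error bound.

```python
import math

def bepla(ys, eps):
    """Greedy bounded-error piecewise linear approximation.

    Connects samples by chords, extending each segment while every
    interior sample stays within eps of the chord. Returns the
    breakpoint indices and the reconstructed series."""
    n = len(ys)
    breaks = [0]
    start = 0
    while start < n - 1:
        end = start + 1
        # extend the segment as long as all interior points fit
        while end + 1 < n:
            cand = end + 1
            slope = (ys[cand] - ys[start]) / (cand - start)
            if all(abs(ys[start] + slope * (k - start) - ys[k]) <= eps
                   for k in range(start + 1, cand)):
                end = cand
            else:
                break
        breaks.append(end)
        start = end
    recon = []
    for b0, b1 in zip(breaks, breaks[1:]):
        slope = (ys[b1] - ys[b0]) / (b1 - b0)
        recon.extend(ys[b0] + slope * (k - b0) for k in range(b0, b1))
    recon.append(ys[-1])
    return breaks, recon

ys = [math.sin(0.1 * k) for k in range(200)]
breaks, recon = bepla(ys, eps=0.05)
err = max(abs(a - b) for a, b in zip(ys, recon))
```

Because every accepted chord is validated against all its interior samples, the reconstruction error is guaranteed to stay within the requested bound, while smooth stretches of the series collapse into long segments.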