Catalogue Search | MBRL
8 result(s) for "Smooth convex regression"
A Computational Framework for Multivariate Convex Regression and Its Variants
by Choudhury, Arkopal; Sen, Bodhisattva; Mazumder, Rahul
in Algorithms, Augmented Lagrangian method, Convergence
2019
We study the nonparametric least squares estimator (LSE) of a multivariate convex regression function. The LSE, given as the solution to a quadratic program with O(n²) linear constraints (n being the sample size), is difficult to compute for large problems. Exploiting problem-specific structure, we propose a scalable algorithmic framework based on the augmented Lagrangian method to compute the LSE. We develop a novel approach to obtain smooth convex approximations to the fitted (piecewise affine) convex LSE and provide formal bounds on the quality of approximation. When the number of samples is not too large compared to the dimension of the predictor, we propose a regularization scheme, Lipschitz convex regression, in which we constrain the norm of the subgradients, and study the rates of convergence of the obtained LSE. Our algorithmic framework is simple and flexible and can be easily adapted to handle variants: estimation of a nondecreasing/nonincreasing convex/concave (with or without a Lipschitz bound) function. We perform numerical studies illustrating the scalability of the proposed algorithm; on some instances our proposal leads to more than a 10,000-fold improvement in runtime compared to off-the-shelf interior point solvers for problems with n = 500.
Journal Article
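The O(n²)-constraint quadratic program described in the abstract can be stated directly in a modeling language. Below is a minimal sketch, assuming synthetic data and using cvxpy as a generic solver rather than the authors' augmented Lagrangian framework; the variables theta (fitted values) and xi (subgradients) encode the standard convexity constraints.

```python
import cvxpy as cp
import numpy as np

# Illustrative synthetic data: a convex truth plus noise (assumption).
rng = np.random.default_rng(0)
n, d = 50, 2
X = rng.normal(size=(n, d))
y = np.sum(X ** 2, axis=1) + 0.1 * rng.normal(size=n)

theta = cp.Variable(n)       # fitted values at the design points
xi = cp.Variable((n, d))     # subgradients at the design points

# The O(n^2) linear constraints characterizing a convex fit:
# theta_j >= theta_i + xi_i^T (x_j - x_i) for all pairs (i, j).
constraints = [
    theta[j] >= theta[i] + xi[i, :] @ (X[j] - X[i])
    for i in range(n) for j in range(n) if i != j
]

prob = cp.Problem(cp.Minimize(cp.sum_squares(y - theta)), constraints)
prob.solve()
print("LSE objective:", prob.value)
```

The quadratic growth of the constraint list with n is exactly what makes generic solvers struggle at scale and what the paper's framework addresses.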
Anomaly Detection in Images With Smooth Background via Smooth-Sparse Decomposition
2017
In various manufacturing applications, such as steel, composites, and textile production, anomaly detection in noisy images is of special importance. Although there are several methods for image denoising and anomaly detection, most perform denoising and detection sequentially, which affects detection accuracy and efficiency. Additionally, the low computational speed of some of these methods is a limitation for real-time inspection. In this article, we develop a novel methodology for anomaly detection in noisy images with smooth backgrounds. The proposed method, named smooth-sparse decomposition, exploits regularized high-dimensional regression to decompose an image and separate anomalous regions by solving a large-scale optimization problem. To enable real-time implementation of the proposed method, a fast algorithm for solving the optimization model is proposed. Using simulations and a case study, we evaluate the performance of the proposed method and compare it with existing methods. Numerical results demonstrate the superiority of the proposed method in terms of detection accuracy as well as computation time. This article has supplementary materials that include all the technical details, proofs, MATLAB codes, and simulated images used in the article.
Journal Article
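To make the smooth-plus-sparse idea concrete, here is a minimal 1-D sketch (the paper works on images, and this is not the authors' algorithm): alternate a smoothness-penalized update for the background with a soft-thresholding update for the sparse anomaly part. The penalty weights lam and gam are illustrative assumptions.

```python
import numpy as np

def smooth_sparse_decompose(y, lam=50.0, gam=1.0, iters=20):
    """Alternating minimization of ||y - s - a||^2 + lam*||D s||^2 + gam*||a||_1."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)           # second-difference operator
    A = np.eye(n) + lam * D.T @ D               # normal equations for the s-update
    s, a = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        s = np.linalg.solve(A, y - a)           # smooth background: ridge update
        r = y - s
        a = np.sign(r) * np.maximum(np.abs(r) - gam / 2, 0.0)  # soft threshold
    return s, a

# Illustrative signal: smooth background, two spiky "anomalies", noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.05, size=200)
y[60], y[150] = y[60] + 1.5, y[150] - 1.2
s, a = smooth_sparse_decompose(y)
print("detected anomaly indices:", np.nonzero(a)[0])
```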
Penalized likelihood regression for generalized linear models with non-quadratic penalties
by Gijbels, Irène; Nikolova, Mila; Antoniadis, Anestis
in Acquired immune deficiency syndrome, Acquired immunodeficiency syndrome, AIDS
2011
One popular method for fitting a regression function is regularization: minimizing an objective function that enforces a roughness penalty in addition to coherence with the data. This is the case when formulating penalized likelihood regression for exponential families. Most smoothing methods employ quadratic penalties, leading to linear estimates, and are in general incapable of recovering discontinuities or other important attributes in the regression function. In contrast, non-linear estimates are generally more accurate. In this paper, we focus on non-parametric penalized likelihood regression methods using splines and a variety of non-quadratic penalties, pointing out common basic principles. We present an asymptotic analysis of convergence rates that justifies the approach. We report on a simulation study including comparisons between our method and some existing ones. We illustrate our approach with an application to Poisson non-parametric regression modeling of frequency counts of reported acquired immune deficiency syndrome (AIDS) cases in the UK.
Journal Article
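As a toy illustration of penalized likelihood with a non-quadratic penalty, the sketch below runs proximal gradient (ISTA) on an l1-penalized Poisson negative log-likelihood. It is a generic stand-in, not the spline-based estimator of the paper; the step size and penalty weight are assumptions.

```python
import numpy as np

def l1_poisson_regression(X, y, lam=0.1, step=1e-3, iters=2000):
    """ISTA for min_beta  sum(exp(X@beta) - y*(X@beta)) + lam*||beta||_1."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        grad = X.T @ (np.exp(eta) - y)          # Poisson NLL gradient
        z = beta - step * grad                  # gradient step
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # l1 prox
    return beta

# Illustrative sparse ground truth (assumption).
rng = np.random.default_rng(2)
X = rng.normal(scale=0.3, size=(300, 10))
beta_true = np.r_[1.0, -1.0, np.zeros(8)]
y = rng.poisson(np.exp(X @ beta_true))
print(np.round(l1_poisson_regression(X, y), 2))
```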
A STOCHASTIC MOVING BALLS APPROXIMATION METHOD OVER A SMOOTH INEQUALITY CONSTRAINT
by
Zhang, Leiwu
2020
We consider the problem of minimizing the average of a large number of smooth component functions over one smooth inequality constraint. We propose and analyze a stochastic Moving Balls Approximation (SMBA) method. Like stochastic gradient (SG) methods, the SMBA method's iteration cost is independent of the number of component functions, and by exploiting the smoothness of the constraint function it can be easily implemented. Theoretical and computational properties of SMBA are studied, and convergence results are established. Numerical experiments indicate that our algorithm dramatically outperforms the existing Moving Balls Approximation (MBA) algorithm on problems with this structure.
Journal Article
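A minimal sketch of one plausible SMBA-style iteration, under a simple reading of the abstract: take a stochastic gradient step on one sampled component, then project onto the "ball" given by a quadratic upper bound of the smooth constraint. The toy problem, Lipschitz constant, and step sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def smba_step(x, grad_fi, g, grad_g, Lg, step):
    """One stochastic Moving-Balls-style step (illustrative sketch):
    SG step on a sampled component, then Euclidean projection onto the ball
    { z : g(x) + grad_g(x)^T (z - x) + (Lg/2)||z - x||^2 <= 0 }."""
    gx, dg = g(x), grad_g(x)
    c = x - dg / Lg                            # ball center
    r2 = dg @ dg / Lg**2 - 2.0 * gx / Lg       # squared radius (>= 0 if x feasible)
    z = x - step * grad_fi                     # plain stochastic gradient step
    dist2 = (z - c) @ (z - c)
    if dist2 > r2:                             # project back onto the ball
        z = c + np.sqrt(max(r2, 0.0) / dist2) * (z - c)
    return z

# Toy problem: min (1/N) sum_i ||x - a_i||^2  s.t.  ||x||^2 - 1 <= 0.
rng = np.random.default_rng(3)
A = rng.normal(size=(100, 5)) + 2.0            # pulls the optimum outside the ball
x = np.zeros(5)
for k in range(500):
    a = A[rng.integers(len(A))]                # sample one component
    x = smba_step(x, 2.0 * (x - a),
                  g=lambda v: v @ v - 1.0, grad_g=lambda v: 2.0 * v,
                  Lg=2.0, step=1.0 / (k + 10))
print("||x|| =", np.linalg.norm(x))            # stays (near) feasible: about 1
```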
Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression
by Lu, Zhaosong; Yuan, Ming; Monteiro, Renato D. C.
in Applied sciences, Calculus of Variations and Optimal Control; Optimization, Combinatorics
2012
In this paper, we study convex optimization methods for computing the nuclear (or trace) norm regularized least squares estimate in multivariate linear regression. The so-called factor estimation and selection method, recently proposed by Yuan et al. (J Royal Stat Soc Ser B (Statistical Methodology) 69(3):329–346, 2007), conducts parameter estimation and factor selection simultaneously and has been shown to enjoy nice properties in both large and finite samples. Computing the estimates, however, can be very challenging in practice because of the high dimensionality and the nuclear norm constraint. In this paper, we explore a variant, due to Tseng, of Nesterov's smooth method, as well as interior point methods, for computing the penalized least squares estimate. The performance of these methods is then compared using a set of randomly generated instances. We show that the variant of Nesterov's smooth method substantially outperforms the interior point method implemented in SDPT3 version 4.0 (beta) (Toh et al., On the implementation and usage of SDPT3: a MATLAB software package for semidefinite-quadratic-linear programming, version 4.0. Manuscript, Department of Mathematics, National University of Singapore, 2006). Moreover, the former method is much more memory efficient.
Journal Article
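A common modern alternative to both methods benchmarked in the paper is proximal gradient with singular value thresholding, which is a convenient way to see what "nuclear norm regularized least squares" computes. The sketch below is illustrative (step size, data, and lam are assumptions), not the SDPT3 or Nesterov-variant code compared in the article.

```python
import numpy as np

def svt(B, tau):
    """Prox of tau*||.||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_ls(X, Y, lam=1.0, iters=300):
    """Proximal gradient for min_B 0.5*||Y - X B||_F^2 + lam*||B||_*."""
    B = np.zeros((X.shape[1], Y.shape[1]))
    L = np.linalg.norm(X, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = X.T @ (X @ B - Y)
        B = svt(B - grad / L, lam / L)
    return B

# Illustrative low-rank coefficient matrix (assumption).
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 20))
B_true = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))   # rank 2
Y = X @ B_true + 0.1 * rng.normal(size=(100, 15))
B_hat = nuclear_norm_ls(X, Y, lam=5.0)
print("estimated rank:", np.linalg.matrix_rank(B_hat, tol=1e-3))
```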
Two smooth support vector machines for ε-insensitive regression
2018
In this paper, we propose two new smooth support vector machines for ε-insensitive regression. From these two smooth support vector machines, we construct two systems of smooth equations based on two novel families of smoothing functions, from which we seek the solution to ε-support vector regression (ε-SVR). More specifically, using the proposed smoothing functions, we employ the smoothing Newton method to solve the systems of smooth equations. The algorithm is shown to be globally and quadratically convergent without any additional conditions. Numerical comparisons among different parameter values are also reported.
Journal Article
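The paper's two families of smoothing functions are not reproduced here, but the idea they build on can be shown with the classical smoothing of the plus function max(0, t): apply a smooth surrogate to the two one-sided pieces of the ε-insensitive loss. The surrogate and the smoothing parameter alpha below are generic assumptions.

```python
import numpy as np

def smooth_plus(t, alpha=10.0):
    """Smooth surrogate for max(0, t): log(1 + exp(alpha*t)) / alpha."""
    return np.logaddexp(0.0, alpha * t) / alpha

def eps_insensitive(r, eps=0.1):
    """Non-smooth eps-insensitive loss: zero inside the tube, linear outside."""
    return np.maximum(np.abs(r) - eps, 0.0)

def smoothed_eps_insensitive(r, eps=0.1, alpha=10.0):
    """Smooth everywhere: at most one one-sided term is materially nonzero."""
    return smooth_plus(r - eps, alpha) + smooth_plus(-r - eps, alpha)

r = np.linspace(-0.5, 0.5, 5)
print(eps_insensitive(r))
print(smoothed_eps_insensitive(r))   # close to the above, but differentiable
```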
An Efficient SMO Algorithm for Solving Non-smooth Problem Arising in ε-Insensitive Support Vector Regression
2019
Classical support vector regression (C-SVR) is a powerful function approximation method that is robust against noise and generalizes well, since it is formulated with a regularized error function employing the ε-insensitiveness property. To exploit the kernel trick, C-SVR generally solves the Lagrangian dual problem. In this paper, we propose an efficient sequential minimal optimization (SMO) algorithm for the convex non-smooth dual optimization problem obtained by reformulating the dual problem of C-SVR with the l2 error loss function, which is equivalent to the ε-insensitive version of LSSVR. The algorithm uses a novel, easy-to-compute working set selection (WSS) based on minimizing an upper bound on the difference between consecutive loss function values. The asymptotic convergence of the proposed SMO algorithm to the optimum is also proved. The proposed SMO algorithm for the non-smooth problem subsumes the SMO algorithms for solving both LSSVR and C-SVR; indeed, it becomes equivalent to the SMO algorithm with second-order WSS for solving LSSVR when ε = 0. The proposed algorithm has the advantage of working with half as many optimization variables as C-SVR, which results in fewer kernel matrix evaluations than the standard SMO algorithm developed for C-SVR and increases the probability that the required matrix outputs have already been precomputed and cached. Therefore, the proposed SMO algorithm achieves better training time than the standard SMO algorithm for C-SVR, especially with caching. Moreover, the superiority of the proposed WSS over its first-order counterpart for solving the non-smooth optimization problem is presented.
Journal Article
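The reformulation rests on the relation between the ε-insensitive loss of C-SVR and its squared (l2) version, which reduces to the plain squared loss of LSSVR at ε = 0, as the abstract notes. A few lines make that concrete; the values are illustrative only.

```python
import numpy as np

def eps_insensitive(r, eps):
    """C-SVR loss: zero inside the eps-tube, linear outside."""
    return np.maximum(np.abs(r) - eps, 0.0)

def squared_eps_insensitive(r, eps):
    """l2 (squared) version: zero inside the tube, quadratic outside."""
    return np.maximum(np.abs(r) - eps, 0.0) ** 2

r = np.linspace(-2, 2, 9)
print(eps_insensitive(r, eps=0.5))
print(squared_eps_insensitive(r, eps=0.5))
print(np.allclose(squared_eps_insensitive(r, 0.0), r ** 2))  # True: LSSVR loss
```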
Training primal twin support vector regression via unconstrained convex minimization
2016
In this paper, we propose a new unconstrained twin support vector regression model in the primal space (UPTSVR). With the addition of a regularization term in the formulation of the problem, the structural risk is minimized. The proposed formulation solves two smaller-sized unconstrained minimization problems with continuous, piecewise quadratic objective functions by gradient-based iterative methods. However, since their objective functions contain the non-smooth 'plus' function, two approaches are taken: (i) replace the non-smooth 'plus' function with a smooth approximation; (ii) apply a generalized derivative of the non-smooth 'plus' function. These lead to five algorithms whose pseudo-codes are also given. Experimental results obtained on a number of interesting synthetic and real-world benchmark datasets, comparing these algorithms with standard support vector regression (SVR) and twin SVR (TSVR), clearly demonstrate the effectiveness of the proposed method.
Journal Article
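The abstract's two treatments of the non-smooth 'plus' function, smoothing it versus using a generalized derivative, can be illustrated on a one-dimensional piecewise quadratic objective. The toy objective, smoothing parameter, and step size below are assumptions, not the paper's five algorithms.

```python
import numpy as np

ALPHA = 10.0                                  # smoothing parameter (assumption)

def smooth_plus(t):
    return np.logaddexp(0.0, ALPHA * t) / ALPHA       # approach (i): smoothing

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-ALPHA * t))           # derivative of smooth_plus

def dplus(t):
    return 1.0 if t > 0 else 0.0                      # approach (ii): generalized derivative

# Toy piecewise-quadratic objective: f(w) = max(0, 1 - w)^2 + 0.5*w^2,
# minimized at w* = 2/3.
w1 = w2 = 0.0
for _ in range(200):
    g1 = -2.0 * smooth_plus(1.0 - w1) * sigmoid(1.0 - w1) + w1   # smoothed gradient
    w1 -= 0.1 * g1
    g2 = -2.0 * max(0.0, 1.0 - w2) * dplus(1.0 - w2) + w2        # generalized gradient
    w2 -= 0.1 * g2
print(round(w1, 3), round(w2, 3))             # both near 2/3
```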