Asset Details
Adaptive Learning Rates for Support Vector Machines Working on Data with Low Intrinsic Dimension
by Steinwart, Ingo; Hamm, Thomas
Subjects: Adaptive learning / Classification / Learning / Minimax technique / Normal distribution / Regression / Regression analysis / Support vector machines
2021
Journal Article
Overview
We derive improved regression and classification rates for support vector machines using Gaussian kernels under the assumption that the data have some low-dimensional intrinsic structure described by the box-counting dimension. Under some standard regularity assumptions for regression and classification, we prove learning rates in which the dimension of the ambient space is replaced by the box-counting dimension of the support of the data-generating distribution. In the regression case, our rates are in some cases minimax optimal up to logarithmic factors, whereas in the classification case our rates are minimax optimal up to logarithmic factors in a certain range of our assumptions and otherwise of the form of the best known rates. Furthermore, we show that a training-validation approach for choosing the hyperparameters of an SVM in a data-dependent way achieves the same rates adaptively, that is, without any knowledge of the data-generating distribution.
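The training-validation approach the abstract describes can be sketched as follows: split the sample, fit a Gaussian-kernel predictor on the training part for each candidate hyperparameter pair, and keep the pair with the smallest validation error. This is a minimal illustrative sketch only, not the paper's method: the toy dataset (points on a curve in R^3, i.e. intrinsic dimension 1) is invented for illustration, and kernel regularized least squares is used as a simple stand-in for the SVM's hinge-loss minimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data with low intrinsic dimension: points on a
# one-dimensional curve embedded in R^3, labeled by the sign of the
# (slightly noisy) curve parameter.
t = rng.uniform(-1, 1, size=200)
X = np.stack([np.cos(np.pi * t), np.sin(np.pi * t), t], axis=1)
y = np.sign(t + 0.05 * rng.normal(size=t.shape))

n_train = 140
X_tr, y_tr = X[:n_train], y[:n_train]
X_va, y_va = X[n_train:], y[n_train:]

def gaussian_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix: exp(-gamma * ||a - b||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def validation_error(gamma, lam):
    """Fit kernel regularized least squares on the training split
    (a stand-in for the SVM) and return the validation error rate."""
    K = gaussian_kernel(X_tr, X_tr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(n_train), y_tr)
    pred = np.sign(gaussian_kernel(X_va, X_tr, gamma) @ alpha)
    return float(np.mean(pred != y_va))

# Training-validation grid search over the two hyperparameters
# (kernel width gamma and regularization strength lambda).
grid = [(g, l) for g in (0.1, 1.0, 10.0) for l in (1e-3, 1e-1, 1.0)]
errs = {hp: validation_error(*hp) for hp in grid}
best = min(errs, key=errs.get)
print("best (gamma, lambda):", best, "validation error:", errs[best])
```

The adaptivity claim in the abstract is that this data-dependent selection rule attains the same learning rates as an oracle choice of hyperparameters, without knowing the intrinsic dimension of the data.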
Publisher
Institute of Mathematical Statistics