Abstract
We have recently proposed a new approach to control the number of basis functions and the accuracy in Support Vector Machines. The latter is transferred to a linear programming setting, which inherently enforces sparseness of the solution. The algorithm computes a nonlinear estimate in terms of kernel functions and an ε with the property that at most a fraction ν of the training set has an error exceeding ε. The algorithm is robust to local perturbations of these points' target values. We give an explicit formulation of the optimization equations needed to solve the linear program and point out which modifications of the standard optimization setting are necessary to take advantage of the particular structure of the equations in the regression case.
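The idea described above — an L1-regularized kernel regression cast as a linear program, where a parameter ν bounds the fraction of training points whose error may exceed an automatically computed tube width ε — can be sketched as follows. This is a minimal illustration under my own assumptions, not the paper's implementation: the solver (`scipy.optimize.linprog`), the RBF kernel, the constant C, and all variable names are my choices.

```python
# Sketch of a nu-LP regression: minimize the L1 norm of the kernel
# expansion coefficients plus C*(nu*eps + mean slack), subject to
# |f(x_i) - y_i| <= eps + slack_i. All of this is a hypothetical
# reconstruction of the kind of LP the abstract refers to.
import numpy as np
from scipy.optimize import linprog


def nu_lp_regression(X, y, nu=0.2, C=10.0, gamma=1.0):
    """Solve  min (1/l)*sum(a + a*) + C*(nu*eps + (1/l)*sum(xi + xi*))
    s.t.      f(x_i) - y_i <= eps + xi_i
              y_i - f(x_i) <= eps + xi*_i,   all variables >= 0,
    where f(x) = sum_j (a_j - a*_j) k(x_j, x) + b."""
    l = len(y)
    # RBF kernel matrix on the training inputs (1-D inputs here)
    K = np.exp(-gamma * np.square(X[:, None] - X[None, :]))
    # Variable layout: [a (l), a* (l), b+, b-, xi (l), xi* (l), eps]
    n = 4 * l + 3
    c = np.zeros(n)
    c[: 2 * l] = 1.0 / l                  # L1 regularizer on a, a*
    c[2 * l + 2: 4 * l + 2] = C / l       # slack penalties
    c[-1] = C * nu                        # nu * eps term
    A = np.zeros((2 * l, n))
    b_ub = np.concatenate([y, -y])
    idx = np.arange(l)
    # f(x_i) - y_i <= eps + xi_i
    A[:l, :l] = K
    A[:l, l: 2 * l] = -K
    A[:l, 2 * l] = 1.0                    # b+ contribution
    A[:l, 2 * l + 1] = -1.0               # b- contribution
    A[idx, 2 * l + 2 + idx] = -1.0        # -xi_i
    A[:l, -1] = -1.0                      # -eps
    # y_i - f(x_i) <= eps + xi*_i
    A[l:, :l] = -K
    A[l:, l: 2 * l] = K
    A[l:, 2 * l] = -1.0
    A[l:, 2 * l + 1] = 1.0
    A[l + idx, 3 * l + 2 + idx] = -1.0    # -xi*_i
    A[l:, -1] = -1.0
    res = linprog(c, A_ub=A, b_ub=b_ub,
                  bounds=[(0, None)] * n, method="highs")
    z = res.x
    coef = z[:l] - z[l: 2 * l]            # expansion coefficients
    b = z[2 * l] - z[2 * l + 1]
    eps = z[-1]
    return K @ coef + b, eps


# Toy usage: at most a fraction nu of the points should end up with
# an error exceeding the computed eps.
X = np.linspace(0.0, 3.0, 20)
y = np.sin(X)
preds, eps = nu_lp_regression(X, y, nu=0.2)
errors = np.abs(preds - y)
```

The ν-property falls out of the objective: a larger ν makes the `C * nu * eps` term more expensive, shrinking the tube and allowing (at most) a fraction ν of points to lie outside it, while the L1 term on the coefficients yields the sparseness the abstract mentions.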
