Paper abstract

Support Vector Machines, Data Reduction, and Approximate Kernel Matrices

XuanLong Nguyen - SAMSI & Duke University, USA
Ling Huang - Intel Research, USA
Anthony D. Joseph - Intel Research & UC Berkeley, USA

Session: Kernel Methods
Springer Link: http://dx.doi.org/10.1007/978-3-540-87481-2_10

The computational and/or communication constraints associated with processing large-scale data sets using support vector machines (SVM) in contexts such as distributed networking systems are often prohibitively high. As a result, practitioners of SVM learning algorithms frequently have to apply the algorithm to approximate versions of the kernel matrix, induced by a certain degree of data reduction. In this paper, we study the tradeoffs between data reduction and the loss in an algorithm's classification performance. We introduce and analyze a consistent estimator of the SVM's achieved classification error, and then derive approximate upper bounds on the perturbation of our estimator. The bound is shown to be empirically tight in a wide range of domains, making it practical for the practitioner to determine the amount of data reduction given a permissible loss in classification performance.
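To make the tradeoff concrete, below is a minimal sketch of the kind of experiment the abstract describes: training an SVM on an exact kernel matrix versus progressively coarser approximations, and measuring the resulting gap in classification error. This is not the paper's estimator or its perturbation bound; it simply uses a Nystroem low-rank approximation as an illustrative stand-in for data reduction, and all dataset sizes, the gamma value, and the component counts are assumptions for demonstration only.

```python
# A minimal sketch (not the paper's method): empirically measuring how
# kernel-matrix approximation affects SVM classification error, using a
# Nystroem low-rank approximation as a stand-in for data reduction.
# All sizes and parameters below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.kernel_approximation import Nystroem
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gamma = 1.0 / X.shape[1]

# Baseline: SVM trained on the exact RBF kernel matrix.
K_tr = rbf_kernel(X_tr, X_tr, gamma=gamma)
K_te = rbf_kernel(X_te, X_tr, gamma=gamma)
exact_err = 1.0 - SVC(kernel="precomputed").fit(K_tr, y_tr).score(K_te, y_te)

# Approximate kernels at increasing levels of data reduction
# (fewer Nystroem components = coarser approximation of the kernel).
for n_components in (20, 50, 100, 400):
    feat = Nystroem(kernel="rbf", gamma=gamma,
                    n_components=n_components, random_state=0)
    Z_tr = feat.fit_transform(X_tr)
    Z_te = feat.transform(X_te)
    clf = LinearSVC(max_iter=10000).fit(Z_tr, y_tr)
    approx_err = 1.0 - clf.score(Z_te, y_te)
    print(f"components={n_components:4d}  "
          f"error gap vs. exact kernel: {approx_err - exact_err:+.4f}")
```

Plotting the error gap against the reduction level is the empirical counterpart of the tradeoff curve the paper bounds analytically: a practitioner can pick the largest reduction whose measured (or bounded) loss stays within the permissible budget.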