Parallel Support Vector Machine Training on a Budget
Final Report Abstract
We have designed and implemented an extremely fast GPU-ready approximate SVM training algorithm. To the best of our knowledge, it is among the fastest solvers available today, while still delivering high-precision solutions. With the development and publication of the software, all essential project goals were achieved. The algorithm consists of two stages: reduction of the non-linear problem to a linear training problem, followed by linear SVM training with the SMO approach. For the first stage, our algorithm meets all project goals. For the second stage, we did not manage to utilize the GPU to the extent we had hoped. Still, the mere fact that the problem is reduced to a kernel-free problem of manageable dimension yields a tremendous speed-up over existing CPU-based solvers, including our own prior work. Overall, we therefore consider the project a success. Our implementation is published under an open source license.
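The abstract does not spell out the concrete reduction or solver; as a generic illustration of the two-stage pattern (kernel problem reduced to an explicit, kernel-free linear problem, then solved by a linear SVM method), the following sketch uses a Nyström feature map and the Pegasos subgradient solver. Both are assumptions standing in for the project's actual components, which use an SMO-based linear solver.

```python
import numpy as np

def nystroem_features(X, landmarks, gamma):
    """Stage 1 (illustrative): map data to explicit features so the
    kernel SVM becomes a kernel-free linear problem."""
    # RBF kernel blocks between data/landmarks and among landmarks.
    K_nm = np.exp(-gamma * ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1))
    K_mm = np.exp(-gamma * ((landmarks[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1))
    # Features = K_nm @ K_mm^{-1/2}, via eigendecomposition of K_mm.
    w, V = np.linalg.eigh(K_mm)
    w = np.maximum(w, 1e-12)  # guard against tiny negative eigenvalues
    return K_nm @ V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def pegasos(Phi, y, lam=1e-3, epochs=20, seed=0):
    """Stage 2 (illustrative): linear SVM training on the explicit
    features; a subgradient stand-in for the report's SMO solver."""
    rng = np.random.default_rng(seed)
    n, d = Phi.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            w *= 1.0 - eta * lam
            if y[i] * (Phi[i] @ w) < 1.0:  # hinge-loss margin violation
                w += eta * y[i] * Phi[i]
    return w

# Toy demo: two well-separated Gaussian classes in 2D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 1.0, (100, 2)), rng.normal(-2.0, 1.0, (100, 2))])
y = np.concatenate([np.ones(100), -np.ones(100)])
landmarks = X[rng.choice(len(X), 20, replace=False)]
Phi = nystroem_features(X, landmarks, gamma=0.5)
w = pegasos(Phi, y)
acc = np.mean(np.sign(Phi @ w) == y)
print(f"training accuracy: {acc:.3f}")
```

The key property exploited by the two-stage design is that after the feature map, training touches only a dense n-by-m matrix of manageable dimension, with no kernel evaluations in the inner solver loop.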
Publications
- Tobias Glasmachers. Recipe for Fast Large-scale SVM Training: Polishing, Parallelism, and more RAM! In Proceedings of the 34th Benelux Conference on Artificial Intelligence and the 30th Belgian-Dutch Conference on Machine Learning (BNAIC/BeneLearn), 2022.
