Statistical learning theory: a tutorial
Article first published online: 10 JUN 2011
Copyright © 2011 John Wiley & Sons, Inc.
Wiley Interdisciplinary Reviews: Computational Statistics
Volume 3, Issue 6, pages 543–556, November/December 2011
How to Cite
Kulkarni, S. R. and Harman, G. (2011), Statistical learning theory: a tutorial. WIREs Comp Stat, 3: 543–556. doi: 10.1002/wics.179
- Issue published online: 5 OCT 2011
Keywords
- statistical learning
- pattern recognition
- supervised learning
- kernel methods
In this article, we provide a tutorial overview of some aspects of statistical learning theory, which also goes by other names such as statistical pattern recognition, nonparametric classification and estimation, and supervised learning. We focus on the problem of two-class pattern classification for several reasons. This problem is rich enough to capture many of the interesting aspects that arise with more than two classes and in the problem of estimation, and many of the results extend to these cases. Focusing on two-class pattern classification simplifies our discussion, yet it is directly applicable to a wide range of practical settings. We begin with a description of the two-class pattern recognition problem. We then discuss various classical and state-of-the-art approaches to this problem, with a focus on fundamental formulations, algorithms, and theoretical results. In particular, we describe nearest neighbor methods, kernel methods, multilayer perceptrons, Vapnik–Chervonenkis theory, support vector machines, and boosting.
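To make the two-class setting concrete, the following is a minimal illustrative sketch (not from the article) of the first method listed above, a k-nearest-neighbor classifier: a query point is assigned the majority label among its k closest training points. The data, function name, and choice of Euclidean distance are assumptions for illustration only.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance. `train` is a list
    of ((x1, x2), label) pairs with labels in {0, 1}."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy two-class data: class 0 clusters near the origin, class 1 near (5, 5).
train = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
         ((5, 5), 1), ((6, 5), 1), ((5, 6), 1)]

print(knn_classify(train, (0.5, 0.5)))  # near the class-0 cluster -> 0
print(knn_classify(train, (5.5, 5.5)))  # near the class-1 cluster -> 1
```

As the article discusses, the behavior of such rules as the number of training samples grows is one of the central questions of statistical learning theory.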
For further resources related to this article, please visit the WIREs website.