TY - BOOK
AU - Shalev-Shwartz, Shai
AU - Ben-David, Shai
TI - Understanding machine learning: from theory to algorithms
SN - 9781107057135
U1 - 006.31 23
PY - 2014///
CY - New York
PB - Cambridge University Press
KW - Machine learning
KW - Algorithms
KW - Computer algorithms
N1 - Includes bibliographical references (pages 385-393) and index; 1. Introduction -- Part 1. Foundations -- 2. A gentle start -- 3. A formal learning model -- 4. Learning via uniform convergence -- 5. The bias-complexity tradeoff -- 6. The VC-dimension -- 7. Non-uniform learnability -- 8. The runtime of learning -- Part 2. From Theory to Algorithms -- 9. Linear predictors -- 10. Boosting -- 11. Model selection and validation -- 12. Convex learning problems -- 13. Regularization and stability -- 14. Stochastic gradient descent -- 15. Support vector machines -- 16. Kernel methods -- 17. Multiclass, ranking, and complex prediction problems -- 18. Decision trees -- 19. Nearest neighbor -- 20. Neural networks -- Part 3. Additional Learning Models -- 21. Online learning -- 22. Clustering -- 23. Dimensionality reduction -- 24. Generative models -- 25. Feature selection and generation -- Part 4. Advanced Theory -- 26. Rademacher complexities -- 27. Covering numbers -- 28. Proof of the fundamental theorem of learning theory -- 29. Multiclass learnability -- 30. Compression bounds -- 31. PAC-Bayes -- Appendix A. Technical lemmas -- Appendix B. Measure concentration -- Appendix C. Linear algebra
N2 - Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering.
ER -