| Tag | Ind1 | Ind2 | Subfields |
|---|---|---|---|
| 000 | | | 03355nam a2200337 i 4500 |
| 003 | | | MIUC |
| 005 | | | 20200219142835.0 |
| 008 | | | 170908s2009 nyua 001 \| eng |
| 020 | | | _a9780387848570 |
| 040 | | | _aMIUC _beng _cMIUC |
| 082 | 0 | | _a519.5 |
| 100 | 1 | | _93158 _aHastie, Trevor |
| 245 | 1 | 4 | _aThe elements of statistical learning : _bdata mining, inference, and prediction / _cTrevor Hastie, Robert Tibshirani, Jerome Friedman. |
| 250 | | | _a2nd ed. |
| 260 | | | _aNew York : _bSpringer, _c2009. |
| 300 | | | _a745 p. : _bill. b&w and col. ; _c25 cm. |
| 336 | | | _2rdacontent _atext |
| 490 | 1 | | _aSpringer series in statistics, _x0172-7397 ; _v692 |
| 504 | | | _aIncludes bibliographical references (p. [699]-727) and indexes. |
| 505 | 0 | | _aCh. 1. Introduction -- Ch. 2. Overview of supervised learning -- Ch. 3. Linear methods for regression -- Ch. 4. Linear methods for classification -- Ch. 5. Basis expansions and regularization -- Ch. 6. Kernel smoothing methods -- Ch. 7. Model assessment and selection -- Ch. 8. Model inference and averaging -- Ch. 9. Additive models, trees, and related methods -- Ch. 10. Boosting and additive trees -- Ch. 11. Neural networks -- Ch. 12. Support vector machines and flexible discriminants -- Ch. 13. Prototype methods and nearest-neighbors -- Ch. 14. Unsupervised learning -- Ch. 15. Random forests -- Ch. 16. Ensemble learning -- Ch. 17. Undirected graphical models -- Ch. 18. High-dimensional problems: p >> N. |
| 520 | | | _aDuring the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting--the first comprehensive treatment of this topic in any book. This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression and path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than N), including multiple testing and false discovery rates. |
| 650 | | 0 | _9555 _aStatistics |
| 650 | | 0 | _93159 _aMathematical statistics |
| 650 | | 0 | _9794 _aData mining |
| 650 | | 0 | _93160 _aBioinformatics |
| 650 | | 0 | _93161 _aComputational intelligence |
| 700 | 1 | | _4aut _93162 _aTibshirani, Robert |
| 700 | 1 | | _4aut _93163 _aFriedman, J. H. _q(Jerome H.) |
| 830 | | 0 | _93164 _aSpringer series in statistics _v692 |
| 942 | | | _2ddc _cBK |