Ensemble methods : foundations and algorithms, Zhi-Hua Zhou

Label
Ensemble methods : foundations and algorithms
Title
Ensemble methods
Title remainder
foundations and algorithms
Statement of responsibility
Zhi-Hua Zhou
Language
eng
Summary
"This comprehensive book presents an in-depth and systematic introduction to ensemble methods for researchers in machine learning, data mining, and related areas. It helps readers solve modem problems in machine learning using these methods. The author covers the spectrum of research in ensemble methods, including such famous methods as boosting, bagging, and rainforest, along with current directions and methods not sufficiently addressed in other books. Chapters explore cutting-edge topics, such as semi-supervised ensembles, cluster ensembles, and comprehensibility, as well as successful applications"--
Member of
Chapman & Hall/CRC machine learning & pattern recognition series
Assigning source
Provided by publisher
Cataloging source
DLC
Creator name
Zhou, Zhi-Hua
Dewey number
006.3/1
Index
index present
LC call number
QA278.4
LC item number
.Z47 2012
Literary form
non fiction
Nature of contents
bibliography
Series statement
Chapman & Hall/CRC machine learning & pattern recognition series
Subject name
  • Multiple comparisons (Statistics)
  • Set theory
  • Mathematical analysis
  • BUSINESS & ECONOMICS / Statistics
  • COMPUTERS / Database Management / Data Mining
  • COMPUTERS / Machine Theory
Label
Ensemble methods : foundations and algorithms, Zhi-Hua Zhou
Instantiates
Publication
Bibliography note
Includes bibliographical references and index
Carrier category
volume
Carrier category code
nc
Carrier MARC source
rdacarrier
Content category
text
Content type code
txt
Content type MARC source
rdacontent
Contents
1. Introduction -- 1.1. Basic Concepts -- 1.2. Popular Learning Algorithms -- 1.2.1. Linear Discriminant Analysis -- 1.2.2. Decision Trees -- 1.2.3. Neural Networks -- 1.2.4. Naive Bayes Classifier -- 1.2.5. k-Nearest Neighbor -- 1.2.6. Support Vector Machines and Kernel Methods -- 1.3. Evaluation and Comparison -- 1.4. Ensemble Methods -- 1.5. Applications of Ensemble Methods -- 1.6. Further Readings -- 2. Boosting -- 2.1. A General Boosting Procedure -- 2.2. The AdaBoost Algorithm -- 2.3. Illustrative Examples -- 2.4. Theoretical Issues -- 2.4.1. Initial Analysis -- 2.4.2. Margin Explanation -- 2.4.3. Statistical View -- 2.5. Multiclass Extension -- 2.6. Noise Tolerance -- 2.7. Further Readings -- 3. Bagging -- 3.1. Two Ensemble Paradigms -- 3.2. The Bagging Algorithm -- 3.3. Illustrative Examples -- 3.4. Theoretical Issues -- 3.5. Random Tree Ensembles -- 3.5.1. Random Forest -- 3.5.2. Spectrum of Randomization -- 3.5.3. Random Tree Ensembles for Density Estimation -- 3.5.4. Random Tree Ensembles for Anomaly Detection -- 3.6. Further Readings -- 4. Combination Methods -- 4.1. Benefits of Combination -- 4.2. Averaging -- 4.2.1. Simple Averaging -- 4.2.2. Weighted Averaging -- 4.3. Voting -- 4.3.1. Majority Voting -- 4.3.2. Plurality Voting -- 4.3.3. Weighted Voting -- 4.3.4. Soft Voting -- 4.3.5. Theoretical Issues -- 4.4. Combining by Learning -- 4.4.1. Stacking -- 4.4.2. Infinite Ensemble -- 4.5. Other Combination Methods -- 4.5.1. Algebraic Methods -- 4.5.2. Behavior Knowledge Space Method -- 4.5.3. Decision Template Method -- 4.6. Relevant Methods -- 4.6.1. Error-Correcting Output Codes -- 4.6.2. Dynamic Classifier Selection -- 4.6.3. Mixture of Experts -- 4.7. Further Readings -- 5. Diversity -- 5.1. Ensemble Diversity -- 5.2. Error Decomposition -- 5.2.1. Error-Ambiguity Decomposition -- 5.2.2. Bias-Variance-Covariance Decomposition -- 5.3. Diversity Measures -- 5.3.1. Pairwise Measures -- 5.3.2. Non-Pairwise Measures -- 5.3.3. Summary and Visualization -- 5.3.4. Limitation of Diversity Measures -- 5.4. Information Theoretic Diversity -- 5.4.1. Information Theory and Ensemble -- 5.4.2. Interaction Information Diversity -- 5.4.3. Multi-Information Diversity -- 5.4.4. Estimation Method -- 5.5. Diversity Generation -- 5.6. Further Readings -- 6. Ensemble Pruning -- 6.1. What Is Ensemble Pruning -- 6.2. Many Could Be Better Than All -- 6.3. Categorization of Pruning Methods -- 6.4. Ordering-Based Pruning -- 6.5. Clustering-Based Pruning -- 6.6. Optimization-Based Pruning -- 6.6.1. Heuristic Optimization Pruning -- 6.6.2. Mathematical Programming Pruning -- 6.6.3. Probabilistic Pruning -- 6.7. Further Readings -- 7. Clustering Ensembles -- 7.1. Clustering -- 7.1.1. Clustering Methods -- 7.1.2. Clustering Evaluation -- 7.1.3. Why Clustering Ensembles -- 7.2. Categorization of Clustering Ensemble Methods -- 7.3. Similarity-Based Methods -- 7.4. Graph-Based Methods -- 7.5. Relabeling-Based Methods -- 7.6. Transformation-Based Methods -- 7.7. Further Readings -- 8. Advanced Topics -- 8.1. Semi-Supervised Learning -- 8.1.1. Usefulness of Unlabeled Data -- 8.1.2. Semi-Supervised Learning with Ensembles -- 8.2. Active Learning -- 8.2.1. Usefulness of Human Intervention -- 8.2.2. Active Learning with Ensembles -- 8.3. Cost-Sensitive Learning -- 8.3.1. Learning with Unequal Costs -- 8.3.2. Ensemble Methods for Cost-Sensitive Learning -- 8.4. Class-Imbalance Learning -- 8.4.1. Learning with Class Imbalance -- 8.4.2. Performance Evaluation with Class Imbalance -- 8.4.3. Ensemble Methods for Class-Imbalance Learning -- 8.5. Improving Comprehensibility -- 8.5.1. Reduction of Ensemble to Single Model -- 8.5.2. Rule Extraction from Ensembles -- 8.5.3. Visualization of Ensembles -- 8.6. Future Directions of Ensembles -- 8.7. Further Readings
Control code
449889669
Dimensions
24 cm
Extent
xiv, 222 pages
Isbn
9781439830031
Lccn
2012014555
Media category
unmediated
Media MARC source
rdamedia
Media type code
n
Other physical details
illustrations

Library Locations

    • Ellis Library
      1020 Lowry Street, Columbia, MO, 65201, US
      38.944491 -92.326012