Most ebook files are in PDF format, so you can easily read them using software such as Foxit Reader or directly in the Google Chrome browser.
Some ebook files are released by publishers in other formats such as .azw, .mobi, .epub, and .fb2. You may need to install specific software, such as Calibre, to read these formats on mobile or PC.
Please read the tutorial at this link: https://ebooknice.com/page/post?id=faq
We offer FREE conversion to the popular formats you request; however, this may take some time. Therefore, right after payment, please email us and we will provide the converted file as quickly as possible.
For exceptional file formats or broken links (if any), please do not open a dispute. Email us first, and we will assist within a maximum of 6 hours.
EbookNice Team
Status: Available
Rating: 4.4 (21 reviews)
ISBN 13: 9781040099056
Authors: Taskin Kavzoglu, Brandt Tso, Paul M. Mather
The third edition of the bestselling Classification Methods for Remotely Sensed Data covers current state-of-the-art machine learning algorithms and developments in the analysis of remotely sensed data. Thoroughly updated to meet the needs of today's readers, the book provides six new chapters on deep learning, feature extraction and selection, multisource image fusion, hyperparameter optimization, accuracy assessment with model explainability, and object-based image analysis, a relatively new paradigm in image processing and classification. It presents new AI-based analysis tools and metrics together with ongoing debates on accuracy assessment strategies and XAI methods.

New in this edition:
- Provides comprehensive background on the theory of deep learning and its application to remote sensing data.
- Includes a chapter on hyperparameter optimization techniques to guarantee the highest performance in classification applications.
- Outlines the latest accuracy assessment strategies and summarizes the accuracy metrics used to evaluate thematic maps.
- Discusses the methods used to explain the inherent structure of ML and AI algorithms and to weigh their input features, which are critical for establishing the robustness of the models.

This book is intended for industry professionals, researchers, academics, and graduate students who want a thorough and up-to-date guide to the many and varied techniques of image classification applied in geography, geospatial and earth sciences, electronic and computer science, environmental engineering, and related fields.
Chapter 1 Fundamentals of Remote Sensing
1.1 Introduction to Remote Sensing
1.1.1 Atmospheric Interactions
1.1.2 Reflectance Properties of Surface Materials
1.1.3 Spatial, Spectral, and Radiometric Resolution
1.1.4 Scale Issues in Remote Sensing
1.2 Optical Remote Sensing Systems
1.3 Atmospheric Correction
1.3.1 Dark Object Subtraction
1.3.2 Modeling Techniques
1.3.2.1 Modeling the Atmospheric Effect
1.3.2.2 Steps in Atmospheric Correction
1.4 Correction for Topographic Effects
1.5 Remote Sensing in the Microwave Region
1.6 Radar Fundamentals
1.6.1 SLAR Image Resolution
1.6.2 Geometric Effects on Radar Images
1.6.3 Factors Affecting Radar Backscatter
1.6.3.1 Surface Roughness
1.6.3.2 Surface Conductivity
1.6.3.3 Parameters of the Radar Equation
1.7 Imaging Radar Polarimetry
1.7.1 Radar Polarization State
1.7.2 Polarization Synthesis
1.7.3 Polarization Signatures
1.8 Radar Speckle Suppression
1.8.1 Multilook Processing
1.8.2 Filters for Speckle Suppression
References
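Dark object subtraction (Section 1.3.1) is simple enough to show directly. The sketch below is not taken from the book; it assumes a NumPy band stack of shape (bands, rows, cols) and uses a low per-band percentile as the dark-object estimate.

```python
import numpy as np

def dark_object_subtraction(image, percentile=0.1):
    """First-order haze removal: subtract each band's dark-object value.

    image: float array of shape (bands, rows, cols) with DN or radiance values.
    percentile: low percentile used as the per-band dark-object estimate.
    """
    corrected = np.empty_like(image, dtype=np.float64)
    for b in range(image.shape[0]):
        dark_value = np.percentile(image[b], percentile)   # near-minimum value assumed to be haze
        corrected[b] = np.clip(image[b] - dark_value, 0, None)  # avoid negative values
    return corrected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.uniform(50, 255, size=(4, 100, 100))       # synthetic 4-band stack
    corrected = dark_object_subtraction(image)
    print(corrected.min(axis=(1, 2)))                      # per-band minima now near zero
```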
Chapter 2 Pattern Recognition Principles
2.1 A Terminological Introduction
2.2 Taxonomy of Classification Techniques
2.3 Fundamental Pattern Recognition Techniques
2.3.1 Unsupervised Methods
2.3.1.1 The k-Means Algorithm
2.3.1.2 Fuzzy C-Means Clustering
2.3.2 Supervised Methods
2.3.2.1 Parallelepiped Method
2.3.2.2 Minimum Distance Classifier
2.3.2.3 Maximum Likelihood Classifier
2.3.2.4 Fuzzy Maximum Likelihood Classifier
2.4 Spectral Unmixing
2.5 Ensemble Classifiers
2.6 Incorporation of Ancillary Information
2.6.1 Use of Texture and Context
2.6.2 Using Ancillary Multisource Data
2.7 Epilogue
References
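As a taste of the unsupervised methods in Section 2.3.1, here is a minimal k-means sketch; the use of scikit-learn and the (bands, rows, cols) layout are assumptions for illustration, not the book's own code.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_classify(image, n_classes=5, random_state=0):
    """Cluster pixel spectra of a band stack into spectral classes.

    image: array of shape (bands, rows, cols).
    Returns a (rows, cols) array of cluster labels.
    """
    bands, rows, cols = image.shape
    pixels = image.reshape(bands, -1).T                    # (rows*cols, bands) feature matrix
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=random_state).fit_predict(pixels)
    return labels.reshape(rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.normal(size=(6, 64, 64))                   # synthetic 6-band image
    class_map = kmeans_classify(image, n_classes=4)
    print(np.unique(class_map, return_counts=True))
```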
Chapter 3 Dimensionality Reduction: Feature Extraction and Selection
3.1 Feature Extraction
3.1.1 Principal Component Analysis
3.1.2 Minimum/Maximum Autocorrelation Factors
3.1.3 Maximum Noise Fraction (MNF) Transformation
3.1.4 Independent Component Analysis
3.1.5 Projection Pursuit
3.2 Feature Selection
3.2.1 Greedy Search Methods
3.2.2 Simulated Annealing
3.2.3 Separability Indices
3.2.4 Filter-Based Methods
3.2.4.1 Correlation-Based Feature Selection
3.2.4.2 Information Gain
3.2.4.3 Gini Impurity Index
3.2.4.4 Minimum Redundancy-Maximum Relevance
3.2.4.5 Chi-Square Test
3.2.4.6 Relief-F
3.2.4.7 Symmetric Uncertainty
3.2.4.8 Fisher’s Test
3.2.4.9 OneR
3.2.5 Wrappers
3.2.5.1 Genetic Algorithm
3.2.5.2 Particle Swarm Optimization
3.2.5.3 Feature Selection with SVMs
3.2.6 Embedded Methods
3.2.6.1 K-Nearest Neighbor-Based Feature Selection
3.2.6.2 Feature Selection with Ensemble Learners
3.2.6.3 Hilbert-Schmidt Independence Criterion with Lasso
3.3 Concluding Remarks
References
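For the feature-extraction side of Chapter 3, a minimal principal component analysis sketch (Section 3.1.1) might look like the following; scikit-learn and the synthetic 50-band cube are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_features(image, n_components=3):
    """Project a (bands, rows, cols) image onto its first principal components."""
    bands, rows, cols = image.shape
    pixels = image.reshape(bands, -1).T                    # (pixels, bands)
    pca = PCA(n_components=n_components)
    components = pca.fit_transform(pixels)                 # (pixels, n_components)
    print("explained variance ratio:", pca.explained_variance_ratio_)
    return components.T.reshape(n_components, rows, cols)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.normal(size=(50, 32, 32))                   # synthetic 50-band cube
    pcs = pca_features(cube, n_components=3)
    print(pcs.shape)                                       # (3, 32, 32)
```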
Chapter 4 Multisource Image Fusion and Classification
4.1 Image Fusion
4.1.1 Image Fusion Methods
4.1.1.1 PCA-Based Image Fusion
4.1.1.2 IHS-Based Image Fusion
4.1.1.3 Brovey Transform
4.1.1.4 Gram-Schmidt Transform
4.1.1.5 Wavelet Transform
4.1.1.6 Deep Learning for Image Fusion
4.1.2 Assessment of Fused Image Quality
4.1.3 Performance Evaluation of Fusion Methods
4.2 Multisource Classification Using the Stacked-Vector Method
4.3 The Extension of Bayesian Classification Theory
4.3.1 An Overview
4.3.1.1 Feature Extraction
4.3.1.2 Probability or Evidence Generation
4.3.1.3 Multisource Consensus
4.3.2 Bayesian Multisource Classification Mechanism
4.3.3 A Refined Multisource Bayesian Model
4.3.4 Multisource Classification Using the MRF
4.3.5 Assumption of Inter-Source Independence
4.4 Evidential Reasoning
4.4.1 Concept Development
4.4.2 Belief Function and Belief Interval
4.4.3 Evidence Combination
4.4.4 Decision Rules for Evidential Reasoning
4.5 Dealing with Source Reliability
4.5.1 Using Classification Accuracy
4.5.2 Use of Class Separability
4.5.3 Data Information Class Correspondence Matrix
4.6 Concluding Remarks and Future Trends
References
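The Brovey transform of Section 4.1.1.3 has a compact closed form: each multispectral band is scaled by the ratio of the panchromatic band to the band sum. The sketch below is illustrative only and assumes the multispectral bands have already been co-registered and resampled to the panchromatic grid.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey-transform pansharpening.

    ms:  multispectral image, shape (bands, rows, cols), resampled to the pan grid.
    pan: panchromatic band, shape (rows, cols).
    """
    band_sum = ms.sum(axis=0) + eps        # small epsilon avoids division by zero
    return ms * (pan / band_sum)           # intensity ratio broadcast over bands

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms = rng.uniform(0, 1, size=(3, 128, 128))
    pan = rng.uniform(0, 1, size=(128, 128))
    fused = brovey_fusion(ms, pan)
    print(fused.shape)                     # (3, 128, 128)
```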
Chapter 5 Support Vector Machines
5.1 Linear Classification
5.1.1 The Separable Case
5.1.2 The Nonseparable Case
5.2 Nonlinear Classification and Kernel Functions
5.2.1 Nonlinear SVMs
5.2.2 Kernel Functions
5.3 Parameter Determination
5.3.1 t-Fold Cross-Validations
5.3.2 Bound on Leave-One-Out Error
5.3.3 Grid Search
5.3.4 Gradient Descent Method
5.4 Multiclass Classification
5.4.1 One-against-One, One-against-Others, and DAG
5.4.2 Multiclass SVMs
5.4.2.1 Vapnik’s Approach
5.4.2.2 Methodology of Crammer and Singer
5.5 Relevance Vector Machines
5.6 Twin Support Vector Machines
5.7 Deep Support Vector Machines
5.8 Concluding Remarks
References
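A minimal RBF-kernel SVM sketch, with C and gamma tuned by the grid search of Section 5.3.3, could look like this; scikit-learn, the parameter ranges, and the synthetic spectra are assumptions rather than the authors' setup.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for labelled pixel spectra: X is (samples, bands), y holds class codes.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))
y = rng.integers(0, 3, size=600)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM; C and gamma chosen by cross-validated grid search.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [1, 10, 100], "svc__gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(model, param_grid, cv=5)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```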
Chapter 6 Decision Trees
6.1 ID3, C4.5, and SEE5.0 Decision Trees
6.1.1 ID3
6.1.2 C4.5
6.1.3 SEE5.0 (C5.0)
6.2 CHAID
6.3 CART
6.4 QUEST
6.4.1 Split Point Selection
6.4.2 Attribute Selection
6.5 Tree Induction from Artificial Neural Networks
6.6 Pruning Decision Trees
6.6.1 Reduced Error Pruning
6.6.2 Pessimistic Error Pruning
6.6.3 Error-Based Pruning
6.6.4 Cost Complexity Pruning
6.6.5 Minimal Error Pruning
6.7 Ensemble Methods
6.7.1 Boosting
6.7.2 Random Forest
6.7.3 Rotation Forest
6.7.4 Canonical Correlation Forest
6.7.5 Extreme Gradient Boosting
6.7.6 Light Gradient Boosting Machines
6.7.7 Gradient Boosting Machines
6.7.8 Categorical Boosting
6.7.9 Natural Gradient Boosting
6.8 Concluding Remarks
References
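Of the ensemble methods in Section 6.7, random forest (Section 6.7.2) is the quickest to sketch; the scikit-learn implementation and the synthetic labels below are illustrative assumptions, and the importances echo the embedded feature selection of Section 3.2.6.2.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic labelled spectra; in practice these would be training pixels from reference data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)        # artificial two-class rule
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, forest.predict(X_test)))
print("feature importances:", np.round(forest.feature_importances_, 3))
```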
Chapter 7 Deep Learning
7.1 Fundamentals
7.1.1 Stochastic Gradient Descent
7.1.2 Backpropagation
7.1.3 Regularization
7.1.3.1 Weight Decay
7.1.3.2 Dropout
7.1.3.3 Data Augmentation
7.1.3.4 Early Stopping
7.1.4 Activation Functions
7.1.5 Loss Functions
7.2 Neural Network Architectures
7.2.1 Multilayer Perceptron
7.2.2 Convolutional Neural Networks
7.2.2.1 Convolutional Layers
7.2.2.2 Pooling Layers
7.2.2.3 Fully Connected Layers
7.2.2.4 Receptive Field and Feature Map
7.2.2.5 Training CNNs
7.2.2.6 Data Structures in CNNs
7.2.2.7 Evolving Trends in CNN Design
7.2.3 Recurrent Neural Networks
7.2.3.1 Long Short-Term Memory
7.2.3.2 Gated Recurrent Unit
7.2.4 Vision Transformers
7.2.5 Deep Multilayer Perceptron
7.2.6 Generative Adversarial Networks
7.2.7 Deep Autoencoders
7.2.7.1 Undercomplete Autoencoders
7.2.7.2 Regularized Autoencoders
7.2.7.3 Sparse Autoencoders
7.2.7.4 Denoising Autoencoders
7.2.7.5 Variational Autoencoders
7.3 Learning Paradigms
7.3.1 Transfer Learning
7.3.2 Semi-Supervised Learning
7.3.3 Reinforcement Learning
7.3.4 Active Learning
7.3.5 Multitask Learning
7.4 Application of DL in Remote Sensing
7.4.1 Semantic Segmentation
7.4.2 Object Detection
7.4.3 Scene Classification
7.4.4 Change Detection
7.5 Concluding Remarks
References
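To make the CNN building blocks of Section 7.2.2 concrete, here is a toy patch classifier with a single training step; PyTorch, the patch size, and the band and class counts are assumptions for illustration, not the book's code.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Toy CNN for classifying small multispectral patches into land-cover classes."""

    def __init__(self, n_bands=4, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1),  # convolutional layer (7.2.2.1)
            nn.ReLU(),                                         # activation function (7.1.4)
            nn.MaxPool2d(2),                                   # pooling layer (7.2.2.2)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)             # fully connected layer (7.2.2.3)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PatchCNN()
criterion = nn.CrossEntropyLoss()                              # loss function (7.1.5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            weight_decay=1e-4)                 # SGD (7.1.1) with weight decay (7.1.3.1)

patches = torch.randn(8, 4, 16, 16)                            # batch of 8 patches, 4 bands, 16x16 pixels
labels = torch.randint(0, 5, (8,))
loss = criterion(model(patches), labels)
optimizer.zero_grad()
loss.backward()                                                # backpropagation (7.1.2)
optimizer.step()
print("loss:", loss.item())
```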
Chapter 8 Object-Based Image Analysis
8.1 Clustering-Based Segmentation
8.1.1 Mean-Shift Algorithm
8.1.2 Superpixel Segmentation
8.2 Thresholding-Based Segmentation
8.3 Edge-Based Segmentation
8.4 Watershed Segmentation
8.5 Region-Based Segmentation
8.5.1 Region Splitting and Merging
8.5.2 Region Growing
8.5.3 Multiresolution Segmentation
8.6 Hybrid Segmentation
8.7 Evaluation of Segmentation Quality
8.7.1 Supervised Approach
8.7.2 Unsupervised Approach
8.7.2.1 Estimation of the Scale Parameter
8.7.2.2 Global Score
8.7.2.3 Overall Goodness F-Measure
8.8 Concluding Remarks
References
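Superpixel segmentation (Section 8.1.2) can be sketched with the SLIC implementation in scikit-image; the library choice and the random three-band composite below are assumptions standing in for a real image subset.

```python
import numpy as np
from skimage.segmentation import slic

# Synthetic 3-band composite standing in for a real image subset.
rng = np.random.default_rng(0)
image = rng.uniform(0, 1, size=(128, 128, 3))

# SLIC superpixels: roughly 200 compact, spectrally homogeneous segments.
segments = slic(image, n_segments=200, compactness=10, start_label=1)
print("number of segments:", segments.max())

# Per-segment mean spectra could then feed an object-based classifier (Chapter 8 workflow).
mean_spectra = np.array([image[segments == s].mean(axis=0)
                         for s in range(1, segments.max() + 1)])
print("object feature matrix:", mean_spectra.shape)
```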
Chapter 9 Hyperparameter Optimization
9.1 What Is Hyperparameter Optimization?
9.2 Hyperparameter Optimization Techniques
9.2.1 Model-Free Algorithms
9.2.1.1 Trial-and-Error (Manual Testing)
9.2.1.2 Grid Search
9.2.1.3 Random Search
9.2.2 Gradient-Based Optimization
9.2.3 Bayesian Optimization
9.2.4 Multifidelity Optimization
9.2.4.1 Successive Halving
9.2.4.2 Hyperband
9.2.5 Metaheuristic Algorithms
9.2.5.1 Genetic Algorithm
9.2.5.2 Particle Swarm Optimization
9.3 Challenges in Hyperparameter Optimization
9.4 Concluding Remarks
References
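Random search (Section 9.2.1.3) is easy to sketch with scikit-learn's RandomizedSearchCV; the random forest estimator, the search ranges, and the synthetic samples below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Synthetic training pixels; replace with real samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.integers(0, 3, size=500)

# Random search: sample hyperparameter combinations instead of an exhaustive grid.
param_distributions = {
    "n_estimators": [100, 200, 400],
    "max_depth": [None, 5, 10, 20],
    "max_features": loguniform(0.1, 1.0),      # fraction of features per split
}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_distributions, n_iter=20, cv=5, random_state=0)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print("best cross-validated accuracy:", round(search.best_score_, 3))
```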
Chapter 10 Accuracy Assessment and Model Explainability
10.1 Accuracy Assessment
10.1.1 Sampling Scheme and Spatial Autocorrelation
10.1.2 Sample Size, Scale, and Spatial Variability
10.1.3 Adequacy of Training and Testing Data
10.1.4 Conventional Accuracy Analysis
10.1.5 Accuracy Analysis for Machine Learning
10.1.6 Fuzzy Accuracy Assessment
10.1.7 Object-Based Accuracy Assessment
10.2 Comparison of Thematic Maps
10.2.1 McNemar’s Test
10.2.2 z-Test
10.2.3 Wilcoxon Signed-Ranks Test
10.2.4 5 × 2-Cross-Validation t-Test
10.2.5 Friedman Test
10.3 Explainability Methods
10.3.1 SHapley Additive exPlanations
10.3.2 Partial Dependence Plot
10.3.3 Pairwise Interaction Importance
10.3.4 Permutation-Based Feature Importance
10.3.5 Local Interpretable Model-Agnostic Explanations (LIME)
10.4 A Case Study for Accuracy Assessment and XAI
10.5 Conclusions and Guidelines for Best Practice
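The conventional accuracy measures of Section 10.1.4 and McNemar's test of Section 10.2.1 can be sketched in a few lines; the synthetic reference labels and the two simulated classifier outputs below are stand-ins, not results from the book.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

# Reference labels and the outputs of two classifiers on the same test pixels (synthetic here).
rng = np.random.default_rng(0)
reference = rng.integers(0, 3, size=500)
pred_a = np.where(rng.random(500) < 0.8, reference, rng.integers(0, 3, size=500))
pred_b = np.where(rng.random(500) < 0.7, reference, rng.integers(0, 3, size=500))

# Conventional accuracy analysis: confusion matrix, overall accuracy, kappa.
print(confusion_matrix(reference, pred_a))
print("overall accuracy:", accuracy_score(reference, pred_a))
print("kappa:", cohen_kappa_score(reference, pred_a))

# McNemar's test on the discordant pixels of the two thematic maps.
a_correct = pred_a == reference
b_correct = pred_b == reference
f12 = np.sum(a_correct & ~b_correct)                     # A right, B wrong
f21 = np.sum(~a_correct & b_correct)                     # A wrong, B right
statistic = (abs(f12 - f21) - 1) ** 2 / (f12 + f21)      # continuity-corrected statistic
p_value = chi2.sf(statistic, df=1)
print("McNemar chi-square:", round(statistic, 3), "p-value:", round(p_value, 4))
```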