Machine learning methods (list)
- Instance-based algorithm
  - K-nearest neighbors algorithm (KNN)
  - Learning vector quantization (LVQ)
  - Self-organizing map (SOM)
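To make the instance-based idea concrete, here is a minimal pure-Python sketch of k-nearest neighbors; the data, function name, and choice of k = 3 are illustrative assumptions, not taken from any particular library:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (vector, label) pairs; distances are Euclidean.
    """
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy 2-D data: class "a" near the origin, class "b" near (5, 5).
train = [((0, 0), "a"), ((1, 0), "a"), ((0, 1), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_predict(train, (0.5, 0.5)))  # → a
```

Note there is no training phase at all: the "model" is the stored data, which is what makes KNN instance-based (and lazy).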
- Regression analysis
- Regularization algorithm
- Classifiers
- Dimensionality reduction
  - Canonical correlation analysis (CCA)
  - Factor analysis
  - Feature extraction
  - Feature selection
  - Independent component analysis (ICA)
  - Linear discriminant analysis (LDA)
  - Multidimensional scaling (MDS)
  - Non-negative matrix factorization (NMF)
  - Partial least squares regression (PLSR)
  - Principal component analysis (PCA)
  - Principal component regression (PCR)
  - Projection pursuit
  - Sammon mapping
  - t-distributed stochastic neighbor embedding (t-SNE)
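As one dimensionality-reduction example, the leading PCA direction can be found without any linear-algebra library via power iteration on the covariance matrix. This is an illustrative pure-Python sketch (function name and data are assumptions), not how production PCA is implemented:

```python
import math
import random

def first_principal_component(data, iters=200):
    """Leading PCA direction via power iteration on the covariance matrix.

    `data` is a list of equal-length feature vectors.
    """
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Covariance matrix (d x d) of the centered data.
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [random.random() for _ in range(d)]
    for _ in range(iters):
        # Repeatedly apply cov and renormalize; v converges to the
        # eigenvector with the largest eigenvalue (max-variance direction).
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

random.seed(0)
# Points scattered along y = x: the first PC should be close to (0.707, 0.707).
data = [(t, t + random.gauss(0, 0.1)) for t in range(10)]
pc = first_principal_component(data)
print(pc)
```

Real implementations use a full eigendecomposition or SVD, which also yields the remaining components at once.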
- Ensemble learning
  - AdaBoost
  - Boosting
  - Bootstrap aggregating (bagging)
  - Ensemble averaging – the process of creating multiple models and combining them to produce a desired output, as opposed to creating just one model; an ensemble frequently performs better than any individual model because the models' errors "average out"
  - Gradient boosted decision tree (GBDT)
  - Gradient boosting machine (GBM)
  - Random forest
  - Stacked generalization (blending)
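The "average out" idea behind ensemble averaging and bagging can be sketched in a few lines: fit one model per bootstrap resample of the data, then average their predictions. This pure-Python example uses least-squares lines as the base model purely for illustration (all names and data here are assumptions):

```python
import random

def fit_line(pts):
    """Ordinary least squares fit y = a*x + b to a list of (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    var = sum((x - mx) ** 2 for x, _ in pts)
    a = sum((x - mx) * (y - my) for x, y in pts) / var
    return a, my - a * mx

def bagged_line_predict(pts, x, n_models=20, seed=0):
    """Ensemble averaging via bagging: fit one line per bootstrap resample
    of the data, then average the individual predictions."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        sample = [rng.choice(pts) for _ in pts]  # bootstrap resample
        a, b = fit_line(sample)
        preds.append(a * x + b)
    return sum(preds) / n_models

random.seed(1)
pts = [(x, 2 * x + random.gauss(0, 0.5)) for x in range(10)]
print(bagged_line_predict(pts, 5.0))  # close to the true value 2*5 = 10
```

The same resample-then-combine recipe with decision trees as the base model (plus random feature selection) is what random forests do.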
- Reinforcement learning
  - Q-learning
  - State–action–reward–state–action (SARSA)
  - Temporal difference learning (TD)
  - Learning automata
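Tabular Q-learning fits in a short function. The sketch below runs it on a made-up corridor environment (the environment, hyperparameters, and names are all illustrative assumptions):

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a small corridor: move left/right on states
    0..n_states-1; reaching the rightmost state pays reward 1 and ends
    the episode. Returns the learned Q-table."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[s][0]=left, q[s][1]=right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap off the best action in s2.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
policy = ["left" if qs[0] > qs[1] else "right" for qs in q[:-1]]
print(policy)  # the greedy policy should head right toward the goal
```

SARSA differs only in the update target: it bootstraps off the action actually taken in s2 rather than the best one (on-policy vs. off-policy TD learning).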
- Supervised learning
  - Averaged one-dependence estimators (AODE)
  - Artificial neural network
  - Association rule learning algorithms
  - Case-based reasoning
  - Gaussian process regression
  - Gene expression programming
  - Group method of data handling (GMDH)
  - Inductive logic programming
  - Instance-based learning
  - Lazy learning
  - Learning automata
  - Learning vector quantization
  - Logistic model tree
  - Minimum message length (decision trees, decision graphs, etc.)
  - Probably approximately correct (PAC) learning
  - Ripple down rules, a knowledge acquisition methodology
  - Symbolic machine learning algorithms
  - Support vector machines
  - Random forests
  - Ensembles of classifiers
  - Ordinal classification
  - Information fuzzy networks (IFN)
  - Conditional random field
  - ANOVA
  - Quadratic classifiers
  - k-nearest neighbor
  - Boosting
- Bayesian methods
  - Bayesian networks
  - Hidden Markov models
  - Bayesian knowledge base
  - Naive Bayes
  - Gaussian naive Bayes
  - Multinomial naive Bayes
  - Averaged one-dependence estimators (AODE)
  - Bayesian belief network (BBN)
  - Bayesian network (BN)
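Gaussian naive Bayes is small enough to write out in full: per class, store a prior plus a per-feature mean and variance, then classify by maximum posterior. A pure-Python sketch (data and function names are illustrative assumptions):

```python
import math
from collections import defaultdict

def gnb_fit(data):
    """Fit Gaussian naive Bayes: per class, a prior plus per-feature
    mean/variance (features treated as conditionally independent)."""
    by_class = defaultdict(list)
    for x, y in data:
        by_class[y].append(x)
    model = {}
    for y, rows in by_class.items():
        n, d = len(rows), len(rows[0])
        means = [sum(r[j] for r in rows) / n for j in range(d)]
        vars_ = [sum((r[j] - means[j]) ** 2 for r in rows) / n + 1e-9
                 for j in range(d)]  # tiny smoothing to avoid zero variance
        model[y] = (n / len(data), means, vars_)
    return model

def gnb_predict(model, x):
    """Pick the class maximizing log prior + sum of per-feature log densities."""
    def score(y):
        prior, means, vars_ = model[y]
        s = math.log(prior)
        for xi, m, v in zip(x, means, vars_):
            s += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        return s
    return max(model, key=score)

data = [((1.0, 2.0), "a"), ((1.2, 1.9), "a"), ((0.9, 2.1), "a"),
        ((4.0, 0.5), "b"), ((4.2, 0.4), "b"), ((3.9, 0.6), "b")]
model = gnb_fit(data)
print(gnb_predict(model, (1.1, 2.0)))  # → a
```

The multinomial variant replaces the Gaussian density with count-based likelihoods, which suits text data; the "naive" independence assumption is the same.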
- Decision tree algorithms
  - Decision tree
  - Classification and regression tree (CART)
  - Iterative Dichotomiser 3 (ID3)
  - C4.5 algorithm
  - C5.0 algorithm
  - Chi-squared automatic interaction detection (CHAID)
  - Decision stump
  - Conditional decision tree
  - Random forest
  - SLIQ
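The core step shared by CART-style tree learners is choosing the split that most reduces impurity; a decision stump is that step run exactly once. A pure-Python sketch for one numeric feature, using Gini impurity (the data and names are illustrative assumptions):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def fit_stump(xs, ys):
    """One-level CART-style tree: pick the threshold on a 1-D feature that
    minimizes the weighted Gini impurity of the two resulting leaves."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if best is None or score < best[0]:
            majority = lambda ls: max(set(ls), key=ls.count)
            best = (score, t, majority(left), majority(right))
    return best[1:]  # (threshold, left_label, right_label)

xs = [1, 2, 3, 6, 7, 8]
ys = ["lo", "lo", "lo", "hi", "hi", "hi"]
t, left_label, right_label = fit_stump(xs, ys)
print(t, left_label, right_label)  # → 3 lo hi
```

A full tree applies this split search recursively to each leaf; ID3/C4.5 use entropy-based gain instead of Gini, and boosting algorithms such as AdaBoost often use stumps like this as their weak learners.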
- Linear classifier
  - Fisher's linear discriminant
  - Linear regression
  - Logistic regression
  - Multinomial logistic regression
  - Naive Bayes classifier
  - Perceptron
  - Support vector machine
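The perceptron is the simplest linear classifier to implement: nudge the weights on every misclassified point until a separating hyperplane is found. A pure-Python sketch on made-up separable data (names and data are illustrative assumptions):

```python
def perceptron_train(data, max_epochs=300):
    """Rosenblatt perceptron: adjust (w, b) on each misclassified point
    until no mistakes remain; labels are -1/+1. Guaranteed to terminate
    on linearly separable data."""
    d = len(data[0][0])
    w, b = [0.0] * d, 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in data:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:
            break
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Linearly separable toy data: +1 above the line x1 + x2 = 5, -1 below.
data = [((0, 0), -1), ((1, 1), -1), ((2, 1), -1),
        ((4, 4), 1), ((5, 3), 1), ((3, 5), 1)]
w, b = perceptron_train(data)
print([perceptron_predict(w, b, x) for x, _ in data])
```

A support vector machine learns the same kind of linear boundary but picks the one with maximum margin (and handles non-separable data via slack variables and kernels).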
- Unsupervised learning
  - Expectation-maximization algorithm
  - Vector quantization
  - Generative topographic map
  - Information bottleneck method
- Cluster analysis
  - BIRCH
  - DBSCAN
  - Expectation-maximization (EM)
  - Fuzzy clustering
  - Hierarchical clustering
  - K-means clustering
  - K-medians
  - Mean shift
  - OPTICS algorithm
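K-means (Lloyd's algorithm) alternates two steps: assign each point to its nearest centroid, then move each centroid to the mean of its cluster. A pure-Python sketch with a deliberately naive initialization (data and names are illustrative assumptions):

```python
import math

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: assign each point to its nearest centroid, then
    move each centroid to the mean of its cluster; repeat."""
    centroids = points[:k]  # naive init; k-means++ chooses spread-out seeds
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

points = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.5, 10), (10, 10.5)]
print(sorted(kmeans(points, 2)))
```

K-medians swaps the mean for the per-coordinate median in the update step, and EM for Gaussian mixtures replaces the hard assignment with soft responsibilities.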
- Semi-supervised learning
  - Active learning – a special case of semi-supervised learning in which the learning algorithm can interactively query the user (or some other information source) to obtain the desired outputs at new data points
  - Generative models
  - Low-density separation
  - Graph-based methods
  - Co-training
  - Transduction
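One simple way to see the semi-supervised idea in code is self-training: repeatedly give the most confidently predicted unlabeled point the predicted label and treat it as labeled from then on. This pure-Python sketch uses 1-NN as the base learner, with nearness as the confidence proxy (the scheme, data, and names are illustrative assumptions):

```python
import math

def self_train(labeled, unlabeled, rounds=10):
    """Self-training: repeatedly take the unlabeled point closest to any
    labeled point, give it that point's class, and treat it as labeled
    (1-NN as the base learner, distance as the confidence proxy)."""
    labeled = list(labeled)
    unlabeled = list(unlabeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        # Most confident pseudo-label = smallest distance to a labeled point.
        best = min(
            ((u, l) for u in unlabeled for l in labeled),
            key=lambda pair: math.dist(pair[0], pair[1][0]),
        )
        u, (_, lab) = best
        labeled.append((u, lab))
        unlabeled.remove(u)
    return labeled

labeled = [((0.0, 0.0), "a"), ((10.0, 10.0), "b")]
unlabeled = [(1.0, 1.0), (2.0, 2.0), (9.0, 9.0)]
result = self_train(labeled, unlabeled)
print(result)
```

Labels propagate outward from the seeds, so (2, 2) is labeled "a" only after (1, 1) is; co-training works similarly but uses two learners on different feature views that label points for each other, and active learning queries an oracle for true labels instead of trusting its own predictions.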
- Deep learning
  - Deep belief networks
  - Deep Boltzmann machines (DBM)
  - Deep convolutional neural networks
  - Deep recurrent neural networks
  - Hierarchical temporal memory
  - Stacked auto-encoders
- Other machine learning concepts and problems
  - Anomaly detection
  - Association rules
  - Bias-variance dilemma
  - Classification
  - Clustering
  - Data pre-processing
  - Empirical risk minimization
  - Feature engineering
  - Feature learning
  - Learning to rank
  - Occam learning
  - Online machine learning
  - PAC learning
  - Regression
  - Reinforcement learning
  - Semi-supervised learning
  - Statistical learning
  - Structured prediction
  - Unsupervised learning
  - VC theory