
machine-learning


This is a continuously updated repository that documents my personal journey of learning data science and machine learning related topics.

  • Goal: Introduce machine learning content in Jupyter Notebook format. The content aims to strike a good balance between mathematical notation, educational implementation from scratch (using Python's scientific stack, including numpy, numba, scipy, pandas, matplotlib, etc.) and open-source library usage (scikit-learn, pyspark, gensim, keras, pytorch, tensorflow).
  • Short Note: Within each section, documents are listed in reverse chronological order of their start date (the date when the first notebook in that folder was created; if a notebook has since been updated, the actual date appears at the top of that notebook). Each document is independent of the others unless specified otherwise.

Documentation Listings

time_series : 2018.07.09

Forecasting methods for time-series data.

  • Getting started with time series analysis with Exponential Smoothing (Holt-Winters). [nbviewer]
  • Framing a time series problem as supervised learning. [nbviewer]
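
As a rough taste of the Holt-Winters method covered in the first notebook, here is a minimal sketch using statsmodels (the synthetic monthly series and parameter choices are made up purely for illustration):

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# toy monthly series with a trend and a crude yearly seasonal bump (synthetic data)
index = pd.date_range("2015-01-01", periods=48, freq="MS")
values = [100 + 2 * i + 10 * ((i % 12) in (5, 6, 7)) for i in range(48)]
series = pd.Series(values, index=index)

# additive trend and seasonality with a 12-month seasonal period
model = ExponentialSmoothing(series, trend="add", seasonal="add", seasonal_periods=12).fit()
print(model.forecast(6))  # forecast the next 6 months
```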

projects : 2017.09.23

End-to-end projects, including data preprocessing and model building.

ab_tests : 2017.08.09

A/B testing, a.k.a. experimental design. Includes a quick review of the necessary statistical concepts, the methods and workflow/thought process for conducting the test, and caveats to look out for.

  • Frequentist A/B testing (includes a quick review of concepts such as p-value, confidence interval). [nbviewer]
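
A minimal sketch of the frequentist comparison of two conversion rates, assuming statsmodels is available (the visitor and conversion counts below are made up):

```python
from statsmodels.stats.proportion import proportions_ztest

# hypothetical conversion counts for control (A) and treatment (B)
conversions = [120, 145]   # number of conversions in each group
visitors = [2400, 2390]    # number of visitors in each group

# two-sided z-test for the difference in conversion rates
z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.3f}, p-value = {p_value:.4f}")
```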

model_selection : 2017.06.12

Methods for selecting, improving, evaluating models/algorithms.

  • K-fold cross validation, grid/random search from scratch. [nbviewer]
  • AUC (Area under the ROC curve and precision/recall curve) from scratch (includes the process of building a custom scikit-learn transformer). [nbviewer]
  • Evaluation metrics for imbalanced dataset. [nbviewer]
  • Detecting collinearity amongst features (Variance Inflation Factor for numeric features and Cramer's V statistic for categorical features); also introduces Linear Regression from a Maximum Likelihood perspective and the R-squared evaluation metric. [nbviewer]
  • Curated tips and tricks for technical and soft skills. [nbviewer]
  • Partial Dependence Plot (PDP), a model-agnostic approach for directional feature influence. [nbviewer]
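
To give a flavor of the from-scratch material, here is a minimal k-fold cross validation sketch in numpy (the helper name kfold_indices and the seed are arbitrary choices, not taken from the notebooks):

```python
import numpy as np

def kfold_indices(n_samples, n_splits=5, seed=1234):
    """Yield (train_idx, test_idx) pairs for plain k-fold cross validation."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    folds = np.array_split(indices, n_splits)
    for i in range(n_splits):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(n_splits) if j != i])
        yield train_idx, test_idx

# usage: average a model's score across folds
# scores = [model.fit(X[tr], y[tr]).score(X[te], y[te]) for tr, te in kfold_indices(len(X))]
```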

big_data : 2017.06.07

Exploring big data tools, such as Spark and H2O.ai. For those interested, there are also a PySpark RDD cheatsheet and a PySpark DataFrame cheatsheet that may come in handy.

  • Local Hadoop cluster installation on Mac. [markdown]
  • PySpark installation on Mac. [markdown]
  • Examples of manipulating data (crimes data) and building a RandomForest model with PySpark MLlib. [nbviewer]
  • PCA with PySpark MLlib. [nbviewer]
  • Tuning Spark Partitions. [nbviewer]
  • H2O API walkthrough (using GBM as an example). [nbviewer]
  • Spark MLlib Binary Classification (using GBM as an example). [raw zeppelin notebook][Zepl]
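
A minimal sketch of fitting a random forest with PySpark's DataFrame-based pyspark.ml API (the tiny in-memory DataFrame and column names are made up; the notebooks work with real crimes data):

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("rf-example").getOrCreate()

# hypothetical DataFrame with two numeric feature columns and a binary label
df = spark.createDataFrame(
    [(0.0, 1.2, 0), (1.5, 0.3, 1), (0.7, 2.1, 0), (2.2, 0.9, 1)],
    ["x1", "x2", "label"],
)

assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
rf = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=20)
model = Pipeline(stages=[assembler, rf]).fit(df)
model.transform(df).select("label", "prediction").show()
```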

dim_reduct : 2017.01.02

Dimensionality reduction methods.

  • Principal Component Analysis (PCA) from scratch. [nbviewer]
  • Introduction to Singular Value Decomposition (SVD), also known as Latent Semantic Analysis/Indexing (LSA/LSI). [nbviewer]
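
A minimal PCA-from-scratch sketch via eigen-decomposition of the covariance matrix (the helper name pca and the random data are for illustration only):

```python
import numpy as np

def pca(X, n_components=2):
    """Project X onto its top principal components."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    eig_vals, eig_vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(eig_vals)[::-1][:n_components]   # pick the largest ones
    components = eig_vecs[:, order]
    return X_centered @ components

X = np.random.RandomState(0).rand(100, 5)
print(pca(X).shape)  # (100, 2)
```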

recsys : 2016.12.17

Recommendation systems with a focus on matrix factorization methods. Newcomers to the field should go through the first notebook to understand the basics of matrix factorization methods.

  • Alternating Least Squares with Weighted Regularization (ALS-WR) from scratch. [nbviewer]
  • ALS-WR for implicit feedback data from scratch & Mean Average Precision at k (mapk) and Normalized Discounted Cumulative Gain (ndcg) evaluation. [nbviewer]
  • Bayesian Personalized Ranking (BPR) from scratch & AUC evaluation. [nbviewer]
  • WARP (Weighted Approximate-Rank Pairwise) Loss using lightfm. [nbviewer]
  • Factorization Machine from scratch. [nbviewer]
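
As a simplified taste of the matrix factorization material, here is an alternating least squares sketch that treats every entry of a dense rating matrix as observed (no confidence weights or implicit-feedback handling, unlike the full ALS-WR notebooks):

```python
import numpy as np

def als(ratings, n_factors=8, n_iters=10, reg=0.1, seed=0):
    """Minimal alternating least squares on a dense user-item rating matrix."""
    rng = np.random.default_rng(seed)
    n_users, n_items = ratings.shape
    user_factors = rng.normal(scale=0.1, size=(n_users, n_factors))
    item_factors = rng.normal(scale=0.1, size=(n_items, n_factors))
    eye = reg * np.eye(n_factors)
    for _ in range(n_iters):
        # fix item factors and solve a regularized least squares for user factors, then vice versa
        user_factors = np.linalg.solve(item_factors.T @ item_factors + eye,
                                       item_factors.T @ ratings.T).T
        item_factors = np.linalg.solve(user_factors.T @ user_factors + eye,
                                       user_factors.T @ ratings).T
    return user_factors, item_factors

ratings = np.random.default_rng(1).integers(1, 6, size=(20, 15)).astype(float)
U, V = als(ratings)
print(np.abs(U @ V.T - ratings).mean())  # average reconstruction error
```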

trees : 2016.12.10

Tree-based models for both regression and classification tasks.

  • Decision Tree from scratch. [nbviewer]
  • Random Forest from scratch and Extra Trees. [nbviewer]
  • Gradient Boosting Machine (GBM) from scratch. [nbviewer]
  • Xgboost API walkthrough (includes hyperparameter tuning via scikit-learn like API). [nbviewer]
  • LightGBM API walkthrough and a discussion about categorical features in tree-based models. [nbviewer]
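
A minimal sketch of tuning xgboost through its scikit-learn compatible interface (the parameter grid and synthetic data are arbitrary choices for illustration):

```python
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# tune a couple of hyperparameters through the scikit-learn style API
param_grid = {"max_depth": [3, 5], "n_estimators": [100, 200]}
search = GridSearchCV(XGBClassifier(learning_rate=0.1), param_grid, cv=3)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```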

association_rule : 2016.09.16

Also known as market-basket analysis.

  • Apriori from scratch. [nbviewer]
  • Using R's arules package (apriori) on tabular data. [Rmarkdown]
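
A minimal sketch of the support-counting idea behind apriori (naively enumerating itemsets without apriori's candidate pruning; the toy transactions are made up):

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support=0.5, max_size=2):
    """Naive frequent itemset counting: keep itemsets whose support exceeds the threshold."""
    n = len(transactions)
    result = {}
    for size in range(1, max_size + 1):
        counts = Counter()
        for t in transactions:
            for combo in combinations(sorted(t), size):
                counts[combo] += 1
        result.update({k: v / n for k, v in counts.items() if v / n >= min_support})
    return result

transactions = [{"milk", "bread"}, {"milk", "diaper", "beer"}, {"milk", "bread", "beer"}]
print(frequent_itemsets(transactions))
```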

clustering : 2016.08.16

TF-IDF and Topic Modeling are techniques specifically used for text analytics.

  • TF-IDF (term frequency - inverse document frequency) from scratch. [nbviewer]
  • K-means, K-means++ from scratch; Elbow method for choosing K. [nbviewer]
  • Gaussian Mixture Model from scratch; AIC and BIC for choosing the number of Gaussians. [nbviewer]
  • Topic Modeling with gensim's Latent Dirichlet Allocation(LDA). [nbviewer]
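
A minimal k-means-from-scratch sketch (plain random initialization rather than k-means++; the toy blobs are for illustration only):

```python
import numpy as np

def kmeans(X, k, n_iters=50, seed=0):
    """Plain k-means: assign points to the nearest centroid, then recompute centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# three well-separated toy clusters in 2D
X = np.vstack([np.random.RandomState(0).normal(loc, 0.3, size=(50, 2)) for loc in (0, 3, 6)])
labels, centroids = kmeans(X, k=3)
print(centroids)
```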

data_science_is_software : 2016.08.01

Best practices for doing data science in Python.

deep_learning : 2016.07.23

Curated notes on deep learning.

  • Softmax Regression from scratch. [nbviewer]
  • Softmax Regression - Tensorflow hello world. [nbviewer]
  • Multi-layer Neural Network - Tensorflow. [nbviewer]
  • Convolutional Neural Network (CNN) - Tensorflow. [nbviewer]
  • Recurrent Neural Network (RNN).
    • Vanilla RNN - Tensorflow. [nbviewer]
    • Long Short Term Memory (LSTM) - Tensorflow. [nbviewer]
    • RNN, LSTM - PyTorch hello world. [nbviewer]
  • Word2vec (skipgram + negative sampling) using Gensim (includes text preprocessing with spaCy). [nbviewer]
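
A minimal softmax regression sketch trained with full-batch gradient descent in numpy, roughly the kind of from-scratch implementation the first notebook walks through (the function names and hyperparameters here are arbitrary):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum(axis=1, keepdims=True)

def train_softmax_regression(X, y, n_classes, lr=0.1, n_iters=200):
    """Softmax regression trained with full-batch gradient descent on cross-entropy loss."""
    n_samples, n_features = X.shape
    W = np.zeros((n_features, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]               # one-hot encode the integer labels
    for _ in range(n_iters):
        probs = softmax(X @ W + b)
        grad = probs - Y                   # gradient of cross-entropy w.r.t. the logits
        W -= lr * X.T @ grad / n_samples
        b -= lr * grad.mean(axis=0)
    return W, b
```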

keras : 2016.06.29

Note that this is mostly an API walkthrough, not a tutorial on the details of deep learning. For those interested, there's also a keras cheatsheet that may come in handy.

  • Multi-layer Neural Network (keras basics). [nbviewer]
  • Multi-layer Neural Network hyperparameter tuning via scikit-learn like API. [nbviewer]
  • Convolutional Neural Network (CNN) - image classification basics. [nbviewer]
  • Recurrent Neural Network (RNN) - language modeling basics. [nbviewer]
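
A minimal keras sketch of a small fully connected network, assuming the tensorflow.keras API (the layer sizes and the 784-dimensional input are arbitrary illustrative choices):

```python
from tensorflow import keras

# a small fully connected network for 10-class classification on 784-dimensional inputs
model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=5, batch_size=32, validation_split=0.1)
```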

text_classification : 2016.06.15

Naive Bayes and Logistic Regression for text classification.

  • Building intuition with spam classification using scikit-learn (scikit-learn hello world). [nbviewer]
  • Bernoulli and Multinomial Naive Bayes from scratch. [nbviewer]
  • Logistic Regression (stochastic gradient descent) from scratch. [nbviewer]
  • Chi-square feature selection. [nbviewer]
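
A minimal scikit-learn sketch of the bag-of-words plus Naive Bayes idea (the four-document corpus below is made up purely for illustration):

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# tiny made-up corpus; the notebooks use an actual spam dataset
texts = ["win a free prize now", "meeting rescheduled to friday",
         "free cash offer", "see you at lunch"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["free prize meeting"]))
```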

networkx : 2016.06.13

PyCon 2016: Practical Network Analysis Made Simple. Quickstart to networkx's API. Includes some basic graph plotting and algorithms.
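
A minimal networkx quickstart sketch (the toy graph is made up):

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("b", "d")])

print(nx.shortest_path(G, "a", "c"))   # e.g. ['a', 'b', 'c']
print(nx.degree_centrality(G))         # a basic centrality measure
# nx.draw(G, with_labels=True)         # quick plot via matplotlib
```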

regularization : 2016.05.25

Building intuition on Ridge and Lasso regularization using scikit-learn.
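
A minimal sketch contrasting Ridge and Lasso in scikit-learn (the synthetic data with two informative features and the alpha values are arbitrary choices):

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.RandomState(0)
X = rng.rand(100, 10)
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)  # only 2 informative features

ridge = Ridge(alpha=1.0).fit(X, y)   # shrinks coefficients toward zero
lasso = Lasso(alpha=0.05).fit(X, y)  # can drive coefficients exactly to zero
print(ridge.coef_.round(2))
print(lasso.coef_.round(2))
```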

ga : 2016.04.25

Genetic Algorithm. Math-free explanation and code from scratch.

  • Starts from a simple optimization problem and extends it to the traveling salesman problem (TSP).
  • View [nbviewer]
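
A minimal genetic algorithm sketch on the classic OneMax problem (maximize the number of 1s in a bit string); the selection, crossover, and mutation choices here are one simple variant, not necessarily the exact setup used in the notebook:

```python
import random

def genetic_algorithm(n_bits=20, pop_size=30, n_generations=50, mutation_rate=0.05):
    """Toy GA: tournament selection, single-point crossover, bit-flip mutation."""
    fitness = sum  # fitness of a bit string = number of 1s
    population = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_generations):
        # tournament selection: best of 3 random individuals
        parents = [max(random.sample(population, 3), key=fitness) for _ in range(pop_size)]
        children = []
        for p1, p2 in zip(parents[::2], parents[1::2]):
            point = random.randint(1, n_bits - 1)  # single-point crossover
            for child in (p1[:point] + p2[point:], p2[:point] + p1[point:]):
                child = [1 - bit if random.random() < mutation_rate else bit for bit in child]
                children.append(child)
        population = children
    return max(population, key=fitness)

print(genetic_algorithm())
```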

unbalanced : 2015.11.25

Choosing the optimal cutoff value for logistic regression using cost-sensitive errors (i.e. when the cost of misclassification differs between the two classes) when your dataset consists of unbalanced binary classes, e.g. the majority of the data points have a positive outcome while few have a negative one, or vice versa. The notion can be extended to any other classification algorithm that can predict class probabilities; this documentation just uses logistic regression for illustration purposes.

  • Visualizes the two-by-two standard confusion matrix and ROC curve with costs using ggplot2.
  • View [Rmarkdown]
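
A minimal Python sketch of the idea (the notebook itself is in R): scan candidate thresholds and keep the one with the lowest total misclassification cost. The cost values and helper name are arbitrary:

```python
import numpy as np

def best_cutoff(y_true, y_prob, cost_fp=1.0, cost_fn=5.0):
    """Pick the probability threshold that minimizes total misclassification cost."""
    thresholds = np.linspace(0.01, 0.99, 99)
    costs = []
    for t in thresholds:
        y_pred = (y_prob >= t).astype(int)
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        costs.append(cost_fp * fp + cost_fn * fn)
    return thresholds[int(np.argmin(costs))]

# usage with any model that outputs class probabilities:
# cutoff = best_cutoff(y_valid, model.predict_proba(X_valid)[:, 1], cost_fp=1, cost_fn=10)
```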

clustering_old

A collection of scattered old clustering documents in R.

  • 2015.12.08 | Toy sample code of the LDA algorithm (gibbs sampling) and the topicmodels library. [Rmarkdown]
  • 2015.11.19 | k-shingle, Minhash and Locality Sensitive Hashing for solving the problem of finding textually similar documents. [Rmarkdown]
  • 2015.11.17 | Introducing tf-idf (term frequency-inverse document frequency), a text mining technique. Also uses it to perform text clustering via hierarchical clustering. [Rmarkdown]
  • 2015.11.06 | Some useful evaluations when working with hierarchical clustering and K-means clustering (K-means++ is used here). Including the Calinski-Harabasz index for determining the right K (number of clusters) and bootstrap evaluation of the clustering result's stability. [Rmarkdown]

linear_regression : 2015.10.30

Training Linear Regression with gradient descent in R.

  • Briefly covers the interpretation and visualization of linear regression's summary output.
  • View [Rmarkdown]
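
A minimal Python analogue of the notebook's R implementation: fitting linear regression with gradient descent (the learning rate and iteration count are arbitrary):

```python
import numpy as np

def linear_regression_gd(X, y, lr=0.5, n_iters=1000):
    """Fit linear regression by gradient descent on the squared-error loss."""
    X = np.column_stack([np.ones(len(X)), X])   # add an intercept column
    weights = np.zeros(X.shape[1])
    for _ in range(n_iters):
        residual = X @ weights - y
        gradient = X.T @ residual / len(y)
        weights -= lr * gradient
    return weights

rng = np.random.RandomState(0)
X = rng.rand(200, 2)
y = 1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(scale=0.1, size=200)
print(linear_regression_gd(X, y))   # should be close to [1, 2, -3]
```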

Python Programming

  • 2018.01.20 | Parallel programming with Python (threading, multiprocessing, concurrent.futures, joblib). [nbviewer]
  • 2017.08.23 | Understanding iterables, iterator and generators. [nbviewer]
  • 2017.07.12 | Cohort analysis. Visualizing user retention by cohort with seaborn's heatmap and illustrating pandas's unstack. [nbviewer]
  • 2017.03.16 | Logging module. [nbviewer]
  • 2016.12.26 | Data structure, algorithms from scratch. [folder]
  • 2016.12.22 | Cython and Numba quickstart for high performance Python. [nbviewer]
  • 2016.06.22 | Optimizing Pandas (e.g. reduce memory usage using category type). [nbviewer]
  • 2016.06.10 | Unittest. [Python script]
  • 2016.04.26 | Using built-in data structure and algorithm. [nbviewer]
  • 2016.04.26 | Tricks with strings and text. [nbviewer]
  • 2016.04.17 | Python's decorators (useful scripts for logging and timing functions). [nbviewer]
  • 2016.03.18 | Pandas's pivot table. [nbviewer]
  • 2016.03.02 | @classmethod, @staticmethod and @property. [nbviewer]
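
As a small taste of the parallel programming material, here is a minimal concurrent.futures sketch (the CPU-bound toy task is made up):

```python
import math
from concurrent.futures import ProcessPoolExecutor

def slow_task(n):
    """A CPU-bound stand-in task."""
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    inputs = [5_000_000] * 4
    # run the tasks across processes instead of sequentially
    with ProcessPoolExecutor() as executor:
        results = list(executor.map(slow_task, inputs))
    print(results)
```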
