
ml-examples

This is an extensive collection of examples demonstrating core concepts, functionality, and applications of machine learning, deep learning, and data science in general. Most of these were used as code snippets in my lectures. The examples are grouped into folders covering (in summary):

  • Bayesian Decision Theory (normal distributions, MVN, Bayes' theorem, Bayesian iterative learning (Video), probability of error, decision rules, regions and boundaries, GDA, LDA, regularized LDA, Naive Bayes, among others; see the decision-rule sketch below).

  • Linear Regression (OLS, robust linear regression, Laplace distribution, Huber loss, Ridge Regression, LASSO, Bayesian Linear Regression; see the regression sketch below). This folder includes two examples of training and deployment on Google Cloud Platform (GCP) AI Platform:

    • A regression example (using TensorFlow) found here.
    • A prediction example (using TensorFlow) found here.
  • Introduction to Gradient Descent (numerical gradient, optimizers, line search, Newton's method, Adagrad, cost functions, among others; see the gradient-descent sketch below). Video 1, Video 2.

  • Logistic Regression (Logistic Regression (custom, using the scipy.optimize toolbox, and the sklearn implementation), Softmax Regression, regularization, the effect of regularizing the bias; see the scipy.optimize sketch below). Video 1, Video 2.
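
To give a flavor of the decision-theory material, here is a minimal decision-rule sketch. It is not taken from the repo: it classifies a scalar x under two Gaussian class-conditional densities with made-up parameters, picking the class with the largest posterior.

```python
import numpy as np
from scipy.stats import norm

# Two 1-D Gaussian class-conditional densities with equal priors.
# All parameters are made up for illustration.
mu0, sigma0 = 0.0, 1.0   # class 0
mu1, sigma1 = 2.0, 1.0   # class 1
prior0 = prior1 = 0.5

def bayes_decision(x):
    """Assign x to the class with the largest posterior p(class | x)."""
    p0 = norm.pdf(x, mu0, sigma0) * prior0
    p1 = norm.pdf(x, mu1, sigma1) * prior1
    return int(p1 > p0)

# With equal priors and variances the decision boundary sits at the
# midpoint between the means (x = 1.0 here).
print(bayes_decision(0.5))  # 0
print(bayes_decision(1.5))  # 1
```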
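
The regression sketch: a comparison of OLS, Ridge, LASSO, and a Huber-loss fit on synthetic data with one gross outlier, using scikit-learn's stock implementations (the repo's notebooks implement several of these from scratch; the data and hyperparameters here are illustrative only).

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, HuberRegressor

# Synthetic 1-D data from y = 3x + 1 plus noise, with one gross outlier
# (made up for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 3.0 * X.ravel() + 1.0 + rng.normal(0, 1, size=50)
y[0] = 100.0  # the outlier that pulls ordinary least squares off target

for model in (LinearRegression(),      # OLS
              Ridge(alpha=1.0),        # L2 penalty on the coefficients
              Lasso(alpha=0.1),        # L1 penalty, encourages sparsity
              HuberRegressor()):       # Huber loss, robust to the outlier
    model.fit(X, y)
    print(type(model).__name__, model.coef_, model.intercept_)
```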
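
The gradient-descent sketch: plain descent driven by a central-difference numerical gradient on a toy quadratic cost (the cost function, starting point, and learning rate are made up for illustration).

```python
import numpy as np

def cost(w):
    # A simple quadratic bowl with its minimum at w = (1, -2).
    return (w[0] - 1.0) ** 2 + 2.0 * (w[1] + 2.0) ** 2

def numerical_gradient(f, w, eps=1e-6):
    # Central-difference approximation of the gradient of f at w.
    grad = np.zeros_like(w)
    for i in range(w.size):
        step = np.zeros_like(w)
        step[i] = eps
        grad[i] = (f(w + step) - f(w - step)) / (2 * eps)
    return grad

w = np.array([5.0, 5.0])
learning_rate = 0.1
for _ in range(200):
    w -= learning_rate * numerical_gradient(cost, w)
print(w)  # converges close to [1, -2]
```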
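
The scipy.optimize sketch: logistic regression fit by minimizing an L2-regularized negative log-likelihood, with the bias left unpenalized (one way to look at the bias-regularization effect the folder mentions). The toy data and the regularization strength are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(w, X, y, lam=0.1):
    """L2-regularized negative log-likelihood; the bias w[0] is not penalized."""
    p = sigmoid(X @ w)
    eps = 1e-12  # guards against log(0)
    nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return nll + lam * np.sum(w[1:] ** 2)

# Toy 1-D data with a bias column (made up for illustration).
rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.normal(size=100)]
y = (X[:, 1] + 0.3 * rng.normal(size=100) > 0).astype(float)

result = minimize(cost, x0=np.zeros(2), args=(X, y), method="BFGS")
print(result.x)  # fitted [bias, slope]
```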

....

  • ConvNets. Some examples using ConvNets, including:
    • Toy introductory examples of convolution and pooling.
    • Toy examples of object detection using sliding windows and convolutional sliding windows (Video 1, Video 2).
    • Object detection using a complete-from-scratch implementation of YOLO v1 in TensorFlow (Video 1).
    • A pretty cool occlusion visualization example. The goal is to visualize which regions of the image are "more relevant" to the network when it performs a classification task. The idea behind it is to estimate something called the feature sensitivity of the posterior probabilities (FSPP); in other words, how much the posterior probabilities (assuming softmax outputs) change when we "mess around" with some feature(s). This can be done in many ways, some more mathematically justified than others. For instance, in this paper FSPP is used for feature selection, and the authors prove that to rank a feature it is enough to train a classifier, permute that feature across observations, and evaluate the effect on the classifier output as we mess around with each feature (one at a time). In the occlusion experiment we simply "occlude" (set to zero!) some square region of the image and analyze the effect on the classifier output; if the output changes a lot you can assume that region is important, otherwise not so much. See the sketch after this list.
    • An implementation of saliency maps.
    • Object localization.
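
As a sketch of the occlusion experiment described above (an illustrative reimplementation, not the repo's code; `model` and `image` are placeholders for a trained softmax classifier and a preprocessed input):

```python
import numpy as np

def occlusion_map(model, image, class_index, patch=16, stride=8):
    """Slide a zeroed square over the image and record how much the softmax
    probability of `class_index` drops at each position."""
    h, w = image.shape[:2]
    baseline = model(image[np.newaxis])[0, class_index].numpy()
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, top in enumerate(range(0, h - patch + 1, stride)):
        for j, left in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch] = 0.0  # "occlude"
            prob = model(occluded[np.newaxis])[0, class_index].numpy()
            heatmap[i, j] = baseline - prob  # big drop => important region
    return heatmap

# Usage (both arguments are placeholders, assuming a trained TF classifier):
# heatmap = occlusion_map(model, image, class_index=predicted_class)
```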

Note: All of these examples use the tf.data module to improve the data ingestion pipeline. Also, many of the datasets were conditioned/prepared (with tfrecords generated as the end result) using Apache Beam preprocessing pipelines capable of running locally and on Dataflow (GCP). Some of the TensorFlow Datasets (TFDS) weren't working properly or did not give me enough freedom to manipulate the data; review this for more details on the latter.
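
A minimal sketch of the kind of tf.data ingestion pipeline the note refers to, reading tfrecords into shuffled, batched tensors. The feature schema and filename below are placeholders; the real schemas are defined by the repo's Apache Beam pipelines.

```python
import tensorflow as tf

# Placeholder schema; the actual feature names/types come from the
# Apache Beam preprocessing pipelines that generate the tfrecords.
FEATURES = {
    'image': tf.io.FixedLenFeature([], tf.string),
    'label': tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(serialized):
    # Deserialize one record and decode the image into float32 [0, 1].
    example = tf.io.parse_single_example(serialized, FEATURES)
    image = tf.io.decode_jpeg(example['image'], channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    return image, example['label']

dataset = (tf.data.TFRecordDataset(['train.tfrecord'])  # hypothetical file
           .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
           .shuffle(1024)
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))
```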

The remaining description will be added shortly...

