rtoengi/deep-neural-network-tuning-tutorials

A collection of simple Python scripts examining the numerous hyperparameters of deep neural networks. The tutorials are self-contained, each exploring one aspect of how to tune deep neural networks for better learning, generalization, and prediction performance.

The tutorials stem from the eBook: Jason Brownlee, Better Deep Learning, Machine Learning Mastery. Available from https://machinelearningmastery.com/better-deep-learning/, accessed January 25, 2019.

The contribution of this repository is the extension of each tutorial to further explore and discuss its topic.

The tutorials are divided into three parts:

Better Learning. Discover the techniques to improve and accelerate the process used to learn or optimize the weights of a neural network model.

Better Generalization. Discover the techniques to reduce overfitting of the training dataset and improve the generalization of models on new data.

Better Predictions. Discover the techniques to improve the performance of final models when used to make predictions on new data.

Better Learning

Configure Capacity with Nodes and Layers

Go to code samples and discussion of the extensions
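As a taste of the topic, here is a minimal sketch using TensorFlow's Keras API. All such sketches in this README are illustrative, with placeholder layer sizes and values; they are not the tutorials' code. Capacity is controlled by two hyperparameters: the number of hidden layers and the number of nodes per layer.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

n_layers, n_nodes = 3, 10  # the two capacity hyperparameters to tune
model = Sequential()
model.add(Dense(n_nodes, activation='relu', input_shape=(2,)))
for _ in range(n_layers - 1):
    model.add(Dense(n_nodes, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```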

Configure Gradient Precision with Batch Size

Go to code samples and discussion of the extensions
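A hedged sketch of the three classic settings, assuming a toy binary classification problem from scikit-learn:

```python
from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
model = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='sgd', loss='binary_crossentropy')
# batch_size=1 -> stochastic, batch_size=len(X) -> batch,
# anything in between -> minibatch gradient descent
model.fit(X, y, epochs=100, batch_size=32, verbose=0)
```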

Configure What to Optimize with Loss Functions

Go to code samples and discussion of the extensions
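Illustrative pairings of loss function, problem type, and output layer; the binary case is shown in code, the other pairings in the comments:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Match the loss to the problem type and output layer:
#   regression     -> Dense(1, activation='linear')  + 'mean_squared_error'
#   binary class.  -> Dense(1, activation='sigmoid') + 'binary_crossentropy'
#   k-class class. -> Dense(k, activation='softmax') + 'categorical_crossentropy'
model = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
```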

Configure Speed of Learning with Learning Rate

Go to code samples and discussion of the extensions
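A minimal sketch with an illustrative value of 0.01; the learning rate is often described as the single most important hyperparameter:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

model = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
# a log-scale grid (e.g. 1.0, 0.1, ..., 1e-5) is a common way to search
# the learning rate; momentum smooths the updates on top of it
model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9),
              loss='binary_crossentropy')
```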

Stabilize Learning with Data Scaling

Go to code samples and discussion of the extensions
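A sketch of the usual pattern with scikit-learn scalers; the key point is that the scaler is fit on the training data only:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler  # or MinMaxScaler for [0, 1]

X_train = np.random.rand(100, 2) * 100  # unscaled inputs can destabilize training
X_test = np.random.rand(20, 2) * 100
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # fit statistics on training data only
X_test = scaler.transform(X_test)        # reuse those statistics at test time
```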

Fix Vanishing Gradients with ReLU

Go to code samples and discussion of the extensions
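A sketch of the fix: ReLU does not saturate for positive inputs, so gradients keep flowing through deep stacks where sigmoid or tanh would shrink them; he_uniform is the weight initialization commonly paired with it:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_uniform',
                input_shape=(2,)))
model.add(Dense(10, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```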

Fix Exploding Gradients with Gradient Clipping

Go to code samples and discussion of the extensions
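In Keras, clipping is a property of the optimizer; an illustrative sketch:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

model = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='linear')])
# clipnorm rescales the whole gradient vector if its L2 norm exceeds 1.0;
# clipvalue=0.5 would instead clip each element to [-0.5, 0.5]
model.compile(optimizer=SGD(learning_rate=0.01, clipnorm=1.0),
              loss='mean_squared_error')
```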

Accelerate Learning with Batch Normalization

Go to code samples and discussion of the extensions
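A sketch of the usual placement, between a layer's linear output and its activation (placing it after the activation is the main alternative):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, BatchNormalization, Dense

model = Sequential()
model.add(Dense(10, input_shape=(2,)))
model.add(BatchNormalization())  # standardize the layer's inputs per minibatch
model.add(Activation('relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```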

Deeper Models with Greedy Layer-Wise Pretraining

Go to code samples and discussion of the extensions
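A hedged sketch of the supervised variant: train a shallow model, then repeatedly splice a new hidden layer in beneath the output layer and retrain (sizes and epoch counts are illustrative):

```python
from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
model = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=100, verbose=0)  # train the shallow base model first

for _ in range(3):                      # grow the model one layer at a time
    output_layer = model.layers[-1]
    model.pop()                         # temporarily remove the output layer
    model.add(Dense(10, activation='relu'))
    model.add(output_layer)             # reattach it above the new layer
    model.fit(X, y, epochs=100, verbose=0)
```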

Jump-Start Training with Transfer Learning

Go to code samples and discussion of the extensions
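A sketch of the pattern, assuming a source model trained on a related (here: synthetic) problem; its hidden layers are reused and optionally frozen while a fresh output layer is trained:

```python
from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# a source model trained on a related problem
Xs, ys = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
source = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                     Dense(1, activation='sigmoid')])
source.compile(optimizer='adam', loss='binary_crossentropy')
source.fit(Xs, ys, epochs=100, verbose=0)

# reuse its hidden layers on the target problem
model = Sequential()
for layer in source.layers[:-1]:
    layer.trainable = False  # freeze; set True to fine-tune instead
    model.add(layer)
model.add(Dense(1, activation='sigmoid'))  # fresh output layer for the target task
model.compile(optimizer='adam', loss='binary_crossentropy')
```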

Better Generalization

Penalize Large Weights with Weight Regularization

Go to code samples and discussion of the extensions
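A sketch: an L2 penalty (weight decay) adds the sum of squared weights, scaled by a small coefficient, to the loss:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l2

model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(2,),
                kernel_regularizer=l2(0.01)))  # adds 0.01 * sum(w**2) to the loss
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```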

Sparse Representations with Activity Regularization

Go to code samples and discussion of the extensions
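A sketch: the penalty here targets a layer's outputs rather than its weights, nudging the learned representation toward sparsity:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l1

model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(2,),
                activity_regularizer=l1(0.001)))  # L1 on activations, not weights
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```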

Force Small Weights with Weight Constraints

Go to code samples and discussion of the extensions
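A sketch: unlike a penalty, a constraint rescales the weights after each update whenever their norm exceeds the limit:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.constraints import max_norm

model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(2,),
                kernel_constraint=max_norm(2.0)))  # cap each unit's weight norm at 2.0
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```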

Decouple Layers with Dropout

Go to code samples and discussion of the extensions
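A sketch with an illustrative dropout rate of 0.5:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(10, activation='relu', input_shape=(2,)))
model.add(Dropout(0.5))  # randomly zero 50% of the activations during training
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```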

Promote Robustness with Noise

Go to code samples and discussion of the extensions
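A sketch: the GaussianNoise layer adds zero-mean noise during training only, here on the inputs (it can equally be placed between hidden layers):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, GaussianNoise

model = Sequential()
model.add(GaussianNoise(0.1, input_shape=(2,)))  # additive noise, training only
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```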

Halt Training at the Right Time with Early Stopping

Go to code samples and discussion of the extensions
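A sketch of the standard Keras recipe: watch the validation loss, allow some patience, and roll back to the best weights:

```python
from sklearn.datasets import make_blobs
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=1)
model = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')
# stop once val_loss has not improved for 10 epochs, keeping the best weights
es = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
model.fit(X, y, validation_split=0.3, epochs=1000, callbacks=[es], verbose=0)
```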

Better Predictions

Combine Models From Multiple Runs with Model Averaging Ensemble

Go to code samples and discussion of the extensions
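A sketch: train the same architecture in several independent runs, then average the predicted probabilities (the fit_model helper is a hypothetical stand-in for any training routine):

```python
import numpy as np
from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=1)

def fit_model(X, y):  # hypothetical helper: one independent training run
    m = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
    m.compile(optimizer='adam', loss='binary_crossentropy')
    m.fit(X, y, epochs=100, verbose=0)
    return m

members = [fit_model(X, y) for _ in range(5)]
yhats = np.array([m.predict(X, verbose=0) for m in members])
y_pred = (yhats.mean(axis=0) > 0.5).astype(int)  # average, then threshold
```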

Contribute Proportional to Trust with Weighted Average Ensemble

Go to code samples and discussion of the extensions
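The combination step in isolation, with made-up probabilities and weights (in practice the weights would be tuned on a hold-out set):

```python
import numpy as np

# predicted probabilities of three members for five samples (made-up values)
yhats = np.array([[0.9, 0.8, 0.3, 0.6, 0.2],
                  [0.7, 0.9, 0.4, 0.4, 0.1],
                  [0.8, 0.6, 0.2, 0.7, 0.3]])
weights = np.array([0.5, 0.3, 0.2])  # more trusted members get larger weights
avg = np.average(yhats, axis=0, weights=weights)
y_pred = (avg > 0.5).astype(int)
```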

Fit Models on Different Samples with Resampling Ensembles

Go to code samples and discussion of the extensions
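A sketch using bootstrap samples; each member sees a different resampled view of the training data (fit_model is again a hypothetical stand-in):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.utils import resample
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=1)

def fit_model(X, y):  # hypothetical helper, as in the model-averaging sketch
    m = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
    m.compile(optimizer='adam', loss='binary_crossentropy')
    m.fit(X, y, epochs=100, verbose=0)
    return m

members = []
for seed in range(5):  # one bootstrap sample per member
    X_i, y_i = resample(X, y, replace=True, n_samples=150, random_state=seed)
    members.append(fit_model(X_i, y_i))

yhats = np.array([m.predict(X, verbose=0) for m in members])
y_pred = (yhats.mean(axis=0) > 0.5).astype(int)
```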

Models from Contiguous Epochs with Horizontal Voting Ensembles

Go to code samples and discussion of the extensions
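A sketch: instead of separate runs, the members are snapshots from the last few epochs of a single run, saved to disk and then combined:

```python
import numpy as np
from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=1)
model = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

members = []
for epoch in range(60):
    model.fit(X, y, epochs=1, verbose=0)  # one epoch at a time
    if epoch >= 50:                       # keep models from the last 10 epochs
        model.save('model_%d.h5' % epoch)
        members.append(load_model('model_%d.h5' % epoch))

yhats = np.array([m.predict(X, verbose=0) for m in members])
y_pred = (yhats.mean(axis=0) > 0.5).astype(int)
```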

Cyclic Learning Rate and Snapshot Ensembles

Go to code samples and discussion of the extensions
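A hedged sketch of the mechanism: a cosine-annealed learning rate that restarts every cycle, with a model snapshot saved at each trough (the callback below is illustrative, not the tutorial's code):

```python
import math
from sklearn.datasets import make_blobs
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import Callback
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD

class SnapshotScheduler(Callback):
    """Cosine-anneal the learning rate each cycle; save a snapshot per cycle."""
    def __init__(self, epochs_per_cycle, lr_max):
        super().__init__()
        self.epochs_per_cycle, self.lr_max = epochs_per_cycle, lr_max
    def on_epoch_begin(self, epoch, logs=None):
        pos = (epoch % self.epochs_per_cycle) / self.epochs_per_cycle
        K.set_value(self.model.optimizer.learning_rate,
                    self.lr_max / 2 * (math.cos(math.pi * pos) + 1))
    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % self.epochs_per_cycle == 0:  # end of a cycle
            self.model.save('snapshot_%d.h5' % (epoch + 1))

X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=1)
model = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer=SGD(learning_rate=0.1), loss='binary_crossentropy')
model.fit(X, y, epochs=50, verbose=0,
          callbacks=[SnapshotScheduler(epochs_per_cycle=10, lr_max=0.1)])
```

The saved snapshots can then be loaded and combined exactly like a model averaging ensemble.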

Learn to Combine Predictions with Stacked Generalization Ensemble

Go to code samples and discussion of the extensions
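A sketch: the members' predicted probabilities become input features for a meta-learner, here a scikit-learn logistic regression fit on a hold-out split:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=300, centers=2, n_features=2, random_state=1)
X_fit, y_fit, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

def fit_model(X, y):  # hypothetical helper, as in the model-averaging sketch
    m = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
    m.compile(optimizer='adam', loss='binary_crossentropy')
    m.fit(X, y, epochs=100, verbose=0)
    return m

members = [fit_model(X_fit, y_fit) for _ in range(5)]

def stacked_X(members, X):  # members' probabilities become meta-features
    return np.hstack([m.predict(X, verbose=0) for m in members])

meta = LogisticRegression()
meta.fit(stacked_X(members, X_val), y_val)  # meta-learner on hold-out data
y_pred = meta.predict(stacked_X(members, X_val))
```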

Combine Model Parameters with Average Model Weights Ensemble

Go to code samples and discussion of the extensions
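A sketch: rather than averaging predictions, average the weight tensors themselves, taken from snapshots near the end of a single training run, to get one final model:

```python
import numpy as np
from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=1)
model = Sequential([Dense(10, activation='relu', input_shape=(2,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

snapshots = []
for epoch in range(60):
    model.fit(X, y, epochs=1, verbose=0)
    if epoch >= 50:                      # weights from the end of the run
        snapshots.append(model.get_weights())

# element-wise average of corresponding weight tensors across snapshots
avg = [np.mean(ws, axis=0) for ws in zip(*snapshots)]
model.set_weights(avg)                   # a single model with averaged weights
```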
