Personal repository for the Neural Networks course at the University of Tartu
- NumPy Tutorial
- KNN, Probability Theory, Machine Learning, Overfitting, Regularization, Curse of Dimensionality, Cross-Validation
- Softmax, Feed-forward Neural Networks, CIFAR-10 Data, Softmax Classifier, Vectorization of the network and learning, Backward pass, Stochastic Gradient Descent
- two_layer_net: back-propagation, L2 Regularization, Forward pass, Backward pass, Tune Hyperparameters
- Dropout: forward pass, backward pass, fully-connected nets, regularization experiment
- FullyConnectedNets: Optimization methods, Dropout basics, affine layer: forward and backward, ReLU layer: forward and backward, "Sandwich" layers, Softmax loss layer, Two-layer network, Solver, Multilayer network, SGD+Momentum, RMSProp and Adam
- BatchNormalization: Forward, Backward, Fully Connected Nets with BN, BN for deep networks, BN and initialization
- ConvolutionNetworks: Output dimensionality, Different Filters, Pooling, Naive forward pass, Image processing via Convolutions, Naive backward pass, Max pooling: Naive forward and backward, Fast layers, Convolutional "sandwich" layers, Three-layer ConvNet, Sanity check loss, Gradient Check, Overfit small data, Visualize Filters
- Keras: Image Classification
- Network: Classification using a pre-trained model, Saliency maps, Fooling the network
- RNN_Captioning_Keras: Image Captioning with RNNs, h5py, Microsoft COCO, RNN for image captioning, Overfit small data, Test-time sampling
- RNN_Embeddings: Text Classification with Keras, Embedding layers, Recurrent layers, Loss functions, Word embeddings
- Preprocessing: Deep Learning with Python by François Chollet, Kaggle dogs-vs-cats data
- Fine-tune pre-trained model: wget, unzip, keras
- Write custom layer or loss function (scikit-image, validation pairs)
- Use pre-trained model: ResNet50 (load model), load image, elephant example
- yad2k: yolo.weights, yolo_model, image recognition, yml, font, images
- Reinforcement Learning: OpenAI Gym, Atari games, Frozen Lake, CartPole, Pong, Play
- Tabular Q-learning: return, state-value, state-action value, Bellman optimality equation, Q-values
- Policy Gradient: log derivative trick, policy gradient formula, variance reduction, constant baseline, CartPole
- Contextual Bandit: Fashion-MNIST
- A2C (parallel): A2C (Advantage Actor-Critic), A3C (Asynchronous Advantage Actor-Critic), Atari Pong
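The vectorized softmax classifier loss from the early notebooks can be sketched in a few NumPy lines; `softmax_loss` is an illustrative name, not a function from this repo:

```python
import numpy as np

def softmax_loss(scores, y):
    """Vectorized softmax cross-entropy loss and gradient w.r.t. scores.

    scores: (N, C) raw class scores; y: (N,) integer labels.
    """
    shifted = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=1, keepdims=True)
    N = scores.shape[0]
    loss = -np.log(probs[np.arange(N), y]).mean()
    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1.0   # grad of loss w.r.t. scores
    dscores /= N
    return loss, dscores
```

A useful sanity check (used in the notebooks' "Sanity check loss" step): with random weights the loss should be close to log(C).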
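The SGD+Momentum update from the FullyConnectedNets notebook reduces to a two-line rule; this sketch assumes the common "velocity" convention and is not code from the repo:

```python
import numpy as np

def sgd_momentum(w, dw, v, lr=1e-2, momentum=0.9):
    """One SGD+Momentum step: v is a decaying running sum of gradients."""
    v = momentum * v - lr * dw  # accumulate velocity
    w = w + v                   # move along the velocity, not the raw gradient
    return w, v
```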
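The tabular Q-learning bullet hinges on one update derived from the Bellman optimality equation; a minimal sketch (function and parameter names are illustrative):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, done=False):
    """Tabular Q-learning: nudge Q[s, a] toward the Bellman optimality target
    r + gamma * max_a' Q[s', a'] with step size alpha."""
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```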
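The log derivative trick and constant baseline from the Policy Gradient notebook can be sketched for a softmax policy; this assumes gradients are taken w.r.t. the policy logits, and the names are illustrative:

```python
import numpy as np

def reinforce_logit_grad(probs, actions, returns):
    """REINFORCE gradient estimate w.r.t. softmax logits, using the
    log-derivative trick with a constant (mean-return) baseline.

    probs: (T, A) action probabilities; actions: (T,); returns: (T,).
    For a softmax policy, grad log pi(a|s) w.r.t. logits = onehot(a) - pi(.|s).
    """
    T, A = probs.shape
    onehot = np.zeros_like(probs)
    onehot[np.arange(T), actions] = 1.0
    advantages = returns - returns.mean()  # constant baseline reduces variance
    return ((onehot - probs) * advantages[:, None]).mean(axis=0)
```

Subtracting the baseline leaves the estimator unbiased (the score function has zero mean) while shrinking its variance, which is the point of the "reduction of variance" item above.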