This repository contains:
- a TensorFlow implementation of the difference target propagation (DTP) algorithm for training deep networks, from Lee, Zhang, Fischer, and Bengio (2014)
- Python/NumPy implementations of new variants of target propagation
Target propagation is an alternative to backpropagation that propagates targets (instead of errors) via inverses (instead of the chain rule).
The layer inverses can be defined explicitly or learned.
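To make this concrete, here is a minimal NumPy sketch of one difference target propagation step for a two-layer network, using a pseudoinverse as an explicit layer inverse. This is an illustration under assumed names and shapes, not the repository's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

def f1(x):      # layer 1: linear + tanh
    return np.tanh(W1 @ x)

def f2(h):      # layer 2: linear
    return W2 @ h

def f2_inv(y):  # explicit least-squares inverse of layer 2
    return np.linalg.pinv(W2) @ y

x = rng.normal(size=3)
h1 = f1(x)
y = f2(h1)
y_target = y - 0.1 * (y - np.array([1.0, 0.0]))  # nudge the output toward the label

# Vanilla target prop would set h1_target = f2_inv(y_target).
# Difference target prop adds a correction that cancels the inverse's error:
h1_target = h1 + f2_inv(y_target) - f2_inv(y)

# Layer 1 then descends its local cost 0.5 * ||f1(x) - h1_target||^2:
e = (f1(x) - h1_target) * (1 - f1(x) ** 2)  # tanh derivative
W1 -= 0.05 * np.outer(e, x)
```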
Files:
- `targprop.tprop_train` contains the main implementation of target propagation. The main function, `train_net()`, trains a network on `dataset=MNIST` or `dataset=cifar`, as a classifier (`mode=classification`) or autoencoder (`mode=autoencoder`), using one of a few different target propagation methods (`err_alg=0`, `1`, `2`, or `3`). This function relies primarily on NumPy to do forward/backward propagation. The parameters of the model are updated based on layer-local cost functions, which can be minimized using gradient descent (`update_implementation=numpy`) or using TensorFlow's Adam optimizer (`update_implementation=tf`). See the example call after this list.
- `targprop.tproptflow_train` is similar to `tprop_train.py`, but the entire graph is built in TensorFlow. The disadvantage of doing everything with a TensorFlow graph is that it is difficult to implement new target propagation methods.
- `targprop.operations` contains an `Op` class for implementing standard operations. Whereas a TensorFlow operation requires only the function and its derivative, the `Op` class requires the function, its derivative, its least-squares inverse `f_inv`, and its regularized least-squares inverse `f_rinv`, which are used in some of the target propagation methods that we test. See the sketch after this list.
- `targprop.datasets` contains a self-explanatory `DataSet` class.
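For orientation, a hypothetical invocation of `train_net()` built from the parameters named above might look like the following; the exact argument spellings and defaults are assumptions and may differ from the real signature:

```python
from targprop.tprop_train import train_net

# Hypothetical call: parameter names come from the description above, but
# the value spellings ('MNIST' vs 'mnist', etc.) are assumptions.
train_net(dataset='MNIST',                 # or 'cifar'
          mode='classification',           # or 'autoencoder'
          err_alg=0,                       # target propagation variant (0-3)
          update_implementation='numpy')   # or 'tf' for TensorFlow's Adam
```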
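And a rough sketch of the interface the `Op` class is described as exposing, shown for a linear operation; this mirrors the description above rather than the actual class in `targprop.operations`:

```python
import numpy as np
from collections import namedtuple

# Hypothetical stand-in for the Op interface: a function bundled with its
# derivative, least-squares inverse, and regularized least-squares inverse.
LinOp = namedtuple('LinOp', ['f', 'df', 'f_inv', 'f_rinv'])

def make_linear_op(W, alpha=0.1):
    n = W.shape[1]
    return LinOp(
        f=lambda x: W @ x,                       # forward map
        df=lambda x: W,                          # Jacobian (constant for a linear map)
        f_inv=lambda y: np.linalg.pinv(W) @ y,   # argmin_x ||W x - y||^2
        # Regularized (ridge) inverse: argmin_x ||W x - y||^2 + alpha ||x||^2
        f_rinv=lambda y: np.linalg.solve(W.T @ W + alpha * np.eye(n), W.T @ y),
    )

op = make_linear_op(np.random.default_rng(0).normal(size=(2, 4)))
x_rec = op.f_rinv(op.f(np.ones(4)))  # approximately recovers the input
```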