Cognitive psychology models built with numpy & autograd
- Linear Regression
- Logistic Regression
- Multilayer Classifier
- Autoencoder
- DIVergent Autoencoder (Kurtz, 2007) & Modifications
- gendiscrim version
- global average pooling version
- shallow versions
- detangler
- Multitasking Network
- ALCOVE (Kruschke, 1992)
- ^ differs a little from the original model because the similarity function is non-differentiable at zero (i think)
- WARP (Kurtz & Silliman, 2018)
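The ALCOVE caveat above can be made concrete. With a city-block (r = 1) distance, the exemplar-node similarity is an exponential of a sum of absolute differences, and its derivative involves sign(x − h), which is undefined wherever an input coordinate exactly equals an exemplar coordinate. numpy's `sign` returns 0 at that point, so an autodiff system following that convention silently uses a zero (sub)gradient at the kink. A minimal numpy sketch (analytic gradient; the function names here are illustrative, not the repo's actual API):

```python
import numpy as np

def alcove_similarity(x, h, c=1.0):
    # exponential similarity to one exemplar node, city-block (r=1) distance
    return np.exp(-c * np.sum(np.abs(x - h)))

def similarity_grad(x, h, c=1.0):
    # analytic gradient wrt x; np.sign(0.0) == 0.0, so the kink at x == h
    # contributes a zero gradient instead of being undefined
    return -c * np.sign(x - h) * alcove_similarity(x, h, c)

x = np.array([0.5, 0.5])
h = np.array([0.5, 0.0])   # first coordinate matches exactly -> kink
g = similarity_grad(x, h)  # g[0] is 0.0 despite the non-differentiable point
```

This is why the implementation can drift slightly from the published model: exemplars that exactly match an input stop moving along that dimension.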
git clone https://github.com/mwetzel7r/autoCog

dependencies:
- numpy
- autograd
- probably scipy eventually
- optional:
    - matplotlib (plotting)
- models typically utilize the same basic functions:
    - forward(...): activates the model
    - loss(...): cost function of model activation
    - loss_grad(...): returns gradients to update model weights
- for most models, you update the parameters with the function:
    - update_params(...)
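A minimal sketch of this forward / loss / loss_grad / update_params interface for the linear regression model, with the MSE gradient written out by hand (in the repo, autograd derives gradients automatically); the function names follow the list above, everything else is illustrative:

```python
import numpy as np

def forward(params, X):
    # linear model: predictions are X @ W + b
    W, b = params
    return X @ W + b

def loss(params, X, y):
    # mean squared error between predictions and targets
    return np.mean((forward(params, X) - y) ** 2)

def loss_grad(params, X, y):
    # analytic MSE gradients (autograd would produce these from loss)
    W, b = params
    err = forward(params, X) - y              # shape (n, 1)
    dW = 2 * X.T @ err / len(X)
    db = 2 * np.mean(err, axis=0)
    return dW, db

def update_params(params, grads, lr=0.1):
    # plain gradient-descent step on every parameter
    return tuple(p - lr * g for p, g in zip(params, grads))

# tiny synthetic training run
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_W = np.array([[1.0], [-2.0], [0.5]])
y = X @ true_W + 0.1 * rng.normal(size=(100, 1))

params = (np.zeros((3, 1)), np.zeros(1))
for _ in range(200):
    params = update_params(params, loss_grad(params, X, y))
```

After a couple hundred steps the MSE drops close to the noise floor; swapping in autograd's `grad(loss)` for the hand-written `loss_grad` gives the same updates without the manual calculus.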
- keep the modules lightweight and consistent
    - forward, loss, & loss_grad functions in every model (where applicable)
- only use numpy & autograd, and sometimes scipy
- want to include other cognitive models besides neural nets
- recurrent net
- COVIS
- bayesian things
- neural Turing machines & differentiable neural computers