
cognitive psychology models automatically differentiated with autograd (https://github.com/HIPS/autograd)

autoCog

Cognitive psychology models built with numpy & autograd

Currently Implemented Models:

  • Linear Regression
  • Logistic Regression
  • Multilayer Classifier
  • Autoencoder
  • DIVergent Autoencoder (Kurtz, 2007) & Modifications
    • gendiscrim version
    • global average pooling version
    • shallow versions
    • detangler
  • Multitasking Network
  • ALCOVE (Kruschke, 1992)
    • ^ deviates slightly from the original model, likely because the similarity function is non-differentiable at zero
  • WARP (Kurtz & Silliman, 2018)
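The non-differentiability noted for ALCOVE comes from the absolute value inside its distance computation, which has a kink at zero. A minimal numpy sketch of the exemplar-node activation illustrates this (the function name and the r = q = 1 city-block settings are illustrative, not taken from this repo):

```python
import numpy as np

def alcove_similarity(x, hidden, c=1.0, r=1, q=1):
    # ALCOVE-style activation of hidden (exemplar) nodes:
    # a_j = exp(-c * (sum_i |x_i - h_ji|^r)^(q/r))
    dist = np.sum(np.abs(x - hidden) ** r, axis=1) ** (q / r)
    return np.exp(-c * dist)

x = np.array([0.5, 0.5])
hidden = np.array([[0.5, 0.5],   # exemplar identical to the input
                   [1.0, 0.0]])  # a different exemplar
act = alcove_similarity(x, hidden)
# at zero distance the activation is exactly 1, but |x - h| has a kink
# there, so the gradient autograd returns at that point is ill-defined
```

When an input lands exactly on an exemplar, the distance sits at the kink of `|x - h|`, so autograd picks an arbitrary subgradient there, which is presumably the source of the small deviation from the published model.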

Install

git clone https://github.com/mwetzel7r/autoCog

requirements:

  • numpy
  • autograd
  • probably scipy eventually
  • optional:
    • matplotlib (plotting)

Usage

  • models typically share the same basic functions:
    • forward(...): activate the model
    • loss(...): cost function of the model's activation
    • loss_grad(...): returns gradients used to update the model weights
  • for most models, you update the parameters with the function update_params(...)
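Put together, the shared interface might look like the following for a linear regression model. This is a hedged sketch, not code from the repo: the gradient is written out by hand so it runs with numpy alone, whereas the repo derives loss_grad with autograd, and the exact signatures may differ:

```python
import numpy as np

def forward(params, X):
    # activate the model: plain linear map
    return X @ params['w'] + params['b']

def loss(params, X, y):
    # cost of the model's activation (mean squared error)
    return np.mean((forward(params, X) - y) ** 2)

def loss_grad(params, X, y):
    # gradients for the weight update; with autograd this would
    # instead be obtained automatically via autograd.grad(loss)
    err = forward(params, X) - y
    return {'w': 2 * X.T @ err / len(y), 'b': 2 * np.mean(err)}

def update_params(params, grads, lr=0.1):
    # one step of gradient descent on every parameter
    return {k: params[k] - lr * grads[k] for k in params}

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([1.5, -2.0]) + 0.5

params = {'w': np.zeros(2), 'b': 0.0}
before = loss(params, X, y)
for _ in range(200):
    params = update_params(params, loss_grad(params, X, y))
after = loss(params, X, y)  # loss shrinks toward zero
```

The appeal of the autograd version is that only forward and loss need to be written per model; loss_grad then falls out mechanically, which is what keeps the modules small.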

Roadmap

  • keep the modules lightweight and consistent
    • forward, loss, and loss_grad functions in every model (where applicable)
    • only use numpy & autograd, and sometimes scipy
  • include other cognitive models besides neural nets
    • recurrent nets
    • COVIS
    • Bayesian things
    • neural Turing machines & differentiable neural computers
