
Autograd

Autograd can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives. It uses reverse-mode differentiation (a.k.a. backpropagation), which means it can efficiently take gradients of scalar-valued functions with respect to array-valued arguments. The main intended application is gradient-based optimization.

Example use:

import autograd.numpy as np
import matplotlib.pyplot as plt
from autograd import grad

def fun(x):
    return np.sin(x)

d_fun = grad(fun)    # First derivative
dd_fun = grad(d_fun) # Second derivative

x = np.linspace(-10, 10, 100)
plt.plot(x, [fun(xi) for xi in x],
         x, [d_fun(xi) for xi in x],
         x, [dd_fun(xi) for xi in x])  # evaluate pointwise: grad differentiates scalar-output functions
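grad also handles array-valued arguments directly, returning a gradient with the same shape as the input. Here is a minimal sketch (not from the original README, using made-up data) of taking the gradient of a scalar logistic-regression loss with respect to a weight vector:

import autograd.numpy as np
from autograd import grad

# Made-up training data, purely for illustration.
inputs = np.array([[0.5, 1.2], [1.1, -0.3], [-0.4, 0.8]])
targets = np.array([1.0, 0.0, 1.0])

def training_loss(weights):
    # Scalar-valued logistic loss of a linear model.
    preds = 1.0 / (1.0 + np.exp(-np.dot(inputs, weights)))
    label_probs = preds * targets + (1 - preds) * (1 - targets)
    return -np.sum(np.log(label_probs))

training_gradient = grad(training_loss)  # gradient w.r.t. the weight vector
print(training_gradient(np.zeros(2)))    # same shape as the weights: (2,)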

The function can even have control flow, which raises the prospect of differentiating through an iterative routine like an optimization. Here's a simple example.

# Taylor approximation to sin function
def fun(x):
    currterm = x
    ans = currterm
    for i in range(1000):
        currterm = - currterm * x ** 2 / ((2 * i + 3) * (2 * i + 2))
        ans = ans + currterm
        if np.abs(currterm) < 0.2: break # (Very generous tolerance!)

    return ans

d_fun = grad(fun)
dd_fun = grad(d_fun)

x = np.linspace(-10, 10, 100)
plt.plot(x, [fun(xi) for xi in x],
         x, [d_fun(xi) for xi in x],
         x, [dd_fun(xi) for xi in x])
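Because control flow is handled, one can also differentiate through an entire iterative procedure. The sketch below (an illustration, not taken from the original README) runs a fixed number of gradient-descent steps on a simple inner objective and then differentiates the final loss with respect to the step size:

from autograd import grad

def inner_objective(w):
    return (w - 3.0) ** 2

inner_grad = grad(inner_objective)

def final_loss(step_size):
    w = 0.0
    for _ in range(20):                  # a fixed number of gradient steps
        w = w - step_size * inner_grad(w)
    return inner_objective(w)            # scalar loss after optimization

d_loss_d_step = grad(final_loss)         # differentiate through the whole loop
print(d_loss_d_step(0.1))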

We can take the derivative of the derivative automatically as well, as many times as we like:

# Define the tanh function
def tanh(x):
    y = np.exp(-2.0 * x)
    return (1.0 - y) / (1.0 + y)

d_fun = grad(tanh)           # First derivative
dd_fun = grad(d_fun)         # Second derivative
ddd_fun = grad(dd_fun)       # Third derivative
dddd_fun = grad(ddd_fun)     # Fourth derivative
ddddd_fun = grad(dddd_fun)   # Fifth derivative
dddddd_fun = grad(ddddd_fun) # Sixth derivative

x = np.linspace(-7, 7, 200)
plt.plot(x, [tanh(xi) for xi in x],
         x, [d_fun(xi) for xi in x],
         x, [dd_fun(xi) for xi in x],
         x, [ddd_fun(xi) for xi in x],
         x, [dddd_fun(xi) for xi in x],
         x, [ddddd_fun(xi) for xi in x],
         x, [dddddd_fun(xi) for xi in x])

How to install:

Simply run

git clone --depth 1 --branch master https://github.com/HIPS/autograd.git
cd autograd/
python setup.py install
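A quick way to check that the install worked (not part of the original instructions) is to compare an autograd-computed derivative against the known analytic one:

import autograd.numpy as np
from autograd import grad

def f(x):
    return np.sin(x)

# d/dx sin(x) = cos(x); the two printed values should agree.
print(grad(f)(1.0), np.cos(1.0))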

Authors:

Dougal Maclaurin and David Duvenaud

We thank Matthew Johnson, Jasper Snoek, and the rest of the HIPS group (led by Ryan P. Adams) for helpful contributions and advice. We thank Barak Pearlmutter for foundational work on autodiff and for guidance on our implementation. We thank Analog Devices International and Samsung Advanced Institute of Technology for their generous support.
