jackkamm/adarray

This package is a fork of the original ad package for automatic differentiation.
It is a bare-bones reimplementation intended to be faster when dealing with
large arrays of numbers.

In the original ad package, ad() objects represent scalar numbers and their
derivatives. To handle multidimensional objects, arrays of ad() objects are
constructed and manipulated. For large arrays this can be slow, because it
requires looping over the entries in Python.

numpy implements many array operations, such as element-wise and matrix
multiplication, in C and/or Fortran, and so can perform them much faster when
the arrays contain primitive numbers. We take advantage of this by modifying
the ad() object so it can contain a numpy ndarray and its derivatives.

Because of operator overloading, and because the chain rule still holds for
the vectorized operations, most of the core ad() methods still work when the
ad() holds an array instead of a scalar. This can yield a substantial
performance improvement, since the work is done by numpy's calls to C/Fortran
instead of loops in Python.
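
For example (a minimal sketch; adarray.array() is from this package, while
the .d() derivative accessor is an assumption based on the original ad API):

    import numpy as np
    import adarray

    x = adarray.array(np.array([1.0, 2.0, 3.0]))  # one ADF holding an ndarray
    y = x * x + x * 2.0    # vectorized arithmetic, ADF objects kept on the left
    dydx = y.d(x)          # element-wise derivative dy/dx = 2x + 2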

However, there are two important downsides:
(1) Many functions, especially in admath, no longer work with ad(array) objects, and
    need to be reimplemented.
(2) When doing arithmetic between an ADF and a non-ADF object, the ADF object MUST be
    on the left. Otherwise an exception will be raised.

    The reason is that if the non-ADF object is a numpy.ndarray and it is on
    the left, numpy will perform the arithmetic operation element-wise and
    ultimately return an ndarray of ADF objects, whereas we want a single ADF
    object containing an ndarray.

    There seems to be no way to override this behavior of numpy.ndarray, so we
    disallow all arithmetic operations with the ADF object on the right, to
    prevent the element-wise operation from succeeding.

    Thus you have two choices (see the sketch after this list):
    (a) Make sure the ADF object is always on the left
    (b) Always cast the left object to an ADF type, using adarray.array()
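
A minimal sketch of this rule (adarray.array() is from this package; the
behavior in the comments follows the description above):

    import numpy as np
    import adarray

    a = adarray.array(np.ones(3))
    b = np.full(3, 2.0)

    c = a * b                   # OK: ADF object on the left
    # c = b * a                 # raises: numpy would try element-wise ops
    c = adarray.array(b) * a    # OK: cast the left operand to ADF first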

As another speedup, we have modified the ad() object so that, by default, it
only keeps track of first-order derivatives. To make it keep track of
second-order derivatives, one can call

set_order(2)

and then all subsequent ADF objects will keep track of second-order derivatives.
WARNING: mixing ADF objects with different derivative orders will break.
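
For example (a sketch; we assume set_order() is exported at the package level):

    import numpy as np
    import adarray

    adarray.set_order(2)    # subsequent ADF objects track second derivatives
    x = adarray.array(np.array([1.0, 2.0]))
    y = x * x * x           # y now carries first- and second-order information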

Many thanks to Abraham Lee for writing the original package, which we found to be very 
clean, transparent, and easy to edit for our purposes.

--Jack Kamm (2015)

ORIGINAL README BELOW:

``ad`` Package Documentation
============================

Overview
--------

The ``ad`` package allows you to **easily** and **transparently** perform 
**first and second-order automatic differentiation**. Advanced math 
involving trigonometric, logarithmic, hyperbolic, etc. functions can also 
be evaluated directly using the ``admath`` sub-module. 

**All base numeric types are supported** (``int``, ``float``, ``complex``, 
etc.). This package is designed so that the underlying numeric types will 
interact with each other *as they normally do* when performing any 
calculations. Thus, this package acts more like a "wrapper" that simply helps 
keep track of derivatives while **maintaining the original functionality** of 
the numeric calculations.

From the Wikipedia entry on `Automatic differentiation`_ (AD):

    "AD exploits the fact that every computer program, no matter how 
    complicated, executes a sequence of elementary arithmetic operations 
    (addition, subtraction, multiplication, division, etc.) and elementary 
    functions (exp, log, sin, cos, etc.). By applying the chain rule 
    repeatedly to these operations, derivatives of arbitrary order can be 
    computed automatically, and accurate to working precision."

See the `package documentation`_ for details and examples.

.. image:: https://travis-ci.org/tisimst/ad.png?branch=master

Basic examples
--------------

Let's start with the main import that all numbers use to track derivatives::

    >>> from ad import adnumber

Creating AD objects (either a scalar or an N-dimensional array is acceptable)::

    >>> x = adnumber(2.0)
    >>> x
    ad(2.0)

    >>> y = adnumber([1, 2, 3])
    >>> y
    [ad(1), ad(2), ad(3)]

    >>> z = adnumber(3, tag='z')  # tags can help track variables
    >>> z
    ad(3, z)

Now for some math::

    >>> square = x**2
    >>> square
    ad(4.0)

    >>> sum_value = sum(y)
    >>> sum_value
    ad(6)

    >>> w = x*z**2
    >>> w
    ad(18.0)

Using more advanced math functions like those in the standard `math`_ 
and `cmath`_ modules::

    >>> from ad.admath import *  # sin, cos, log, exp, sqrt, etc.
    >>> sin(1 + x**2)
    ad(-0.9589242746631385)

Calculating derivatives (evaluated at the given input values)::

    >>> square.d(x)  # get the first derivative wrt x
    4.0

    >>> square.d2(x)  # get the second derivative wrt x
    2.0

    >>> z.d(x)  # returns zero if the derivative doesn't exist
    0.0

    >>> w.d2c(x, z)  # second cross-derivatives, order doesn't matter
    6.0

    >>> w.d2c(z, z)  # equivalent to "w.d2(z)"
    4.0
    
    >>> w.d()  # with no input, a dict of all relevant derivatives is returned
    {ad(2.0): 9.0, ad(3, z): 12.0}

Some convenience functions (useful in optimization)::

    >>> w.gradient([x, z])  # show the gradient in the order given
    [9.0, 12.0]

    >>> w.hessian([x, z])
    [[0.0, 6.0], [6.0, 4.0]]
    
    >>> sum_value.gradient(y)  # works well with input arrays
    [1.0, 1.0, 1.0]
    
    # multiple dependents, multiple independents, first derivatives
    >>> from ad import jacobian
    >>> jacobian([w, square], [x, z])
    [[9.0, 12.0], [4.0, 0.0]]

Working with `NumPy`_ arrays (many functions should work out-of-the-box)::

    >>> import numpy as np
    >>> arr = np.array([1, 2, 3])
    >>> a = adnumber(arr)

    >>> a.sum()
    ad(6)

    >>> a.max()
    ad(3)

    >>> a.mean()
    ad(2.0)

    >>> a.var()  # array variance
    ad(0.6666666666666666)

    >>> print(sqrt(a))  # vectorized operations supported with ad operators
    [ad(1.0) ad(1.4142135623730951) ad(1.7320508075688772)]

Interfacing with `scipy.optimize`_
----------------------------------

To make it easier to work with the `scipy.optimize`_ module, there's a 
**convenient way to wrap functions** that will generate appropriate gradient
and hessian functions::

    >>> from ad import gh  # the gradient and hessian function generator
    
    >>> def objective(x):
    ...     return (x[0] - 10.0)**2 + (x[1] + 5.0)**2
    
    >>> grad, hess = gh(objective)  # now gradient and hessian are automatic!
    
    >>> from scipy.optimize import minimize
    >>> x0 = np.array([24, 17])
    >>> bnds = ((0, None), (0, None))
    >>> method = 'L-BFGS-B'
    >>> res = minimize(objective, x0, method=method, jac=grad, bounds=bnds,
    ...                options={'ftol': 1e-8, 'disp': False})
    >>> res.x  # optimal parameter values
    array([ 10.,   0.])
    >>> res.fun  # optimal objective
    25.0
    >>> res.jac  # gradient at optimum
    array([  7.10542736e-15,   1.00000000e+01])
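
The generated ``hess`` function can likewise be passed to Hessian-aware
solvers; a brief sketch (the choice of ``Newton-CG`` here is illustrative)::

    >>> res = minimize(objective, x0, method='Newton-CG', jac=grad, hess=hess)
    >>> res.x  # unconstrained optimum of this quadratic: array([ 10.,  -5.])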
    
Main Features
-------------

- **Transparent calculations with derivatives: little or no 
  modification of existing code** is needed, including when using
  the `NumPy`_ module.

- **Almost all mathematical operations** are supported, including
  functions from the standard math_ module (sin, cos, exp, erf, 
  etc.) and cmath_ module (phase, polar, etc.) with additional convenience 
  trigonometric, hyperbolic, and logarithmic functions (csc, acoth, ln, etc.).
  Comparison operators follow the **same rules as the underlying numeric 
  types**.

- **Real and complex** arithmetic handled seamlessly. Treat objects as you
  normally would using the `math`_ and `cmath`_ functions, but with their new 
  ``admath`` counterparts.
  
- **Automatic gradient and hessian function generator** for optimization 
  studies using `scipy.optimize`_ routines with ``gh(your_func_here)``.

- **Compatible Linear Algebra Routines** in the ``ad.linalg`` submodule, 
  similar to those found in NumPy's ``linalg`` submodule, but not 
  dependent on LAPACK (see the sketch after this list). There are currently:
  
  a. Decompositions
     
     1. ``chol``: Cholesky Decomposition
     2. ``lu``: LU Decomposition
     3. ``qr``: QR Decomposition
  
  b. Solving equations and inverting matrices
     
     1. ``solve``: General solver for linear systems of equations
     2. ``lstsq``: Least-squares solver for linear systems of equations
     3. ``inv``: Solve for the (multiplicative) inverse of a matrix
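
For instance, a minimal sketch of ``solve`` (we assume its signature mirrors
``numpy.linalg.solve``, operating on arrays that may contain AD objects)::

    >>> import numpy as np
    >>> from ad import adnumber
    >>> from ad.linalg import solve

    >>> a = adnumber(2.0, tag='a')
    >>> A = np.array([[a, 1.0], [1.0, 3.0]])  # one entry is an AD variable
    >>> b = np.array([1.0, 2.0])
    >>> x = solve(A, b)  # each solution component carries derivatives wrt a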

Installation
------------

You have several easy, convenient options to install the ``ad`` package 
(administrative privileges may be required):

1. Download the package files below, unzip to any directory, and run 
   ``python setup.py install`` from the command-line.
   
2. Simply copy the unzipped ``ad-XYZ`` directory to any location where 
   Python can find it and rename it ``ad``.
   
3. If ``setuptools`` is installed, run ``easy_install --upgrade ad`` 
   from the command-line.
   
4. If ``pip`` is installed, run ``pip install --upgrade ad`` from the 
   command-line.

5. Download the *bleeding-edge* version on GitHub_.

Python 3
--------

Download the file below, unzip it to any directory, and run::

    $ python setup.py install

or::

    $ python3 setup.py install
    
If bugs continue to pop up, please email the author.
    
Contact
-------

Please send **feature requests, bug reports, or feedback** to 
`Abraham Lee`_.

Acknowledgements
----------------

The author expresses his thanks to:

- `Eric O. LEBIGOT (EOL)`_, author of the `uncertainties`_ package, for providing 
  code insight and inspiration
- Stephen Marks, professor at Pomona College, for useful feedback concerning 
  the interface with optimization routines in ``scipy.optimize``.


.. _NumPy: http://numpy.scipy.org/
.. _math: http://docs.python.org/library/math.html
.. _cmath: http://docs.python.org/library/cmath.html
.. _Automatic differentiation: http://en.wikipedia.org/wiki/Automatic_differentiation
.. _Eric O. LEBIGOT (EOL): http://www.linkedin.com/pub/eric-lebigot/22/293/277
.. _uncertainties: http://pypi.python.org/pypi/uncertainties
.. _scipy.optimize: http://docs.scipy.org/doc/scipy/reference/optimize.html
.. _Abraham Lee: mailto:tisimst@gmail.com
.. _package documentation: http://pythonhosted.org/ad
.. _GitHub: https://github.com/tisimst/ad
