Example #1
def improve_solution(A, x, b, maxsteps=1):
    """
    Improve a solution to a linear equation system iteratively.

    This re-uses the LU decomposition and is thus cheap.
    Usually 3 to 4 iterations give the maximal improvement.
    """
    assert A.rows == A.cols, 'need n*n matrix' # TODO: really?
    for _ in xrange(maxsteps):
        r = residual(A, x, b)
        if norm_p(r, 2) < 10*eps:
            break
        # this uses cached LU decomposition and is thus cheap
        dx = lu_solve(A, -r)
        x += dx
    return x
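
A minimal usage sketch (not from the library source): it assumes the function above is importable together with mpmath's ``matrix`` and ``lu_solve`` (the doctests below use the bundled ``sympy.mpmath``), and the 2x2 system is made up for illustration.

from sympy.mpmath import mp, matrix, lu_solve

mp.dps = 30
A = matrix([[1, 2],
            [3, 4]])
b = matrix([5, 6])
x = lu_solve(A, b)              # initial solution; the LU factors are cached on A
x = improve_solution(A, x, b)   # refinement reuses the cached LU decomposition
print(x)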
Example #2
def qr_solve(A, b, norm=lambda x: norm_p(x, 2), **kwargs):
    """
    Ax = b => x, ||Ax - b||

    Solve a determined or overdetermined system of linear equations and
    calculate the norm of the residual (error).
    QR decomposition using Householder factorization is applied, which gives
    very accurate results even for ill-conditioned matrices. qr_solve is twice
    as efficient.
    """
    # do not overwrite A nor b
    A, b = matrix(A, **kwargs).copy(), matrix(b, **kwargs).copy()
    if A.rows < A.cols:
        raise ValueError('cannot solve underdetermined system')
    H, p, x, r = householder(extend(A, b))
    res = norm(r)
    # calculate residual "manually" for determined systems
    if res == 0:
        res = norm(residual(A, x, b))
    return matrix(x, **kwargs), res
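
A short least-squares sketch (illustrative, not from the library source): it assumes ``qr_solve`` as defined above and mpmath's ``matrix``; the overdetermined data are made up.

from sympy.mpmath import mp, matrix

mp.dps = 20
# 3 equations, 2 unknowns: fit y = c0 + c1*t to three made-up points
A = matrix([[1, 1],
            [1, 2],
            [1, 3]])
b = matrix([1, 2, 2])
x, res = qr_solve(A, b)   # least-squares solution and norm of the residual
print(x)
print(res)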
Example #3
def findroot(f, x0, solver=Secant, tol=None, verbose=False, verify=True,
             force_type=mpmathify, **kwargs):
    r"""
    Find a solution to `f(x) = 0`, using *x0* as starting point or
    interval for *x*.

    Multidimensional overdetermined systems are supported.
    You can specify them using a function or a list of functions.

    If the found root does not satisfy `|f(x)|^2 < \mathrm{tol}`,
    an exception is raised (this can be disabled with *verify=False*).

    **Arguments**

    *f*
        one-dimensional function
    *x0*
        starting point, several starting points or interval (depends on solver)
    *tol*
        the returned solution has an error smaller than this
    *verbose*
        print additional information for each iteration if true
    *verify*
        verify the solution and raise a ValueError if `|f(x)|^2 > \mathrm{tol}`
    *force_type*
        use specified type constructor on starting points
    *solver*
        a generator for *f* and *x0* yielding approximate solutions and errors
    *maxsteps*
        after how many steps the solver stops
    *df*
        first derivative of *f* (used by some solvers)
    *d2f*
        second derivative of *f* (used by some solvers)
    *multidimensional*
        force multidimensional solving
    *J*
        Jacobian matrix of *f* (used by multidimensional solvers)
    *norm*
        used vector norm (used by multidimensional solvers)

    *solver* has to be callable with ``(f, x0, **kwargs)`` and return a generator
    yielding pairs of approximate solution and estimated error (which is
    expected to be positive). A toy solver following this protocol is sketched
    after this function.
    You can use the following string aliases:
    'secant', 'mnewton', 'halley', 'muller', 'illinois', 'pegasus', 'anderson',
    'ridder', 'anewton', 'bisect'

    See mpmath.optimization for their documentation.

    **Examples**

    The function :func:`findroot` locates a root of a given function using the
    secant method by default. A simple example use of the secant method is to
    compute `\pi` as the root of `\sin x` closest to `x_0 = 3`::

        >>> from sympy.mpmath import *
        >>> mp.dps = 30
        >>> print findroot(sin, 3)
        3.14159265358979323846264338328

    The secant method can be used to find complex roots of analytic functions,
    although it must in that case generally be given a nonreal starting value
    (or else it will never leave the real line)::

        >>> mp.dps = 15
        >>> print findroot(lambda x: x**3 + 2*x + 1, j)
        (0.226698825758202 + 1.46771150871022j)

    A nice application is to compute nontrivial roots of the Riemann zeta
    function with many digits (good initial values are needed for convergence)::

        >>> mp.dps = 30
        >>> print findroot(zeta, 0.5+14j)
        (0.5 + 14.1347251417346937904572519836j)

    The secant method can also be used as an optimization algorithm, by passing
    it a derivative of a function. The following example locates the positive
    minimum of the gamma function::

        >>> mp.dps = 20
        >>> print findroot(lambda x: diff(gamma, x), 1)
        1.4616321449683623413

    Finally, a useful application is to compute inverse functions, such as the
    Lambert W function which is the inverse of `w e^w`, given the first
    term of the solution's asymptotic expansion as the initial value. In basic
    cases, this gives identical results to mpmath's builtin ``lambertw``
    function::

        >>> def lambert(x):
        ...     return findroot(lambda w: w*exp(w) - x, log(1+x))
        ...
        >>> mp.dps = 15
        >>> print lambert(1), lambertw(1)
        0.567143290409784 0.567143290409784
        >>> print lambert(1000), lambertw(1000)
        5.2496028524016 5.2496028524016

    Multidimensional functions are also supported::

        >>> f = [lambda x1, x2: x1**2 + x2,
        ...      lambda x1, x2: 5*x1**2 - 3*x1 + 2*x2 - 3]
        >>> findroot(f, (0, 0))
        matrix(
        [['-0.618033988749895'],
         ['-0.381966011250105']])
        >>> findroot(f, (10, 10))
        matrix(
        [['1.61803398874989'],
         ['-2.61803398874989']])

    You can verify this by solving the system manually: the first equation gives
    `x_2 = -x_1^2`, and substituting this into the second yields
    `3 x_1^2 - 3 x_1 - 3 = 0`, i.e. `x_1 = (1 \pm \sqrt{5})/2`, matching the two
    solutions found above.

    **Multiple roots**

    For multiple roots all methods of the Newtonian family (including secant)
    converge slowly. Consider this example::

        >>> f = lambda x: (x - 1)**99
        >>> findroot(f, 0.9, verify=False)
        mpf('0.91807354244492868')

    Even for a very close starting point the secant method converges very
    slowly. Use ``verbose=True`` to illustrate this.

    It is possible to modify Newton's method to make it converge regardless of
    the root's multiplicity::

        >>> findroot(f, -10, solver='mnewton')
        mpf('1.0')

    This variant uses the first and second derivative of the function, which is
    not very efficient.

    Alternatively you can use an experimental Newtonian solver that keeps track
    of the speed of convergence and accelerates it using Steffensen's method if
    necessary::

        >>> findroot(f, -10, solver='anewton', verbose=True)
        x: -9.88888888888888888889
        error: 0.111111111111111111111
        converging slowly
        x: -9.77890011223344556678
        error: 0.10998877665544332211
        converging slowly
        x: -9.67002233332199662166
        error: 0.108877778911448945119
        converging slowly
        accelerating convergence
        x: -9.5622443299551077669
        error: 0.107778003366888854764
        converging slowly
        x: 0.99999999999999999214
        error: 10.562244329955107759
        x: 1.0
        error: 7.8598304758094664213e-18
        mpf('1.0')


    **Complex roots**

    For complex roots it's recommended to use Muller's method, as it converges
    very fast even for real starting points::

        >>> findroot(lambda x: x**4 + x + 1, (0, 1, 2), solver='muller')
        mpc(real='0.72713608449119684', imag='0.93409928946052944')

    **Intersection methods**

    When you need to find a root in a known interval, it's highly recommended to
    use an intersection-based solver like ``'anderson'`` or ``'ridder'``.
    They usually converge faster and more reliably. However, they have problems
    with multiple roots and usually need a sign change to find a root::

        >>> findroot(lambda x: x**3, (-1, 1), solver='anderson')
        mpf('0.0')

    Be careful with symmetric functions::

        >>> findroot(lambda x: x**2, (-1, 1), solver='anderson') #doctest:+ELLIPSIS
        Traceback (most recent call last):
          ...
        ZeroDivisionError

    It fails even for better starting points, because there is no sign change::

        >>> findroot(lambda x: x**2, (-1, .5), solver='anderson')
        Traceback (most recent call last):
          ...
        ValueError: Could not find root within given tolerance. (1 > 2.1684e-19)
        Try another starting point or tweak arguments.

    """
    # initialize arguments
    if not force_type:
        force_type = lambda x: x
    elif not tol and (force_type == float or force_type == complex):
        tol = 2**(-42)
    kwargs['verbose'] = verbose
    if 'd1f' in kwargs:
        kwargs['df'] = kwargs['d1f']
    if tol is None:
        tol = eps * 2**10
    kwargs['tol'] = tol
    if isinstance(x0, (list, tuple)):
        x0 = [force_type(x) for x in x0]
    else:
        x0 = [force_type(x0)]
    if isinstance(solver, str):
        try:
            solver = str2solver[solver]
        except KeyError:
            raise ValueError('could not recognize solver')
    # accept list of functions
    if isinstance(f, (list, tuple)):
        f2 = copy(f)
        def tmp(*args):
            return [fn(*args) for fn in f2]
        f = tmp
    # detect multidimensional functions
    try:
        fx = f(*x0)
        multidimensional = isinstance(fx, (list, tuple, matrix))
    except TypeError:
        fx = f(x0[0])
        multidimensional = False
    if 'multidimensional' in kwargs:
        multidimensional = kwargs['multidimensional']
    if multidimensional:
        # only one multidimensional solver available at the moment
        solver = MDNewton
        if 'norm' not in kwargs:
            norm = lambda x: norm_p(x, mpf('inf'))
            kwargs['norm'] = norm
        else:
            norm = kwargs['norm']
    else:
        norm = abs
    # happily return starting point if it's a root
    if norm(fx) == 0:
        if multidimensional:
            return matrix(x0)
        else:
            return x0[0]
    # use solver
    iterations = solver(f, x0, **kwargs)
    if 'maxsteps' in kwargs:
        maxsteps = kwargs['maxsteps']
    else:
        maxsteps = iterations.maxsteps
    i = 0
    for x, error in iterations:
        if verbose:
            print 'x:    ', x
            print 'error:', error
        i += 1
        if error < tol * max(1, norm(x)) or i >= maxsteps:
            break
    if not isinstance(x, (list, tuple, matrix)):
        xl = [x]
    else:
        xl = x
    if verify and norm(f(*xl))**2 > tol: # TODO: better condition?
        raise ValueError('Could not find root within given tolerance. '
                         '(%g > %g)\n'
                         'Try another starting point or tweak arguments.'
                         % (norm(f(*xl))**2, tol))
    return x
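
The solver protocol described in the docstring above can be illustrated with a toy bisection solver. This is only a sketch under the stated protocol (callable as ``solver(f, x0, **kwargs)``, iterable over pairs of approximation and estimated error, with a ``maxsteps`` attribute); mpmath already ships a proper ``'bisect'`` solver.

class ToyBisection(object):
    """Toy interval-bisection solver (illustrative only)."""
    maxsteps = 100

    def __init__(self, f, x0, **kwargs):
        self.f = f
        # findroot passes the starting points as a sequence
        self.a, self.b = x0
        if 'maxsteps' in kwargs:
            self.maxsteps = kwargs['maxsteps']

    def __iter__(self):
        f, a, b = self.f, self.a, self.b
        fa = f(a)
        while True:
            m = (a + b) / 2
            fm = f(m)
            if fa * fm <= 0:
                b = m             # root lies in [a, m]
            else:
                a, fa = m, fm     # root lies in [m, b]
            yield m, abs(b - a)   # (approximation, estimated error)

# e.g. findroot(lambda x: x**2 - 2, (1, 2), solver=ToyBisection)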