def restart(self, x0, h=None):
    '''
    Resets the optimizer, returning it to its original state and allowing a
    new first estimate to be used.

    :Parameters:
      x0
        New estimate of the minimum. Estimates can be given in any format,
        but internally they are converted to a one-dimensional vector, where
        each component corresponds to the estimate of that particular
        variable. The vector is computed by flattening the array.
      h
        Convergence step. This method does not take into consideration the
        possibility of varying the convergence step, to avoid Stiefel cages.
    '''
    self.__x = array(x0).ravel()
    # Re-seed the inverse-Hessian estimate at the new starting point
    self.__B = inv(hessian(self.__f)(self.x))
    if h is not None:
        self.__h = h
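
# The methods in this excerpt lean on ``gradient`` and ``hessian`` helpers to
# seed the inverse-Hessian estimate ``__B``; their definitions are not shown
# here. What follows is a minimal finite-difference sketch of what such
# helpers could look like, assuming only the call convention ``hessian(f)(x)``
# used above -- the module's actual implementations may differ.
from numpy import array, zeros

def gradient(f, dx=1e-6):
    '''Returns a function that estimates the gradient of ``f`` by forward
    differences (hypothetical stand-in for the module's own helper).'''
    def df(x):
        x = array(x, dtype=float).ravel()
        f0 = f(x)
        g = zeros(len(x))
        for i in range(len(x)):
            xi = x.copy()
            xi[i] += dx
            g[i] = (f(xi) - f0) / dx
        return g
    return df

def hessian(f, dx=1e-4):
    '''Returns a function that estimates the Hessian of ``f`` by central
    second differences (hypothetical stand-in).'''
    def d2f(x):
        x = array(x, dtype=float).ravel()
        n = len(x)
        H = zeros((n, n))
        for i in range(n):
            for j in range(n):
                # Four-point central formula for the mixed partial d2f/dxi dxj
                xpp = x.copy(); xpp[i] += dx; xpp[j] += dx
                xpm = x.copy(); xpm[i] += dx; xpm[j] -= dx
                xmp = x.copy(); xmp[i] -= dx; xmp[j] += dx
                xmm = x.copy(); xmm[i] -= dx; xmm[j] -= dx
                H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4.0 * dx * dx)
        return H
    return d2f
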
def __init__(self, f, x0, ranges=None, df=None, h=0.1, emax=1e-5, imax=1000):
    '''
    Initializes the optimizer.

    To create an optimizer of this type, instantiate the class with the
    parameters given below:

    :Parameters:
      f
        A multivariable function to be optimized. The function should have
        only one parameter, a multidimensional line-vector, and return the
        function value, a scalar.
      x0
        First estimate of the minimum. Estimates can be given in any format,
        but internally they are converted to a one-dimensional vector, where
        each component corresponds to the estimate of that particular
        variable. The vector is computed by flattening the array.
      ranges
        A range of values may be passed to the algorithm, but it is not
        necessary. If supplied, this parameter should be a list of ranges
        for each variable of the objective function. It is specified as a
        list of tuples of two values, ``(x0, x1)``, where ``x0`` is the
        start of the interval and ``x1`` its end. Obviously, ``x0`` should
        be smaller than ``x1``. It can also be given as a list with a single
        tuple in the same format; in that case, the same range is applied to
        every variable in the optimization.
      df
        A function to calculate the gradient vector of the cost function
        ``f``. Defaults to ``None``; if no gradient is supplied, it is
        estimated from the cost function using first-order (Euler) finite
        differences.
      h
        Convergence step. This method does not take into consideration the
        possibility of varying the convergence step, to avoid Stiefel cages.
      emax
        Maximum allowed error. The algorithm stops as soon as the error
        falls below this level. The error is absolute.
      imax
        Maximum number of iterations. The algorithm stops as soon as this
        number of iterations is executed, no matter what the error is at
        the moment.
    '''
    Optimizer.__init__(self)
    self.__f = f
    self.__x = array(x0).ravel()
    if df is None:
        self.__df = gradient(f)
    else:
        self.__df = df
    # Seed the inverse-Hessian estimate at the first point
    self.__B = inv(hessian(self.__f)(self.x))
    self.__h = h

    # Determine ranges of the variables. A single range is replicated once
    # per variable; len(self.__x) is used since x0 may come in any format.
    if ranges is not None:
        ranges = list(ranges)
        if len(ranges) == 1:
            ranges = array(ranges * len(self.__x))
        else:
            ranges = array(ranges)
    self.ranges = ranges
    '''Holds the ranges for every variable. Although it is a writable
    property, care should be taken when changing parameters before the
    convergence ends.'''

    self.__emax = float(emax)
    self.__imax = int(imax)
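
# Hypothetical usage sketch. The class name ``QuasiNewtonOptimizer`` and the
# way the optimizer is iterated are assumptions -- only ``__init__`` and
# ``restart`` appear in this excerpt.
if __name__ == '__main__':
    def f(x):
        # Simple convex quadratic with its minimum at (1, 2)
        return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

    opt = QuasiNewtonOptimizer(f, [0.0, 0.0], emax=1e-8, imax=500)
    # ... iterate the optimizer to convergence here ...

    # Discard the accumulated inverse-Hessian estimate and search again from
    # a new starting point, this time with a smaller convergence step.
    opt.restart([5.0, -3.0], h=0.05)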