def __init__(self, f, df=None, T0=1000., rt=0.95, h=0.05, emax=1e-8, imax=1000):
    '''
    Initializes the optimizer.

    To create an optimizer of this type, instantiate the class with the
    parameters given below:

    :Parameters:
      f
        A multivariable function to be optimized. The function should have
        only one parameter, a multidimensional line-vector, and return the
        function value, a scalar.
      df
        A function to calculate the gradient vector of the cost function
        ``f``. Defaults to ``None``; if no gradient is supplied, it is
        estimated from the cost function using a finite-difference (Euler)
        approximation.
      T0
        Initial temperature of the system. The temperature is, of course, an
        analogy. Defaults to 1000.
      rt
        Temperature decreasing rate. The temperature must slowly decrease in
        simulated annealing algorithms; in this implementation, this is
        controlled by this parameter. At each step, the temperature is
        multiplied by this value, so it is necessary that ``0 < rt < 1``.
        Defaults to 0.95; smaller values make the temperature decay faster,
        while larger values make it decay slower.
      h
        Convergence step. If the neighbor estimate is not accepted, a simple
        gradient step is executed instead; this parameter is the convergence
        step of that gradient step.
      emax
        Maximum allowed error. The algorithm stops as soon as the error is
        below this level. The error is absolute.
      imax
        Maximum number of iterations. The algorithm stops as soon as this
        number of iterations is executed, no matter what the error is at the
        moment.
    '''
    self.__f = f
    if df is None:
        self.__df = gradient(f)
    else:
        self.__df = df
    self.__t = float(T0)
    self.__r = float(rt)
    self.__h = float(h)
    self.__emax = float(emax)
    self.__imax = int(imax)
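
# The ``gradient`` helper used above is not shown in this section. Below is a
# minimal sketch of how such a fallback might be implemented, assuming a
# forward-difference scheme over numpy line-vectors; the name, signature and
# step size are illustrative assumptions, not necessarily the library's own.
import numpy

def _finite_difference_gradient(f, dx=1e-6):
    '''Returns a function that estimates the gradient of ``f`` by forward
    differences with step ``dx``.'''
    def df(x):
        x = numpy.asarray(x, dtype=float)
        f0 = f(x)
        g = numpy.zeros_like(x)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += dx
            g[i] = (f(xp) - f0) / dx    # forward-difference estimate
        return g
    return df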
def __init__(self, f, df=None, B=None, h=0.1, emax=1e-5, imax=1000):
    '''
    Initializes the optimizer.

    To create an optimizer of this type, instantiate the class with the
    parameters given below:

    :Parameters:
      f
        A multivariable function to be optimized. The function should have
        only one parameter, a multidimensional line-vector, and return the
        function value, a scalar.
      df
        A function to calculate the gradient vector of the cost function
        ``f``. Defaults to ``None``; if no gradient is supplied, it is
        estimated from the cost function using a finite-difference (Euler)
        approximation.
      B
        A first estimate of the inverse hessian. Note that, unlike in the
        Newton method, the elements of this matrix are numbers, not
        functions: it is an estimate at a given point, and its values
        *should* be coherent with the first estimate (that is, ``B`` should
        be the inverse of the hessian evaluated at the first estimate), or
        else the algorithm might diverge. Defaults to ``None``; if none is
        given, it is estimated. For the same reasons, the estimation of
        ``B`` is deferred to the first call of the ``step`` method, where it
        is handled automatically.
      h
        Convergence step. This method does not take into consideration the
        possibility of varying the convergence step, to avoid Stiefel cages.
      emax
        Maximum allowed error. The algorithm stops as soon as the error is
        below this level. The error is absolute.
      imax
        Maximum number of iterations. The algorithm stops as soon as this
        number of iterations is executed, no matter what the error is at the
        moment.
    '''
    Optimizer.__init__(self)
    self.__f = f
    if df is None:
        self.__df = gradient(f)
    else:
        self.__df = df
    self.__B = B
    self.__h = float(h)
    self.__emax = float(emax)
    self.__imax = int(imax)
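
# A usage sketch for the constructor above, assuming it belongs to a
# quasi-Newton class with a ``step`` method, as the docstring states. The
# class name ``QuasiNewton``, the Rosenbrock cost function and the identity
# matrix as initial inverse-hessian estimate are illustrative assumptions;
# the docstring recommends the true inverse hessian at the first estimate
# instead of the identity.
from numpy import array, eye

def rosenbrock(x):
    # Classic two-dimensional test function with minimum at (1, 1).
    return (1. - x[0])**2 + 100.*(x[1] - x[0]**2)**2

opt = QuasiNewton(rosenbrock, B=eye(2), h=0.1, emax=1e-5, imax=1000)

# When ``B`` is omitted, its estimation is deferred to the first call of
# ``step``; the exact call signature of ``step`` is not shown in this
# section, so it is left here as a commented assumption:
# x = array([ -1., 1. ])
# x = opt.step(x)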
def __init__(self, f, df=None, hf=None, h=0.1, emax=1e-5, imax=1000):
    '''
    Initializes the optimizer.

    To create an optimizer of this type, instantiate the class with the
    parameters given below:

    :Parameters:
      f
        A multivariable function to be optimized. The function should have
        only one parameter, a multidimensional line-vector, and return the
        function value, a scalar.
      df
        A function to calculate the gradient vector of the cost function
        ``f``. Defaults to ``None``; if no gradient is supplied, it is
        estimated from the cost function using a finite-difference (Euler)
        approximation.
      hf
        A function to calculate the hessian matrix of the cost function
        ``f``. Defaults to ``None``; if no hessian is supplied, it is
        estimated from the cost function using finite-difference
        approximations.
      h
        Convergence step. This method does not take into consideration the
        possibility of varying the convergence step, to avoid Stiefel cages.
      emax
        Maximum allowed error. The algorithm stops as soon as the error is
        below this level. The error is absolute.
      imax
        Maximum number of iterations. The algorithm stops as soon as this
        number of iterations is executed, no matter what the error is at the
        moment.
    '''
    Optimizer.__init__(self)
    self.__f = f
    if df is None:
        self.__df = gradient(f)
    else:
        self.__df = df
    if hf is None:
        self.__hf = hessian(f)
    else:
        self.__hf = hf
    self.__h = float(h)
    self.__emax = float(emax)
    self.__imax = int(imax)
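
# A sketch of supplying analytic derivatives to the constructor above,
# assuming a Newton-type class (called ``Newton`` here for illustration).
# For the quadratic f(x) = x'Ax/2 - b'x, the gradient is Ax - b and the
# hessian is the constant matrix A, so the analytic forms are exact and no
# finite-difference estimation is needed.
from numpy import array, dot

A = array([ [ 4., 1. ],
            [ 1., 3. ] ])
b = array([ 1., 2. ])

f  = lambda x: 0.5*dot(x, dot(A, x)) - dot(b, x)
df = lambda x: dot(A, x) - b        # gradient vector, Ax - b
hf = lambda x: A                    # hessian matrix, constant for a quadratic

opt = Newton(f, df=df, hf=hf, h=0.1, emax=1e-5, imax=1000)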