def local_optimize(cost, x0, lb, ub):
    from mystic.solvers import PowellDirectionalSolver
    from mystic.termination import NormalizedChangeOverGeneration as NCOG
    from mystic.monitors import VerboseMonitor, Monitor

    maxiter = 1000
    maxfun = 1e+6
    convergence_tol = 1e-4

    #def func_unpickle(filename):
    #    """ standard pickle.load of function from a File """
    #    import dill as pickle
    #    return pickle.load(open(filename,'r'))

    #stepmon = VerboseMonitor(100)
    stepmon = Monitor()
    evalmon = Monitor()

    ndim = len(lb)

    solver = PowellDirectionalSolver(ndim)
    solver.SetInitialPoints(x0)
    solver.SetStrictRanges(min=lb, max=ub)
    solver.SetEvaluationLimits(maxiter, maxfun)
    solver.SetEvaluationMonitor(evalmon)
    solver.SetGenerationMonitor(stepmon)

    tol = convergence_tol
    #cost = func_unpickle(cost)  #XXX: regenerate cost function from file
    solver.Solve(cost, termination=NCOG(tol))

    solved_params = solver.bestSolution
    solved_energy = solver.bestEnergy
    func_evals = solver.evaluations
    return solved_params, solved_energy, func_evals
def local_optimize(cost, x0, lb, ub):
    from mystic.solvers import PowellDirectionalSolver
    from mystic.termination import NormalizedChangeOverGeneration as NCOG
    from mystic.monitors import VerboseMonitor, Monitor

    maxiter = 1000
    maxfun = 1e+6
    convergence_tol = 1e-4

    #stepmon = VerboseMonitor(100)
    stepmon = Monitor()
    evalmon = Monitor()

    ndim = len(lb)

    solver = PowellDirectionalSolver(ndim)
    solver.SetInitialPoints(x0)
    solver.SetStrictRanges(min=lb, max=ub)
    solver.SetEvaluationLimits(maxiter, maxfun)
    solver.SetEvaluationMonitor(evalmon)
    solver.SetGenerationMonitor(stepmon)

    tol = convergence_tol
    solver.Solve(cost, termination=NCOG(tol))

    solved_params = solver.bestSolution
    solved_energy = solver.bestEnergy
    func_evals = solver.evaluations
    return solved_params, solved_energy, func_evals
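# A minimal usage sketch of local_optimize (illustrative only): it assumes
# mystic is installed; the quadratic cost, starting point, and bounds below
# are hypothetical and not taken from the original code.
def example_cost(x):
    # simple 2-D quadratic with its minimum at (1, 2)
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

x0 = [0.0, 0.0]    # initial guess
lb = [-5.0, -5.0]  # lower bounds
ub = [ 5.0,  5.0]  # upper bounds

params, energy, evals = local_optimize(example_cost, x0, lb, ub)
print(params, energy, evals)  # expect params near [1, 2] and energy near 0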
def __init__(self, dim):
    """
Takes one initial input:
    dim  -- dimensionality of the problem
    """
    AbstractSolver.__init__(self, dim)
    self._direc = None  # this is the easy way to return 'direc'...
    x1 = self.population[0]
    fx = self.popEnergy[0]
    #                  [x1, fx, bigind, delta]
    self.__internals = [x1, fx, 0, 0.0]
    ftol, gtol = 1e-4, 2
    from mystic.termination import NormalizedChangeOverGeneration as NCOG
    self._termination = NCOG(ftol, gtol)
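# As the constructor above shows, NCOG(1e-4, 2) is the default termination for
# PowellDirectionalSolver. A sketch of overriding that default before solving
# (assumes mystic's public solver API; the tolerances chosen are illustrative):
from mystic.solvers import PowellDirectionalSolver
from mystic.termination import NormalizedChangeOverGeneration as NCOG
from mystic.models import rosen

solver = PowellDirectionalSolver(2)
solver.SetInitialPoints([2., 3.])
solver.SetTermination(NCOG(tolerance=1e-8, generations=5))  # tighter than the default
solver.Solve(rosen)
print(solver.bestSolution, solver.bestEnergy)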
def test_PowellDirectionalSolver_NCOG(self): # Default for this solver
    from mystic.solvers import PowellDirectionalSolver
    from mystic.termination import NormalizedChangeOverGeneration as NCOG
    self.solver = PowellDirectionalSolver(self.ND)
    self.term = NCOG()
    self._run_solver()
def test_NelderMeadSimplexSolver_NCOG(self):
    from mystic.solvers import NelderMeadSimplexSolver
    from mystic.termination import NormalizedChangeOverGeneration as NCOG
    self.solver = NelderMeadSimplexSolver(self.ND)
    self.term = NCOG()
    self._run_solver()
def test_DifferentialEvolutionSolver2_NCOG(self):
    from mystic.solvers import DifferentialEvolutionSolver2
    from mystic.termination import NormalizedChangeOverGeneration as NCOG
    self.solver = DifferentialEvolutionSolver2(self.ND, self.NP)
    self.term = NCOG()
    self._run_solver()
def test_rosenbrock():
    """Test the 2-dimensional Rosenbrock function.

Testing 2-D Rosenbrock:
Expected: x=[1., 1.] and f=0

Using DifferentialEvolutionSolver:
Solution:  [ 1.00000037  1.0000007 ]
f value:  2.29478683682e-13
Iterations:  99
Function evaluations:  3996
Time elapsed:  0.582273006439  seconds

Using DifferentialEvolutionSolver2:
Solution:  [ 0.99999999  0.99999999]
f value:  3.84824937598e-15
Iterations:  100
Function evaluations:  4040
Time elapsed:  0.577210903168  seconds

Using NelderMeadSimplexSolver:
Solution:  [ 0.99999921  1.00000171]
f value:  1.08732211477e-09
Iterations:  70
Function evaluations:  130
Time elapsed:  0.0190329551697  seconds

Using PowellDirectionalSolver:
Solution:  [ 1.  1.]
f value:  0.0
Iterations:  28
Function evaluations:  859
Time elapsed:  0.113857030869  seconds
"""
    # these are module-level imports in the original test file
    import time
    from mystic.monitors import Monitor
    from mystic.math import almostEqual

    print("Testing 2-D Rosenbrock:")
    print("Expected: x=[1., 1.] and f=0")
    from mystic.models import rosen as costfunc
    ndim = 2
    lb = [-5.]*ndim
    ub = [5.]*ndim
    x0 = [2., 3.]
    maxiter = 10000

    # DifferentialEvolutionSolver
    print("\nUsing DifferentialEvolutionSolver:")
    npop = 40
    from mystic.solvers import DifferentialEvolutionSolver
    from mystic.termination import ChangeOverGeneration as COG
    from mystic.strategy import Rand1Bin
    esow = Monitor()
    ssow = Monitor()
    solver = DifferentialEvolutionSolver(ndim, npop)
    solver.SetInitialPoints(x0)
    solver.SetStrictRanges(lb, ub)
    solver.SetEvaluationLimits(generations=maxiter)
    solver.SetEvaluationMonitor(esow)
    solver.SetGenerationMonitor(ssow)
    term = COG(1e-10)
    time1 = time.time() # Is this an ok way of timing?
    solver.Solve(costfunc, term, strategy=Rand1Bin)
    sol = solver.Solution()
    time_elapsed = time.time() - time1
    fx = solver.bestEnergy
    print("Solution: ", sol)
    print("f value: ", fx)
    print("Iterations: ", solver.generations)
    print("Function evaluations: ", len(esow.x))
    print("Time elapsed: ", time_elapsed, " seconds")
    assert almostEqual(fx, 2.29478683682e-13, tol=3e-3)

    # DifferentialEvolutionSolver2
    print("\nUsing DifferentialEvolutionSolver2:")
    npop = 40
    from mystic.solvers import DifferentialEvolutionSolver2
    from mystic.termination import ChangeOverGeneration as COG
    from mystic.strategy import Rand1Bin
    esow = Monitor()
    ssow = Monitor()
    solver = DifferentialEvolutionSolver2(ndim, npop)
    solver.SetInitialPoints(x0)
    solver.SetStrictRanges(lb, ub)
    solver.SetEvaluationLimits(generations=maxiter)
    solver.SetEvaluationMonitor(esow)
    solver.SetGenerationMonitor(ssow)
    term = COG(1e-10)
    time1 = time.time() # Is this an ok way of timing?
    solver.Solve(costfunc, term, strategy=Rand1Bin)
    sol = solver.Solution()
    time_elapsed = time.time() - time1
    fx = solver.bestEnergy
    print("Solution: ", sol)
    print("f value: ", fx)
    print("Iterations: ", solver.generations)
    print("Function evaluations: ", len(esow.x))
    print("Time elapsed: ", time_elapsed, " seconds")
    assert almostEqual(fx, 3.84824937598e-15, tol=3e-3)

    # NelderMeadSimplexSolver
    print("\nUsing NelderMeadSimplexSolver:")
    from mystic.solvers import NelderMeadSimplexSolver
    from mystic.termination import CandidateRelativeTolerance as CRT
    esow = Monitor()
    ssow = Monitor()
    solver = NelderMeadSimplexSolver(ndim)
    solver.SetInitialPoints(x0)
    solver.SetStrictRanges(lb, ub)
    solver.SetEvaluationLimits(generations=maxiter)
    solver.SetEvaluationMonitor(esow)
    solver.SetGenerationMonitor(ssow)
    term = CRT()
    time1 = time.time() # Is this an ok way of timing?
    solver.Solve(costfunc, term)
    sol = solver.Solution()
    time_elapsed = time.time() - time1
    fx = solver.bestEnergy
    print("Solution: ", sol)
    print("f value: ", fx)
    print("Iterations: ", solver.generations)
    print("Function evaluations: ", len(esow.x))
    print("Time elapsed: ", time_elapsed, " seconds")
    assert almostEqual(fx, 1.08732211477e-09, tol=3e-3)

    # PowellDirectionalSolver
    print("\nUsing PowellDirectionalSolver:")
    from mystic.solvers import PowellDirectionalSolver
    from mystic.termination import NormalizedChangeOverGeneration as NCOG
    esow = Monitor()
    ssow = Monitor()
    solver = PowellDirectionalSolver(ndim)
    solver.SetInitialPoints(x0)
    solver.SetStrictRanges(lb, ub)
    solver.SetEvaluationLimits(generations=maxiter)
    solver.SetEvaluationMonitor(esow)
    solver.SetGenerationMonitor(ssow)
    term = NCOG(1e-10)
    time1 = time.time() # Is this an ok way of timing?
    solver.Solve(costfunc, term)
    sol = solver.Solution()
    time_elapsed = time.time() - time1
    fx = solver.bestEnergy
    print("Solution: ", sol)
    print("f value: ", fx)
    print("Iterations: ", solver.generations)
    print("Function evaluations: ", len(esow.x))
    print("Time elapsed: ", time_elapsed, " seconds")
    assert almostEqual(fx, 0.0, tol=3e-3)
random_seed(123)

ndim = 9
nbins = 8  #[2,1,2,1,2,1,2,1,1]

# draw frame and exact coefficients
plot_exact()

# configure monitor
stepmon = VerboseMonitor(1)

# use lattice-Powell to solve 8th-order Chebyshev coefficients
solver = LatticeSolver(ndim, nbins)
solver.SetNestedSolver(PowellDirectionalSolver)
solver.SetMapper(Pool().map)
solver.SetGenerationMonitor(stepmon)
solver.SetStrictRanges(min=[-300]*ndim, max=[300]*ndim)
solver.Solve(chebyshev8cost, NCOG(1e-4), disp=1)
solution = solver.Solution()

# use pretty print for polynomials
print(poly1d(solution))

# compare solution with actual 8th-order Chebyshev coefficients
print("\nActual Coefficients:\n %s\n" % poly1d(chebyshev8coeffs))

# plot solution versus exact coefficients
plot_solution(solution)
getch()

# end of file
ndim = 9
nbins = 8  #[2,1,2,1,2,1,2,1,1]

# draw frame and exact coefficients
plot_exact()

# configure monitor
stepmon = VerboseMonitor(1)

# use lattice-Powell to solve 8th-order Chebyshev coefficients
solver = LatticeSolver(ndim, nbins)
solver.SetNestedSolver(PowellDirectionalSolver)
solver.SetMapper(Pool().map)
solver.SetGenerationMonitor(stepmon)
solver.SetStrictRanges(min=[-300]*ndim, max=[300]*ndim)
solver.Solve(chebyshev8cost, NCOG(1e-4), disp='all', step=False)
solution = solver.Solution()
shutdown()  # help multiprocessing shutdown all workers

# use pretty print for polynomials
print(poly1d(solution))

# compare solution with actual 8th-order Chebyshev coefficients
print("\nActual Coefficients:\n %s\n" % poly1d(chebyshev8coeffs))

# plot solution versus exact coefficients
plot_solution(solution)
getch()

# end of file
def fmin_powell(cost, x0, args=(), bounds=None, xtol=1e-4, ftol=1e-4,
                maxiter=None, maxfun=None, full_output=0, disp=1, retall=0,
                callback=None, direc=None, **kwds):
    """Minimize a function using modified Powell's method.

Uses a modified Powell Directional Search algorithm to find the minimum of a
function of one or more variables. This method only uses function values, not
derivatives. Mimics the ``scipy.optimize.fmin_powell`` interface.

Powell's method is a conjugate direction method that has two loops. The outer
loop simply iterates over the inner loop, while the inner loop minimizes over
each current direction in the direction set. At the end of the inner loop, if
certain conditions are met, the direction that gave the largest decrease is
dropped and replaced with the difference between the current estimated x and
the estimated x from the beginning of the inner loop.

The conditions for replacing the direction of largest increase are that:
(a) no further gain can be made along the direction of greatest increase in
the current iteration, and (b) the direction of greatest increase accounted
for a sufficiently large fraction of the decrease in the function value from
the current iteration of the inner loop.

Args:
    cost (func): the function or method to be minimized: ``y = cost(x)``.
    x0 (ndarray): the initial guess parameter vector ``x``.
    args (tuple, default=()): extra arguments for cost.
    bounds (list(tuple), default=None): list of pairs of bounds (min,max),
        one for each parameter.
    xtol (float, default=1e-4): acceptable relative error in ``xopt`` for
        convergence.
    ftol (float, default=1e-4): acceptable relative error in ``cost(xopt)``
        for convergence.
    gtol (float, default=2): maximum iterations to run without improvement.
    maxiter (int, default=None): the maximum number of iterations to perform.
    maxfun (int, default=None): the maximum number of function evaluations.
    full_output (bool, default=False): True if fval and warnflag are desired.
    disp (bool, default=True): if True, print convergence messages.
    retall (bool, default=False): True if allvecs is desired.
    callback (func, default=None): function to call after each iteration. The
        interface is ``callback(xk)``, with xk the current parameter vector.
    direc (tuple, default=None): the initial direction set.
    handler (bool, default=False): if True, enable handling interrupt signals.
    itermon (monitor, default=None): override the default GenerationMonitor.
    evalmon (monitor, default=None): override the default EvaluationMonitor.
    constraints (func, default=None): a function ``xk' = constraints(xk)``,
        where xk is the current parameter vector, and xk' is a parameter
        vector that satisfies the encoded constraints.
    penalty (func, default=None): a function ``y' = penalty(xk)``, where xk is
        the current parameter vector, and ``y' == 0`` when the encoded
        constraints are satisfied (and ``y' > 0`` otherwise).

Returns:
    ``(xopt, {fopt, iter, funcalls, warnflag, direc}, {allvecs})``

Notes:
    - xopt (*ndarray*): the minimizer of the cost function
    - fopt (*float*): value of cost function at minimum: ``fopt = cost(xopt)``
    - iter (*int*): number of iterations
    - funcalls (*int*): number of function calls
    - warnflag (*int*): warning flag:
        - ``1 : Maximum number of function evaluations``
        - ``2 : Maximum number of iterations``
    - direc (*tuple*): the current direction set
    - allvecs (*list*): a list of solutions at each iteration
    """
    #FIXME: need to resolve "direc"
    #       - should just pass 'direc', and then hands-off?  How return it?
    #XXX: enable use of imax?
    handler = kwds['handler'] if 'handler' in kwds else False

    from mystic.monitors import Monitor
    stepmon = kwds['itermon'] if 'itermon' in kwds else Monitor()
    evalmon = kwds['evalmon'] if 'evalmon' in kwds else Monitor()

    gtol = 2  # termination generations (scipy: 2, default: 10)
    if 'gtol' in kwds: gtol = kwds['gtol']
    if gtol:  # if number of generations is provided, use NCOG
        from mystic.termination import NormalizedChangeOverGeneration as NCOG
        termination = NCOG(ftol, gtol)
    else:
        from mystic.termination import VTRChangeOverGeneration
        termination = VTRChangeOverGeneration(ftol)

    solver = PowellDirectionalSolver(len(x0))
    solver.SetInitialPoints(x0)
    solver.SetEvaluationLimits(maxiter, maxfun)
    solver.SetEvaluationMonitor(evalmon)
    solver.SetGenerationMonitor(stepmon)
    if 'penalty' in kwds:
        solver.SetPenalty(kwds['penalty'])
    if 'constraints' in kwds:
        solver.SetConstraints(kwds['constraints'])
    if bounds is not None:
        minb, maxb = unpair(bounds)
        solver.SetStrictRanges(minb, maxb)

    if handler: solver.enable_signal_handler()
    solver.Solve(cost, termination=termination,
                 xtol=xtol, ExtraArgs=args, callback=callback,
                 disp=disp, direc=direc)  #XXX: last two lines use **kwds
    solution = solver.Solution()

    # code below here pushes output to scipy.optimize.fmin_powell interface
    #x = list(solver.bestSolution)
    x = solver.bestSolution
    fval = solver.bestEnergy
    warnflag = 0
    fcalls = solver.evaluations
    iterations = solver.generations
    allvecs = stepmon.x
    direc = solver._direc

    if fcalls >= solver._maxfun:
        warnflag = 1
    elif iterations >= solver._maxiter:
        warnflag = 2

    x = squeeze(x)  #FIXME: write squeezed x to stepmon instead?

    if full_output:
        retlist = x, fval, iterations, fcalls, warnflag, direc
        if retall:
            retlist += (allvecs,)
    else:
        retlist = x
        if retall:
            retlist = (x, allvecs)

    return retlist
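# A short usage sketch of this interface (illustrative; assumes mystic is
# installed and uses the built-in 2-D Rosenbrock model):
from mystic.solvers import fmin_powell
from mystic.models import rosen

x0 = [2., 3.]                      # illustrative starting point
xopt = fmin_powell(rosen, x0, disp=0)
print(xopt)                        # expect a result near [1., 1.]

# with full_output=1, fopt, iter, funcalls, warnflag, and direc are also returned
results = fmin_powell(rosen, x0, full_output=1, disp=0)
xopt, fopt = results[0], results[1]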
print "===========================" start = time.time() from mystic.monitors import Monitor, VerboseMonitor stepmon = VerboseMonitor(1, 1) #stepmon = Monitor() #VerboseMonitor(10) from mystic.termination import NormalizedChangeOverGeneration as NCOG #from mystic._scipyoptimize import fmin_powell from mystic.solvers import fmin_powell, PowellDirectionalSolver #print fmin_powell(rosen,x0,retall=0,full_output=0)#,maxiter=14) solver = PowellDirectionalSolver(len(x0)) solver.SetInitialPoints(x0) solver.SetStrictRanges(min, max) #solver.SetEvaluationLimits(generations=13) solver.SetGenerationMonitor(stepmon) solver.SetConstraints(constrain) solver.enable_signal_handler() solver.Solve(rosen, NCOG(tolerance=1e-4), disp=1) print solver.bestSolution #print "Current function value: %s" % solver.bestEnergy #print "Iterations: %s" % solver.generations #print "Function evaluations: %s" % solver.evaluations times.append(time.time() - start) algor.append("Powell's Method\t") for k in range(len(algor)): print algor[k], "\t -- took", times[k] # end of file
freeze_support()
try:
    from pathos.pools import ProcessPool as Pool
    #from pathos.pools import ThreadPool as Pool
    #from pathos.pools import ParallelPool as Pool
except ImportError:
    from mystic.pools import SerialPool as Pool
_map = Pool().map

# tools
from mystic.termination import VTR, ChangeOverGeneration as COG
from mystic.termination import NormalizedChangeOverGeneration as NCOG
from mystic.monitors import LoggingMonitor, VerboseMonitor, Monitor
from klepto.archives import dir_archive

stop = NCOG(1e-4)
disp = False        # print optimization summary
stepmon = False     # use LoggingMonitor
archive = False     # save an archive
traj = not stepmon  # save all trajectories internally, if no logs

# cost function
from mystic.models import griewangk as model
ndim = 2  # model dimensionality
bounds = ndim * [(-9.5, 9.5)]  # griewangk

# the ensemble solvers
from mystic.solvers import BuckshotSolver, LatticeSolver, SparsitySolver
# the local solvers
from mystic.solvers import PowellDirectionalSolver
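# With the pieces configured above, a run of one ensemble solver might look
# like the sketch below. The choice of BuckshotSolver and npts=8 starting
# points is an illustrative assumption, and the logging/archiving options
# above are not wired in here.
npts = 8  # number of nested solver instances in the ensemble (assumed value)
lb = [b[0] for b in bounds]
ub = [b[1] for b in bounds]

solver = BuckshotSolver(ndim, npts)
solver.SetNestedSolver(PowellDirectionalSolver)
solver.SetMapper(_map)
solver.SetStrictRanges(lb, ub)
solver.SetGenerationMonitor(Monitor())
solver.Solve(model, stop, disp=disp)
print(solver.bestSolution, solver.bestEnergy)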
print("===============") # dimensional information from mystic.tools import random_seed random_seed(12) ndim = len(pointvec) nbins = 1 #[2,1,2,1,2,1,2,1,1] # configure monitor stepmon = VerboseMonitor(1) # use lattice-Powell to solve 8th-order Chebyshev coefficients solver = LatticeSolver(ndim, nbins) solver.SetNestedSolver(PowellDirectionalSolver) solver.SetMapper(Pool().map) solver.SetGenerationMonitor(stepmon) solver.SetStrictRanges(min=[0]*ndim, max=[1]*ndim) solver.SetConstraints(constraintfunc) solver.Solve(test_obj, NCOG(1e+2), disp=1) solution = solver.Solution() # use pretty print for polynomials print(solution) for i in list(range(0,len(pointvec))): if round(solution[i])==1: print(namevec[i]) print(team_c(solution)) # end of file
def fmin_powell(cost, x0, args=(), bounds=None, xtol=1e-4, ftol=1e-4,
                maxiter=None, maxfun=None, full_output=0, disp=1, retall=0,
                callback=None, direc=None, **kwds):
    """Minimize a function using modified Powell's method.

Description:

    Uses a modified Powell Directional Search algorithm to find the minimum
    of a function of one or more variables. Mimics the
    scipy.optimize.fmin_powell interface.

Inputs:

    cost -- the Python function or method to be minimized.
    x0 -- ndarray - the initial guess.

Additional Inputs:

    args -- extra arguments for cost.
    bounds -- list - n pairs of bounds (min,max), one pair for each parameter.
    xtol -- number - acceptable relative error in xopt for convergence.
    ftol -- number - acceptable relative error in cost(xopt) for convergence.
    gtol -- number - maximum number of iterations to run without improvement.
    maxiter -- number - the maximum number of iterations to perform.
    maxfun -- number - the maximum number of function evaluations.
    full_output -- number - non-zero if fval and warnflag outputs are desired.
    disp -- number - non-zero to print convergence messages.
    retall -- number - non-zero to return list of solutions at each iteration.
    callback -- an optional user-supplied function to call after each
        iteration.  It is called as callback(xk), where xk is the current
        parameter vector.
    direc -- initial direction set.
    handler -- boolean - enable/disable handling of interrupt signal.
    itermon -- monitor - override the default GenerationMonitor.
    evalmon -- monitor - override the default EvaluationMonitor.
    constraints -- an optional user-supplied function.  It is called as
        constraints(xk), where xk is the current parameter vector.  This
        function must return xk', a parameter vector that satisfies the
        encoded constraints.
    penalty -- an optional user-supplied function.  It is called as
        penalty(xk), where xk is the current parameter vector.  This function
        should return y', with y' == 0 when the encoded constraints are
        satisfied, and y' > 0 otherwise.

Returns: (xopt, {fopt, iter, funcalls, warnflag, direc}, {allvecs})

    xopt -- ndarray - minimizer of function
    fopt -- number - value of function at minimum: fopt = cost(xopt)
    iter -- number - number of iterations
    funcalls -- number - number of function calls
    warnflag -- number - Integer warning flag:
        1 : 'Maximum number of function evaluations.'
        2 : 'Maximum number of iterations.'
    direc -- current direction set
    allvecs -- list - a list of solutions at each iteration
    """
    #FIXME: need to resolve "direc"
    #       - should just pass 'direc', and then hands-off?  How return it?
    handler = kwds['handler'] if 'handler' in kwds else False

    from mystic.monitors import Monitor
    stepmon = kwds['itermon'] if 'itermon' in kwds else Monitor()
    evalmon = kwds['evalmon'] if 'evalmon' in kwds else Monitor()

    gtol = 2  # termination generations (scipy: 2, default: 10)
    if 'gtol' in kwds: gtol = kwds['gtol']
    if gtol:  # if number of generations is provided, use NCOG
        from mystic.termination import NormalizedChangeOverGeneration as NCOG
        termination = NCOG(ftol, gtol)
    else:
        from mystic.termination import VTRChangeOverGeneration
        termination = VTRChangeOverGeneration(ftol)

    solver = PowellDirectionalSolver(len(x0))
    solver.SetInitialPoints(x0)
    solver.SetEvaluationLimits(maxiter, maxfun)
    solver.SetEvaluationMonitor(evalmon)
    solver.SetGenerationMonitor(stepmon)
    if 'penalty' in kwds:
        solver.SetPenalty(kwds['penalty'])
    if 'constraints' in kwds:
        solver.SetConstraints(kwds['constraints'])
    if bounds is not None:
        minb, maxb = unpair(bounds)
        solver.SetStrictRanges(minb, maxb)

    if handler: solver.enable_signal_handler()
    solver.Solve(cost, termination=termination,
                 xtol=xtol, ExtraArgs=args, callback=callback,
                 disp=disp, direc=direc)  #XXX: last two lines use **kwds
    solution = solver.Solution()

    # code below here pushes output to scipy.optimize.fmin_powell interface
    #x = list(solver.bestSolution)
    x = solver.bestSolution
    fval = solver.bestEnergy
    warnflag = 0
    fcalls = solver.evaluations
    iterations = solver.generations
    allvecs = stepmon.x
    direc = solver._direc

    if fcalls >= solver._maxfun:
        warnflag = 1
    elif iterations >= solver._maxiter:
        warnflag = 2

    x = squeeze(x)  #FIXME: write squeezed x to stepmon instead?

    if full_output:
        retlist = x, fval, iterations, fcalls, warnflag, direc
        if retall:
            retlist += (allvecs,)
    else:
        retlist = x
        if retall:
            retlist = (x, allvecs)

    return retlist