BBopt aims to provide the easiest hyperparameter optimization you'll ever do. Think of BBopt like Keras for black box optimization: one universal interface for working with any black box optimization backend.
BBopt's features include:
- a universal API for defining your tunable parameters based on the standard library `random` module (so you don't even have to learn anything new!),
- tons of state-of-the-art black box optimization algorithms such as Gaussian processes from `scikit-optimize` or tree-structured Parzen estimation from `hyperopt` for tuning parameters,
- the ability to switch algorithms (even across different backends!) while retaining all previous trials,
- multiprocessing-safe data saving to enable running multiple trials in parallel,
- support for optimizing over conditional parameters that only appear during some runs,
- support for all major Python versions (`2.7` or `3.4+`), and
- a straightforward interface for extending BBopt with your own custom algorithms.
Once you've defined your parameters, training a black box optimization model on those parameters is as simple as

```
bbopt your_file.py
```

and serving your file with optimized parameters is as simple as

```python
import your_file
```

To get going with BBopt, just install it with

```
pip install bbopt
```

or, to also install the extra dependencies necessary for running BBopt's examples, run `pip install bbopt[examples]`.
To use bbopt, just add

```python
# BBopt setup:
from bbopt import BlackBoxOptimizer
bb = BlackBoxOptimizer(file=__file__)
if __name__ == "__main__":
    bb.run()
```

to the top of your file, then call

```python
x = bb.uniform("x", 0, 1)
```

for each of the tunable parameters in your model, and finally add

```python
bb.maximize(y)  # or bb.minimize(y)
```

to set the value being optimized. Then, run

```
bbopt <your file here> -n <number of trials> -j <number of processes>
```

to train your model, and just

```
import <your module here>
```

to serve it!
Some examples of BBopt in action:

- `random_example.py`: Extremely basic example using the `random` backend.
- `skopt_example.py`: Slightly more complex example making use of the `gaussian_process` algorithm from the `scikit-optimize` backend.
- `hyperopt_example.py`: Example showcasing the `tree_structured_parzen_estimator` algorithm from the `hyperopt` backend.
- `numpy_example.py`: Example which showcases how to have numpy array parameters.
- `conditional_skopt_example.py`: Example of having black box parameters that are dependent on other black box parameters using the `gaussian_process` algorithm from the `scikit-optimize` backend.
- `conditional_hyperopt_example.py`: Example of doing conditional parameters with the `tree_structured_parzen_estimator` algorithm from the `hyperopt` backend.
- `keras_example.py`: Complete example of using BBopt to optimize a neural network built with Keras. Uses the full API to implement its own optimization loop and thus avoid the overhead of running the entire file multiple times.
- `mixture_example.py`: Example of using the `mixture` backend to randomly switch between different algorithms.
- `json_example.py`: Example of using `json` instead of `pickle` to save parameters.
- Command-Line Interface
- Black Box Optimization Methods
- Parameter Definition Methods
- Writing Your Own Backend
The `bbopt` command is extremely simple in terms of what it actually does. For the command `bbopt <file> -n <trials> -j <processes>`, BBopt simply runs `python <file>` a number of times equal to `<trials>`, split across `<processes>` different processes.

Why does this work? If you're using the basic boilerplate, then running `python <file>` will trigger the `if __name__ == "__main__":` clause, which will run a training episode. But when you go to `import` your file, the `if __name__ == "__main__":` clause won't get triggered, and you'll just get served the best parameters found so far. Since the command-line interface is so simple, advanced users who want to use the full API instead of the boilerplate need not use the `bbopt` command at all. If you want more information on the `bbopt` command, just run `bbopt -h`.
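Conceptually, the command reduces to something like the following sketch (an illustration only, not BBopt's actual source):

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_trials(path, trials, processes):
    """Roughly what `bbopt <path> -n <trials> -j <processes>` does:
    launch `python <path>` once per trial, keeping at most `processes`
    subprocesses running at a time. Returns the exit codes."""
    with ThreadPoolExecutor(max_workers=processes) as pool:
        return list(pool.map(
            lambda _: subprocess.call([sys.executable, path]),
            range(trials),
        ))
```

Each trial is a fresh Python process, which is why multiprocessing-safe data saving matters when `-j` is greater than 1.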
`BlackBoxOptimizer(file, protocol=None)`

Create a new `bb` object; this should be done at the beginning of your program, as all the other functions are methods of this object.

`file` is used by BBopt to figure out where to load and save data to, and should usually just be set to `__file__` (BBopt uses `os.path.splitext(file)[0]` as the base path for the data file).

`protocol` determines how BBopt serializes data. If `None` (the default), BBopt will use pickle protocol 2, which is the highest version that works on both Python 2 and Python 3 (unless a `json` file is present, in which case BBopt will use `json`). To use the newest protocol instead, pass `protocol=-1`. If `protocol="json"`, BBopt will use `json` instead of `pickle`, which is occasionally useful for cross-platform compatibility.
`BlackBoxOptimizer.run(alg="tree_structured_parzen_estimator")`

Start optimizing using the given black box optimization algorithm. Use `algs` to get the valid values for `alg`.

If this method is never called, or called with `alg=None`, BBopt will just serve the best parameters found so far, which is how the basic boilerplate works. Note that if no saved parameter data is found and a `guess` is present, BBopt will use that, which is a good way of distributing your parameter values without including all your saved parameter data.
`BlackBoxOptimizer.algs`

A dictionary mapping each algorithm valid for use in `run` to a `(backend, kwargs)` pair giving the backend that algorithm corresponds to and the arguments that backend will be called with.

Supported algorithms are:

- `"serving"` (or `None`) (`serving` backend),
- `"random"` (`random` backend),
- `"tree_structured_parzen_estimator"` (`hyperopt` backend) (the default),
- `"annealing"` (`hyperopt` backend),
- `"gaussian_process"` (`scikit-optimize` backend),
- `"random_forest"` (`scikit-optimize` backend),
- `"extra_trees"` (`scikit-optimize` backend), and
- `"gradient_boosted_regression_trees"` (`scikit-optimize` backend).
`BlackBoxOptimizer.run_backend(backend, *args, **kwargs)`

The base function behind `run`. Instead of specifying an algorithm, `run_backend` lets you specify the specific backend you want to call and the parameters you want to call it with. Different backends do different things with the remaining arguments:

- `scikit-optimize` passes the arguments to `skopt.Optimizer`,
- `hyperopt` passes the arguments to `fmin`, and
- `mixture` expects a `distribution` argument to specify the mixture of different algorithms to use, specifically a list of `(alg, weight)` tuples.
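To illustrate what a `distribution` of `(alg, weight)` tuples means, here is a conceptual sketch of weighted algorithm selection (this is not BBopt's implementation, just the idea behind it):

```python
import random

def pick_alg(distribution, rng=random):
    """Pick one algorithm from a list of (alg, weight) tuples,
    with probability proportional to each weight."""
    total = sum(weight for _, weight in distribution)
    r = rng.uniform(0, total)
    upto = 0
    for alg, weight in distribution:
        upto += weight
        if r <= upto:
            return alg
    return distribution[-1][0]  # guard against float rounding
```

So a distribution like `[("gaussian_process", 3), ("random", 1)]` would mean roughly three Gaussian-process trials for every one random-exploration trial.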
`BlackBoxOptimizer.minimize(value)`

Finish optimizing and set the loss for this run to `value`. To start another run, call `run` again.

`BlackBoxOptimizer.maximize(value)`

Same as `minimize`, but sets the gain instead of the loss.

`BlackBoxOptimizer.remember(info)`

Update the current run's `"memo"` field with the given `info` dictionary. Useful for saving information about a run that shouldn't actually impact optimization but that you would like to have access to later (using `get_optimal_run`, for example).

`BlackBoxOptimizer.get_current_run()`

Get information on the current run, including the values of all parameters encountered so far and the loss/gain of the run, if specified yet.

`BlackBoxOptimizer.get_optimal_run()`

Get information on the best run so far. These are the parameters that will be used if `run` is not called.

`BlackBoxOptimizer.get_data()`

Dump a dictionary containing all the information on your program collected by BBopt.

`BlackBoxOptimizer.data_file`

The path of the file where BBopt is saving data to.

`BlackBoxOptimizer.backend`

The backend object being used by the current `BlackBoxOptimizer` instance.
Every BBopt parameter definition method has the form

```python
bb.<random function>(<name>, <args>, **kwargs)
```

where

- the method itself specifies what distribution is being modeled,
- the first argument is always `name`, a unique string identifying that parameter,
- following `name` are whatever arguments are needed to specify the distribution's parameters, and
- at the end are keyword arguments, which are the same for all the different methods. The supported kwargs are:
  - `guess`, which specifies the initial value for the parameter, and
  - `placeholder_when_missing`, which specifies what placeholder value a conditional parameter should be given if missing.

_Important note:_ Once you bind a name to a parameter, you cannot change that parameter's options. Thus, if the options defining your parameters can vary from run to run, you must use a different name for each possible combination.
`BlackBoxOptimizer.randrange(name, stop, **kwargs)`
`BlackBoxOptimizer.randrange(name, start, stop, step=1, **kwargs)`

Create a new parameter modeled by `random.randrange(start, stop, step)`, which is equivalent to `random.choice(range(start, stop, step))`, but can be much more efficient.

Backends which support `randrange`: `scikit-optimize`, `hyperopt`, `random`.
`BlackBoxOptimizer.randint(name, a, b, **kwargs)`

Create a new parameter modeled by `random.randint(a, b)`, which is equivalent to `random.randrange(a, b + 1)` (both endpoints are inclusive).

Backends which support `randint`: `scikit-optimize`, `hyperopt`, `random`.
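Since `random.randint` includes both of its endpoints, it matches `random.randrange(a, b + 1)`, as a quick stdlib-only check demonstrates:

```python
import random

random.seed(0)
# random.randint(1, 3) can produce 1, 2, and 3 -- both endpoints included
vals = {random.randint(1, 3) for _ in range(500)}
assert vals == {1, 2, 3}
```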
`BlackBoxOptimizer.getrandbits(name, k, **kwargs)`

Create a new parameter modeled by `random.getrandbits(k)`, which is equivalent to `random.randrange(0, 2**k)`.

Backends which support `getrandbits`: `scikit-optimize`, `hyperopt`, `random`.
`BlackBoxOptimizer.choice(name, seq, **kwargs)`

Create a new parameter modeled by `random.choice(seq)`, which chooses an element from `seq`.

Backends which support `choice`: `scikit-optimize`, `hyperopt`, `random`.
`BlackBoxOptimizer.randbool(name, **kwargs)`

Create a new boolean parameter, modeled by the equivalent of `random.choice([True, False])`.

Backends which support `randbool`: `scikit-optimize`, `hyperopt`, `random`.
`BlackBoxOptimizer.shuffle(name, x, **kwargs)`

Create a new parameter modeled by `random.shuffle(x)`, except that it returns the shuffled list instead of shuffling it in place.

Backends which support `shuffle`: `scikit-optimize`, `hyperopt`, `random`.
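The return-a-shuffled-copy behavior (as opposed to `random.shuffle`'s in-place shuffling) can be mimicked with the stdlib alone:

```python
import random

def shuffled(x, rng=random):
    """Like random.shuffle(x), but returns a new shuffled list
    and leaves x untouched."""
    result = list(x)
    rng.shuffle(result)
    return result
```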
`BlackBoxOptimizer.sample(name, population, k, **kwargs)`

Create a new parameter modeled by `random.sample(population, k)`, which chooses `k` elements from `population`.

Backends which support `sample`: `scikit-optimize`, `hyperopt`, `random`.
`BlackBoxOptimizer.random(name, **kwargs)`

Create a new parameter modeled by `random.random()`, which is equivalent to `random.uniform(0, 1)`.

Backends which support `random`: `scikit-optimize`, `hyperopt`, `random`.
`BlackBoxOptimizer.uniform(name, a, b, **kwargs)`

Create a new parameter modeled by `random.uniform(a, b)`, which uniformly selects a float between `a` and `b`.

Backends which support `uniform`: `scikit-optimize`, `hyperopt`, `random`.
`BlackBoxOptimizer.loguniform(name, min_val, max_val, **kwargs)`

Create a new parameter modeled by

```python
math.exp(random.uniform(math.log(min_val), math.log(max_val)))
```

which logarithmically selects a float between `min_val` and `max_val`.

Backends which support `loguniform`: `scikit-optimize`, `hyperopt`, `random`.
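As a stdlib-only sanity check, the same formula samples uniformly in log space and then exponentiates, so every sample lands between the bounds and each order of magnitude is equally likely (handy for things like learning rates):

```python
import math
import random

def loguniform(min_val, max_val, rng=random):
    """Uniform in log space, then exponentiated, so each order of
    magnitude between min_val and max_val is equally likely."""
    return math.exp(rng.uniform(math.log(min_val), math.log(max_val)))
```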
`BlackBoxOptimizer.normalvariate(name, mu, sigma, **kwargs)`

Create a new parameter modeled by `random.normalvariate(mu, sigma)`.

Backends which support `normalvariate`: `hyperopt`, `random`.
`BlackBoxOptimizer.lognormvariate(name, mu, sigma, **kwargs)`

Create a new parameter modeled by `random.lognormvariate(mu, sigma)`, such that the natural log is a normal distribution with mean `mu` and standard deviation `sigma`.

Backends which support `lognormvariate`: `hyperopt`, `random`.
`BlackBoxOptimizer.rand(name, *shape, **kwargs)`

Create a new parameter modeled by `numpy.random.rand(*shape)`, which creates a `numpy` array of the given shape with entries generated uniformly in `[0, 1)`.

Backends which support `rand`: `scikit-optimize`, `hyperopt`, `random`.
`BlackBoxOptimizer.randn(name, *shape, **kwargs)`

Create a new parameter modeled by `numpy.random.randn(*shape)`, which creates a `numpy` array of the given shape with entries generated according to a standard normal distribution.

Backends which support `randn`: `hyperopt`, `random`.
BBopt's backend system is built to be extremely extensible, allowing anyone to write and register their own BBopt backends. The basic template for writing a BBopt backend is as follows:

```python
from bbopt.backends.util import Backend

class MyBackend(Backend):
    backend_name = "my-backend"
    implemented_funcs = [
        ...,  # list the random functions you support here
    ]

    def __init__(self, examples, params, **options):
        self.init_fallback_backend()
        # the values you want to use for this run as a dict;
        # you can use params to get the args for each param
        # and examples to get all the past data (to see what
        # examples and params look like, use bb.get_data)
        self.current_values = ...

MyBackend.register()
MyBackend.register_alg("my_alg")
```
Once you've written a BBopt backend as above, you simply need to import it to trigger the `register` calls and enable it to be used in BBopt. For some example BBopt backends, see BBopt's default backends (written in Coconut):