- Software framework for benchmarking hyperparameter optimization (HPO) techniques for machine learning (ML) use cases in the production domain
- The framework applies HPO techniques to user-defined production use cases and benchmarks their performance in a comparable way
- For each experiment, the framework provides several performance metrics, learning curves, and log files
- The framework can be extended with further HPO techniques, ML techniques, and data sets
- Run the script `run_benchmark.py` to start a benchmarking experiment; positional and optional command-line arguments define the use case (see the example invocation below)
- The framework conducts the benchmarking experiment according to this setup and generates performance metrics, log files, and learning curves
- Optional: apply further scripts from `./analysis` for a detailed performance analysis
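The exact argument names are defined by the argument parser of `run_benchmark.py` and are not listed here; the placeholders in the following invocation pattern are therefore hypothetical and only illustrate how a use case might be specified:

```bash
# Hypothetical placeholders; check the argument parser of run_benchmark.py for the real arguments
python run_benchmark.py <dataset> <ml_technique> <hpo_technique> [optional arguments]
```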
Benchmarking process:
Supported HPO techniques (by library; a minimal sketch follows this list):
- HpBandSter: BOHB and Hyperband
- Optuna: random search, TPE and CMA-ES
- Scikit-Optimize: GPBO and SMAC
- RoBO: FABOLAS and BOHAMIANN
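As a hedged, standalone sketch (not code from this framework), the snippet below shows one of the listed techniques, Optuna's TPE sampler, minimizing a toy two-dimensional objective; in a real experiment the objective would be the validation loss of an ML model:

```python
import optuna

def objective(trial):
    # Two continuous hyperparameters; the optimum is at x = 2, y = -1
    x = trial.suggest_float("x", -10, 10)
    y = trial.suggest_float("y", -10, 10)
    return (x - 2) ** 2 + (y + 1) ** 2

# TPE is one of the Optuna samplers listed above
study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=50)
print(study.best_params)  # close to {'x': 2.0, 'y': -1.0}
```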
Supported ML techniques (by library; a short sketch follows this list):
- Scikit-Learn: multilayer perceptron, random forest, support vector machine, AdaBoost, decision tree, linear regression, k-nearest neighbors, logistic regression, naive Bayes, and elastic net
- Keras: multilayer perceptron
- XGBoost: XGBRegressor and XGBClassifier
- LightGBM: LGBMRegressor and LGBMClassifier
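As a similarly hedged sketch, the snippet below evaluates one of the listed ML techniques, XGBoost's `XGBRegressor`, by cross-validation on toy data; the hyperparameters fixed here (`n_estimators`, `max_depth`, `learning_rate`) are the kind of values the HPO techniques above would search over, and the cross-validated error is the kind of objective they would minimize:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

# Toy regression data standing in for a production data set
X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)

# Hyperparameters of this kind would be exposed to the HPO techniques
model = XGBRegressor(n_estimators=100, max_depth=4, learning_rate=0.1,
                     random_state=0)

# 3-fold cross-validated MSE as the objective an HPO run would minimize
mse = -cross_val_score(model, X, y, cv=3,
                       scoring="neg_mean_squared_error").mean()
print(f"CV MSE: {mse:.3f}")
```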