Optimus

Optimus is a customized cluster scheduler for deep learning training jobs that targets high job performance and resource efficiency in production clusters. It builds a resource-performance model for each job on the fly and dynamically allocates resources to jobs based on their training progress and the cluster load, so as to maximize overall training performance and resource efficiency. The implementation uses MXNet as the distributed training framework and schedules jobs on Kubernetes.
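For intuition, the sketch below shows one piece of such a model: fitting a parametric loss-vs-epoch curve to a job's observed training loss and extrapolating how many epochs remain until a target loss. The functional form, the data, and the target here are illustrative assumptions, not necessarily the exact model in the paper.

```python
# Sketch only: fit a loss-vs-epoch model online and predict epochs to
# convergence. The model form l(k) = 1/(b0*k + b1) + b2 and all numbers
# below are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def loss_model(k, b0, b1, b2):
    # Decreasing, convex loss curve in the number of epochs k.
    return 1.0 / (b0 * k + b1) + b2

# (epoch, training loss) samples collected so far -- made-up data.
epochs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
losses = np.array([2.10, 1.30, 1.00, 0.85, 0.77])

(b0, b1, b2), _ = curve_fit(loss_model, epochs, losses,
                            p0=(1.0, 1.0, 0.1),
                            bounds=(0.0, np.inf))  # non-negative coefficients

# Extrapolate: how many more epochs until the loss reaches the target?
target, k = 0.5, epochs[-1]
while loss_model(k, b0, b1, b2) > target and k < 1000:
    k += 1
print("estimated epochs to reach target loss:", int(k))
```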

Setup

Optimus is not bound to any specific version of the software listed below; you can skip this part if you already have a working Kubernetes cluster.

Software Environment

(1) Ubuntu 14.04.5 Server 64bit LTS;

(2) HDFS 2.8;

(3) Docker 17.06.0-ce;

(4) Kubernetes 1.7;

(5) NVIDIA Driver version >= 375.66;

(6) CUDA version >= 8.0.61;

(7) CuDNN Library version >= 6.0

See docs for the installation guide.

Container Environment

MXNet GPU container (if the server has NVIDIA GPUs): see images. You may also build a CPU container.

Usage

The parameter server (PS) load-balancing algorithm and code are in mxnet and work with MXNet 1.0. The scheduling code is in scheduler. Before running experimentor.py, make sure the hyper-parameters in params.py are correct.
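The core scheduling idea, dividing cluster resources across jobs by predicted marginal gain, can be sketched as follows. The speed curve, job fields, and function names here are illustrative assumptions; see the scheduler code for the real logic.

```python
# Sketch only: greedily assign workers one at a time to the job whose
# next worker yields the largest predicted speed gain. The concave speed
# curve below stands in for the fitted resource-performance model.
import heapq
import math

def predicted_speed(job, workers):
    # Placeholder with diminishing returns in the number of workers.
    return job["scale"] * math.sqrt(workers)

def allocate(jobs, total_workers):
    alloc = {j["name"]: 0 for j in jobs}

    def marginal(job):
        w = alloc[job["name"]]
        return predicted_speed(job, w + 1) - predicted_speed(job, w)

    # Max-heap on marginal gain (negated for Python's min-heap).
    heap = [(-marginal(j), j["name"], j) for j in jobs]
    heapq.heapify(heap)
    for _ in range(total_workers):
        _, name, job = heapq.heappop(heap)
        alloc[name] += 1
        heapq.heappush(heap, (-marginal(job), name, job))
    return alloc

jobs = [{"name": "resnet", "scale": 3.0}, {"name": "seq2seq", "scale": 2.0}]
print(allocate(jobs, total_workers=4))  # e.g. {'resnet': 3, 'seq2seq': 1}
```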

Please use the images for running, or build your own by copying the scripts into your own CPU or GPU image. These scripts parse training logs and collect training speed, loss, accuracy, etc.
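As an example of what such a script does, here is a minimal parser for the default MXNet Speedometer log format; adapt the regex if your training logs look different.

```python
# Sketch only: pull epoch, speed, and accuracy out of MXNet training logs.
import re

LINE_RE = re.compile(
    r"Epoch\[(?P<epoch>\d+)\].*?"
    r"Speed:\s*(?P<speed>[\d.]+)\s*samples/sec"
    r"(?:.*?accuracy=(?P<acc>[\d.]+))?")

def parse_line(line):
    m = LINE_RE.search(line)
    if not m:
        return None  # not a progress line
    return {"epoch": int(m.group("epoch")),
            "speed": float(m.group("speed")),
            "accuracy": float(m.group("acc")) if m.group("acc") else None}

sample = "INFO:root:Epoch[3] Batch [200]\tSpeed: 1514.10 samples/sec\taccuracy=0.89"
print(parse_line(sample))  # {'epoch': 3, 'speed': 1514.1, 'accuracy': 0.89}
```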

All training examples (e.g., image classification) in the paper are from the open-source community. Most are from the official MXNet examples, where you can find how to run them (e.g., how to prepare the training data and start training). The machine translation example is from sockeye.

This is a prototype, so it may take some time to make it work on your testbed. Before running the code, please read the scheduler code to understand how Optimus interacts with Kubernetes; that may save you a lot of time when you encounter bugs.
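The basic interaction pattern is: the scheduler renders pod specs for a job's parameter servers and workers, submits them to Kubernetes, and polls pod state to track progress. Below is a minimal sketch of that loop, assuming kubectl is configured on the machine running the scheduler; the file and pod names are hypothetical.

```python
# Sketch only: submit a pod spec and poll its phase via kubectl.
import json
import subprocess

def submit(yaml_path):
    # Equivalent to running `kubectl create -f <file>` by hand.
    subprocess.check_call(["kubectl", "create", "-f", yaml_path])

def pod_phase(pod_name):
    out = subprocess.check_output(
        ["kubectl", "get", "pod", pod_name, "-o", "json"])
    return json.loads(out)["status"]["phase"]  # Pending / Running / ...

submit("worker-0.yaml")       # hypothetical spec for one MXNet worker pod
print(pod_phase("worker-0"))  # poll until Running before launching training
```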

More

Read the Optimus paper and the morning report for details.

Contact yhpeng@cs.hku.hk if you have any questions.
