Optimus

Optimus is a customized cluster scheduler for deep learning training jobs that targets high job performance and resource efficiency in production clusters. It builds resource-performance models for each job on the go, and dynamically schedules resources to jobs based on job progress and cluster load to maximize training performance and resource efficiency. It uses MXNet as the distributed training framework and is integrated with the Kubernetes cluster manager.
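
To make the resource-performance idea concrete, below is a minimal, hypothetical sketch (not Optimus's actual model or code): it fits a simple speed model from a few profiled (parameter servers, workers) data points and predicts the training speed of a candidate allocation. The model form, sample numbers, and names are illustrative assumptions.

# Hypothetical sketch of the resource-performance idea (not Optimus's actual model):
# fit training speed f(p, w) from a few profiled samples, then predict the speed
# of a candidate allocation. Numbers and the model form are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def speed_model(pw, a, b, c):
    # assumed per-batch time = compute + PS communication + per-worker overhead
    p, w = pw
    return w / (a + b * w / p + c * w)   # aggregate samples/sec across w workers

# profiled (num_ps, num_workers) points and measured speeds (samples/sec)
allocations = np.array([[1, 1], [1, 2], [2, 2], [2, 4], [4, 4]], dtype=float).T
speeds = np.array([55.0, 90.0, 105.0, 160.0, 190.0])

params, _ = curve_fit(speed_model, allocations, speeds,
                      p0=[0.01, 0.005, 0.001], bounds=(0, np.inf))

# predicted speed if the job were given 3 parameter servers and 6 workers
print(speed_model((3.0, 6.0), *params))

In Optimus itself, such per-job models are built and refined as the job runs, and resources are then scheduled based on job progress and cluster load, as described above.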

Setup

OS Environment

(1) Ubuntu 14.04.5 Server 64bit LTS;

(2) Install the basic tools by running the script preinstall.sh.

Cluster Environment

Set up the following platforms in the cluster:

(1) HDFS 2.8: see hadoop.md for more details;

(2) Docker 17.06.0-ce: see the official installation tutorial https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/;

(3) Kubernetes 1.7: see k8s.md for detailed installation steps (the official guide is outdated). Generally, you need to install from the modified source code. Configure the config-default.sh script with cluster node information (e.g., the master IP) and label each node as a CPU or GPU node in label_nodes.sh (a hedged sketch of the labeling step follows this list). Run start.sh to start the resource manager and shutdown.sh to shut it down.
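
For illustration only, the snippet below shows one way the CPU/GPU labeling in label_nodes.sh could be expressed; the node names and the label key are assumptions, so use whatever values your config-default.sh and scheduler expect.

# Hypothetical illustration of the node-labeling step done by label_nodes.sh:
# tag each node so the scheduler can distinguish CPU nodes from GPU nodes.
# Node names and the label key are assumptions.
import subprocess

cpu_nodes = ["node-1", "node-2"]
gpu_nodes = ["node-3"]

for node in cpu_nodes:
    subprocess.run(["kubectl", "label", "nodes", node, "hardware=cpu", "--overwrite"], check=True)
for node in gpu_nodes:
    subprocess.run(["kubectl", "label", "nodes", node, "hardware=gpu", "--overwrite"], check=True)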

Container Environment

(1) MXNet CPU container: see k8s-mxnet-cpu-experiment.Dockerfile and build.sh for how the MXNet container is built. To get faster training speed on Intel CPUs, set USE_MKL2017=1 and USE_MKL2017_EXPERIMENTAL=1 when building the container to enable the Intel Math Kernel Library. To get even faster training speed, copy the scripts into the image to enable balanced parameter assignment.

(2) MXNet GPU container (if the server has NVIDIA GPUs): see k8s-mxnet-gpu-experiment.Dockerfile and build.sh. Note that the NVIDIA Docker plugin is required in this case; see https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker-plugin for installation details.

CUDA Environment

Run install-nvidia-driver-cuda-cudnn.sh to install:

(1) NVIDIA Driver version >= 375.66;

(2) CUDA version >= 8.0.61;

(3) cuDNN library version >= 6.0.

Usage

A Simple Example

To train a ResNet-50 model in a distributed way in a Kubernetes cluster:

(1) Set the number of parameter servers and workers, and the HDFS URL of the ImageNet dataset, in measure-speed.py;

(2) Run

$ python measure-speed.py

Basically, what it does is submit a job to Kubernetes and display training details (e.g., training progress, speed, CPU usage) every 5 minutes. See here for more examples.
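
As a rough, hypothetical sketch of this submit-and-monitor pattern (not the script's actual code), the loop below submits a job spec to Kubernetes and reports pod status every 5 minutes; the spec file name and label selector are assumptions.

# Hypothetical sketch of the pattern measure-speed.py follows: submit the job to
# Kubernetes, then poll and report every 5 minutes. The spec file name and the
# label selector are assumptions; the real script also parses training logs for
# progress, speed, and CPU usage.
import subprocess
import time

subprocess.run(["kubectl", "create", "-f", "resnet50-job.yaml"], check=True)

while True:
    out = subprocess.run(["kubectl", "get", "pods", "-l", "job=resnet50"],
                         capture_output=True, text=True).stdout
    print(out)
    if "Running" not in out:   # crude stop condition for this sketch
        break
    time.sleep(300)            # report every 5 minutes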

Submit Your Job

(1) Prepare the container: copy your program into the script folder under the image path and build the image by running

$ ./build.sh

(2) Similar to the simple example, configure the job details, such as the image path and container resource requirements, and run measure-speed.py (an illustrative configuration is sketched below).
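
For orientation only, the following shows the kind of settings you would typically adjust; the variable names and values here are illustrative assumptions, not the actual names used in measure-speed.py.

# Hypothetical example of the kind of job settings to edit before running
# measure-speed.py; the variable names and values are assumptions.
image = "your-registry/mxnet-gpu-experiment:latest"   # image built by build.sh
num_ps = 2               # number of parameter servers
num_worker = 4           # number of workers
worker_cpu = 4           # CPU cores requested per worker container
worker_mem = "8Gi"       # memory requested per worker container
worker_gpu = 1           # GPUs per worker (0 for CPU-only jobs)
data_url = "hdfs://<namenode>:9000/your-dataset/"     # HDFS URL of the training data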

More

Read the technical report for the details of Optimus.
