dist-dqn

Distributed Reinforcement Learning using Deep Q-Network in TensorFlow.

Distributed DQN framework for training OpenAI Gym (https://gym.openai.com/) environments over multiple GPUs. It can also be configured to run across a cluster of hosts.

Single-node training: ./scripts/dqn_single_node.sh <env_type> <env_name>
Multi-GPU training: ./scripts/dqn_multi_gpu.sh <env_type> <env_name> <num_gpus>
Currently supported values for env_type are control for classic control environments (https://gym.openai.com/envs#classic_control) and atari for Atari environments (https://gym.openai.com/envs#atari). Example invocations are shown below.
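
For example, assuming the scripts accept standard Gym environment IDs, training CartPole-v0 on a single node and Pong-v0 across two GPUs would look like:

    ./scripts/dqn_single_node.sh control CartPole-v0
    ./scripts/dqn_multi_gpu.sh atari Pong-v0 2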

Implements a simple fully-connected network with two hidden layers for small environments like CartPole (https://gym.openai.com/envs/CartPole-v0), as well as the convolutional architecture described in the DeepMind Nature DQN paper (https://storage.googleapis.com/deepmind-data/assets/papers/DeepMindNature14236Paper.pdf) for environments such as Pong (https://gym.openai.com/envs/Pong-v0).
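
As a rough illustration of the fully-connected variant (a minimal sketch only, not the repository's actual code; it assumes the TensorFlow 1.x API and a hypothetical hidden-layer width):

    import tensorflow as tf

    def build_q_network(state_dim, num_actions, hidden_units=64):
        # Batch of observations from the Gym environment.
        states = tf.placeholder(tf.float32, [None, state_dim], name='states')
        # Two fully-connected hidden layers with ReLU activations.
        h1 = tf.layers.dense(states, hidden_units, activation=tf.nn.relu)
        h2 = tf.layers.dense(h1, hidden_units, activation=tf.nn.relu)
        # Linear output layer: one Q-value estimate per action.
        q_values = tf.layers.dense(h2, num_actions, activation=None)
        return states, q_values

In DQN, actions are chosen greedily (with epsilon-exploration) from the predicted Q-values, and the network is trained to regress those values toward bootstrapped targets computed from replayed transitions.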

TODO: More info soon!
