
ALF


Agent Learning Framework (ALF) is a reinforcement learning framework emphasizing the flexibility and ease of implementing complex algorithms that involve many different components. ALF is built on PyTorch. Development of the previous version, based on TensorFlow 2.1, has been stopped.

Algorithms

Installation

You can run the following commands to install ALF:

git clone https://github.com/HorizonRobotics/alf
cd alf
pip install -e .

Examples

All the examples below were trained on a single machine with an Intel(R) Core(TM) i9-7960X CPU @ 2.80GHz (32 logical CPUs) and one RTX 2080Ti GPU.

You can train a model for any of the examples using the following command:

python -m alf.bin.train --gin_file=GIN_FILE --root_dir=LOG_DIR
  • GIN_FILE is the gin configuration file. You can find sample gin configuration files for different tasks under the directory alf/examples (note that some of the examples have not yet been converted to the latest PyTorch version of ALF).
  • LOG_DIR is the directory where you want to store the training results. Note that to train from scratch you need to use a new value for LOG_DIR; otherwise training is assumed to resume from a previous checkpoint in that directory (if any).
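
For example, assuming the A2C cart pole configuration under alf/examples is named ac_cart_pole.gin (check that directory for the exact file name in your checkout), a training run might look like:

python -m alf.bin.train --gin_file=alf/examples/ac_cart_pole.gin --root_dir=~/tmp/ac_cart_pole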

During training, you can use tensorboard to show the progress of training:

tensorboard --logdir=LOG_DIR

After training, you can visualize the trained model using the following command:

python -m alf.bin.play --root_dir=LOG_DIR
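
Continuing the hypothetical cart pole run above, replaying the trained model would look like this (play uses the checkpoint stored under the given root_dir):

python -m alf.bin.play --root_dir=~/tmp/ac_cart_pole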

A2C

  • Cart pole. The training score took only 30 seconds to reach 200, using 8 environments.

    cartpole-training-curve cartpole-video

  • Atari games. Need to install the Python package atari-py for Atari game environments (see the example install command after this list). The evaluation score (taking the argmax of the policy) took 1.5 hours to reach 800 on Breakout, using 64 environments.

    breakout-training-curve breakout-playing-screen

  • Simple navigation with visual input. Follow the instructions at SocialRobot to install the environment.

    simple-navigation-curve simple-navigation-video
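
The atari-py dependency mentioned in the Atari games example above can typically be installed with pip:

pip install atari-py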

PPO

  • PR2 grasping (state observations only). Follow the instructions at SocialRobot to install the environment.

    ppo-pr2-curve pr2-video

  • Humanoid. Learning to walk using the pybullet Humanoid environment. Need to install the Python package pybullet>=2.5.0 for the environment (see the example install command below). The training score took 1 hour 40 minutes to reach 2k, using asynchronous training with 2 actors (192 environments).

    Humanoid-training-curve Humanoid-video
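
The pybullet dependency can be installed with pip, for example:

pip install "pybullet>=2.5.0"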

DDPG

  • FetchSlide (sparse rewards). Need to install the MuJoCo simulator first. This example reproduces the performance of vanilla DDPG reported in OpenAI's Robotics environments paper. Our implementation doesn't use MPI, but obtains (evaluation) performance on par with the original implementation. (The original MPI implementation has 19 workers, each worker containing 2 environments for rollout and sampling a minibatch of size 256 from its own replay buffer for computing gradients. All the workers' gradients are summed together for a centralized optimizer step, so each step effectively uses 19*256 = 4864 samples. Our implementation simply samples a minibatch of size 5000 from a common replay buffer per optimizer step.) The training took about 1 hour with 38 (19*2) parallel environments on a single GPU.

    ddpg-fetchslide-training-curve

SAC

  • Bipedal Walker.

    bipedal-walker-training-curve bipedal-walker-video

  • FetchReach (sparse rewards). Need to install the MuJoCo simulator first. The training took about 20 minutes with 20 parallel environments on a single GPU.

    sac-fetchreach-training-curve

  • FetchSlide (sparse rewards). Need to install the MuJoCo simulator first. This is the same task as the DDPG example above, but with SAC as the learning algorithm. It also uses only 20 (instead of 38) parallel environments to improve sample efficiency. The training took about 2 hours on a single GPU.

    sac-fetchslide-training-curve

  • Fetch Environments (sparse rewards) w/ Action Repeat. We are able to achieve even better performance than that reported for DDPG + Hindsight Experience Replay in some cases, simply by using SAC + Action Repeat with a repeat length of 3 timesteps. See this note to view learning curves, videos, and more details.

ICM

  • Super Mario. Playing Super Mario using only intrinsic rewards. The Python package gym-retro>=0.7.0 is required for this experiment, and a suitable SuperMarioBros-Nes ROM should be obtained and imported (ROMs are not included in gym-retro). See this doc on how to import ROMs; example commands are shown below.

    super-mario-training-curve super-mario-video
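
gym-retro can be installed with pip, and a ROM you have legally obtained can be imported with gym-retro's import tool (the ROM directory path below is a placeholder):

pip install "gym-retro>=0.7.0"
python -m retro.import /path/to/your/roms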

RND

  • Montezuma's Revenge. Training the hard exploration game Montezuma's Revenge with intrinsic rewards generated by RND. A lucky agent can get an episodic score of 6600 in 160M frames (40M steps with frame_skip=4). A normal agent would get an episodic score of 4000~6000 in the same number of frames. The training took about 6.5 hours with 128 parallel environments on a single GPU.

    mrevenge-training-curve mrevenge-video

DIAYN

  • Pendulum. Learning diverse skills without external rewards.

    Discriminator loss Skills learned with DIAYN

Merlin

  • Collect Good Objects. Learn to collect good objects and avoid bad objects. DeepMind Lab is required; follow the instructions at DeepmindLab to install the environment.

    room-collect-good-objects-training-curve room-collect-good-objects

MISC

MuZero

  • 6x6 Go. It took about a day to train a reasonable agent to play 6x6 Go using one GPU.

    6x6-go
