Modified from https://github.com/g6ling/Reinforcement-Learning-Pytorch-Cartpole.
How to use this repo:

- Make sure that you have installed Miniconda (for Linux, see https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html).
- `cd` into this repo, then `cd packages`.
- `conda create --name pomdpr python=3.7`, where `pomdpr` stands for POMDP and Robotics.
- `conda activate pomdpr`.
- `chmod +x install_packages.sh`, where `chmod` makes the bash script executable on your device.
- `./install_packages.sh` installs `numpy scipy torch gym PyYAML wandb`, as well as `rl_parsers` and `gym-pomdp` (these two are stored inside `drqn/packages`).
- `cd ..` back to the top level of the repo.
- Test your installation using `python algorithms/POMDP/3-DRQN-Store-State-HeavenHell/train.py --lr=0.00005 --use_experts=0 --seed=1 --debug_mode=1`, where `debug_mode=1` makes sure that `wandb` is not used.
- `wandb login` (required before running with `wandb` enabled, i.e., without `debug_mode=1`).
- Do anything you want now.
Note that if you modify the Heaven-Hell POMDP file (e.g., modify the initial belief or the starting-state distribution), you will need to re-install `gym-pomdp` for the change to take effect.
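A minimal sketch of that re-install, assuming `gym-pomdp` is installed with `pip` from its directory under `packages` as described above (the exact directory name is an assumption; adjust the path to wherever the package lives in your checkout):

```shell
# After editing the .pomdp file, re-install gym-pomdp so the packaged
# environment definition picks up the change.
cd packages/gym-pomdp              # assumed location; adjust if needed
pip install --force-reinstall --no-deps .
cd ../..                           # back to the top level of the repo
```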
Below is the README from the original repo.
Simple CartPole examples written with PyTorch.
CartPole is a very easy problem and converges very fast in many cases, so you can run these examples on your own computer (each takes only a minute or two).
- DQN [1]
- Double [2]
- Duel [3]
- Multi-step [4]
- PER (Prioritized Experience Replay) [5]
- Noisy-Net [6]
- Distributional (C51) [7]
- Rainbow [8]
- REINFORCE [9]
- Actor Critic [10]
- Advantage Actor Critic
- GAE (Generalized Advantage Estimation) [12]
- TNPG [20]
- TRPO [13]
- PPO - Single Version [14]
- Asynchronous Q-learning [11]
- A3C (Asynchronous Advantage Actor Critic) [11]
- ACER [21]
- PPO [14]
- APE-X DQN [15]
- IMPALA [23]
- R2D2 [16]
- DQN (use state stack)
- DRQN [24] [25]
- DRQN (use state stack)
- DRQN (store RNN state) [16]
- R2D2 - Single Version [16]
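Many of the value-based variants above (DQN, Double, Duel, Rainbow, DRQN) share the same epsilon-greedy action selection at their core. A stdlib-only sketch of that rule (the function name is my own, not from the repo):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon take a uniformly random action,
    otherwise take the action with the highest Q-value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    # argmax over action indices
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon=0 the choice is purely greedy.
print(epsilon_greedy([0.1, 0.5, 0.3], 0.0))  # -> 1
```

Training typically anneals `epsilon` from 1.0 toward a small floor so exploration gives way to exploitation.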
[1] Playing Atari with Deep Reinforcement Learning
[2] Deep Reinforcement Learning with Double Q-learning
[3] Dueling Network Architectures for Deep Reinforcement Learning
[4] Reinforcement Learning: An Introduction
[5] Prioritized Experience Replay
[6] Noisy Networks for Exploration
[7] A Distributional Perspective on Reinforcement Learning
[8] Rainbow: Combining Improvements in Deep Reinforcement Learning
[9] Policy Gradient Methods for Reinforcement Learning with Function Approximation
[10] Actor-Critic Algorithms
[11] Asynchronous Methods for Deep Reinforcement Learning
[12] High-Dimensional Continuous Control Using Generalized Advantage Estimation
[13] Trust Region Policy Optimization
[14] Proximal Policy Optimization
[15] Distributed Prioritized Experience Replay
[16] Recurrent Experience Replay in Distributed Reinforcement Learning
[17] Exploration by Random Network Distillation
[18] Distributional Reinforcement Learning with Quantile Regression
[19] Implicit Quantile Networks for Distributional Reinforcement Learning
[20] A Natural Policy Gradient
[21] Sample Efficient Actor-Critic with Experience Replay
[22] Curiosity-driven Exploration by Self-supervised Prediction
[23] IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
[24] Deep Recurrent Q-Learning for Partially Observable MDPs
[25] Playing FPS Games with Deep Reinforcement Learning
- https://github.com/openai/baselines
- https://github.com/reinforcement-learning-kr/pg_travel
- https://github.com/reinforcement-learning-kr/distributional_rl
- https://github.com/Kaixhin/Rainbow
- https://github.com/Kaixhin/ACER
- https://github.com/higgsfield/RL-Adventure-2
Check this issue: g6ling/Reinforcement-Learning-Pytorch-Cartpole#1