RL-Experiments

RL-Experiments aims to make it easy to modify and compare deep RL algorithms on a single machine. For distributed training, I highly recommend ray.

The code mostly follows openai/baselines but is implemented in PyTorch. We also highlight the differences between our implementation and the original papers; these notes can be found by searching for "highlight" in the code. Evaluated on 4 Atari games on a single machine, our implementation is on average 15% faster than baselines with similar performance.
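These notes can be listed with a quick recursive search from the repository root. The command below is a minimal sketch and assumes the marker is the literal word "highlight" in code comments:

# list places where the implementation intentionally deviates from the papers
grep -rn "highlight" src/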

Dependency

Evaluation

Using the same default parameters as openai/baselines and random seed 0, the FPS and performance on four environments are shown below.

Devices:

Pong

| FPS | A2C | DQN | PPO | TRPO |
| --- | --- | --- | --- | --- |
| Ours | 1667 | 277 | 1515 | 513 |
| Baselines | 1596 | 246 | 1089 | 501 |


SpaceInvaders

| FPS | A2C | DQN | PPO | TRPO |
| --- | --- | --- | --- | --- |
| Ours | 1667 | 278 | 1550 | 501 |
| Baselines | 1572 | 247 | 1186 | 440 |


BeamRider

| FPS | A2C | DQN | PPO | TRPO |
| --- | --- | --- | --- | --- |
| Ours | 1667 | 272 | 1515 | 494 |
| Baselines | 1543 | 243 | 1062 | 451 |


Seaquest

| FPS | A2C | DQN | PPO | TRPO |
| --- | --- | --- | --- | --- |
| Ours | 1667 | 275 | 1515 | 501 |
| Baselines | 1572 | 236 | 1203 | 481 |


Usage

git clone https://github.com/Officium/RL-Experiments.git
cd RL-Experiments/src
python run.py --env=CartPole-v1 --algorithm=dqn --number_timesteps=1e5
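
The Atari experiments above use the same entry point. The invocation below is only a sketch: the environment id follows Gym's standard Atari naming and the timestep budget is illustrative; neither is confirmed as a default of this repo.

# assumed Atari run; only --env, --algorithm and --number_timesteps come from the usage above
python run.py --env=PongNoFrameskip-v4 --algorithm=ppo --number_timesteps=1e7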

Implemented algorithms