ChainerRL is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement learning algorithms in Python using Chainer, a flexible deep learning framework.
ChainerRL is tested with Python 3.5.1+. For other requirements, see requirements.txt.
ChainerRL can be installed via PyPI:

```
pip install chainerrl
```
It can also be installed from the source code:

```
python setup.py install
```
Refer to Installation for more information on installation.
You can try the ChainerRL Quickstart Guide first, or check the examples ready for Atari 2600 and OpenAI Gym.
For more information, you can refer to ChainerRL's documentation.
Algorithm | Discrete Action | Continuous Action | Recurrent Model | Batch Training | CPU Async Training |
---|---|---|---|---|---|
DQN (including DoubleDQN etc.) | ✓ | ✓ (NAF) | ✓ | ✓ | x |
Categorical DQN | ✓ | x | ✓ | ✓ | x |
Rainbow | ✓ | x | ✓ | ✓ | x |
IQN | ✓ | x | ✓ | ✓ | x |
DDPG | x | ✓ | ✓ | ✓ | x |
A3C | ✓ | ✓ | ✓ | ✓ (A2C) | ✓ |
ACER | ✓ | ✓ | ✓ | x | ✓ |
NSQ (N-step Q-learning) | ✓ | ✓ (NAF) | ✓ | x | ✓ |
PCL (Path Consistency Learning) | ✓ | ✓ | ✓ | x | ✓ |
PPO | ✓ | ✓ | ✓ | ✓ | x |
TRPO | ✓ | ✓ | ✓ | ✓ | x |
TD3 | x | ✓ | x | ✓ | x |
SAC | x | ✓ | x | ✓ | x |
The following algorithms have been implemented in ChainerRL:
The following useful techniques have also been implemented in ChainerRL:
ChainerRL has a set of accompanying visualization tools to help developers understand and debug their RL agents. With this visualization tool, the behavior of ChainerRL agents can be easily inspected from a browser UI.
Environments that support the subset of OpenAI Gym's interface (the `reset` and `step` methods) can be used.
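As an illustration, a minimal environment only needs those two methods. The toy class below is a hypothetical sketch (it is not part of ChainerRL or Gym) showing the expected shape, where `step` returns an `(observation, reward, done, info)` tuple following Gym's convention:

```python
import random


class CoinFlipEnv:
    """Toy environment exposing only Gym-style `reset` and `step` methods.

    The agent guesses a coin flip (action 0 or 1) and receives reward 1.0
    for a correct guess, 0.0 otherwise. Each episode lasts a single step.
    """

    def reset(self):
        # Return the initial observation; here a constant dummy state.
        self._coin = random.randint(0, 1)
        return 0

    def step(self, action):
        # Return (observation, reward, done, info) as in Gym's step contract.
        reward = 1.0 if action == self._coin else 0.0
        return 0, reward, True, {}


env = CoinFlipEnv()
obs = env.reset()
obs, reward, done, info = env.step(0)
```

Because only `reset` and `step` are required, such an object can be passed to ChainerRL's training utilities in place of a full Gym environment.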
Any kind of contribution to ChainerRL would be highly appreciated! If you are interested in contributing to ChainerRL, please read CONTRIBUTING.md.
To cite ChainerRL in publications:
@InProceedings{fujita2019chainerrl,
author = {Fujita, Yasuhiro and Kataoka, Toshiki and Nagarajan, Prabhat and Ishikawa, Takahiro},
title = {ChainerRL: A Deep Reinforcement Learning Library},
booktitle = {Workshop on Deep Reinforcement Learning at the 33rd Conference on Neural Information Processing Systems},
location = {Vancouver, Canada},
month = {December},
year = {2019}
}