Gym Environment API based Bitcoin trading simulator with a continuous observation space and a discrete action space. It uses real-world transactions from the Coinbase USD exchange to sample per-minute closing, lowest, and highest prices, along with the volume of currency traded in each minute interval.
git clone https://github.com/samre12/gym-cryptotrading.git
cd gym-cryptotrading
pip install -e .
Importing the module into the current session with import gym_cryptotrading registers the environments with gym, after which they can be used like any other gym environment.
'RealizedPnLEnv-v0'
'UnRealizedPnLEnv-v0'
'WeightedPnLEnv-v0'
import gym
import gym_cryptotrading
env = gym.make('RealizedPnLEnv-v0')
Use env.reset() to start a new random episode. It returns the history of observations prior to the starting point of the episode (see Parameters for more information).
state = env.reset() # use state to make initial prediction
Note: Make sure to reset the environment before first use, else gym.error.ResetNeeded()
will be raised.
Use env.step(action)
to take one step in the environment.
It returns (observation, reward, is_terminal, fractional_remaining_trades), in that order.
observation, reward, is_terminal, remaining_trades = env.step(action)
Note: Calling env.step(action)
after the terminal state is reached will raise gym.error.ResetNeeded()
.
With the current implementation, the environment does not support env.render().
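The reset/step cycle above can be sketched as a small driver function. This is an illustrative sketch, not code from the package: run_episode and the random policy are assumed names, and the loop works with any Gym-style environment that follows the reset/step API described here.

```python
import random

def run_episode(env, n_actions=3):
    """Drive one episode of a Gym-style environment with a random policy."""
    total_reward = 0.0
    env.reset()  # must be called before the first step
    is_terminal = False
    while not is_terminal:
        # Sample one of the discrete actions (NEUTRAL=0, LONG=1, SHORT=2)
        action = random.randrange(n_actions)
        observation, reward, is_terminal, info = env.step(action)
        total_reward += reward
    return total_reward
```

A learned policy would replace the random action choice with a prediction based on the latest observation.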
Setting the logging level of gym to a value less than or equal to 10 using gym.logger.set_level(level) will expose all the logs (debug and info levels) generated by the environment.
These include human readable timestamps of Bitcoin prices used to simulate an episode.
For more information on gym.logger and setting logging levels, visit here.
Note: Custom loggers can also be provided to environments using env.env.set_logger(logger=)
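A custom logger built with Python's standard logging module could be supplied this way. The handler and format below are just one possible setup, not the package's defaults:

```python
import logging

# Build a logger that prints debug-level messages to stderr
logger = logging.getLogger("cryptotrading")
logger.setLevel(logging.DEBUG)  # level <= 10 exposes debug and info logs
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logger.addHandler(handler)

# The logger would then be attached with:
# env.env.set_logger(logger=logger)
```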
Observation at a time step is the relative (closing, lowest, highest, volume)
of Bitcoin in the corresponding minute interval.
Since the price of Bitcoin varies from a few dollars to 15K dollars, the observation for time step i + 1 is normalized by the prices at time step i.
Each entry in the observation is the ratio of increase (value greater than 1.0) or decrease (value less than 1.0) relative to the price at the previous time step.
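The normalization described above can be reproduced in a few lines of NumPy. The raw prices below are made up for illustration:

```python
import numpy as np

# Raw (close, low, high, volume) rows for two consecutive minute intervals
raw = np.array([
    [9000.0, 8950.0, 9050.0, 12.0],   # minute i
    [9090.0, 9039.5, 9140.5, 15.0],   # minute i + 1
])

# Observation at step i + 1: element-wise ratio to step i
observation = raw[1] / raw[0]
# Values > 1.0 mark an increase, values < 1.0 a decrease
```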
At each time step, the agent can go LONG or SHORT in a unit of Bitcoin (for more information, refer to Parameters) or stay NEUTRAL.
Action space thus becomes discrete with three possible actions:
NEUTRAL
corresponds to 0
LONG
corresponds to 1
SHORT
corresponds to 2
Note: Use env.action_space.get_action(action)
to lookup action names corresponding to their respective values.
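The three-action encoding can be mirrored with plain integer constants. The reverse lookup below only mimics env.action_space.get_action(action), which is the environment's own helper:

```python
# Discrete action encoding used by the environment
NEUTRAL, LONG, SHORT = 0, 1, 2

# Simple reverse lookup from action value to action name
ACTION_NAMES = {NEUTRAL: "NEUTRAL", LONG: "LONG", SHORT: "SHORT"}

def action_name(action):
    return ACTION_NAMES[action]
```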
The basic environment is characterized with these parameters:
history_length
the number of lagged observations used for the state representation of the trading agent.
every call to env.reset()
returns a numpy array of shape (history_length,) + shape(observation)
that corresponds to observations of length history_length
prior to the starting point of the episode.
the trading agent can use the returned array to predict the first action
defaults to 100
.
supplied value must be greater than or equal to 0
horizon
alternatively the episode length; the number of trades that the agent makes in a single episode
defaults to 5
.
supplied value must be greater than 0
unit
is the fraction of Bitcoin that can be traded in each time step
defaults to 5e-4
.
supplied value must be greater than 0
env = gym.make('RealizedPnLEnv-v0')
env.env.set_params(history_length, horizon, unit)
Note: parameters can only be set before first reset of the environment, that is, before the first call to env.reset()
, else gym_cryptotrading.errors.EnvironmentAlreadyLoaded
will be raised.
Some environments contain their own specific parameters due to the nature of their reward function.
These parameters can be passed to env.env.set_params(history_length, horizon, unit, **kwargs)
as keyword arguments, alongside setting the history length, horizon, and unit.
Dataset
Per minute Bitcoin series is obtained by modifying the procedure mentioned in this repository. Transactions in the Coinbase exchange are sampled to generate the Bitcoin price series.
The per-minute Bitcoin price series is not continuous and complete due to the downtime of the exchanges.
The current implementation does not make any assumptions about the missing values.
It instead finds continuous blocks with lengths greater than history_length + horizon + 1
and uses them to simulate episodes. This avoids any discrepancies in results due to random substitution of missing values.
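One way to locate such continuous blocks is to split the minute timestamps wherever the gap between neighbors exceeds one minute. This is a sketch of the idea, not the simulator's actual preprocessing code:

```python
import numpy as np

def usable_blocks(timestamps, history_length, horizon):
    """Split a sorted array of Unix timestamps (in seconds, one per minute)
    into contiguous runs and keep only runs long enough to host an episode."""
    # Positions where consecutive timestamps differ by more than 60 seconds
    breaks = np.where(np.diff(timestamps) != 60)[0] + 1
    blocks = np.split(timestamps, breaks)
    min_len = history_length + horizon + 1
    return [b for b in blocks if len(b) > min_len]

# Example: a gap between 300 and 600 splits the series into two runs;
# only the first run is long enough for history_length=2, horizon=1
ts = np.array([0, 60, 120, 180, 240, 300, 600, 660, 720])
blocks = usable_blocks(ts, history_length=2, horizon=1)
```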
Sample logs generated by the simulator while preprocessing the dataset:
INFO: Columns found in the dataset Index([u'DateTime_UTC', u'Timestamp', u'price_open', u'price_high',
u'price_low', u'price_close', u'volume'],
dtype='object')
INFO: Number of blocks of continuous prices found are 58880
INFO: Number of usable blocks obtained from the dataset are 1651
INFO: Number of distinct episodes for the current configuration are 838047
Upon first use, the environment downloads the latest transactions dataset from the exchange, which is then cached in the temporary directory of the operating system for future use.
A user can also update the latest transactions dataset by the following code:
from gym_cryptotrading.generator import Generator
Generator.update_gen()
update_gen
should be called prior to the first reset of the environment so that it reflects the latest transactions. If you are running the environment behind a proxy, export suitable HTTP proxy settings to allow the environment to download transactions from the exchange.
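The proxy settings can also be exported from within Python before triggering the download. The proxy URL below is a placeholder; Generator.update_gen is the call from the snippet above:

```python
import os

# Placeholder proxy URL -- replace with your actual proxy address
os.environ["http_proxy"] = "http://proxy.example.com:8080"
os.environ["https_proxy"] = "http://proxy.example.com:8080"

# With the proxy exported, the dataset update would then run:
# from gym_cryptotrading.generator import Generator
# Generator.update_gen()
```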
Coming soon.
Listing changes from commit b9af98db728230569a18d54dcfa87f7337930314. Visit here to browse the repository with HEAD at this commit.
Added support for trading environments with Realized PnL and Weighted Unrealized PnL reward functions
Renamed cryptotrading.py
to unrealizedPnL.py
to emphasize the specific reward function of the environment
Added support for setting custom logger for an environment using env.env.set_logger(logger=)
Updated environments to output the number of remaining trades on each call to env.step(action=)
Environment with Unrealized PnL reward function is now built using env = gym.make('UnrealizedPnLEnv-v0')
rather than env = gym.make('CryptoTrading-v0')
Instead of remaining_trades, env.step(action) now outputs np.array([fractional_remaining_trades]). This is to accommodate more supplementary information (like technical indicators) in the future.
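The new return value wraps the fraction of trades left in a NumPy array. Assuming the fraction is the number of remaining trades divided by the horizon (an interpretation, not confirmed by the source), converting it back to a trade count looks like this:

```python
import numpy as np

horizon = 5  # episode length in trades

# env.step(action) now returns np.array([fractional_remaining_trades]);
# here 3 of 5 trades are assumed to remain
info = np.array([3 / horizon])

# Recover the absolute number of remaining trades from the fraction
remaining_trades = int(round(info[0] * horizon))
```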