Efficient distributed AutoRL script for any framework
- Distributed training on a single machine with multiple CPUs/GPUs
- Automatic resource allocation
- TPESampler and Hyperband pruner support
- Automatic re-testing of the best trials to confirm the hyperparameter set is stable
In rl.py
- Implement your training logic in train_rl_agent
- Specify the hyperparameter search ranges
In rl-auto-gpu.py
- Specify minimum and maximum steps per trial, and number of trials
- Set the number of parallel trials per GPU and parallel environments per trial according to your hardware
- Set the reduction factor (an integer). In most situations, choose a value that keeps the number of Hyperband brackets in [4, 6]
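To check the bracket advice: assuming Optuna's HyperbandPruner, which derives the bracket count as floor(log_eta(max_resource / min_resource)) + 1, you can compute it directly from your step budget and reduction factor:

```python
import math

def hyperband_brackets(min_steps: int, max_steps: int, reduction_factor: int) -> int:
    # Bracket count as Optuna's HyperbandPruner derives it from the budget range
    return math.floor(math.log(max_steps / min_steps, reduction_factor)) + 1
```

For example, with a 10k-to-1M step budget, a reduction factor of 3 gives 5 brackets (inside [4, 6]), while 2 gives 7 (too many).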
Then run rl-auto-gpu.py; the script will detect all available GPUs and run the hyperparameter search in parallel
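The per-GPU parallelism could be organized as one worker process per GPU, each pinned via CUDA_VISIBLE_DEVICES and sharing one study through common storage. This is a hedged sketch, not the script's actual implementation; the storage URL and worker body are assumptions:

```python
import os
import multiprocessing as mp

def gpu_worker(gpu_id: int, trials_per_gpu: int, storage_url: str) -> None:
    # Pin this worker to one GPU before any CUDA framework initializes
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    # ... here the worker would load the shared Optuna study from storage_url
    # and call study.optimize(train_rl_agent, n_trials=trials_per_gpu) ...

def launch(gpu_ids, trials_per_gpu, storage_url="sqlite:///autorl.db"):
    # One process per detected GPU; all workers pull trials from shared storage
    procs = [
        mp.Process(target=gpu_worker, args=(g, trials_per_gpu, storage_url))
        for g in gpu_ids
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return procs
```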