Trainer gets episodes through sampling to train the policy. In Garage, the Sampler and the Worker work together to perform sampling: the sampler manages a set of workers and assigns specific tasks to them, namely doing rollouts with agents and environments. You can also implement your own sampler and worker. The following sections introduce the existing samplers and workers in Garage.
Sampler is responsible for assigning sampling jobs to workers. Garage currently has two types of samplers:

- `LocalSampler`, the default sampler, which runs workers in the main process serially. With this sampler, all the sampling tasks run in the same thread.
- `RaySampler`, which uses the Ray framework to run workers in parallel processes, so sampling jobs can be spread across multiple CPUs.
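To make the division of labor concrete, here is a toy sketch of the sampler/worker pattern. This is **not** the garage API: `ToySampler`, `ToyWorker`, and their methods are invented names for illustration only, showing a sampler that hands rollout jobs to workers serially, as `LocalSampler` does.

```python
# Toy illustration of the sampler/worker split (NOT the garage API).

class ToyWorker:
    """Pretend worker: a real one would step an environment with a policy."""

    def __init__(self, worker_id):
        self.worker_id = worker_id

    def rollout(self, n_steps):
        # Stand-in for collecting an episode of n_steps transitions.
        return {'worker': self.worker_id, 'steps': n_steps}


class ToySampler:
    """Assigns rollout jobs to its workers one after another (serial style)."""

    def __init__(self, n_workers):
        self.workers = [ToyWorker(i) for i in range(n_workers)]

    def obtain_samples(self, n_steps_per_worker):
        # A parallel sampler would dispatch these jobs concurrently instead.
        return [w.rollout(n_steps_per_worker) for w in self.workers]


sampler = ToySampler(n_workers=4)
episodes = sampler.obtain_samples(n_steps_per_worker=100)
print(len(episodes))  # → 4, one batch of samples per worker
```

A parallel sampler keeps the same interface but dispatches the per-worker jobs to separate processes instead of looping over them.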
Worker is the basic unit that performs rollouts. In parallel samplers, each worker typically runs on one exclusive CPU. For most algorithms, Garage provides two kinds of workers, `DefaultWorker` and `VecWorker`. A few algorithms (RL2 and PEARL) use custom workers specific to those algorithms.

- `DefaultWorker`, the default worker, which works with one single agent/policy and one single environment in one step.
- `VecWorker`, the vectorized worker, which runs multiple instances of the simulation on a single CPU. `VecWorker` can compute a batch of actions from a policy for multiple environments at once, reducing the overhead of sampling (e.g. feeding forward a neural network).
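The vectorization idea can be sketched with a toy example. This is **not** the garage `VecWorker` implementation; `policy_batch` and `ToyVecWorker` are invented names, and the point is only that the policy is queried once per batch of observations rather than once per environment, so any fixed per-call cost is paid once per step.

```python
# Toy sketch of vectorized sampling (NOT the garage VecWorker API).

def policy_batch(observations):
    """Stand-in policy: one 'forward pass' serves a whole batch."""
    policy_batch.calls += 1  # count forward passes to show the savings
    return [obs * 2 for obs in observations]

policy_batch.calls = 0


class ToyVecWorker:
    """Steps n_envs simulated environments with one policy call per step."""

    def __init__(self, n_envs):
        self.n_envs = n_envs
        self.observations = [0] * n_envs

    def step(self):
        # One batched policy call for all environments at once.
        actions = policy_batch(self.observations)
        # Pretend environment dynamics: obs' = obs + action + 1.
        self.observations = [o + a + 1
                             for o, a in zip(self.observations, actions)]
        return actions


worker = ToyVecWorker(n_envs=12)
for _ in range(5):
    actions = worker.step()
print(policy_batch.calls)  # → 5 forward passes, not 5 * 12
```

An unvectorized worker would have made one policy call per environment per step, i.e. 60 forward passes here instead of 5.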
## Setup Sampler and Worker for a Trainer
Setting up the sampler and worker for a `Trainer` is easy: just pass the desired classes to `trainer.setup()`. The number of workers in the sampler can be set by the parameter `n_workers`. For `VecWorker`, you can set the level of vectorization (i.e. the number of environments simulated in one step) by passing `n_envs` in `worker_args`.
```python
from garage.sampler import RaySampler, VecWorker

...
trainer.setup(algo=algo,
              env=env,
              sampler_cls=RaySampler,
              n_workers=4,
              worker_class=VecWorker,
              worker_args=dict(n_envs=12))
...
```
In the above example, we choose `RaySampler` and `VecWorker`, set the number of workers to 4, and set the level of vectorization to 12. With this configuration, sampling will run on 4 CPUs in parallel and each worker will sample 12 actions in one step.
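For comparison, a fully serial setup could pass `LocalSampler` and `DefaultWorker` instead (both are the defaults, so passing them explicitly is optional). This fragment assumes `trainer`, `algo`, and `env` are defined as in the example above:

```python
from garage.sampler import DefaultWorker, LocalSampler

...
trainer.setup(algo=algo,
              env=env,
              sampler_cls=LocalSampler,
              worker_class=DefaultWorker)
...
```

With this configuration, all rollouts run serially in the main process, which is simpler to debug but does not use multiple CPUs.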
This page was authored by Ruofu Wang (@yeukfu).