# Sampling¶

A Trainer obtains episodes through sampling in order to train the policy. In Garage, the Trainer uses a Sampler to perform sampling. The Sampler manages Workers and assigns them specific tasks, namely performing rollouts with agents and environments. You can also implement your own sampler and worker. The following sections introduce the samplers and workers that ship with Garage.

## Sampler¶

The Sampler is responsible for assigning sampling jobs to workers. Garage currently provides two types of samplers:

• LocalSampler, the default sampler, which runs workers serially in the main process. With this sampler, all sampling tasks run in the same thread.

• RaySampler, which uses the Ray framework to run distributed workers in parallel. RaySampler can run workers not only on different CPUs, but also on different machines across a network.
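The division of labor between a sampler and its workers can be sketched with a toy serial sampler. This is illustrative only, not Garage's actual implementation; all class and method names below are made up:

```python
class ToyWorker:
    """Performs rollouts with one agent and one environment."""

    def __init__(self, agent, env):
        self.agent = agent
        self.env = env

    def rollout(self, max_steps):
        obs = self.env.reset()
        rewards = []
        for _ in range(max_steps):
            action = self.agent(obs)
            obs, reward, done = self.env.step(action)
            rewards.append(reward)
            if done:
                break
        return rewards


class ToySerialSampler:
    """Like LocalSampler: runs every worker in the main process, in order."""

    def __init__(self, workers):
        self.workers = workers

    def obtain_samples(self, max_steps):
        # A parallel sampler would dispatch these rollouts to separate
        # processes or machines instead of looping here.
        return [worker.rollout(max_steps) for worker in self.workers]
```

The sampler only decides *where and when* rollouts run; the worker owns the agent/environment interaction itself.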

## Worker¶

A Worker is the basic unit that performs a rollout per step. In parallel samplers, each worker typically runs on one exclusive CPU. For most algorithms, Garage provides two kinds of workers: DefaultWorker and VecWorker. A few algorithms (RL2 and PEARL) use custom workers specific to the algorithm.

• DefaultWorker, the default worker. It works with a single agent/policy and a single environment per step.

• VecWorker, the vectorized worker, which runs multiple instances of the simulation on a single CPU. VecWorker computes a batch of actions from the policy for multiple environments at once, reducing the per-sample overhead (e.g. of feeding forward a neural network).
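The benefit of vectorization can be sketched with a minimal example (again illustrative, not VecWorker's real code): the policy processes a batch of observations in one call instead of once per environment, amortizing the cost of a forward pass.

```python
import numpy as np

def vectorized_action_step(policy, observations):
    """Compute one action per environment with a single policy call."""
    return policy(np.stack(observations))

# Toy linear "policy": a single matrix multiply serves all environments.
weights = np.array([[1.0], [2.0]])                        # (obs_dim, action_dim)
obs_batch = [np.array([1.0, 0.5]), np.array([0.0, 2.0])]  # 2 environments
actions = vectorized_action_step(lambda obs: obs @ weights, obs_batch)
# actions has shape (2, 1): one action per simulated environment
```

With a neural-network policy, the single batched forward pass replaces one forward pass per environment, which is where the overhead savings come from.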

## Setup Sampler and Worker for a Trainer¶

Setting up the sampler and worker for a Trainer is easy: just pass sampler_cls and worker_class to trainer.setup(). The number of workers in the sampler can be set with the n_workers parameter.

For VecWorker, you can set the level of vectorization (i.e. the number of environments simulated in one step) by setting n_envs in worker_args.

```python
from garage.sampler import RaySampler, VecWorker

...
trainer.setup(algo=algo,
              env=env,
              sampler_cls=RaySampler,
              n_workers=4,
              worker_class=VecWorker,
              worker_args=dict(n_envs=12))
...
```


In the above example, we choose RaySampler and VecWorker, set the number of workers to 4, and set the level of vectorization to 12. With this configuration, sampling runs on 4 CPUs in parallel and each worker samples 12 actions per step, for a total of 48 environment steps per sampling step.