# garage.sampler.vec_worker¶

Worker that “vectorizes” environments.

class VecWorker(*, seed, max_episode_length, worker_number, n_envs=DEFAULT_N_ENVS)

Worker with a single policy and multiple environments.

Alternates between taking a single step in all environments and asking the policy for an action for every environment. This allows computing a batch of actions, which is generally much more efficient than computing a single action when using neural networks.

Parameters
• seed (int) – The seed to use to initialize random number generators.

• max_episode_length (int or float) – The maximum length of episodes which will be sampled. Can be (floating point) infinity.

• worker_number (int) – The number of the worker this update is occurring in. This argument is used to set a different seed for each worker.

• n_envs (int) – Number of environment copies to use.

DEFAULT_N_ENVS = 8
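The batched alternation described above can be sketched with toy stand-in classes (these stubs are illustrative assumptions, not garage's actual `Policy` or `Environment` implementations):

```python
class StubPolicy:
    """Toy policy: returns one action per observation in a single batched call."""

    def get_actions(self, observations):
        # One batched call instead of n_envs separate single-action calls.
        return [obs * 2 for obs in observations]


class StubEnv:
    """Toy environment: an integer state that counts up to a short horizon."""

    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += 1
        done = self.state >= 3
        return self.state, float(action), done


def vectorized_steps(policy, envs, n_steps):
    """Alternate between one batched policy call and one step in every env."""
    observations = [env.reset() for env in envs]
    rewards = []
    for _ in range(n_steps):
        actions = policy.get_actions(observations)  # single batched call
        results = [env.step(a) for a, env in zip(actions, envs)]
        observations = [obs for obs, _, _ in results]
        rewards.append([r for _, r, _ in results])
    return rewards


rewards = vectorized_steps(StubPolicy(), [StubEnv() for _ in range(8)], n_steps=2)
```

With a neural-network policy, the single `get_actions` call over all 8 observations amortizes the cost of a forward pass, which is the efficiency gain the class description refers to.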
update_agent(self, agent_update)

Update an agent, assuming it implements Policy.

Parameters

agent_update (np.ndarray or dict or Policy) – If a dict or np.ndarray, it should contain parameters for the agent, generated by calling Policy.get_param_values. Alternatively, a policy itself. Note that other implementations of Worker may take different types for this parameter.
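The parameters-versus-policy dispatch might look like the following sketch. `SimplePolicy` and `update_agent` here are hypothetical stand-ins, not garage's implementation:

```python
class SimplePolicy:
    """Toy policy whose 'parameters' are a plain dict."""

    def __init__(self, params=None):
        self._params = dict(params or {})

    def get_param_values(self):
        return dict(self._params)

    def set_param_values(self, params):
        self._params = dict(params)


def update_agent(worker_policy, agent_update):
    """Sketch: a dict of parameters updates in place; a policy replaces."""
    if isinstance(agent_update, dict):
        # Parameters produced by get_param_values() on the training process's
        # copy of the policy: load them into the worker's existing policy.
        worker_policy.set_param_values(agent_update)
        return worker_policy
    # Otherwise assume it is already a Policy and use it directly.
    return agent_update


policy = SimplePolicy({'w': 1.0})
policy = update_agent(policy, {'w': 2.0})  # parameter update, same object
```

Shipping only parameters rather than a whole policy object keeps the update cheap when workers run in separate processes.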

update_env(self, env_update)

Update the environments.

If passed a list (via the list passed to the Sampler itself), distributes the environments across the “vectorization” dimension, one per environment copy.

Parameters

env_update (Environment or EnvUpdate or None) – The environment to replace the existing env with. Note that other implementations of Worker may take different types for this parameter.

Raises
• TypeError – If env_update is not one of the documented types.

• ValueError – If the wrong number of updates is passed.
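The list-distribution and error behavior can be sketched as follows. This is a simplified stand-in, not garage's code; the function name and the use of plain objects in place of Environment/EnvUpdate are assumptions:

```python
def distribute_env_updates(env_update, current_envs, n_envs):
    """Sketch of spreading env updates across the vectorization dimension."""
    if env_update is None:
        return current_envs  # nothing to change
    if isinstance(env_update, list):
        # One update per vectorized environment copy, in order.
        if len(env_update) != n_envs:
            raise ValueError(
                f'Expected {n_envs} env updates, got {len(env_update)}')
        return list(env_update)
    # A single Environment/EnvUpdate replaces every vectorized copy.
    return [env_update] * n_envs
```

The ValueError corresponds to passing a list whose length does not match n_envs.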

start_episode(self)

Begin a new episode.

step_episode(self)

Take a single time-step in the current episode.

Returns

True iff at least one of the episodes was completed.

Return type

bool

collect_episode(self)

Collect all completed episodes.

Returns

A batch of the episodes completed since the last call to collect_episode().

Return type

EpisodeBatch

shutdown(self)

Close the worker’s environments.

worker_init(self)

Initialize a worker.

rollout(self)

Sample a single episode of the agent in the environment.

Returns

The collected episode.

Return type

EpisodeBatch
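rollout is essentially a composition of the episode methods documented above. A hedged sketch of the control flow, assuming only the return conventions stated here (the StubWorker is a hypothetical stand-in used to make the sketch runnable):

```python
def rollout(worker):
    """Sketch: start an episode, step until one completes, then collect."""
    worker.start_episode()
    # step_episode() returns True iff at least one episode was completed.
    while not worker.step_episode():
        pass
    return worker.collect_episode()


class StubWorker:
    """Toy worker: completes an 'episode' after a fixed number of steps."""

    def __init__(self, steps_needed):
        self.steps = 0
        self.steps_needed = steps_needed

    def start_episode(self):
        self.steps = 0

    def step_episode(self):
        self.steps += 1
        return self.steps >= self.steps_needed

    def collect_episode(self):
        # Real workers return an EpisodeBatch; the step count stands in here.
        return self.steps


episode_length = rollout(StubWorker(steps_needed=5))
```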