garage.np.exploration_policies.add_gaussian_noise

Gaussian exploration strategy.

class AddGaussianNoise(env_spec, policy, total_timesteps, max_sigma=1.0, min_sigma=0.1, decay_ratio=1.0)[source]

Bases: garage.np.exploration_policies.exploration_policy.ExplorationPolicy

Add Gaussian noise to the action taken by the deterministic policy.

Parameters
  • env_spec (EnvSpec) – Environment spec to explore.

  • policy (garage.Policy) – Policy to wrap.

  • total_timesteps (int) – Total steps in the training, equivalent to max_episode_length * n_epochs.

  • max_sigma (float) – Action noise standard deviation at the start of exploration.

  • min_sigma (float) – Action noise standard deviation at the end of the decay period.

  • decay_ratio (float) – Fraction of total_timesteps over which sigma decays from max_sigma to min_sigma.
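
For example, a minimal construction sketch. The GymEnv and DeterministicMLPPolicy imports, the environment name, and the hidden sizes are assumptions about the wider garage API; only AddGaussianNoise's own arguments come from this page.

    from garage.envs import GymEnv
    from garage.np.exploration_policies import AddGaussianNoise
    from garage.torch.policies import DeterministicMLPPolicy

    # Assumed setup: a continuous-control environment and a deterministic
    # policy to wrap with exploration noise.
    env = GymEnv('Pendulum-v0')
    policy = DeterministicMLPPolicy(env.spec, hidden_sizes=[64, 64])

    exploration_policy = AddGaussianNoise(
        env_spec=env.spec,
        policy=policy,
        total_timesteps=100 * 1000,  # n_epochs * max_episode_length
        max_sigma=0.2,               # noise std at the start of training
        min_sigma=0.05,              # noise std after the decay period
        decay_ratio=0.5,             # decay sigma over the first half of training
    )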

get_action(observation)[source]

Get action from this policy for the input observation.

Parameters

observation (numpy.ndarray) – Observation from the environment.

Returns

Action with noise. dict: Arbitrary policy state information (agent_info).

Return type

np.ndarray
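
A single-action sketch, continuing the construction example above; the zero observation is a placeholder for a real observation from the environment.

    import numpy as np

    # Query one noisy action for a single observation (sketch).
    observation = np.zeros(env.spec.observation_space.shape)
    action, agent_info = exploration_policy.get_action(observation)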

get_actions(observations)[source]

Get actions from this policy for the input observations.

Parameters

observations (list) – Observations from the environment.

Returns

Actions with noise. List[dict]: Arbitrary policy state information (agent_info).

Return type

np.ndarray
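
The batched variant behaves the same way, returning one noisy action per observation (sketch, continuing the example above).

    # Query noisy actions for a placeholder batch of eight observations.
    observations = np.zeros((8,) + env.spec.observation_space.shape)
    actions, agent_infos = exploration_policy.get_actions(observations)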

update(episode_batch)[source]

Update the exploration policy using a batch of trajectories.

Parameters

episode_batch (EpisodeBatch) – A batch of trajectories which were sampled with this policy active.
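
In an off-policy training loop, the exploration policy is typically updated with the episodes it just sampled so that the noise schedule can advance. A sketch; sampler, batch_size, and n_epochs are assumptions about the surrounding trainer setup, not part of this class.

    # Sketch of where update() fits in a training loop.
    for epoch in range(n_epochs):
        episode_batch = sampler.obtain_samples(epoch, batch_size, exploration_policy)
        exploration_policy.update(episode_batch)  # advance the sigma decay
        # ...train the wrapped policy on the collected transitions...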

get_param_values()[source]

Get parameter values.

Returns

Values of each parameter.

Return type

list or dict

set_param_values(params)[source]

Set parameter values.

Parameters

params (list or dict) – Parameter values, in the format returned by get_param_values().
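
Together, get_param_values() and set_param_values() let the exploration policy's state be snapshotted and restored, e.g. for checkpointing. A minimal sketch:

    # Snapshot the exploration policy's parameters and restore them later.
    saved = exploration_policy.get_param_values()
    # ...write `saved` to a snapshot, reload it in a new process...
    exploration_policy.set_param_values(saved)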

reset(dones=None)

Reset the state of the exploration policy.

Parameters

dones (List[bool] or numpy.ndarray or None) – Which vectorization states to reset.
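
With vectorized sampling, only the workers that just finished an episode need their exploration state reset; passing None (the default) typically resets all of them. A minimal sketch:

    # Reset exploration state for the first and last of four vectorized
    # environments only.
    exploration_policy.reset(dones=[True, False, False, True])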