garage.torch.policies.stochastic_policy module

Base Stochastic Policy.

class StochasticPolicy(env_spec, name)[source]

Bases: garage.torch.policies.policy.Policy, abc.ABC

Abstract base class for torch stochastic policies.

forward(observations)[source]

Compute the action distributions from the observations.

Parameters:observations (torch.Tensor) – Batch of observations on default torch device.
Returns:
  • torch.distributions.Distribution: Batch distribution of actions.
  • dict[str, torch.Tensor]: Additional agent_info, as torch Tensors.
    These do not need to be detached and can be on any device.
Return type:tuple
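
As an illustration, a minimal subclass might implement forward as below. This is a sketch only: the GaussianMLPToyPolicy name, the hidden-layer size, and the mean/log_std agent_info keys are assumptions for the example, not part of garage's built-in policies.

import torch
from torch import nn
from torch.distributions import Normal
from torch.distributions.independent import Independent

from garage.torch.policies.stochastic_policy import StochasticPolicy


class GaussianMLPToyPolicy(StochasticPolicy):
    """Toy Gaussian policy with one hidden layer (illustrative only)."""

    def __init__(self, env_spec, name='GaussianMLPToyPolicy'):
        super().__init__(env_spec, name)
        obs_dim = env_spec.observation_space.flat_dim
        action_dim = env_spec.action_space.flat_dim
        self._mean = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, action_dim))
        self._log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, observations):
        # observations: batch of observations on the default torch device.
        mean = self._mean(observations)
        std = self._log_std.exp().expand_as(mean)
        dist = Independent(Normal(mean, std), 1)
        # agent_info tensors may live on any device and need not be detached.
        return dist, dict(mean=mean, log_std=self._log_std.expand_as(mean))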
get_action(observation)[source]

Get a single action given an observation.

Parameters:observation (np.ndarray) – Observation from the environment. Shape is \(env_spec.observation_space\).
Returns:
  • np.ndarray: Predicted action. Shape is
    \(env_spec.action_space\).
  • dict:
    • np.ndarray[float]: Mean of the distribution.
    • np.ndarray[float]: Log of the standard deviation of the
      distribution.
Return type:tuple
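
For example, a single rollout step could look like the following sketch, assuming policy is a concrete StochasticPolicy and env is an environment whose spaces match policy's env_spec (both assumed to exist already):

import numpy as np

# Assumed setup: `policy` and `env` already constructed and matching.
obs = np.asarray(env.observation_space.sample(), dtype=np.float32)
action, agent_info = policy.get_action(obs)
# agent_info maps strings to NumPy arrays, e.g. the distribution parameters.
print(action.shape, sorted(agent_info.keys()))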
get_actions(observations)[source]

Get actions given observations.

Parameters:observations (np.ndarray) – Observations from the environment. Shape is \(batch_dim \bullet env_spec.observation_space\).
Returns:
  • np.ndarray: Predicted actions. Shape is
    \(batch_dim \bullet env_spec.action_space\).
  • dict:
    • np.ndarray[float]: Mean of the distribution.
    • np.ndarray[float]: Log of the standard deviation of the
      distribution.
Return type:tuple
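
The batched variant follows the same pattern; this sketch reuses the assumed policy and env from the example above and stacks observations along the leading batch_dim axis:

import numpy as np

# Assumed setup: `policy` and `env` as in the get_action example.
observations = np.stack(
    [env.observation_space.sample() for _ in range(8)]).astype(np.float32)
actions, agent_infos = policy.get_actions(observations)
assert actions.shape[0] == 8  # one action per observation in the batch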