garage.torch.policies.tanh_gaussian_mlp_policy module

TanhGaussianMLPPolicy.

class TanhGaussianMLPPolicy(env_spec, hidden_sizes=(32, 32), hidden_nonlinearity=torch.nn.ReLU, hidden_w_init=torch.nn.init.xavier_uniform_, hidden_b_init=torch.nn.init.zeros_, output_nonlinearity=None, output_w_init=torch.nn.init.xavier_uniform_, output_b_init=torch.nn.init.zeros_, init_std=1.0, min_std=2.061153622438558e-09, max_std=7.38905609893065, std_parameterization='exp', layer_normalization=False)[source]

Bases: garage.torch.policies.stochastic_policy.StochasticPolicy

Multiheaded MLP whose outputs are fed into a TanhNormal distribution.

A policy that contains an MLP to make predictions based on a Gaussian distribution with a tanh transformation.
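As a rough illustration (not garage code), the MLP heads produce a mean and standard deviation, a Normal sample is drawn with the reparameterization trick, and tanh squashes the sample into (-1, 1):

    import torch

    # Illustrative stand-in values for what the policy's MLP heads would emit.
    mean = torch.zeros(4)   # hypothetical mean head output for a 4-dim action space
    std = torch.ones(4)     # hypothetical std head output
    z = torch.distributions.Normal(mean, std).rsample()  # reparameterized Gaussian sample
    action = torch.tanh(z)  # squashed into (-1, 1): the "tanh transformation"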

Parameters:
  • env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
  • hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP that computes the mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
  • hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a torch.Tensor. Set it to None to maintain a linear activation.
  • hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a torch.Tensor.
  • hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a torch.Tensor.
  • output_nonlinearity (callable) – Activation function for output dense layer. It should return a torch.Tensor. Set it to None to maintain a linear activation.
  • output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a torch.Tensor.
  • output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a torch.Tensor.
  • init_std (float) – Initial value for std. (plain value - not log or exponentiated).
  • min_std (float) – If not None, the std is at least the value of min_std, to avoid numerical issues (plain value - not log or exponentiated).
  • max_std (float) – If not None, the std is at most the value of max_std, to avoid numerical issues (plain value - not log or exponentiated).
  • std_parameterization (str) – How the std should be parameterized. There are two options:
    • exp: the logarithm of the std will be stored, and an exponential transformation applied to recover the std.
    • softplus: the std will be computed as log(1 + exp(x)).
  • layer_normalization (bool) – Bool for using layer normalization or not.
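A minimal construction sketch, assuming garage's GymEnv wrapper and a MuJoCo task are available; the environment name and network sizes are illustrative and should be adjusted to your setup:

    import numpy as np
    import torch.nn as nn

    from garage.envs import GymEnv
    from garage.torch.policies import TanhGaussianMLPPolicy

    env = GymEnv('InvertedDoublePendulum-v2')   # any continuous-action env works
    policy = TanhGaussianMLPPolicy(
        env_spec=env.spec,
        hidden_sizes=(256, 256),
        hidden_nonlinearity=nn.ReLU,
        output_nonlinearity=None,
        min_std=np.exp(-20.),
        max_std=np.exp(2.))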
forward(observations)[source]

Compute the action distributions from the observations.

Parameters: observations (torch.Tensor) – Batch of observations on the default torch device.
Returns:
  • torch.distributions.Distribution – Batch distribution of actions.
  • dict[str, torch.Tensor] – Additional agent_info, as torch Tensors.
Return type: tuple[torch.distributions.Distribution, dict[str, torch.Tensor]]
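Continuing the construction sketch above, a hedged usage example of calling forward() on a batch of observations; the unpacking assumes forward() returns the distribution together with the agent_info dict, as listed under Returns:

    import torch

    # Hypothetical batch of 8 flattened observations.
    obs = torch.rand(8, env.spec.observation_space.flat_dim)
    dist, info = policy(obs)            # equivalent to policy.forward(obs)
    actions = dist.rsample()            # reparameterized samples in (-1, 1)
    log_probs = dist.log_prob(actions)  # used, e.g., in SAC's entropy term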