garage.torch.policies package
PyTorch Policies.
class DeterministicMLPPolicy(env_spec, name='DeterministicMLPPolicy', **kwargs)
Bases: garage.torch.policies.policy.Policy
Implements a deterministic policy network.
The policy network selects actions based on the state of the environment. It uses a PyTorch neural network module to approximate the function pi(s).
forward(observations)
Compute actions from the observations.
Parameters: observations (torch.Tensor) – Batch of observations on default torch device.
Returns: Batch of actions.
Return type: torch.Tensor
get_action(observation)
Get a single action given an observation.
Parameters: observation (np.ndarray) – Observation from the environment.
Returns:
- np.ndarray: Predicted action.
- dict:
  - list[float]: Mean of the distribution.
  - list[float]: Log of standard deviation of the distribution.
Return type: tuple
get_actions(observations)
Get actions given observations.
Parameters: observations (np.ndarray) – Observations from the environment.
Returns:
- np.ndarray: Predicted actions.
- dict:
  - list[float]: Mean of the distribution.
  - list[float]: Log of standard deviation of the distribution.
Return type: tuple
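A minimal usage sketch. The hand-built EnvSpec, the box bounds, and the hidden sizes are illustrative assumptions, and the EnvSpec constructor arguments may differ across garage versions:

import akro
import numpy as np
from garage.envs.env_spec import EnvSpec
from garage.torch.policies import DeterministicMLPPolicy

# Hand-built spec: 4-dim observations, 2-dim continuous actions.
spec = EnvSpec(observation_space=akro.Box(low=-1.0, high=1.0, shape=(4,)),
               action_space=akro.Box(low=-1.0, high=1.0, shape=(2,)))

policy = DeterministicMLPPolicy(spec, hidden_sizes=(64, 64))
obs = np.zeros(4, dtype=np.float32)
action, info = policy.get_action(obs)                      # single action + info dict
actions, infos = policy.get_actions(np.stack([obs, obs]))  # batch of two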
class GaussianMLPPolicy(env_spec, hidden_sizes=(32, 32), hidden_nonlinearity=torch.tanh, hidden_w_init=nn.init.xavier_uniform_, hidden_b_init=nn.init.zeros_, output_nonlinearity=None, output_w_init=nn.init.xavier_uniform_, output_b_init=nn.init.zeros_, learn_std=True, init_std=1.0, min_std=1e-06, max_std=None, std_parameterization='exp', layer_normalization=False, name='GaussianMLPPolicy')
Bases: garage.torch.policies.stochastic_policy.StochasticPolicy
MLP whose outputs are fed into a Normal distribution.
A policy that contains an MLP to make predictions based on a Gaussian distribution.
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a torch.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a torch.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a torch.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a torch.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a torch.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a torch.Tensor.
- learn_std (bool) – Whether the std is trainable.
- init_std (float) – Initial value for std. (plain value - not log or exponentiated).
- min_std (float) – Minimum value for std.
- max_std (float) – Maximum value for std.
- std_parameterization (str) – How the std should be parametrized. There are two options (see the sketch after this list):
  - exp: the logarithm of the std is stored, and an exponential transformation recovers the std.
  - softplus: the std is computed as log(1 + exp(x)).
- layer_normalization (bool) – Whether to use layer normalization.
- name (str) – Name of policy.
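The two parameterizations amount to the following transformations of an unconstrained parameter (a minimal sketch; the variable name x is illustrative):

import torch

x = torch.tensor(0.5)                   # unconstrained std parameter
std_exp = x.exp()                       # 'exp': x stores log(std), exp recovers std
std_softplus = torch.log(1 + x.exp())  # 'softplus': std = log(1 + exp(x))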
forward(observations)
Compute the action distributions from the observations.
Parameters: observations (torch.Tensor) – Batch of observations on default torch device.
Returns:
- torch.distributions.Distribution: Batch distribution of actions.
- dict[str, torch.Tensor]: Additional agent_info, as torch Tensors.
Return type: tuple
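A sketch of consuming the returned distribution, reusing the hand-built spec from the DeterministicMLPPolicy example (batch sizes and hidden sizes are illustrative):

import torch
from garage.torch.policies import GaussianMLPPolicy

gaussian_policy = GaussianMLPPolicy(spec, hidden_sizes=(32, 32))
dist, info = gaussian_policy(torch.zeros(8, 4))  # forward() via the module call
actions = dist.rsample()                         # reparameterized sample, shape (8, 2)
log_probs = dist.log_prob(actions)               # log-likelihood of the sampled actions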
class Policy(env_spec, name)
Bases: torch.nn.Module, abc.ABC
Policy base class.
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Name of policy.
action_space
The action space for the environment.
Returns: Action space.
Return type: akro.Space
get_action(observation)
Get a single action given an observation.
Parameters: observation (torch.Tensor) – Observation from the environment.
Returns:
- torch.Tensor: Predicted action.
- dict:
  - list[float]: Mean of the distribution.
  - list[float]: Log of standard deviation of the distribution.
Return type: tuple
get_actions(observations)
Get actions given observations.
Parameters: observations (torch.Tensor) – Observations from the environment.
Returns:
- torch.Tensor: Predicted actions.
- dict:
  - list[float]: Mean of the distribution.
  - list[float]: Log of standard deviation of the distribution.
Return type: tuple
get_param_values()
Get the parameters of the policy.
This method is included to ensure consistency with TF policies.
Returns: The parameters (in the form of the state dictionary).
Return type: dict
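Because the returned value is an ordinary torch state dict and Policy subclasses torch.nn.Module, it round-trips through standard torch machinery (a sketch; the checkpoint file name is illustrative):

import torch

params = policy.get_param_values()               # plain state dict
torch.save(params, 'policy.pt')                  # checkpoint to disk
policy.load_state_dict(torch.load('policy.pt'))  # restore via nn.Module machinery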
observation_space
The observation space for the environment.
Returns: Observation space.
Return type: akro.Space
class TanhGaussianMLPPolicy(env_spec, hidden_sizes=(32, 32), hidden_nonlinearity=nn.ReLU, hidden_w_init=nn.init.xavier_uniform_, hidden_b_init=nn.init.zeros_, output_nonlinearity=None, output_w_init=nn.init.xavier_uniform_, output_b_init=nn.init.zeros_, init_std=1.0, min_std=2.061153622438558e-09, max_std=7.38905609893065, std_parameterization='exp', layer_normalization=False)
Bases: garage.torch.policies.stochastic_policy.StochasticPolicy
Multiheaded MLP whose outputs are fed into a TanhNormal distribution.
A policy that contains an MLP to make predictions based on a Gaussian distribution with a tanh transformation.
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a torch.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a torch.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a torch.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a torch.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a torch.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a torch.Tensor.
- init_std (float) – Initial value for std. (plain value - not log or exponentiated).
- min_std (float) – If not None, the std is at least the value of min_std, to avoid numerical issues (plain value - not log or exponentiated).
- max_std (float) – If not None, the std is at most the value of max_std, to avoid numerical issues (plain value - not log or exponentiated).
- std_parameterization (str) – How the std should be parametrized. There are two options:
  - exp: the logarithm of the std is stored, and an exponential transformation recovers the std.
  - softplus: the std is computed as log(1 + exp(x)).
- layer_normalization (bool) – Whether to use layer normalization.
forward(observations)
Compute the action distributions from the observations.
Parameters: observations (torch.Tensor) – Batch of observations on default torch device.
Returns:
- torch.distributions.Distribution: Batch distribution of actions.
- dict[str, torch.Tensor]: Additional agent_info, as torch Tensors.
Return type: tuple
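A sketch of the tanh squashing, reusing the hand-built spec from the first example; the bound on sampled actions follows from the tanh transform:

import torch
from garage.torch.policies import TanhGaussianMLPPolicy

tanh_policy = TanhGaussianMLPPolicy(spec, hidden_sizes=(32, 32))
dist, info = tanh_policy(torch.zeros(8, 4))
actions = dist.rsample()           # samples pass through tanh
assert actions.abs().max() < 1.0   # squashed into (-1, 1)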
class ContextConditionedPolicy(latent_dim, context_encoder, policy, use_information_bottleneck, use_next_obs)
Bases: torch.nn.Module
A policy that outputs actions based on observation and latent context.
In PEARL, policies are conditioned on the current state and a latent context (adaptation data) variable Z. The inference network estimates the posterior probability of z given past transitions: it uses context information stored in the encoder to infer the probabilistic value of z, then samples actions from a policy conditioned on z.
Parameters: - latent_dim (int) – Latent context variable dimension.
- context_encoder (garage.torch.embeddings.ContextEncoder) – Recurrent or permutation-invariant context encoder.
- policy (garage.torch.policies.Policy) – Policy used to train the network.
- use_information_bottleneck (bool) – True if latent context is not deterministic; false otherwise.
- use_next_obs (bool) – True if next observation is used in context for distinguishing tasks; false otherwise.
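A construction sketch. MLPEncoder is the permutation-invariant encoder shipped in garage.torch.embeddings; its exact signature, the layer sizes, and the inner policy's augmented spec (observation concatenated with z) are assumptions here, not values from this page:

import akro
from garage.envs.env_spec import EnvSpec
from garage.torch.embeddings import MLPEncoder
from garage.torch.policies import ContextConditionedPolicy, TanhGaussianMLPPolicy

latent_dim = 5
context_dim = 11   # obs + action + reward (+ next obs when use_next_obs=True)

context_encoder = MLPEncoder(input_dim=context_dim,
                             output_dim=2 * latent_dim,  # mean and variance per z dim
                             hidden_sizes=(200, 200, 200))
# The inner policy sees [obs, z]: 4 obs dims + latent_dim (illustrative sizes).
inner_spec = EnvSpec(observation_space=akro.Box(low=-1.0, high=1.0,
                                                shape=(4 + latent_dim,)),
                     action_space=akro.Box(low=-1.0, high=1.0, shape=(2,)))
inner_policy = TanhGaussianMLPPolicy(inner_spec, hidden_sizes=(300, 300, 300))

pearl_policy = ContextConditionedPolicy(latent_dim=latent_dim,
                                        context_encoder=context_encoder,
                                        policy=inner_policy,
                                        use_information_bottleneck=True,
                                        use_next_obs=False)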
compute_kl_div()
Compute \(KL(q(z|c) \| p(z))\).
Returns: \(KL(q(z|c) \| p(z))\).
Return type: float
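The same quantity in raw torch, with a standard-normal prior as in PEARL (the posterior parameters below are placeholders for what infer_posterior produces):

import torch
from torch.distributions import Normal, kl_divergence

latent_dim = 5
posterior = Normal(torch.zeros(latent_dim), torch.ones(latent_dim))  # q(z|c)
prior = Normal(torch.zeros(latent_dim), torch.ones(latent_dim))      # p(z)
kl = kl_divergence(posterior, prior).sum()  # scalar: sum over z dimensions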
context
Return context.
Returns: Context values, with shape \((X, N, C)\). X is the number of tasks. N is batch size. C is the combined size of observation, action, reward, and next observation if next observation is used in context. Otherwise, C is the combined size of observation, action, and reward.
Return type: torch.Tensor
forward(obs, context)
Given observations and context, get actions and probs from policy.
Parameters:
- obs (torch.Tensor) – Observation values, with shape \((X, N, O)\). X is the number of tasks. N is batch size. O is the size of the flattened observation space.
- context (torch.Tensor) – Context values, with shape \((X, N, C)\). X is the number of tasks. N is batch size. C is the combined size of observation, action, reward, and next observation if next observation is used in context. Otherwise, C is the combined size of observation, action, and reward.
Returns:
- torch.Tensor: Predicted action values.
- np.ndarray: Mean of distribution.
- np.ndarray: Log std of distribution.
- torch.Tensor: Log likelihood of distribution.
- torch.Tensor: Sampled values from distribution before applying tanh transformation.
- torch.Tensor: z values, with shape \((N, L)\). N is batch size. L is the latent dimension.
Return type: tuple
get_action(obs)
Sample action from the policy, conditioned on the task embedding.
Parameters: obs (torch.Tensor) – Observation values, with shape \((1, O)\). O is the size of the flattened observation space.
Returns:
- torch.Tensor: Output action value, with shape \((1, A)\). A is the size of the flattened action space.
- dict:
  - np.ndarray[float]: Mean of the distribution.
  - np.ndarray[float]: Log of standard deviation of the distribution.
Return type: tuple
infer_posterior(context)
Compute \(q(z|c)\) as a function of input context and sample new z.
Parameters: context (torch.Tensor) – Context values, with shape \((X, N, C)\). X is the number of tasks. N is batch size. C is the combined size of observation, action, reward, and next observation if next observation is used in context. Otherwise, C is the combined size of observation, action, and reward.
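A shape sketch for the context argument (sizes are illustrative; pearl_policy is the instance from the construction sketch above):

import torch

X, N, C = 3, 16, 11             # tasks, transitions per task, features per transition
context = torch.zeros(X, N, C)
pearl_policy.infer_posterior(context)  # samples a new z from q(z|c) and stores it on the policy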