garage.tf.policies.gaussian_mlp_policy module¶
Gaussian MLP Policy.
A policy represented by a Gaussian distribution which is parameterized by a multilayer perceptron (MLP).
class GaussianMLPPolicy(env_spec, name='GaussianMLPPolicy', hidden_sizes=(32, 32), hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, learn_std=True, adaptive_std=False, std_share_network=False, init_std=1.0, min_std=1e-06, max_std=None, std_hidden_sizes=(32, 32), std_hidden_nonlinearity=<function tanh>, std_output_nonlinearity=None, std_parameterization='exp', layer_normalization=False)[source]¶
Bases: garage.tf.policies.policy.StochasticPolicy
Gaussian MLP Policy.
A policy represented by a Gaussian distribution which is parameterized by a multilayer perceptron (MLP).
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- learn_std (bool) – Whether the std is trainable.
- adaptive_std (bool) – Whether the std is parameterized by its own neural network. If False, the std is a single learned parameter.
- std_share_network (bool) – Whether the mean and std share the same network.
- init_std (float) – Initial value for std.
- std_hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for std. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- min_std (float) – If not None, the std is at least the value of min_std, to avoid numerical issues.
- max_std (float) – If not None, the std is at most the value of max_std, to avoid numerical issues.
- std_hidden_nonlinearity (callable) – Nonlinearity for each hidden layer in the std network. The function should return a tf.Tensor.
- std_output_nonlinearity (callable) – Nonlinearity for output layer in the std network. The function should return a tf.Tensor.
- std_parameterization (str) – How the std is parameterized. There are two options:
- exp – the logarithm of the std is stored, and an exponential transformation is applied to recover the std.
- softplus – the std is computed as log(1 + exp(x)).
- layer_normalization (bool) – Whether to use layer normalization.
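A minimal construction sketch is shown below. The GymEnv wrapper and the 'Pendulum-v0' environment are assumptions for illustration only; any environment exposing a garage.envs.env_spec.EnvSpec with a continuous action space works, and the policy is normally constructed inside garage's TF session/runner context.

import tensorflow as tf
from garage.envs import GymEnv  # assumed environment wrapper, for illustration
from garage.tf.policies import GaussianMLPPolicy

env = GymEnv('Pendulum-v0')  # hypothetical continuous-control task

policy = GaussianMLPPolicy(
    env_spec=env.spec,
    hidden_sizes=(64, 64),            # two hidden layers for the mean MLP
    hidden_nonlinearity=tf.nn.tanh,
    init_std=1.0,
    std_parameterization='exp',       # store log(std); recover std with exp
)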
build(state_input, name=None)[source]¶
Build policy.
Parameters: - state_input (tf.Tensor) – State input.
- name (str) – Name of the policy, which is also the name scope.
Returns: Distribution. tf.Tensor: Mean. tf.Tensor: Log of standard deviation.
Return type: tfp.distributions.MultivariateNormalDiag
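A sketch of building the policy on an explicit observation placeholder, e.g. to wire it into a custom graph. The placeholder and the tuple unpacking follow the documented return values and are assumptions; exact usage may vary across garage versions.

obs_ph = tf.compat.v1.placeholder(
    tf.float32,
    shape=(None, env.spec.observation_space.flat_dim),
    name='obs')  # hypothetical placeholder name
dist, mean, log_std = policy.build(obs_ph, name='custom_build')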
clone(name)[source]¶
Return a clone of the policy.
It only copies the configuration of the primitive, not the parameters.
Parameters: name (str) – Name of the newly created policy. It has to be different from the source policy's name if cloned under the same computational graph. Returns: Newly cloned policy. Return type: garage.tf.policies.GaussianMLPPolicy
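For instance, a second policy with the same architecture but its own freshly initialized parameters could be created as follows; the variable and scope names are illustrative.

# Copies configuration only, not parameters; the name must differ from the
# source policy's when both live in the same computational graph.
target_policy = policy.clone(name='GaussianMLPPolicyTarget')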
distribution¶
Policy distribution.
Returns: Policy distribution. Return type: tfp.distributions.MultivariateNormalDiag
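The distribution is useful when composing symbolic losses; a minimal sketch, in which the action placeholder is an assumption:

action_ph = tf.compat.v1.placeholder(
    tf.float32,
    shape=(None, env.spec.action_space.flat_dim),
    name='action')  # hypothetical placeholder
# Log-likelihood of given actions under the current policy distribution.
log_likelihood = policy.distribution.log_prob(action_ph)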
get_action(observation)[source]¶
Get a single action from this policy for the input observation.
Parameters: observation (numpy.ndarray) – Observation from environment. Returns: Predicted action and agent information. Return type: numpy.ndarray, dict
Note
It returns an action and a dict, with keys: mean (numpy.ndarray): Mean of the distribution. log_std (numpy.ndarray): Log standard deviation of the distribution.
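A sketch of sampling one action; the observation is drawn from the observation space purely for illustration.

obs = env.spec.observation_space.sample()  # illustrative observation
action, agent_info = policy.get_action(obs)
print(agent_info['mean'], agent_info['log_std'])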
get_actions(observations)[source]¶
Get multiple actions from this policy for the input observations.
Parameters: observations (numpy.ndarray) – Observations from environment. Returns: Predicted actions and agent information. Return type: numpy.ndarray, dict
Note
It returns actions and a dict, with keys: mean (numpy.ndarray): Means of the distribution. log_std (numpy.ndarray): Log standard deviations of the distribution.
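And the batched counterpart, assuming observations are stacked along the first axis:

import numpy as np

observations = np.stack(
    [env.spec.observation_space.sample() for _ in range(4)])
actions, agent_infos = policy.get_actions(observations)
assert actions.shape[0] == 4  # one action per observation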
vectorized¶
Vectorized or not.
Returns: True if the primitive supports vectorized operations. Return type: bool