garage.tf.policies.gaussian_mlp_policy

Gaussian MLP Policy.

A policy represented by a Gaussian distribution which is parameterized by a multilayer perceptron (MLP).

class GaussianMLPPolicy(env_spec, name='GaussianMLPPolicy', hidden_sizes=(32, 32), hidden_nonlinearity=tf.nn.tanh, hidden_w_init=tf.initializers.glorot_uniform(seed=deterministic.get_tf_seed_stream()), hidden_b_init=tf.zeros_initializer(), output_nonlinearity=None, output_w_init=tf.initializers.glorot_uniform(seed=deterministic.get_tf_seed_stream()), output_b_init=tf.zeros_initializer(), learn_std=True, adaptive_std=False, std_share_network=False, init_std=1.0, min_std=1e-06, max_std=None, std_hidden_sizes=(32, 32), std_hidden_nonlinearity=tf.nn.tanh, std_output_nonlinearity=None, std_parameterization='exp', layer_normalization=False)

Bases: garage.tf.models.GaussianMLPModel, garage.tf.policies.policy.Policy

Inheritance diagram of garage.tf.policies.gaussian_mlp_policy.GaussianMLPPolicy

Gaussian MLP Policy.

A policy represented by a Gaussian distribution which is parameterized by a multilayer perceptron (MLP).

Parameters
  • env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.

  • name (str) – Model name, also the variable scope.

  • hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.

  • hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.

  • hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.

  • hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.

  • output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.

  • output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.

  • output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.

  • learn_std (bool) – Whether the std is trainable.

  • adaptive_std (bool) – Whether the std is parameterized by a neural network. If False, it will be a single learned parameter.

  • std_share_network (bool) – Boolean for whether mean and std share the same network.

  • init_std (float) – Initial value for std.

  • std_hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for std. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.

  • min_std (float) – If not None, the std is at least the value of min_std, to avoid numerical issues.

  • max_std (float) – If not None, the std is at most the value of max_std, to avoid numerical issues.

  • std_hidden_nonlinearity (callable) – Nonlinearity for each hidden layer in the std network. The function should return a tf.Tensor.

  • std_output_nonlinearity (callable) – Nonlinearity for output layer in the std network. The function should return a tf.Tensor.

  • std_parameterization (str) – How the std should be parametrized. There are two options:

      – exp: the logarithm of the std will be stored, and an exponential transformation will be applied to recover the std.

      – softplus: the std will be computed as log(1 + exp(x)).

  • layer_normalization (bool) – Bool for using layer normalization or not.
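As a usage sketch (not part of the original docstring), the snippet below constructs a policy from a hypothetical EnvSpec with box observation and action spaces. garage's TF primitives run in graph mode and expect a default TF session, normally supplied by the experiment runner; a bare session is used here only for illustration.

    import akro
    import tensorflow as tf

    from garage.envs.env_spec import EnvSpec
    from garage.tf.policies import GaussianMLPPolicy

    tf.compat.v1.disable_eager_execution()  # garage.tf primitives use graph mode

    # Hypothetical spec: 4-dim observations, 2-dim continuous actions.
    env_spec = EnvSpec(
        observation_space=akro.Box(low=-1.0, high=1.0, shape=(4,)),
        action_space=akro.Box(low=-1.0, high=1.0, shape=(2,)))

    # A default session is required when the policy is constructed and used.
    with tf.compat.v1.Session().as_default():
        policy = GaussianMLPPolicy(env_spec=env_spec,
                                   hidden_sizes=(32, 32),
                                   hidden_nonlinearity=tf.nn.tanh,
                                   init_std=1.0,
                                   std_parameterization='exp')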

property input_dim

Dimension of the policy input.

Type

int

property env_spec

Policy environment specification.

Returns

Environment specification.

Return type

garage.EnvSpec

property parameters

Parameters of the model.

Returns

Parameters

Return type

np.ndarray

property name

Name (str) of the model.

This is also the variable scope of the model.

Returns

Name of the model.

Return type

str

property input

Default input of the model.

When the model is built the first time, by default it creates the ‘default’ network. This property creates a reference to the input of the network.

Returns

Default input of the model.

Return type

tf.Tensor

property output

Default output of the model.

When the model is built the first time, by default it creates the ‘default’ network. This property creates a reference to the output of the network.

Returns

Default output of the model.

Return type

tf.Tensor

property inputs

Default inputs of the model.

When the model is built the first time, by default it creates the ‘default’ network. This property creates a reference to the inputs of the network.

Returns

Default inputs of the model.

Return type

list[tf.Tensor]

property outputs

Default outputs of the model.

When the model is built the first time, by default it creates the ‘default’ network. This property creates a reference to the outputs of the network.

Returns

Default outputs of the model.

Return type

list[tf.Tensor]

property state_info_specs

State info specification.

Returns

Keys and shapes for the information related to the module's state when taking an action.

Return type

List[str]

property state_info_keys

State info keys.

Returns

Keys for the information related to the module's state when taking an input.

Return type

List[str]

property observation_space

Observation space.

Returns

The observation space of the environment.

Return type

akro.Space

property action_space

Action space.

Returns

The action space of the environment.

Return type

akro.Space

get_action(observation)

Get a single action from this policy for the input observation.

Parameters

observation (numpy.ndarray) – Observation from environment.

Returns

The predicted action and agent information.

Return type

tuple(numpy.ndarray, dict)

Note

It returns an action and a dict, with keys:

  • mean (numpy.ndarray): Mean of the distribution.

  • log_std (numpy.ndarray): Log standard deviation of the distribution.
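A minimal sketch of sampling one action (continuing the construction example above, inside its default session; the observation is drawn from the hypothetical spec):

    obs = env_spec.observation_space.sample()
    action, agent_info = policy.get_action(obs)
    # action is a numpy.ndarray matching the action space;
    # agent_info carries the distribution parameters described above.
    print(action, agent_info['mean'], agent_info['log_std'])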

get_actions(observations)

Get multiple actions from this policy for the input observations.

Parameters

observations (numpy.ndarray) – Observations from environment.

Returns

The predicted actions and agent information.

Return type

tuple(numpy.ndarray, dict)

Note

It returns actions and a dict, with keys:

  • mean (numpy.ndarray): Means of the distribution.

  • log_std (numpy.ndarray): Log standard deviations of the distribution.
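And the batched counterpart, again as a sketch under the same hypothetical setup:

    import numpy as np

    obs_batch = np.stack(
        [env_spec.observation_space.sample() for _ in range(8)])
    actions, agent_infos = policy.get_actions(obs_batch)
    # actions has one row per observation; agent_infos['mean'] and
    # agent_infos['log_std'] have matching leading dimensions.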

clone(name)

Return a clone of the policy.

It copies the configuration of the primitive and also the parameters.

Parameters

name (str) – Name of the newly created policy. It has to be different from the source policy's name if cloned under the same computational graph.

Returns

Newly cloned policy.

Return type

garage.tf.policies.GaussianMLPPolicy
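A sketch of cloning into a new variable scope in the same graph (the target name below is arbitrary):

    target_policy = policy.clone(name='TargetGaussianMLPPolicy')
    # The clone has its own variables, with values copied from the source.
    assert target_policy.name != policy.name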

network_output_spec()

Network output spec.

Returns

List of key(str) for the network outputs.

Return type

list[str]

build(*inputs, name=None)

Build a Network with the given input(s).

Note: Do not call tf.compat.v1.global_variables_initializer() after building a model, as it will reassign random weights to the model. The parameters inside a model are initialized when build() is called.

It uses the same, fixed variable scope for all Networks to ensure parameter sharing. Different Networks must have a unique name.

Parameters
  • inputs (list[tf.Tensor]) – Tensor input(s), recommended to be positional arguments, for example, def build(self, state_input, action_input, name=None).

  • name (str) – Name of the model, which is also the name scope of the model.

Raises

ValueError – When a Network with the same name is already built.

Returns

Output tensors of the model with the given inputs.

Return type

list[tf.Tensor]
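An illustration (a sketch only; the placeholder and network names are arbitrary, and the (batch, time, obs_dim) input shape follows the convention used by garage's TF policies):

    obs_dim = env_spec.observation_space.flat_dim
    extra_obs = tf.compat.v1.placeholder(tf.float32,
                                         shape=(None, None, obs_dim),
                                         name='extra_obs')
    # Builds a second network that shares parameters with the default one.
    extra_head = policy.build(extra_obs, name='extra_head')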

network_input_spec()

Network input spec.

Returns

List of key(str) for the network inputs.

Return type

list[str]

reset(do_resets=None)

Reset the module.

This is effective only for recurrent modules. do_resets is effective only for vectorized modules.

For vectorized modules, do_resets is an array of booleans indicating which internal states should be reset. The length of do_resets should be equal to the length of inputs.

Parameters

do_resets (numpy.ndarray) – Bool array indicating which states to be reset.

terminate()

Clean up operation.

get_trainable_vars()

Get trainable variables.

Returns

A list of trainable variables in the current variable scope.

Return type

List[tf.Variable]
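For example, a sketch that inspects the returned variables and counts parameters (continuing the example above):

    import numpy as np

    train_vars = policy.get_trainable_vars()
    n_params = sum(int(np.prod(v.shape.as_list())) for v in train_vars)
    print([v.name for v in train_vars])
    print('total trainable parameters:', n_params)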

get_global_vars()

Get global variables.

Returns

A list of global variables in the current variable scope.

Return type

List[tf.Variable]

get_regularizable_vars()

Get all network weight variables in the current scope.

Returns

A list of network weight variables in the current variable scope.

Return type

List[tf.Variable]

get_params()

Get the trainable variables.

Returns

A list of trainable variables in the current variable scope.

Return type

List[tf.Variable]

get_param_shapes()

Get parameter shapes.

Returns

A list of variable shapes.

Return type

List[tuple]

get_param_values()

Get param values.

Returns

Values of the parameters evaluated in the current session.

Return type

np.ndarray

set_param_values(param_values)

Set param values.

Parameters

param_values (np.ndarray) – A numpy array of parameter values.
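A round-trip sketch with get_param_values() and set_param_values() (run inside the policy's session; the perturbation is only for illustration):

    import numpy as np

    flat = policy.get_param_values()               # one flat np.ndarray
    policy.set_param_values(flat + 1e-3 * np.random.randn(*flat.shape))
    policy.set_param_values(flat)                  # restore the original values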

flat_to_params(flattened_params)

Unflatten tensors according to their respective shapes.

Parameters

flattened_params (np.ndarray) – A numpy array of flattened params.

Returns

A list of parameters reshaped to the shapes specified.

Return type

List[np.ndarray]
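A sketch relating the flat parameter vector to the per-variable arrays (continuing the example above):

    flat = policy.get_param_values()
    per_var = policy.flat_to_params(flat)    # one np.ndarray per variable
    assert len(per_var) == len(policy.get_param_shapes())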