garage.tf.policies.gaussian_lstm_policy module¶
GaussianLSTMPolicy with GaussianLSTMModel.
class GaussianLSTMPolicy(env_spec, hidden_dim=32, name='GaussianLSTMPolicy', hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, recurrent_nonlinearity=<function sigmoid>, recurrent_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, hidden_state_init=<tensorflow.python.ops.init_ops.Zeros object>, hidden_state_init_trainable=False, cell_state_init=<tensorflow.python.ops.init_ops.Zeros object>, cell_state_init_trainable=False, forget_bias=True, learn_std=True, std_share_network=False, init_std=1.0, layer_normalization=False, state_include_action=True)[source]¶
Bases: garage.tf.policies.base.StochasticPolicy
A policy which models actions with a Gaussian parameterized by an LSTM.
Parameters:
- env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Model name, also the variable scope.
- hidden_dim (int) – Hidden dimension for LSTM cell for mean.
- hidden_nonlinearity (Callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (Callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (Callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- recurrent_nonlinearity (Callable) – Activation function for recurrent layers. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- recurrent_w_init (Callable) – Initializer function for the weight of recurrent layer(s). The function should return a tf.Tensor.
- output_nonlinearity (Callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (Callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (Callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- hidden_state_init (Callable) – Initializer function for the initial hidden state. The function should return a tf.Tensor.
- hidden_state_init_trainable (bool) – Bool for whether the initial hidden state is trainable.
- cell_state_init (Callable) – Initializer function for the initial cell state. The function should return a tf.Tensor.
- cell_state_init_trainable (bool) – Bool for whether the initial cell state is trainable.
- forget_bias (bool) – If True, add 1 to the bias of the forget gate at initialization. This is used to reduce the scale of forgetting at the beginning of training.
- learn_std (bool) – Whether std is trainable.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- init_std (float) – Initial value for std.
- layer_normalization (bool) – Bool for using layer normalization or not.
- state_include_action (bool) – Whether the state includes action. If True, input dimension will be (observation dimension + action dimension).
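A minimal construction sketch follows; the TfEnv wrapper and the 'Pendulum-v0' task are assumptions, not part of this API, and the wrapper name may differ across garage versions::

    import gym
    from garage.tf.envs import TfEnv  # assumed wrapper for this garage version
    from garage.tf.policies import GaussianLSTMPolicy

    env = TfEnv(gym.make('Pendulum-v0'))  # illustrative continuous-control task
    policy = GaussianLSTMPolicy(env_spec=env.spec,
                                hidden_dim=32,
                                learn_std=True,
                                std_share_network=False,
                                state_include_action=True)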
dist_info_sym(obs_var, state_info_vars, name=None)[source]¶
Build a symbolic graph of the action distribution parameters.
Parameters:
- obs_var (tf.Tensor) – Symbolic observation variable.
- state_info_vars (dict) – Extra symbolic state information, e.g. the previous action when state_include_action is True.
- name (str) – Name of the symbolic graph.
Returns: Output of the symbolic graph of the action distribution parameters.
Return type: dict[tf.Tensor]
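A sketch of building the symbolic graph, assuming state_include_action=True so that state_info_vars carries the previous action; the placeholder names, the 'prev_action' key, and the (batch, time, dim) shapes are assumptions::

    import tensorflow as tf

    obs_dim = env.spec.observation_space.flat_dim
    act_dim = env.spec.action_space.flat_dim
    # Placeholders are assumptions; shapes follow (batch, time, dim).
    obs_var = tf.compat.v1.placeholder(tf.float32, (None, None, obs_dim), name='obs')
    prev_action_var = tf.compat.v1.placeholder(tf.float32, (None, None, act_dim), name='prev_action')
    dist_info = policy.dist_info_sym(obs_var, state_info_vars={'prev_action': prev_action_var})
    mean, log_std = dist_info['mean'], dist_info['log_std']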
distribution¶
Policy distribution.
Type: garage.tf.distributions.DiagonalGaussian
get_action(observation)[source]¶
Get a single action from this policy for the input observation.
Parameters: observation (numpy.ndarray) – Observation from the environment.
Returns: Predicted action and agent information.
- action (numpy.ndarray): Predicted action.
- agent_info (dict): Distribution parameters obtained after observing the given observation, with keys
  - mean (numpy.ndarray)
  - log_std (numpy.ndarray)
  - prev_action (numpy.ndarray), only present if self._state_include_action is True.
Return type: tuple[numpy.ndarray, dict]
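A minimal sampling sketch; running it assumes an active tf.Session with variables initialized, as in graph-mode garage setups::

    policy.reset()                 # start a fresh episode (resets LSTM state)
    obs = env.reset()
    action, agent_info = policy.get_action(obs)
    mean, log_std = agent_info['mean'], agent_info['log_std']
    # 'prev_action' is present here because state_include_action=True.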
get_actions(observations)[source]¶
Get multiple actions from this policy for the input observations.
Parameters: observations (numpy.ndarray) – Observations from the environment.
Returns: Predicted actions and agent information.
- actions (numpy.ndarray): Predicted actions.
- agent_infos (dict): Distribution parameters obtained after observing the given observations, with keys
  - mean (numpy.ndarray)
  - log_std (numpy.ndarray)
  - prev_action (numpy.ndarray), only present if self._state_include_action is True.
Return type: tuple[numpy.ndarray, dict]
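The batched variant follows the same pattern; stacking two copies of one observation to stand in for two parallel environments is purely illustrative::

    import numpy as np

    policy.reset(dones=np.array([True, True]))   # one flag per parallel env
    observations = np.stack([obs, obs])          # illustrative batch of 2
    actions, agent_infos = policy.get_actions(observations)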
reset(dones=None)[source]¶
Reset the policy.
Note
If dones is None, it defaults to np.array([True]), which implies the policy is not "vectorized", i.e. the number of parallel environments used for training data sampling is 1.
Parameters: dones (numpy.ndarray) – Bools that indicate terminal state(s).
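A sketch of per-environment resets during vectorized sampling; the four-environment count is illustrative::

    import numpy as np

    # Initialize hidden and cell states for 4 parallel environments.
    policy.reset(dones=np.array([True, True, True, True]))
    # Later, reset only the environments whose episodes just ended.
    policy.reset(dones=np.array([False, True, False, False]))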