garage.tf.policies.categorical_lstm_policy module¶
Categorical LSTM Policy.
A policy represented by a Categorical distribution which is parameterized by a Long short-term memory (LSTM).
class CategoricalLSTMPolicy(env_spec, name='CategoricalLSTMPolicy', hidden_dim=32, hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, recurrent_nonlinearity=<function sigmoid>, recurrent_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, hidden_state_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, hidden_state_init_trainable=False, cell_state_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, cell_state_init_trainable=False, state_include_action=True, forget_bias=True, layer_normalization=False)[source]¶
Bases: garage.tf.policies.policy.StochasticPolicy
Categorical LSTM Policy.
A policy represented by a Categorical distribution which is parameterized by a Long short-term memory (LSTM).
It only works with an akro.Discrete action space; a construction sketch follows the parameter list.
Parameters:
- env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Policy name, also the variable scope.
- hidden_dim (int) – Hidden dimension for LSTM cell.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- recurrent_nonlinearity (callable) – Activation function for recurrent layers. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- recurrent_w_init (callable) – Initializer function for the weight of recurrent layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- hidden_state_init (callable) – Initializer function for the initial hidden state. The function should return a tf.Tensor.
- hidden_state_init_trainable (bool) – Bool for whether the initial hidden state is trainable.
- cell_state_init (callable) – Initializer function for the initial cell state. The function should return a tf.Tensor.
- cell_state_init_trainable (bool) – Bool for whether the initial cell state is trainable.
- state_include_action (bool) – Whether the state includes the previous action. If True, the input dimension will be (observation dimension + action dimension).
- forget_bias (bool) – If True, add 1 to the bias of the forget gate at initialization. This reduces the scale of forgetting at the beginning of training.
- layer_normalization (bool) – Bool for using layer normalization or not.
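A minimal construction sketch (hypothetical, not from the garage docs; the observation and action spaces are illustrative):

import akro
from garage.envs.env_spec import EnvSpec
from garage.tf.policies import CategoricalLSTMPolicy

# Illustrative spec: 4-dimensional continuous observations and a
# discrete action space with two actions (the policy requires
# akro.Discrete actions).
env_spec = EnvSpec(observation_space=akro.Box(low=-1.0, high=1.0, shape=(4,)),
                   action_space=akro.Discrete(2))

policy = CategoricalLSTMPolicy(env_spec=env_spec,
                               hidden_dim=32,
                               state_include_action=True)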
build(state_input, name=None)[source]¶
Build policy.
Parameters:
- state_input (tf.Tensor) – State input.
- name (str) – Name of the policy, which is also the name scope.
Returns:
- tfp.distributions.OneHotCategorical: Policy distribution.
- tf.Tensor: Step output, with shape \((N, S^*)\).
- tf.Tensor: Step hidden state, with shape \((N, S^*)\).
- tf.Tensor: Step cell state, with shape \((N, S^*)\).
- tf.Tensor: Initial hidden state, used to reset the hidden state when the policy resets. Shape: \((S^*)\).
- tf.Tensor: Initial cell state, used to reset the cell state when the policy resets. Shape: \((S^*)\).
Return type: tfp.distributions.OneHotCategorical
clone(name)[source]¶
Return a clone of the policy.
It only copies the configuration of the primitive, not the parameters.
Parameters: name (str) – Name of the newly created policy. It has to be different from the source policy's name if cloned under the same computational graph.
Returns: Newly cloned policy.
Return type: garage.tf.policies.CategoricalLSTMPolicy
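A hypothetical cloning sketch (the clone name is illustrative; `policy` is the instance built above):

# Only the configuration is copied; the clone's parameters are
# freshly initialized rather than copied from the source policy.
cloned = policy.clone(name='CategoricalLSTMPolicyClone')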
distribution¶
Policy distribution.
Returns: Policy distribution.
Return type: tfp.distributions.OneHotCategorical
get_action(observation)[source]¶
Return a single action.
Parameters: observation (numpy.ndarray) – Observation.
Returns:
- int: Action given input observation.
- dict(numpy.ndarray): Distribution parameters.
Return type: int
get_actions(observations)[source]¶
Return multiple actions.
Parameters: observations (numpy.ndarray) – Observations.
Returns:
- list[int]: Actions given input observations.
- dict(numpy.ndarray): Distribution parameters.
Return type: list[int]
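A hypothetical sampling sketch (assumes the policy built above and an active TensorFlow session, e.g. inside garage's LocalTFRunner):

import numpy as np

# Reset the recurrent state before starting a new rollout.
policy.reset()

# Single observation -> single action plus distribution parameters.
obs = np.zeros(env_spec.observation_space.shape)
action, agent_info = policy.get_action(obs)

# Batched variant: reset with two parallel states, then query one
# action per observation in the batch.
policy.reset(do_resets=np.array([True, True]))
actions, agent_infos = policy.get_actions(np.stack([obs, obs]))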
reset(do_resets=None)[source]¶
Reset the policy.
Note
If do_resets is None, it defaults to np.array([True]), which implies the policy is not "vectorized", i.e., the number of parallel environments for training data sampling is 1.
Parameters: do_resets (numpy.ndarray) – Bool array that indicates terminal state(s).
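For instance (a hypothetical sketch with three parallel sampling environments):

import numpy as np

# Only the second environment reached a terminal state, so only its
# hidden and cell states are re-initialized.
policy.reset(do_resets=np.array([False, True, False]))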
state_info_specs¶
State info specification.
Returns: keys and shapes for the information related to the policy's state when taking an action.
Return type: List[str]
vectorized¶
Vectorized or not.
Returns: True if the primitive supports vectorized operations.
Return type: bool