garage.tf.policies package
Policies for TensorFlow-based algorithms.
class Policy(name, env_spec)
Bases: garage.tf.models.module.Module
Base class for policies in TensorFlow.
Parameters: - name (str) – Policy name, also the variable scope.
- env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
action_space
Action space.
Returns: The action space of the environment.
Return type: akro.Space
env_spec
Policy environment specification.
Returns: Environment specification.
Return type: garage.EnvSpec
get_action(observation)
Get action sampled from the policy.
Parameters: observation (np.ndarray) – Observation from the environment.
Returns: Action sampled from the policy.
Return type: np.ndarray
get_actions(observations)
Get actions sampled from the policy.
Parameters: observations (list[np.ndarray]) – Observations from the environment.
Returns: Actions sampled from the policy.
Return type: np.ndarray
log_diagnostics(paths)
Log extra information per iteration based on the collected paths.
Parameters: paths (dict[numpy.ndarray]) – Sample paths.
observation_space
Observation space.
Returns: The observation space of the environment.
Return type: akro.Space
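Every concrete policy below implements this interface. The following is a minimal sampling-loop sketch, not a definitive recipe: it assumes a garage version around v2020.06, where gym environments are wrapped with GarageEnv and garage.tf primitives are built inside a TF1-style graph/session (newer releases rename these APIs).

    # Minimal sampling-loop sketch; all version-specific names here are assumptions.
    import tensorflow as tf

    from garage.envs import GarageEnv                 # wrapper name varies by garage version
    from garage.tf.policies import CategoricalMLPPolicy

    tf.compat.v1.disable_eager_execution()            # garage.tf uses graph-mode TF

    with tf.compat.v1.Session() as sess:
        env = GarageEnv(env_name='CartPole-v1')
        policy = CategoricalMLPPolicy(env_spec=env.spec)
        sess.run(tf.compat.v1.global_variables_initializer())

        obs = env.reset()
        done = False
        while not done:
            # get_action: one observation in, one sampled action (plus agent info) out.
            action, agent_info = policy.get_action(obs)
            obs, reward, done, _ = env.step(action)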
class StochasticPolicy(name, env_spec)
Bases: garage.tf.policies.policy.Policy, garage.tf.models.module.StochasticModule
Stochastic Policy.
class CategoricalCNNPolicy(env_spec, filters, strides, padding, name='CategoricalCNNPolicy', hidden_sizes=(32, 32), hidden_nonlinearity=tf.nn.relu, hidden_w_init=tf.initializers.glorot_uniform(), hidden_b_init=tf.zeros_initializer(), output_nonlinearity=None, output_w_init=tf.initializers.glorot_uniform(), output_b_init=tf.zeros_initializer(), layer_normalization=False)
Bases: garage.tf.policies.policy.StochasticPolicy
CategoricalCNNPolicy.
A policy that contains a CNN and an MLP to make predictions based on a categorical distribution.
It only works with akro.Discrete action spaces.
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- filters (Tuple[Tuple[int, Tuple[int, int]], ...]) – Number and dimension of filters. For example, ((3, (3, 5)), (32, (3, 3))) means there are two convolutional layers: the first layer has 3 filters of shape (3 x 5), and the second layer has 32 filters of shape (3 x 3).
- strides (tuple[int]) – The stride of the sliding window. For example, (1, 2) means there are two convolutional layers. The stride of the filter for the first layer is 1 and that of the second layer is 2.
- padding (str) – The type of padding algorithm to use, either ‘SAME’ or ‘VALID’.
- name (str) – Policy name, also the variable scope of the policy.
- hidden_sizes (list[int]) – Output dimension of dense layer(s). For example, (32, 32) means the MLP of this policy consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- layer_normalization (bool) – Bool for using layer normalization or not.
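A construction sketch illustrating the filters/strides format described above. Everything here is hypothetical: `env` stands for some wrapped environment with image observations and an akro.Discrete action space, set up as in the earlier sampling-loop sketch.

    # Construction sketch only; `env` is a hypothetical pixel-observation,
    # discrete-action environment (see the sampling-loop sketch above).
    from garage.tf.policies import CategoricalCNNPolicy

    policy = CategoricalCNNPolicy(
        env_spec=env.spec,
        filters=((32, (8, 8)), (64, (4, 4))),  # layer 1: 32 filters of 8x8; layer 2: 64 filters of 4x4
        strides=(4, 2),                        # stride 4 for the first layer, 2 for the second
        padding='SAME',
        hidden_sizes=(256,),                   # one dense layer of 256 units after the CNN
    )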
build(state_input, name=None)
Build policy.
Parameters: - state_input (tf.Tensor) – State input.
- name (str) – Name of the policy, which is also the name scope.
Returns: Policy distribution.
Return type: tfp.distributions.OneHotCategorical
clone(name)
Return a clone of the policy.
It only copies the configuration of the primitive, not the parameters.
Parameters: name (str) – Name of the newly created policy. It has to be different from the source policy if cloned under the same computational graph.
Returns: Newly cloned policy.
Return type: garage.tf.policies.CategoricalCNNPolicy
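Continuing the hypothetical sketch above, cloning copies only the configuration; the clone starts with its own freshly initialized parameters:

    # The clone must use a different name within the same graph.
    target_policy = policy.clone(name='CategoricalCNNPolicyTarget')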
distribution
Policy distribution.
Returns: Policy distribution.
Return type: tfp.distributions.OneHotCategorical
get_action(observation)
Return a single action.
Parameters: observation (numpy.ndarray) – Observation.
Returns: Action given input observation.
dict(numpy.ndarray): Distribution parameters.
Return type: int
class CategoricalGRUPolicy(env_spec, name='CategoricalGRUPolicy', hidden_dim=32, hidden_nonlinearity=tf.nn.tanh, hidden_w_init=tf.initializers.glorot_uniform(), hidden_b_init=tf.zeros_initializer(), recurrent_nonlinearity=tf.nn.sigmoid, recurrent_w_init=tf.initializers.glorot_uniform(), output_nonlinearity=None, output_w_init=tf.initializers.glorot_uniform(), output_b_init=tf.zeros_initializer(), hidden_state_init=tf.zeros_initializer(), hidden_state_init_trainable=False, state_include_action=True, layer_normalization=False)
Bases: garage.tf.policies.policy.StochasticPolicy
Categorical GRU Policy.
A policy represented by a categorical distribution which is parameterized by a Gated Recurrent Unit (GRU).
It only works with akro.Discrete action spaces.
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Policy name, also the variable scope.
- hidden_dim (int) – Hidden dimension for the GRU cell.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- recurrent_nonlinearity (callable) – Activation function for recurrent layers. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- recurrent_w_init (callable) – Initializer function for the weight of recurrent layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- hidden_state_init (callable) – Initializer function for the initial hidden state. The function should return a tf.Tensor.
- hidden_state_init_trainable (bool) – Bool for whether the initial hidden state is trainable.
- state_include_action (bool) – Whether the state includes action. If True, input dimension will be (observation dimension + action dimension).
- layer_normalization (bool) – Bool for using layer normalization or not.
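A rollout sketch for this recurrent policy, under the same version assumptions as the earlier CartPole example. The point is the reset() call: it clears the GRU hidden state (and the tracked previous action when state_include_action=True) at episode boundaries.

    # Recurrent rollout sketch; version-specific names are assumptions.
    import tensorflow as tf

    from garage.envs import GarageEnv
    from garage.tf.policies import CategoricalGRUPolicy

    tf.compat.v1.disable_eager_execution()

    with tf.compat.v1.Session() as sess:
        env = GarageEnv(env_name='CartPole-v1')
        policy = CategoricalGRUPolicy(env_spec=env.spec, hidden_dim=32)
        sess.run(tf.compat.v1.global_variables_initializer())

        for _ in range(2):                 # two episodes
            obs = env.reset()
            policy.reset()                 # clear recurrent state before each episode
            done = False
            while not done:
                action, _ = policy.get_action(obs)
                obs, reward, done, _ = env.step(action)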
build(state_input, name=None)
Build policy.
Parameters: - state_input (tf.Tensor) – State input.
- name (str) – Name of the policy, which is also the name scope.
Returns: Policy distribution.
tf.Tensor: Step output, with shape \((N, S^*)\).
tf.Tensor: Step hidden state, with shape \((N, S^*)\).
tf.Tensor: Initial hidden state, used to reset the hidden state when the policy resets. Shape: \((S^*)\).
Return type: tfp.distributions.OneHotCategorical
clone(name)
Return a clone of the policy.
It only copies the configuration of the primitive, not the parameters.
Parameters: name (str) – Name of the newly created policy. It has to be different from the source policy if cloned under the same computational graph.
Returns: Newly cloned policy.
Return type: garage.tf.policies.CategoricalGRUPolicy
distribution
Policy distribution.
Returns: Policy distribution.
Return type: tfp.distributions.OneHotCategorical
get_action(observation)
Return a single action.
Parameters: observation (numpy.ndarray) – Observation.
Returns: Action given input observation.
dict(numpy.ndarray): Distribution parameters.
Return type: int
get_actions(observations)
Return multiple actions.
Parameters: observations (numpy.ndarray) – Observations.
Returns: Actions given input observations.
dict(numpy.ndarray): Distribution parameters.
Return type: list[int]
reset(do_resets=None)
Reset the policy.
Note: If do_resets is None, it defaults to np.array([True]), which implies the policy is not “vectorized”, i.e. the number of parallel environments for training data sampling is 1.
Parameters: do_resets (numpy.ndarray) – Bool that indicates terminal state(s).
state_info_specs
State info specification.
Returns: Keys and shapes for the information related to the policy’s state when taking an action.
Return type: List[str]
vectorized
Vectorized or not.
Returns: True if primitive supports vectorized operations.
Return type: bool
class CategoricalLSTMPolicy(env_spec, name='CategoricalLSTMPolicy', hidden_dim=32, hidden_nonlinearity=tf.nn.tanh, hidden_w_init=tf.initializers.glorot_uniform(), hidden_b_init=tf.zeros_initializer(), recurrent_nonlinearity=tf.nn.sigmoid, recurrent_w_init=tf.initializers.glorot_uniform(), output_nonlinearity=None, output_w_init=tf.initializers.glorot_uniform(), output_b_init=tf.zeros_initializer(), hidden_state_init=tf.zeros_initializer(), hidden_state_init_trainable=False, cell_state_init=tf.zeros_initializer(), cell_state_init_trainable=False, state_include_action=True, forget_bias=True, layer_normalization=False)
Bases: garage.tf.policies.policy.StochasticPolicy
Categorical LSTM Policy.
A policy represented by a categorical distribution which is parameterized by a Long Short-Term Memory (LSTM).
It only works with akro.Discrete action spaces.
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Policy name, also the variable scope.
- hidden_dim (int) – Hidden dimension for LSTM cell.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- recurrent_nonlinearity (callable) – Activation function for recurrent layers. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- recurrent_w_init (callable) – Initializer function for the weight of recurrent layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- hidden_state_init (callable) – Initializer function for the initial hidden state. The function should return a tf.Tensor.
- hidden_state_init_trainable (bool) – Bool for whether the initial hidden state is trainable.
- cell_state_init (callable) – Initializer function for the initial cell state. The function should return a tf.Tensor.
- cell_state_init_trainable (bool) – Bool for whether the initial cell state is trainable.
- state_include_action (bool) – Whether the state includes action. If True, input dimension will be (observation dimension + action dimension).
- forget_bias (bool) – If True, add 1 to the bias of the forget gate at initialization. It’s used to reduce the scale of forgetting at the beginning of the training.
- layer_normalization (bool) – Bool for using layer normalization or not.
build(state_input, name=None)
Build policy.
Parameters: - state_input (tf.Tensor) – State input.
- name (str) – Name of the policy, which is also the name scope.
Returns: Policy distribution.
tf.Tensor: Step output, with shape \((N, S^*)\).
tf.Tensor: Step hidden state, with shape \((N, S^*)\).
tf.Tensor: Step cell state, with shape \((N, S^*)\).
tf.Tensor: Initial hidden state, used to reset the hidden state when the policy resets. Shape: \((S^*)\).
tf.Tensor: Initial cell state, used to reset the cell state when the policy resets. Shape: \((S^*)\).
Return type: tfp.distributions.OneHotCategorical
clone(name)
Return a clone of the policy.
It only copies the configuration of the primitive, not the parameters.
Parameters: name (str) – Name of the newly created policy. It has to be different from the source policy if cloned under the same computational graph.
Returns: Newly cloned policy.
Return type: garage.tf.policies.CategoricalLSTMPolicy
distribution
Policy distribution.
Returns: Policy distribution.
Return type: tfp.distributions.OneHotCategorical
get_action(observation)
Return a single action.
Parameters: observation (numpy.ndarray) – Observation.
Returns: Action given input observation.
dict(numpy.ndarray): Distribution parameters.
Return type: int
get_actions(observations)
Return multiple actions.
Parameters: observations (numpy.ndarray) – Observations.
Returns: Actions given input observations.
dict(numpy.ndarray): Distribution parameters.
Return type: list[int]
reset(do_resets=None)
Reset the policy.
Note: If do_resets is None, it defaults to np.array([True]), which implies the policy is not “vectorized”, i.e. the number of parallel environments for training data sampling is 1.
Parameters: do_resets (numpy.ndarray) – Bool that indicates terminal state(s).
state_info_specs
State info specification.
Returns: Keys and shapes for the information related to the policy’s state when taking an action.
Return type: List[str]
vectorized
Vectorized or not.
Returns: True if primitive supports vectorized operations.
Return type: bool
class CategoricalMLPPolicy(env_spec, name='CategoricalMLPPolicy', hidden_sizes=(32, 32), hidden_nonlinearity=tf.nn.tanh, hidden_w_init=tf.initializers.glorot_uniform(), hidden_b_init=tf.zeros_initializer(), output_nonlinearity=None, output_w_init=tf.initializers.glorot_uniform(), output_b_init=tf.zeros_initializer(), layer_normalization=False)
Bases: garage.tf.policies.policy.StochasticPolicy
Categorical MLP Policy.
A policy represented by a categorical distribution which is parameterized by a multilayer perceptron (MLP).
It only works with akro.Discrete action spaces.
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Policy name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s). For example, (32, 32) means the MLP of this policy consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- layer_normalization (bool) – Bool for using layer normalization or not.
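A sketch of how this policy is typically wired into an on-policy algorithm. It assumes garage ~v2020.06, where TRPO, LinearFeatureBaseline and LocalTFRunner exist under these names (newer releases rename the runner), and in real use the function would be decorated with garage.experiment.wrap_experiment.

    # Wiring sketch under the version assumptions stated above; not a definitive recipe.
    from garage.envs import GarageEnv
    from garage.experiment import LocalTFRunner
    from garage.np.baselines import LinearFeatureBaseline
    from garage.tf.algos import TRPO
    from garage.tf.policies import CategoricalMLPPolicy

    def trpo_cartpole(ctxt=None):
        with LocalTFRunner(snapshot_config=ctxt) as runner:
            env = GarageEnv(env_name='CartPole-v1')
            policy = CategoricalMLPPolicy(env_spec=env.spec, hidden_sizes=(32, 32))
            baseline = LinearFeatureBaseline(env_spec=env.spec)
            algo = TRPO(env_spec=env.spec, policy=policy, baseline=baseline,
                        max_path_length=100, discount=0.99)
            runner.setup(algo, env)
            runner.train(n_epochs=100, batch_size=4000)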
build(state_input, name=None)
Build policy.
Parameters: - state_input (tf.Tensor) – State input.
- name (str) – Name of the policy, which is also the name scope.
Returns: Policy distribution.
Return type: tfp.distributions.OneHotCategorical
clone(name)
Return a clone of the policy.
It only copies the configuration of the primitive, not the parameters.
Parameters: name (str) – Name of the newly created policy. It has to be different from the source policy if cloned under the same computational graph.
Returns: Newly cloned policy.
Return type: garage.tf.policies.Policy
distribution
Policy distribution.
Returns: Policy distribution.
Return type: tfp.distributions.OneHotCategorical
get_action(observation)
Return a single action.
Parameters: observation (numpy.ndarray) – Observation.
Returns: Action given input observation.
dict(numpy.ndarray): Distribution parameters.
Return type: int
get_actions(observations)
Return multiple actions.
Parameters: observations (numpy.ndarray) – Observations.
Returns: Actions given input observations.
dict(numpy.ndarray): Distribution parameters.
Return type: list[int]
get_regularizable_vars()
Get regularizable weight variables under the Policy scope.
Returns: Trainable variables.
Return type: list[tf.Tensor]
vectorized
Vectorized or not.
Returns: True if primitive supports vectorized operations.
Return type: bool
class ContinuousMLPPolicy(env_spec, name='ContinuousMLPPolicy', hidden_sizes=(64, 64), hidden_nonlinearity=tf.nn.relu, hidden_w_init=tf.initializers.glorot_uniform(), hidden_b_init=tf.zeros_initializer(), output_nonlinearity=tf.nn.tanh, output_w_init=tf.initializers.glorot_uniform(), output_b_init=tf.zeros_initializer(), layer_normalization=False)
Bases: garage.tf.policies.policy.Policy
Continuous MLP Policy Network.
The policy network selects actions based on the state of the environment. It uses a neural network to fit the function pi(s).
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Policy name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s). For example, (32, 32) means the MLP of this policy consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- layer_normalization (bool) – Bool for using layer normalization or not.
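A construction sketch, under the same version assumptions as the earlier examples. The default tanh output squashes actions into [-1, 1]; the empty agent-info dict matches the get_action documentation below.

    # Deterministic continuous policy sketch; version-specific names are assumptions.
    import tensorflow as tf

    from garage.envs import GarageEnv
    from garage.tf.policies import ContinuousMLPPolicy

    tf.compat.v1.disable_eager_execution()

    with tf.compat.v1.Session() as sess:
        env = GarageEnv(env_name='Pendulum-v0')
        policy = ContinuousMLPPolicy(env_spec=env.spec, hidden_sizes=(64, 64))
        sess.run(tf.compat.v1.global_variables_initializer())

        action, info = policy.get_action(env.reset())
        assert info == {}   # no distribution: the policy is deterministic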
clone(name)
Return a clone of the policy.
It only copies the configuration of the policy, not the parameters.
Parameters: name (str) – Name of the newly created policy.
Returns: Clone of this object.
Return type: garage.tf.policies.ContinuousMLPPolicy
get_action(observation)
Get single action from this policy for the input observation.
Parameters: observation (numpy.ndarray) – Observation from environment.
Returns: Predicted action.
dict: Empty dict since this policy does not model a distribution.
Return type: numpy.ndarray
get_action_sym(obs_var, name=None)
Symbolic graph of the action.
Parameters: - obs_var (tf.Tensor) – Tensor input for symbolic graph.
- name (str) – Name for symbolic graph.
Returns: Symbolic graph of the action.
Return type: tf.Tensor
get_actions(observations)
Get multiple actions from this policy for the input observations.
Parameters: observations (numpy.ndarray) – Observations from environment.
Returns: Predicted actions.
dict: Empty dict since this policy does not model a distribution.
Return type: numpy.ndarray
class DiscreteQfDerivedPolicy(env_spec, qf, name='DiscreteQfDerivedPolicy')
Bases: garage.tf.policies.policy.Policy
Policy derived from a discrete Q-function.
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- qf (garage.q_functions.QFunction) – The q-function used.
- name (str) – Name of the policy.
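A sketch of deriving a greedy policy from a discrete Q-function, under the same version assumptions as earlier (DiscreteMLPQFunction is assumed to live in garage.tf.q_functions).

    # Q-function-derived policy sketch; version-specific names are assumptions.
    import tensorflow as tf

    from garage.envs import GarageEnv
    from garage.tf.policies import DiscreteQfDerivedPolicy
    from garage.tf.q_functions import DiscreteMLPQFunction

    tf.compat.v1.disable_eager_execution()

    with tf.compat.v1.Session() as sess:
        env = GarageEnv(env_name='CartPole-v1')
        qf = DiscreteMLPQFunction(env_spec=env.spec, hidden_sizes=(64, 64))
        policy = DiscreteQfDerivedPolicy(env_spec=env.spec, qf=qf)
        sess.run(tf.compat.v1.global_variables_initializer())

        action, _ = policy.get_action(env.reset())   # picks the action maximizing Q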
get_action(observation)
Get action from this policy for the input observation.
Parameters: observation (numpy.ndarray) – Observation from environment.
Returns: Single optimal action from this policy.
dict: Predicted action and agent information. It is an empty dict since there is no parameterization.
Return type: numpy.ndarray
get_actions(observations)
Get actions from this policy for the input observations.
Parameters: observations (numpy.ndarray) – Observations from environment.
Returns: Optimal actions from this policy.
dict: Predicted actions and agent information. It is an empty dict since there is no parameterization.
Return type: numpy.ndarray
vectorized
Vectorized or not.
Returns: True if primitive supports vectorized operations.
Return type: bool
class GaussianGRUPolicy(env_spec, hidden_dim=32, name='GaussianGRUPolicy', hidden_nonlinearity=tf.nn.tanh, hidden_w_init=tf.initializers.glorot_uniform(), hidden_b_init=tf.zeros_initializer(), recurrent_nonlinearity=tf.nn.sigmoid, recurrent_w_init=tf.initializers.glorot_uniform(), output_nonlinearity=None, output_w_init=tf.initializers.glorot_uniform(), output_b_init=tf.zeros_initializer(), hidden_state_init=tf.zeros_initializer(), hidden_state_init_trainable=False, learn_std=True, std_share_network=False, init_std=1.0, layer_normalization=False, state_include_action=True)
Bases: garage.tf.policies.policy.StochasticPolicy
Gaussian GRU Policy.
A policy represented by a Gaussian distribution which is parameterized by a Gated Recurrent Unit (GRU).
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Model name, also the variable scope.
- hidden_dim (int) – Hidden dimension for GRU cell for mean.
- hidden_nonlinearity (Callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (Callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (Callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- recurrent_nonlinearity (Callable) – Activation function for recurrent layers. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- recurrent_w_init (Callable) – Initializer function for the weight of recurrent layer(s). The function should return a tf.Tensor.
- output_nonlinearity (Callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (Callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (Callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- hidden_state_init (Callable) – Initializer function for the initial hidden state. The function should return a tf.Tensor.
- hidden_state_init_trainable (bool) – Bool for whether the initial hidden state is trainable.
- learn_std (bool) – Is std trainable.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- init_std (float) – Initial value for std.
- layer_normalization (bool) – Bool for using layer normalization or not.
- state_include_action (bool) – Whether the state includes action. If True, input dimension will be (observation dimension + action dimension).
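A sketch of the agent-info dict documented under get_action below, with the same version assumptions as the earlier examples. Because state_include_action defaults to True, the dict also carries the previous action.

    # Agent-info sketch; version-specific names are assumptions.
    import tensorflow as tf

    from garage.envs import GarageEnv
    from garage.tf.policies import GaussianGRUPolicy

    tf.compat.v1.disable_eager_execution()

    with tf.compat.v1.Session() as sess:
        env = GarageEnv(env_name='Pendulum-v0')
        policy = GaussianGRUPolicy(env_spec=env.spec, hidden_dim=32)
        sess.run(tf.compat.v1.global_variables_initializer())

        policy.reset()                                # recurrent state must be reset first
        action, agent_info = policy.get_action(env.reset())
        mean, log_std = agent_info['mean'], agent_info['log_std']
        prev_action = agent_info['prev_action']       # present since state_include_action=True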
build(state_input, name=None)
Build policy.
Parameters: - state_input (tf.Tensor) – State input.
- name (str) – Name of the policy, which is also the name scope.
Returns: Policy distribution.
tf.Tensor: Step means, with shape \((N, S^*)\).
tf.Tensor: Step log std, with shape \((N, S^*)\).
tf.Tensor: Step hidden state, with shape \((N, S^*)\).
tf.Tensor: Initial hidden state, with shape \((S^*)\).
Return type: tfp.distributions.MultivariateNormalDiag
clone(name)
Return a clone of the policy.
It only copies the configuration of the primitive, not the parameters.
Parameters: name (str) – Name of the newly created policy. It has to be different from the source policy if cloned under the same computational graph.
Returns: Newly cloned policy.
Return type: garage.tf.policies.GaussianGRUPolicy
distribution
Policy distribution.
Returns: Policy distribution.
Return type: tfp.distributions.MultivariateNormalDiag
get_action(observation)
Get single action from this policy for the input observation.
Parameters: observation (numpy.ndarray) – Observation from environment.
Returns: Action.
dict: Predicted action and agent information.
Return type: numpy.ndarray
Note: It returns an action and a dict, with keys:
- mean (numpy.ndarray): Mean of the distribution.
- log_std (numpy.ndarray): Log standard deviation of the distribution.
- prev_action (numpy.ndarray): Previous action, only present if self._state_include_action is True.
get_actions(observations)
Get multiple actions from this policy for the input observations.
Parameters: observations (numpy.ndarray) – Observations from environment.
Returns: Actions.
dict: Predicted actions and agent information.
Return type: numpy.ndarray
Note: It returns actions and a dict, with keys:
- mean (numpy.ndarray): Means of the distribution.
- log_std (numpy.ndarray): Log standard deviations of the distribution.
- prev_action (numpy.ndarray): Previous action, only present if self._state_include_action is True.
reset(do_resets=None)
Reset the policy.
Note: If do_resets is None, it defaults to np.array([True]), which implies the policy is not “vectorized”, i.e. the number of parallel environments for training data sampling is 1.
Parameters: do_resets (numpy.ndarray) – Bool that indicates terminal state(s).
state_info_specs
State info specification.
Returns: Keys and shapes for the information related to the policy’s state when taking an action.
Return type: List[str]
vectorized
Vectorized or not.
Returns: True if primitive supports vectorized operations.
Return type: bool
class GaussianLSTMPolicy(env_spec, hidden_dim=32, name='GaussianLSTMPolicy', hidden_nonlinearity=tf.nn.tanh, hidden_w_init=tf.initializers.glorot_uniform(), hidden_b_init=tf.zeros_initializer(), recurrent_nonlinearity=tf.nn.sigmoid, recurrent_w_init=tf.initializers.glorot_uniform(), output_nonlinearity=None, output_w_init=tf.initializers.glorot_uniform(), output_b_init=tf.zeros_initializer(), hidden_state_init=tf.zeros_initializer(), hidden_state_init_trainable=False, cell_state_init=tf.zeros_initializer(), cell_state_init_trainable=False, forget_bias=True, learn_std=True, std_share_network=False, init_std=1.0, layer_normalization=False, state_include_action=True)
Bases: garage.tf.policies.policy.StochasticPolicy
Gaussian LSTM Policy.
A policy represented by a Gaussian distribution which is parameterized by a Long Short-Term Memory (LSTM).
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Model name, also the variable scope.
- hidden_dim (int) – Hidden dimension for LSTM cell for mean.
- hidden_nonlinearity (Callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (Callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (Callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- recurrent_nonlinearity (Callable) – Activation function for recurrent layers. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- recurrent_w_init (Callable) – Initializer function for the weight of recurrent layer(s). The function should return a tf.Tensor.
- output_nonlinearity (Callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (Callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (Callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- hidden_state_init (Callable) – Initializer function for the initial hidden state. The function should return a tf.Tensor.
- hidden_state_init_trainable (bool) – Bool for whether the initial hidden state is trainable.
- cell_state_init (Callable) – Initializer function for the initial cell state. The function should return a tf.Tensor.
- cell_state_init_trainable (bool) – Bool for whether the initial cell state is trainable.
- forget_bias (bool) – If True, add 1 to the bias of the forget gate at initialization. It’s used to reduce the scale of forgetting at the beginning of the training.
- learn_std (bool) – Is std trainable.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- init_std (float) – Initial value for std.
- layer_normalization (bool) – Bool for using layer normalization or not.
- state_include_action (bool) – Whether the state includes action. If True, input dimension will be (observation dimension + action dimension).
build(state_input, name=None)
Build policy.
Parameters: - state_input (tf.Tensor) – State input.
- name (str) – Name of the policy, which is also the name scope.
Returns: Policy distribution.
tf.Tensor: Step means, with shape \((N, S^*)\).
tf.Tensor: Step log std, with shape \((N, S^*)\).
tf.Tensor: Step hidden state, with shape \((N, S^*)\).
tf.Tensor: Step cell state, with shape \((N, S^*)\).
tf.Tensor: Initial hidden state, with shape \((S^*)\).
tf.Tensor: Initial cell state, with shape \((S^*)\).
Return type: tfp.distributions.MultivariateNormalDiag
clone(name)
Return a clone of the policy.
It only copies the configuration of the primitive, not the parameters.
Parameters: name (str) – Name of the newly created policy. It has to be different from the source policy if cloned under the same computational graph.
Returns: Newly cloned policy.
Return type: garage.tf.policies.GaussianLSTMPolicy
distribution
Policy distribution.
Returns: Policy distribution.
Return type: tfp.distributions.MultivariateNormalDiag
get_action(observation)
Get single action from this policy for the input observation.
Parameters: observation (numpy.ndarray) – Observation from environment.
Returns: Action.
dict: Predicted action and agent information.
Return type: numpy.ndarray
Note: It returns an action and a dict, with keys:
- mean (numpy.ndarray): Mean of the distribution.
- log_std (numpy.ndarray): Log standard deviation of the distribution.
- prev_action (numpy.ndarray): Previous action, only present if self._state_include_action is True.
get_actions(observations)
Get multiple actions from this policy for the input observations.
Parameters: observations (numpy.ndarray) – Observations from environment.
Returns: Actions.
dict: Predicted actions and agent information.
Return type: numpy.ndarray
Note: It returns actions and a dict, with keys:
- mean (numpy.ndarray): Means of the distribution.
- log_std (numpy.ndarray): Log standard deviations of the distribution.
- prev_action (numpy.ndarray): Previous action, only present if self._state_include_action is True.
reset(do_resets=None)
Reset the policy.
Note: If do_resets is None, it defaults to np.array([True]), which implies the policy is not “vectorized”, i.e. the number of parallel environments for training data sampling is 1.
Parameters: do_resets (numpy.ndarray) – Bool that indicates terminal state(s).
state_info_specs
State info specification.
Returns: Keys and shapes for the information related to the policy’s state when taking an action.
Return type: List[str]
vectorized
Vectorized or not.
Returns: True if primitive supports vectorized operations.
Return type: bool
class GaussianMLPPolicy(env_spec, name='GaussianMLPPolicy', hidden_sizes=(32, 32), hidden_nonlinearity=tf.nn.tanh, hidden_w_init=tf.initializers.glorot_uniform(), hidden_b_init=tf.zeros_initializer(), output_nonlinearity=None, output_w_init=tf.initializers.glorot_uniform(), output_b_init=tf.zeros_initializer(), learn_std=True, adaptive_std=False, std_share_network=False, init_std=1.0, min_std=1e-06, max_std=None, std_hidden_sizes=(32, 32), std_hidden_nonlinearity=tf.nn.tanh, std_output_nonlinearity=None, std_parameterization='exp', layer_normalization=False)
Bases: garage.tf.policies.policy.StochasticPolicy
Gaussian MLP Policy.
A policy represented by a Gaussian distribution which is parameterized by a multilayer perceptron (MLP).
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- learn_std (bool) – Is std trainable.
- adaptive_std (bool) – Is std a neural network. If False, it will be a parameter.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- init_std (float) – Initial value for std.
- std_hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for std. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- min_std (float) – If not None, the std is at least the value of min_std, to avoid numerical issues.
- max_std (float) – If not None, the std is at most the value of max_std, to avoid numerical issues.
- std_hidden_nonlinearity (callable) – Nonlinearity for each hidden layer in the std network. The function should return a tf.Tensor.
- std_output_nonlinearity (callable) – Nonlinearity for output layer in the std network. The function should return a tf.Tensor.
- std_parameterization (str) – How the std should be parameterized. There are a few options:
  - exp: the logarithm of the std will be stored, and an exponential transformation applied.
  - softplus: the std will be computed as log(1 + exp(x)).
- layer_normalization (bool) – Bool for using layer normalization or not.
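A construction sketch focusing on the std options above; `env` is assumed to be a wrapped continuous-control environment as in the earlier sketches.

    # std-configuration sketch; `env` is assumed defined as in earlier examples.
    from garage.tf.policies import GaussianMLPPolicy

    policy = GaussianMLPPolicy(
        env_spec=env.spec,
        hidden_sizes=(64, 64),
        learn_std=True,               # std is a trainable parameter (adaptive_std=False)
        init_std=1.0,
        min_std=1e-6,                 # lower bound on std, for numerical stability
        std_parameterization='exp',   # store log std; 'softplus' uses log(1 + exp(x))
    )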
build(state_input, name=None)
Build policy.
Parameters: - state_input (tf.Tensor) – State input.
- name (str) – Name of the policy, which is also the name scope.
Returns: Distribution.
tf.Tensor: Mean.
tf.Tensor: Log of standard deviation.
Return type: tfp.distributions.MultivariateNormalDiag
clone(name)
Return a clone of the policy.
It only copies the configuration of the primitive, not the parameters.
Parameters: name (str) – Name of the newly created policy. It has to be different from the source policy if cloned under the same computational graph.
Returns: Newly cloned policy.
Return type: garage.tf.policies.GaussianMLPPolicy
distribution
Policy distribution.
Returns: Policy distribution.
Return type: tfp.distributions.MultivariateNormalDiag
get_action(observation)
Get single action from this policy for the input observation.
Parameters: observation (numpy.ndarray) – Observation from environment.
Returns: Action.
dict: Predicted action and agent information.
Return type: numpy.ndarray
Note: It returns an action and a dict, with keys:
- mean (numpy.ndarray): Mean of the distribution.
- log_std (numpy.ndarray): Log standard deviation of the distribution.
get_actions(observations)
Get multiple actions from this policy for the input observations.
Parameters: observations (numpy.ndarray) – Observations from environment.
Returns: Actions.
dict: Predicted actions and agent information.
Return type: numpy.ndarray
Note: It returns actions and a dict, with keys:
- mean (numpy.ndarray): Means of the distribution.
- log_std (numpy.ndarray): Log standard deviations of the distribution.
vectorized
Vectorized or not.
Returns: True if primitive supports vectorized operations.
Return type: bool
class GaussianMLPTaskEmbeddingPolicy(env_spec, encoder, name='GaussianMLPTaskEmbeddingPolicy', hidden_sizes=(32, 32), hidden_nonlinearity=tf.nn.tanh, hidden_w_init=tf.initializers.glorot_uniform(), hidden_b_init=tf.zeros_initializer(), output_nonlinearity=None, output_w_init=tf.initializers.glorot_uniform(), output_b_init=tf.zeros_initializer(), learn_std=True, adaptive_std=False, std_share_network=False, init_std=1.0, min_std=1e-06, max_std=None, std_hidden_sizes=(32, 32), std_hidden_nonlinearity=tf.nn.tanh, std_output_nonlinearity=None, std_parameterization='exp', layer_normalization=False)
Bases: garage.tf.policies.task_embedding_policy.TaskEmbeddingPolicy
GaussianMLPTaskEmbeddingPolicy.
Parameters: - env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.
- encoder (garage.tf.embeddings.StochasticEncoder) – Embedding network.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- learn_std (bool) – Is std trainable.
- adaptive_std (bool) – Is std a neural network. If False, it will be a parameter.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- init_std (float) – Initial value for std.
- std_hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for std. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- min_std (float) – If not None, the std is at least the value of min_std, to avoid numerical issues.
- max_std (float) – If not None, the std is at most the value of max_std, to avoid numerical issues.
- std_hidden_nonlinearity (callable) – Nonlinearity for each hidden layer in the std network. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- std_output_nonlinearity (callable) – Nonlinearity for output layer in the std network. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- std_parameterization (str) – How the std should be parameterized. There are a few options:
  - exp: the logarithm of the std will be stored, and an exponential transformation applied.
  - softplus: the std will be computed as log(1 + exp(x)).
- layer_normalization (bool) – Bool for using layer normalization or not.
build(obs_input, task_input, name=None)
Build policy.
Parameters: - obs_input (tf.Tensor) – Observation input.
- task_input (tf.Tensor) – One-hot task id input.
- name (str) – Name of the model, which is also the name scope.
Returns: Policy network.
namedtuple: Encoder network.
Return type: namedtuple
clone(name)
Return a clone of the policy.
Parameters: name (str) – Name of the newly created policy. It has to be different from the source policy if cloned under the same computational graph.
Returns: Cloned policy.
Return type: garage.tf.policies.GaussianMLPTaskEmbeddingPolicy
distribution
Policy action distribution.
Returns: Policy distribution.
Return type: tfp.distributions.MultivariateNormalDiag
get_action(observation)
Get action sampled from the policy.
Parameters: observation (np.ndarray) – Augmented observation from the environment, with shape \((O+N, )\). O is the dimension of observation, N is the number of tasks.
Returns: Action sampled from the policy, with shape \((A, )\). A is the dimension of action.
dict: Action distribution information, with keys:
- mean (numpy.ndarray): Mean of the distribution, with shape \((A, )\).
- log_std (numpy.ndarray): Log standard deviation of the distribution, with shape \((A, )\).
Return type: np.ndarray
get_action_given_latent(observation, latent)
Sample an action given observation and latent.
Parameters: - observation (np.ndarray) – Observation from the environment, with shape \((O, )\). O is the dimension of observation.
- latent (np.ndarray) – Latent, with shape \((Z, )\). Z is the dimension of the latent embedding.
Returns: Action sampled from the policy, with shape \((A, )\). A is the dimension of action.
dict: Action distribution information, with keys:
- mean (numpy.ndarray): Mean of the distribution, with shape \((A, )\).
- log_std (numpy.ndarray): Log standard deviation of the distribution, with shape \((A, )\).
Return type: np.ndarray
get_action_given_task(observation, task_id)
Sample an action given observation and task id.
Parameters: - observation (np.ndarray) – Observation from the environment, with shape \((O, )\). O is the dimension of the observation.
- task_id (np.ndarray) – One-hot task id, with shape \((N, )\). N is the number of tasks.
Returns: Action sampled from the policy, with shape \((A, )\). A is the dimension of action.
dict: Action distribution information, with keys:
- mean (numpy.ndarray): Mean of the distribution, with shape \((A, )\).
- log_std (numpy.ndarray): Log standard deviation of the distribution, with shape \((A, )\).
Return type: np.ndarray
get_actions(observations)
Get actions sampled from the policy.
Parameters: observations (np.ndarray) – Augmented observations from the environment, with shape \((T, O+N)\). T is the number of environment steps, O is the dimension of observation, N is the number of tasks.
Returns: Actions sampled from the policy, with shape \((T, A)\). T is the number of environment steps, A is the dimension of action.
dict: Action distribution information, with keys:
- mean (numpy.ndarray): Means of the distribution, with shape \((T, A)\).
- log_std (numpy.ndarray): Log standard deviations of the distribution, with shape \((T, A)\).
Return type: np.ndarray
get_actions_given_latents(observations, latents)
Sample a batch of actions given observations and latents.
Parameters: - observations (np.ndarray) – Observations from the environment, with shape \((T, O)\). T is the number of environment steps, O is the dimension of observation.
- latents (np.ndarray) – Latents, with shape \((T, Z)\). T is the number of environment steps, Z is the dimension of latent embedding.
Returns: Actions sampled from the policy, with shape \((T, A)\). T is the number of environment steps, A is the dimension of action.
dict: Action distribution information, with keys:
- mean (numpy.ndarray): Means of the distribution, with shape \((T, A)\).
- log_std (numpy.ndarray): Log standard deviations of the distribution, with shape \((T, A)\).
Return type: np.ndarray
get_actions_given_tasks(observations, task_ids)
Sample a batch of actions given observations and task ids.
Parameters: - observations (np.ndarray) – Observations from the environment, with shape \((T, O)\). T is the number of environment steps, O is the dimension of observation.
- task_ids (np.ndarray) – One-hot task ids, with shape \((T, N)\). T is the number of environment steps, N is the number of tasks.
Returns: Actions sampled from the policy, with shape \((T, A)\). T is the number of environment steps, A is the dimension of action.
dict: Action distribution information, with keys:
- mean (numpy.ndarray): Means of the distribution, with shape \((T, A)\).
- log_std (numpy.ndarray): Log standard deviations of the distribution, with shape \((T, A)\).
Return type: np.ndarray
class TaskEmbeddingPolicy(name, env_spec, encoder)
Bases: garage.tf.policies.policy.StochasticPolicy
Base class for Task Embedding policies in TensorFlow.
This policy needs a task id in addition to the observation to sample an action.
Parameters: - name (str) – Policy name, also the variable scope.
- env_spec (garage.envs.EnvSpec) – Environment specification.
- encoder (garage.tf.embeddings.StochasticEncoder) – An encoder that embeds a task id into a latent.
augmented_observation_space
Concatenated observation space and one-hot task id.
Type: akro.Box
encoder
Encoder.
Type: garage.tf.embeddings.encoder.Encoder
encoder_distribution
Encoder distribution.
Type: garage.tf.distributions.DiagonalGaussian
get_action(observation)
Get action sampled from the policy.
Parameters: observation (np.ndarray) – Augmented observation from the environment, with shape \((O+N, )\). O is the dimension of observation, N is the number of tasks.
Returns: Action sampled from the policy, with shape \((A, )\). A is the dimension of action.
dict: Action distribution information.
Return type: np.ndarray
get_action_given_latent(observation, latent)
Sample an action given observation and latent.
Parameters: - observation (np.ndarray) – Observation from the environment, with shape \((O, )\). O is the dimension of observation.
- latent (np.ndarray) – Latent, with shape \((Z, )\). Z is the dimension of latent embedding.
Returns: Action sampled from the policy, with shape \((A, )\). A is the dimension of action.
dict: Action distribution information.
Return type: np.ndarray
get_action_given_task(observation, task_id)
Sample an action given observation and task id.
Parameters: - observation (np.ndarray) – Observation from the environment, with shape \((O, )\). O is the dimension of the observation.
- task_id (np.ndarray) – One-hot task id, with shape \((N, )\). N is the number of tasks.
Returns: Action sampled from the policy, with shape \((A, )\). A is the dimension of action.
dict: Action distribution information.
Return type: np.ndarray
get_actions(observations)
Get actions sampled from the policy.
Parameters: observations (np.ndarray) – Augmented observations from the environment, with shape \((T, O+N)\). T is the number of environment steps, O is the dimension of observation, N is the number of tasks.
Returns: Actions sampled from the policy, with shape \((T, A)\). T is the number of environment steps, A is the dimension of action.
dict: Action distribution information.
Return type: np.ndarray
get_actions_given_latents(observations, latents)
Sample a batch of actions given observations and latents.
Parameters: - observations (np.ndarray) – Observations from the environment, with shape \((T, O)\). T is the number of environment steps, O is the dimension of observation.
- latents (np.ndarray) – Latents, with shape \((T, Z)\). T is the number of environment steps, Z is the dimension of latent embedding.
Returns: Actions sampled from the policy, with shape \((T, A)\). T is the number of environment steps, A is the dimension of action.
dict: Action distribution information.
Return type: np.ndarray
get_actions_given_tasks(observations, task_ids)
Sample a batch of actions given observations and task ids.
Parameters: - observations (np.ndarray) – Observations from the environment, with shape \((T, O)\). T is the number of environment steps, O is the dimension of observation.
- task_ids (np.ndarray) – One-hot task ids, with shape \((T, N)\). T is the number of environment steps, N is the number of tasks.
Returns: Actions sampled from the policy, with shape \((T, A)\). T is the number of environment steps, A is the dimension of action.
dict: Action distribution information.
Return type: np.ndarray
get_global_vars()
Get global variables.
The global vars of a multitask policy should be the global vars of its model and the trainable vars of its embedding model.
Returns: A list of global variables in the current variable scope.
Return type: List[tf.Variable]
get_latent(task_id)
Get embedded task id in latent space.
Parameters: task_id (np.ndarray) – One-hot task id, with shape \((N, )\). N is the number of tasks.
Returns: An embedding sampled from the embedding distribution, with shape \((Z, )\). Z is the dimension of the latent embedding.
dict: Embedding distribution information.
Return type: np.ndarray
get_trainable_vars()
Get trainable variables.
The trainable vars of a multitask policy should be the trainable vars of its model and the trainable vars of its embedding model.
Returns: A list of trainable variables in the current variable scope.
Return type: List[tf.Variable]
latent_space
Space of latent.
Type: akro.Box
split_augmented_observation(collated)
Splits up an augmented observation into the one-hot task and the environment observation.
Parameters: collated (np.ndarray) – Environment observation concatenated with task one-hot, with shape \((O+N, )\). O is the dimension of observation, N is the number of tasks.
Returns: Vanilla environment observation, with shape \((O, )\). O is the dimension of observation.
np.ndarray: Task one-hot, with shape \((N, )\). N is the number of tasks.
Return type: np.ndarray
task_space
One-hot space of task id.
Type: akro.Box
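A pure-numpy illustration of the augmented-observation layout that split_augmented_observation undoes: an environment observation of dimension O concatenated with a one-hot task id of dimension N. The dimensions here are made up.

    # Layout illustration only; dimensions are hypothetical.
    import numpy as np

    O, N = 4, 3                                         # obs dimension, number of tasks
    env_obs = np.random.uniform(size=O)
    one_hot_task = np.eye(N)[1]                         # task id 1 of 3

    collated = np.concatenate([env_obs, one_hot_task])  # augmented observation, shape (O+N,)
    obs_part, task_part = collated[:O], collated[O:]    # what split_augmented_observation recovers
    assert np.array_equal(task_part, one_hot_task)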
Submodules
- garage.tf.policies.categorical_cnn_policy module
- garage.tf.policies.categorical_gru_policy module
- garage.tf.policies.categorical_lstm_policy module
- garage.tf.policies.categorical_mlp_policy module
- garage.tf.policies.continuous_mlp_policy module
- garage.tf.policies.discrete_qf_derived_policy module
- garage.tf.policies.gaussian_gru_policy module
- garage.tf.policies.gaussian_lstm_policy module
- garage.tf.policies.gaussian_mlp_policy module
- garage.tf.policies.gaussian_mlp_task_embedding_policy module
- garage.tf.policies.policy module
- garage.tf.policies.task_embedding_policy module
- garage.tf.policies.uniform_control_policy module