garage.tf.policies.task_embedding_policy module

Policy class for Task Embedding environments.

class TaskEmbeddingPolicy(name, env_spec, encoder)[source]

Bases: garage.tf.policies.policy.StochasticPolicy

Base class for Task Embedding policies in TensorFlow.

This policy needs a task id in addition to observation to sample an action.

Parameters:
  • name (str) – Policy name, also the variable scope.
  • env_spec (garage.envs.EnvSpec) – Environment specification.
  • encoder (garage.tf.embeddings.encoder.Encoder) – Encoder that embeds a task id into a latent.

augmented_observation_space

Concatenated observation space and one-hot task id.

Type: akro.Box
encoder

Encoder that embeds a task id into a latent.

Type: garage.tf.embeddings.encoder.Encoder
encoder_distribution

Encoder distribution.

Type: garage.tf.distributions.DiagonalGaussian
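
The augmented observation is just the environment observation concatenated with the one-hot task id. A minimal, self-contained NumPy sketch of that layout, with arbitrary dimensions O and N chosen for illustration:

    import numpy as np

    O, N = 4, 3                         # observation dimension, number of tasks
    obs = np.random.uniform(size=(O,))  # vanilla environment observation
    task_id = np.eye(N)[1]              # one-hot id for task 1

    # Augmented observation: observation followed by one-hot task id,
    # giving shape (O+N, ), matching augmented_observation_space.
    aug_obs = np.concatenate([obs, task_id])
    assert aug_obs.shape == (O + N,)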
get_action(observation)[source]

Get action sampled from the policy.

Parameters: observation (np.ndarray) – Augmented observation from the environment, with shape \((O+N, )\). O is the dimension of observation, N is the number of tasks.
Returns:
Action sampled from the policy, with shape \((A, )\). A is the dimension of action.

dict: Action distribution information.

Return type: np.ndarray
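
A usage sketch, assuming policy is an already-constructed instance of a concrete subclass (e.g. garage.tf.policies.GaussianMLPTaskEmbeddingPolicy) and aug_obs is an augmented observation of shape \((O+N, )\) as built above:

    # `policy` is assumed to be a constructed TaskEmbeddingPolicy subclass.
    action, agent_info = policy.get_action(aug_obs)
    # action is an np.ndarray of shape (A, ); agent_info is a dict of
    # action distribution information (e.g. distribution parameters).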
get_action_given_latent(observation, latent)[source]

Sample an action given observation and latent.

Parameters:
  • observation (np.ndarray) – Observation from the environment, with shape \((O, )\). O is the dimension of observation.
  • latent (np.ndarray) – Latent, with shape \((Z, )\). Z is the dimension of latent embedding.
Returns:
Action sampled from the policy, with shape \((A, )\). A is the dimension of action.

dict: Action distribution information.

Return type: np.ndarray

get_action_given_task(observation, task_id)[source]

Sample an action given observation and task id.

Parameters:
  • observation (np.ndarray) – Observation from the environment, with shape \((O, )\). O is the dimension of the observation.
  • task_id (np.ndarray) – One-hot task id, with shape \((N, )\). N is the number of tasks.
Returns:
Action sampled from the policy, with shape \((A, )\). A is the dimension of action.

dict: Action distribution information.

Return type: np.ndarray
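
The two conditioned variants either bypass the encoder (given a latent) or drive it with an explicit task id. A sketch under the same assumption of a constructed policy, where obs, latent, and task_id are arrays of the shapes noted in the comments:

    # Condition on an explicit latent vector, bypassing the encoder.
    action, agent_info = policy.get_action_given_latent(obs, latent)  # obs: (O, ), latent: (Z, )

    # Condition on a one-hot task id; the policy embeds it internally.
    action, agent_info = policy.get_action_given_task(obs, task_id)   # task_id: (N, )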

get_actions(observations)[source]

Get actions sampled from the policy.

Parameters: observations (np.ndarray) – Augmented observations from the environment, with shape \((T, O+N)\). T is the number of environment steps, O is the dimension of observation, N is the number of tasks.
Returns:
Actions sampled from the policy, with shape \((T, A)\). T is the number of environment steps, A is the dimension of action.

dict: Action distribution information.

Return type: np.ndarray
get_actions_given_latents(observations, latents)[source]

Sample a batch of actions given observations and latents.

Parameters:
  • observations (np.ndarray) – Observations from the environment, with shape \((T, O)\). T is the number of environment steps, O is the dimension of observation.
  • latents (np.ndarray) – Latents, with shape \((T, Z)\). T is the number of environment steps, Z is the dimension of latent embedding.
Returns:
Actions sampled from the policy, with shape \((T, A)\). T is the number of environment steps, A is the dimension of action.

dict: Action distribution information.

Return type: np.ndarray

get_actions_given_tasks(observations, task_ids)[source]

Sample a batch of actions given observations and task ids.

Parameters:
  • observations (np.ndarray) – Observations from the environment, with shape \((T, O)\). T is the number of environment steps, O is the dimension of observation.
  • task_ids (np.ndarray) – One-hot task ids, with shape \((T, N)\). T is the number of environment steps, N is the number of tasks.
Returns:
Actions sampled from the policy, with shape \((T, A)\). T is the number of environment steps, A is the dimension of action.

dict: Action distribution information.

Return type: np.ndarray
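
The batched variants follow the same conventions with a leading dimension T for environment steps. A sketch, again assuming a constructed policy and inputs of the shapes noted in the comments:

    actions, agent_infos = policy.get_actions(aug_observations)                    # (T, O+N) -> (T, A)
    actions, agent_infos = policy.get_actions_given_latents(observations, latents) # (T, O), (T, Z)
    actions, agent_infos = policy.get_actions_given_tasks(observations, task_ids)  # (T, O), (T, N)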

get_global_vars()[source]

Get global variables.

The global vars of a multitask policy should be the global vars of its model and the trainable vars of its embedding model.

Returns: A list of global variables in the current variable scope.
Return type: List[tf.Variable]
get_latent(task_id)[source]

Get embedded task id in latent space.

Parameters: task_id (np.ndarray) – One-hot task id, with shape \((N, )\). N is the number of tasks.
Returns:
An embedding sampled from the embedding distribution, with shape \((Z, )\). Z is the dimension of the latent embedding.

dict: Embedding distribution information.

Return type: np.ndarray
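
get_latent() pairs naturally with get_action_given_latent(): embed the task once, then reuse the latent across steps. A sketch assuming a constructed policy:

    # Sample a latent embedding for the task, then act conditioned on it.
    latent, latent_info = policy.get_latent(task_id)   # task_id: (N, ) -> latent: (Z, )
    action, agent_info = policy.get_action_given_latent(obs, latent)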
get_trainable_vars()[source]

Get trainable variables.

The trainable vars of a multitask policy should be the trainable vars of its model and the trainable vars of its embedding model.

Returns: A list of trainable variables in the current variable scope.
Return type: List[tf.Variable]
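
Since garage.tf targets TensorFlow 1.x graphs, these variable lists are typically handed to an optimizer. A hypothetical sketch where loss is a scalar policy loss tensor built elsewhere:

    import tensorflow as tf

    # `loss` and `policy` are assumed to exist; this only shows the wiring.
    optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3)
    train_op = optimizer.minimize(loss, var_list=policy.get_trainable_vars())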
latent_space

Space of the latent embedding.

Type: akro.Box
split_augmented_observation(collated)[source]

Splits an augmented observation into the environment observation and the one-hot task id.

Parameters: collated (np.ndarray) – Environment observation concatenated with task one-hot, with shape \((O+N, )\). O is the dimension of observation, N is the number of tasks.
Returns:
Vanilla environment observation, with shape \((O, )\). O is the dimension of observation.

np.ndarray: Task one-hot, with shape \((N, )\). N is the number of tasks.

Return type: np.ndarray
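
split_augmented_observation() is the inverse of the concatenation shown earlier; a self-contained NumPy sketch of the same split by slicing:

    import numpy as np

    O, N = 4, 3
    aug_obs = np.arange(O + N, dtype=float)  # stand-in augmented observation

    # Observation comes first, one-hot task id last, per the layout above.
    obs, task_id = aug_obs[:O], aug_obs[O:]
    assert obs.shape == (O,) and task_id.shape == (N,)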
task_space

One-hot space of task id.

Type: akro.Box