garage.envs.mujoco.half_cheetah_vel_env module

Variant of the HalfCheetahEnv with different target velocity.

class HalfCheetahVelEnv(task=None)

Bases: garage.envs.mujoco.half_cheetah_env_meta_base.HalfCheetahEnvMetaBase

Half-cheetah environment with target velocity, as described in [1].

The code is adapted from https://github.com/cbfinn/maml_rl/blob/9c8e2ebd741cb0c7b8bf2d040c4caeeb8e06cc95/rllab/envs/mujoco/half_cheetah_env_rand.py

The half-cheetah follows the dynamics from MuJoCo [2], and receives at each time step a reward composed of a control cost and a penalty equal to the absolute difference between its current velocity and the target velocity. The tasks are generated by sampling the target velocities from the uniform distribution on [0, 2].

[1] Chelsea Finn, Pieter Abbeel, Sergey Levine, “Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks”, 2017 (https://arxiv.org/abs/1703.03400)
[2] Emanuel Todorov, Tom Erez, Yuval Tassa, “MuJoCo: A physics engine for model-based control”, 2012 (https://homes.cs.washington.edu/~todorov/papers/TodorovIROS12.pdf)
Parameters:
    task (dict or None) – A task dictionary containing a single key, “velocity” (float): the target velocity, usually between 0 and 2.
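
A minimal construction sketch (assumes the standard gym-style reset interface that garage’s MuJoCo environments inherit; the task dictionary format is as documented above):

    from garage.envs.mujoco.half_cheetah_vel_env import HalfCheetahVelEnv

    # Construct the environment with a fixed target velocity of 1.0.
    env = HalfCheetahVelEnv(task={'velocity': 1.0})
    observation = env.reset()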
sample_tasks(num_tasks)

Sample a list of num_tasks tasks.

Parameters:
    num_tasks (int) – Number of tasks to sample.
Returns:
    A list of num_tasks tasks, where each task is a dictionary containing a single key, “velocity”, mapping to a value between 0 and 2.
Return type:
    list[dict[str, float]]
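
For example (a sketch; per the class description, the sampled velocities are drawn uniformly from [0, 2]):

    env = HalfCheetahVelEnv()
    tasks = env.sample_tasks(num_tasks=5)
    # tasks might look like:
    # [{'velocity': 0.73}, {'velocity': 1.41}, {'velocity': 0.02}, ...]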
set_task(task)

Reset with a task.

Parameters:
    task (dict[str, float]) – A task, i.e. a dictionary containing a single key, “velocity”, usually mapping to a value between 0 and 2.
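
In a meta-RL outer loop, set_task is typically paired with sample_tasks, as in this sketch (the per-task adaptation step is a placeholder):

    tasks = env.sample_tasks(num_tasks=10)
    for task in tasks:
        # After this call, the reward penalizes deviation from
        # task['velocity'].
        env.set_task(task)
        # ... collect rollouts and adapt the policy on this task ...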
step(action)

Take one step in the environment.

Equivalent to step in HalfCheetahEnv, but with different rewards.

Parameters:
    action (np.ndarray) – The action to take in the environment.
Returns:
  • observation (np.ndarray): The observation of the environment.
  • reward (float): The reward acquired at this time step.
  • done (bool): Whether the episode has ended at this time step.
    Always False for this environment.
  • infos (dict):
    • reward_forward (float): The reward for matching the target
      velocity, ignoring the control cost.
    • reward_ctrl (float): The reward for acting, i.e. the control
      cost (always negative).
    • task_vel (float): Target velocity, usually between 0 and 2.
Return type:
    tuple
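
The returned tuple unpacks as in the sketch below (a random policy via the inherited gym-style action_space is assumed for illustration; given the decomposition documented above, reward should equal reward_forward + reward_ctrl):

    import numpy as np

    env = HalfCheetahVelEnv(task={'velocity': 1.0})
    observation = env.reset()
    for _ in range(100):
        action = env.action_space.sample()
        observation, reward, done, infos = env.step(action)
        # The total reward is the sum of the two terms reported in infos
        # (an assumption based on the decomposition documented above).
        assert np.isclose(
            reward, infos['reward_forward'] + infos['reward_ctrl'])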