garage.experiment.meta_evaluator

Evaluator which tests Meta-RL algorithms on test environments.

class MetaEvaluator(*, test_task_sampler, n_exploration_eps=10, n_test_tasks=None, n_test_episodes=1, prefix='MetaTest', test_task_names=None, worker_class=DefaultWorker, worker_args=None)

Evaluates Meta-RL algorithms on test environments. A construction sketch follows the parameter list below.

Parameters
  • test_task_sampler (TaskSampler) – Sampler for test tasks. To demonstrate the effectiveness of a meta-learning method, these should be different from the training tasks.

  • n_exploration_eps (int) – Number of episodes to gather from the exploration policy before requesting the meta algorithm to produce an adapted policy.

  • n_test_tasks (int or None) – Number of test tasks to sample each time evaluation is performed. Note that tasks are sampled “without replacement”. If None, this is set to test_task_sampler.n_tasks.

  • n_test_episodes (int) – Number of episodes to use for each adapted policy. The adapted policy should forget previous episodes when .reset() is called.

  • prefix (str) – Prefix to use when logging. Defaults to MetaTest. For example, this results in logging the key 'MetaTest/SuccessRate'. If this evaluator is run on training tasks rather than test tasks, it should instead be set to MetaTrain.

  • test_task_names (list[str]) – List of task names to test. If the environment reports a task_id in env_info, these names should be ordered consistently with those ids.

  • worker_class (type) – Type of worker the Sampler should use.

  • worker_args (dict or None) – Additional arguments that should be passed to the worker.
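
A minimal construction sketch, assuming a held-out HalfCheetahVelEnv task distribution; the environment, episode length, and sampler settings here are illustrative, and any TaskSampler over tasks held out from training works the same way.

    from garage.envs import GymEnv, normalize
    from garage.envs.mujoco import HalfCheetahVelEnv
    from garage.experiment import MetaEvaluator
    from garage.experiment.task_sampler import SetTaskSampler

    # Sampler over held-out tasks. HalfCheetahVelEnv is only an example;
    # the test tasks should differ from the training tasks.
    test_task_sampler = SetTaskSampler(
        HalfCheetahVelEnv,
        wrapper=lambda env, _: normalize(GymEnv(env, max_episode_length=200)))

    meta_evaluator = MetaEvaluator(
        test_task_sampler=test_task_sampler,
        n_exploration_eps=10,  # exploration episodes gathered per test task
        n_test_tasks=5,        # tasks sampled (without replacement) per call
        n_test_episodes=1)     # episodes per adapted policy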

evaluate(algo, test_episodes_per_task=None)

Evaluate the Meta-RL algorithm on the test tasks.

Parameters
  • algo (MetaRLAlgorithm) – The algorithm to evaluate.

  • test_episodes_per_task (int or None) – Number of episodes to gather per adapted policy. If None, defaults to n_test_episodes (see the sketch below).
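
A minimal sketch of calling evaluate from a meta-RL training loop. The trainer argument, the _meta_evaluator attribute, and the 10-epoch interval are assumptions for illustration; the algo passed in (here self) must implement the MetaRLAlgorithm interface (get_exploration_policy and adapt_policy).

    # Sketch: periodic meta-test evaluation inside a MetaRLAlgorithm's
    # train loop. `trainer` is a garage Trainer; `self._meta_evaluator`
    # is an assumed attribute holding the evaluator built above.
    def train(self, trainer):
        for epoch in trainer.step_epochs():
            # ... gather training episodes and perform the meta-update ...

            # Every 10 epochs (an arbitrary interval), sample test tasks,
            # gather n_exploration_eps episodes with
            # get_exploration_policy(), adapt via adapt_policy(), and log
            # the adapted policy's performance under the 'MetaTest' prefix.
            if epoch % 10 == 0:
                self._meta_evaluator.evaluate(self, test_episodes_per_task=1)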