garage.torch.algos.trpo module

Trust Region Policy Optimization.

class TRPO(env_spec, policy, value_function, policy_optimizer=None, vf_optimizer=None, max_path_length=100, num_train_per_epoch=1, discount=0.99, gae_lambda=0.98, center_adv=True, positive_adv=False, policy_ent_coeff=0.0, use_softplus_entropy=False, stop_entropy_gradient=False, entropy_method='no_entropy')

Bases: garage.torch.algos.vpg.VPG

Trust Region Policy Optimization (TRPO).

Parameters:
  • env_spec (garage.envs.EnvSpec) – Environment specification.
  • policy (garage.torch.policies.Policy) – Policy.
  • value_function (garage.torch.value_functions.ValueFunction) – The value function.
  • policy_optimizer (garage.torch.optimizer.OptimizerWrapper) – Optimizer for policy.
  • vf_optimizer (garage.torch.optimizer.OptimizerWrapper) – Optimizer for value function.
  • max_path_length (int) – Maximum length of a single rollout.
  • num_train_per_epoch (int) – Number of train_once calls per epoch.
  • discount (float) – Discount factor applied to future rewards.
  • gae_lambda (float) – Lambda used for generalized advantage estimation.
  • center_adv (bool) – Whether to rescale the advantages so that they have mean 0 and standard deviation 1.
  • positive_adv (bool) – Whether to shift the advantages so that they are always positive. When used in conjunction with center_adv, the advantages will be standardized before shifting.
  • policy_ent_coeff (float) – The coefficient of the policy entropy. Setting it to zero would mean no entropy regularization.
  • use_softplus_entropy (bool) – Whether to pass the estimated entropy through a softplus function to prevent the entropy from being negative.
  • stop_entropy_gradient (bool) – Whether to stop the entropy gradient.
  • entropy_method (str) – One of ‘max’, ‘regularized’, or ‘no_entropy’, selecting the type of entropy method to use. ‘max’ adds the dense entropy to the reward at each time step, ‘regularized’ adds the mean entropy to the surrogate objective, and ‘no_entropy’ applies no entropy regularization. See https://arxiv.org/abs/1805.00909 for more details.
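
Example (a minimal sketch, not part of the generated reference): the snippet below assumes a garage release whose TRPO constructor matches the signature above (i.e. one that still accepts max_path_length directly). GarageEnv, LocalRunner, GaussianMLPPolicy, and GaussianMLPValueFunction are the components typically used in garage’s PyTorch examples; their names and locations may differ in other releases.

    import torch

    from garage import wrap_experiment
    from garage.envs import GarageEnv
    from garage.experiment import LocalRunner
    from garage.experiment.deterministic import set_seed
    from garage.torch.algos import TRPO
    from garage.torch.policies import GaussianMLPPolicy
    from garage.torch.value_functions import GaussianMLPValueFunction


    @wrap_experiment
    def trpo_pendulum(ctxt=None, seed=1):
        """Train TRPO on InvertedDoublePendulum-v2 (sketch based on garage examples)."""
        set_seed(seed)
        env = GarageEnv(env_name='InvertedDoublePendulum-v2')
        runner = LocalRunner(ctxt)

        # Tanh-MLP policy and value function, as in the garage PyTorch examples.
        policy = GaussianMLPPolicy(env.spec,
                                   hidden_sizes=[32, 32],
                                   hidden_nonlinearity=torch.tanh,
                                   output_nonlinearity=None)
        value_function = GaussianMLPValueFunction(env_spec=env.spec,
                                                  hidden_sizes=(32, 32),
                                                  hidden_nonlinearity=torch.tanh,
                                                  output_nonlinearity=None)

        # Constructor arguments follow the signature documented above;
        # policy_optimizer and vf_optimizer fall back to their defaults.
        algo = TRPO(env_spec=env.spec,
                    policy=policy,
                    value_function=value_function,
                    max_path_length=100,
                    discount=0.99,
                    gae_lambda=0.98,
                    center_adv=True)

        runner.setup(algo, env)
        runner.train(n_epochs=100, batch_size=1024)


    trpo_pendulum(seed=1)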