garage.torch.optimizers.optimizer_wrapper
A PyTorch optimizer wrapper that computes the loss and optimizes the module.
class OptimizerWrapper(optimizer, module, max_optimization_epochs=1, minibatch_size=None)

A wrapper class to handle torch.optim.Optimizer.
- Parameters
optimizer (Union[type, tuple[type, dict]]) – Type of optimizer for the policy. This can be an optimizer type such as torch.optim.Adam, or a tuple of a type and a dictionary, where the dictionary contains arguments used to initialize the optimizer, e.g. (torch.optim.Adam, {'lr': 1e-3}).
module (torch.nn.Module) – Module to be optimized.
max_optimization_epochs (int) – Maximum number of epochs for update.
minibatch_size (int) – Batch size for optimization.
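A minimal construction sketch based on the signature and parameter descriptions above. The policy module is a hypothetical stand-in, and the hyperparameter values are illustrative, not defaults:

    import torch

    from garage.torch.optimizers.optimizer_wrapper import OptimizerWrapper

    # Hypothetical small network standing in for a real garage policy module.
    policy = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Tanh(),
                                 torch.nn.Linear(8, 2))

    # Optimizer passed as a (type, dict) tuple, matching the description above.
    wrapper = OptimizerWrapper((torch.optim.Adam, {'lr': 1e-3}),
                               policy,
                               max_optimization_epochs=10,
                               minibatch_size=64)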
get_minibatch(self, *inputs)

Yields a batch of inputs.

Note: P is the minibatch size (self._minibatch_size).
- Parameters
*inputs (list[torch.Tensor]) – A list of inputs. Each input has shape \((N \cdot [T], *)\).
- Yields
list[torch.Tensor] – A list of minibatched inputs. Each has shape \((P, *)\).
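A short sketch of iterating over minibatches, continuing the hypothetical wrapper and policy above. The input tensors are random stand-ins whose first dimension plays the role of \(N \cdot [T]\):

    # Two flattened inputs with matching first dimension (N * [T] = 128 here).
    observations = torch.randn(128, 4)
    returns = torch.randn(128, 2)

    for obs_mb, returns_mb in wrapper.get_minibatch(observations, returns):
        # Each yielded tensor has shape (P, *), with P the minibatch size.
        print(obs_mb.shape, returns_mb.shape)

Note that get_minibatch iterates over the data for up to max_optimization_epochs passes, so the loop body may see each sample more than once.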
zero_grad(self)

Clears the gradients of all optimized torch.Tensors.
step(self, **closure)

Performs a single optimization step.
- Parameters
**closure (callable, optional) – A closure that reevaluates the model and returns the loss.
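Putting the pieces together, a hedged sketch of a full update loop in the usual torch.optim style, again using the hypothetical policy, wrapper, and tensors from the sketches above; the mean-squared-error loss is purely illustrative:

    for obs_mb, returns_mb in wrapper.get_minibatch(observations, returns):
        wrapper.zero_grad()                                  # clear stale gradients
        loss = ((policy(obs_mb) - returns_mb) ** 2).mean()   # illustrative loss
        loss.backward()                                      # accumulate fresh gradients
        wrapper.step()                                       # apply one optimizer update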