garage.tf.optimizers.first_order_optimizer
¶
First-order optimizer.
- class FirstOrderOptimizer(optimizer=None, learning_rate=None, max_optimization_epochs=1000, tolerance=1e-06, batch_size=32, callback=None, verbose=False, name='FirstOrderOptimizer')¶
First-order optimizer.
Performs (stochastic) gradient descent, possibly with adaptive variants such as Adam.
- Parameters
optimizer (tf.Optimizer) – Optimizer to be used.
learning_rate (dict) – Keyword arguments for the learning rate. The learning rate is usually the main hyperparameter of interest when tuning an optimizer.
max_optimization_epochs (int) – Maximum number of epochs for update.
tolerance (float) – Tolerance for difference in loss during update.
batch_size (int) – Batch size for optimization.
callback (callable) – Function to call during each epoch. Default is None.
verbose (bool) – If True, intermediate log messages are printed.
name (str) – Name scope of the optimizer.
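A minimal construction sketch. It assumes (as the defaults above suggest) that optimizer is a tf.compat.v1 optimizer class rather than an instance, and that the learning_rate dict holds the keyword arguments used to instantiate it; the remaining arguments come straight from the signature above.

import tensorflow as tf

from garage.tf.optimizers.first_order_optimizer import FirstOrderOptimizer

# Sketch: an Adam-backed first-order optimizer. The learning_rate dict is
# assumed to be forwarded as keyword arguments to the optimizer class.
optimizer = FirstOrderOptimizer(
    optimizer=tf.compat.v1.train.AdamOptimizer,
    learning_rate=dict(learning_rate=1e-3),
    max_optimization_epochs=100,
    tolerance=1e-6,
    batch_size=32,
    verbose=True,
)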
- update_opt(loss, target, inputs, extra_inputs=None, **kwargs)¶
Construct operation graph for the optimizer.
- Parameters
loss (tf.Tensor) – Loss objective to minimize.
target (object) – Target object to optimize. The object should implement get_params() and get_param_values().
inputs (list[tf.Tensor]) – List of input placeholders.
extra_inputs (list[tf.Tensor]) – List of extra input placeholders.
kwargs (dict) – Extra unused keyword arguments. Some optimizers have extra inputs, e.g. a KL constraint.
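A sketch of wiring up the operation graph. LinearModel is a hypothetical stand-in for any target that implements get_params() and get_param_values(); TF1-style graph mode is assumed since inputs are placeholders.

import tensorflow as tf

from garage.tf.optimizers.first_order_optimizer import FirstOrderOptimizer

tf.compat.v1.disable_eager_execution()  # placeholders require graph mode


class LinearModel:
    """Hypothetical target: exposes get_params() and get_param_values()."""

    def __init__(self):
        self._w = tf.compat.v1.get_variable(
            'w', shape=(), initializer=tf.zeros_initializer())

    def get_params(self):
        # Trainable variables the optimizer should update.
        return [self._w]

    def get_param_values(self):
        # Current parameter values, read from the default session.
        return tf.compat.v1.get_default_session().run(self.get_params())


x = tf.compat.v1.placeholder(tf.float32, shape=(None,), name='x')
y = tf.compat.v1.placeholder(tf.float32, shape=(None,), name='y')

model = LinearModel()
loss = tf.reduce_mean(tf.square(model.get_params()[0] * x - y))

optimizer = FirstOrderOptimizer(optimizer=tf.compat.v1.train.AdamOptimizer,
                                learning_rate=dict(learning_rate=1e-2))
optimizer.update_opt(loss=loss, target=model, inputs=[x, y])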
- loss(inputs, extra_inputs=None)¶
Return the current value of the loss for the given inputs.
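Continuing the sketch above, and assuming an active default session and hypothetical NumPy arrays xs and ys matching the two placeholders:

current_loss = optimizer.loss([xs, ys])  # scalar loss under current parameters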
- optimize(inputs, extra_inputs=None, callback=None)¶
Perform optimization.
- Parameters
inputs (list) – Input values for the placeholders passed to update_opt().
extra_inputs (list) – Extra input values. Default is None.
callback (callable) – Function to call during each epoch. Default is None.
- Raises
NotImplementedError – If inputs are invalid.
Exception – If the loss function is None, i.e., it has not been defined.
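An end-to-end run, continuing the same sketch: optimize inside a default session and compare the loss before and after. The callback argument is omitted here because its exact call signature is not documented above; the training data are hypothetical.

import numpy as np

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())

    xs = np.random.randn(256).astype(np.float32)  # hypothetical inputs
    ys = 3.0 * xs                                 # targets for slope w = 3

    print('loss before:', optimizer.loss([xs, ys]))
    optimizer.optimize([xs, ys])
    print('loss after:', optimizer.loss([xs, ys]))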