garage.tf.optimizers.first_order_optimizer module¶
First order optimizer.
class FirstOrderOptimizer(optimizer=None, learning_rate=None, max_epochs=1000, tolerance=1e-06, batch_size=32, callback=None, verbose=False, name='FirstOrderOptimizer')[source]¶
Bases: object

First order optimizer.
Performs (stochastic) gradient descent, possibly using fancier methods like Adam etc.
Parameters:
- optimizer (tf.Optimizer) – Optimizer to be used.
- learning_rate (dict) – Learning rate arguments. The learning rate is typically the main parameter of interest when tuning an optimizer.
- max_epochs (int) – Maximum number of epochs for update.
- tolerance (float) – Tolerance for difference in loss during update.
- batch_size (int) – Batch size for optimization.
- callback (callable) – Function to call during each epoch. Default is None.
- verbose (bool) – If true, intermediate log messages will be printed.
- name (str) – Name scope of the optimizer.
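The constructor arguments above describe a mini-batch gradient-descent loop with early stopping. The following is a minimal pure-Python sketch of that loop, not the actual garage implementation (which builds a TensorFlow op graph); the function name and the `grad_fn`/`loss_fn` callables are illustrative assumptions.

```python
import random

def first_order_optimize(params, grad_fn, loss_fn, samples,
                         learning_rate=0.01, max_epochs=1000,
                         tolerance=1e-6, batch_size=32,
                         callback=None, verbose=False):
    """Illustrative sketch of a (stochastic) gradient-descent loop."""
    last_loss = loss_fn(params, samples)
    for epoch in range(max_epochs):
        random.shuffle(samples)
        # Sweep over the data in mini-batches.
        for start in range(0, len(samples), batch_size):
            batch = samples[start:start + batch_size]
            grads = grad_fn(params, batch)
            params = [p - learning_rate * g for p, g in zip(params, grads)]
        loss = loss_fn(params, samples)
        if verbose:
            print(f'epoch {epoch}: loss {loss:.6f}')
        if callback is not None:
            callback(loss=loss, params=params)
        # Stop once the change in loss falls below the tolerance.
        if abs(last_loss - loss) < tolerance:
            break
        last_loss = loss
    return params
```

Note that `tolerance` bounds the change in loss between epochs, not the loss itself, so the loop stops when progress stalls.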
loss(inputs, extra_inputs=None)[source]¶
The loss.
Returns: Loss.
Raises: Exception – If the loss function is None, i.e. not defined.
optimize(inputs, extra_inputs=None, callback=None)[source]¶
Perform optimization.
Raises:
- NotImplementedError – If inputs are invalid.
- Exception – If the loss function is None, i.e. not defined.
update_opt(loss, target, inputs, extra_inputs=None, **kwargs)[source]¶
Construct the operation graph for the optimizer.
Parameters:
- loss (tf.Tensor) – Loss objective to minimize.
- target (object) – Target object to optimize. The object should implement get_params() and get_param_values().
- inputs (list[tf.Tensor]) – List of input placeholders.
- extra_inputs (list[tf.Tensor]) – List of extra input placeholders.
- kwargs (dict) – Extra unused keyword arguments. Some optimizers take extra inputs, e.g. a KL constraint.
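The two-phase call pattern — update_opt() to wire up the objective and target, then optimize() to run the loop — can be mimicked with a toy stand-in. This sketch only illustrates the API shape under simplifying assumptions (plain Python callables instead of TensorFlow tensors; ToyOptimizer, ToyTarget, and the set_param_values helper are hypothetical names, not garage classes):

```python
class ToyOptimizer:
    """Toy stand-in mirroring the update_opt/optimize call pattern.

    Illustration only: the real garage class constructs a TensorFlow
    op graph in update_opt() and runs it in optimize().
    """

    def __init__(self, learning_rate=0.1, max_epochs=1000, tolerance=1e-6):
        self._lr = learning_rate
        self._max_epochs = max_epochs
        self._tolerance = tolerance
        self._loss_fn = None
        self._grad_fn = None
        self._target = None

    def update_opt(self, loss, target, grad, **kwargs):
        # Store the objective and the object whose parameters we update.
        self._loss_fn = loss
        self._grad_fn = grad
        self._target = target

    def loss(self, inputs):
        if self._loss_fn is None:
            raise Exception('Loss function is None, i.e. not defined.')
        return self._loss_fn(self._target.get_param_values(), inputs)

    def optimize(self, inputs, callback=None):
        if self._loss_fn is None:
            raise Exception('Loss function is None, i.e. not defined.')
        last = self.loss(inputs)
        for _ in range(self._max_epochs):
            params = self._target.get_param_values()
            grads = self._grad_fn(params, inputs)
            self._target.set_param_values(
                [p - self._lr * g for p, g in zip(params, grads)])
            cur = self.loss(inputs)
            if callback is not None:
                callback(loss=cur)
            # Terminate when the loss change drops below the tolerance.
            if abs(last - cur) < self._tolerance:
                break
            last = cur


class ToyTarget:
    """Minimal target exposing get_params()/get_param_values()."""

    def __init__(self, values):
        self._values = list(values)

    def get_params(self):
        return self._values

    def get_param_values(self):
        return list(self._values)

    def set_param_values(self, values):
        self._values = list(values)
```

Calling loss() or optimize() before update_opt() raises, mirroring the documented Exception for an undefined loss function.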