garage.tf.models.mlp_dueling_model module

MLP Dueling Model.

class MLPDuelingModel(output_dim, name=None, hidden_sizes=(32, 32), hidden_nonlinearity=tf.nn.relu, hidden_w_init=tf.contrib.layers.xavier_initializer, hidden_b_init=tf.zeros_initializer, output_nonlinearity=None, output_w_init=tf.contrib.layers.xavier_initializer, output_b_init=tf.zeros_initializer, layer_normalization=False)

Bases: garage.tf.models.base.Model

MLP Model with a dueling network structure.
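
In a dueling network (Wang et al., 2016), the layers after the shared hidden stack split into a state-value stream V(s) and an advantage stream A(s, a), which are recombined into Q-values as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a). A minimal sketch of that aggregation step, assuming plain tf.Tensor inputs; the function name is illustrative and not part of the garage API:

    import tensorflow as tf

    def dueling_q_values(value, advantage):
        # value: (batch, 1) tensor, the state-value stream V(s).
        # advantage: (batch, num_actions) tensor, the advantage
        # stream A(s, a).
        # Subtracting the mean advantage keeps the V/A decomposition
        # identifiable: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
        return value + advantage - tf.reduce_mean(
            advantage, axis=-1, keepdims=True)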

Parameters:
  • output_dim (int) – Dimension of the network output.
  • hidden_sizes (list[int]) – Output dimension of dense layer(s). For example, (32, 32) means this MLP consists of two hidden layers, each with 32 hidden units.
  • name (str) – Model name, also the variable scope.
  • hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
  • hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
  • hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
  • output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
  • output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
  • output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
  • layer_normalization (bool) – Whether to use layer normalization.
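
A minimal usage sketch, assuming the TF1-style graph workflow this module targets and that Model.build returns the network's output tensor; the observation dimension, placeholder, and scope name below are illustrative:

    import numpy as np
    import tensorflow as tf

    from garage.tf.models.mlp_dueling_model import MLPDuelingModel

    # Dueling Q-network for 4-dimensional observations and 2 discrete
    # actions, with two 32-unit hidden layers (the defaults).
    model = MLPDuelingModel(output_dim=2,
                            name='dueling_q_net',
                            hidden_sizes=(32, 32),
                            hidden_nonlinearity=tf.nn.relu)

    obs_ph = tf.placeholder(tf.float32, shape=(None, 4), name='obs')
    q_vals = model.build(obs_ph)  # one Q-value per action

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(q_vals, feed_dict={obs_ph: np.zeros((1, 4))}))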