garage.tf.models.normalized_input_mlp_model module
class NormalizedInputMLPModel(input_shape, output_dim, name='NormalizedInputMLPModel', hidden_sizes=(32, 32), hidden_nonlinearity=<function relu>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, layer_normalization=False)

Bases: garage.tf.models.mlp_model.MLPModel
NormalizedInputMLPModel, based on the garage.tf.models.Model class.
This class normalizes the inputs and passes the normalized input to an MLP model, which can be used to perform linear regression on the outputs.
Parameters: - input_shape (tuple[int]) – Input shape of the training data.
- output_dim (int) – Output dimension of the model.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- layer_normalization (bool) – Whether to use layer normalization.
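The sketch below illustrates what the model computes, not the garage API itself: inputs are normalized per feature and then fed through an MLP with the default architecture (two ReLU hidden layers of 32 units and a linear output layer). The function name `normalized_mlp_forward` and the use of batch statistics are illustrative assumptions; in garage the normalization mean and standard deviation are stored model variables updated separately from the forward pass.

```python
import numpy as np

# Conceptual sketch (not the garage API): normalize inputs, then apply an MLP
# mirroring the class defaults: hidden_sizes=(32, 32), ReLU hidden layers,
# linear output layer.

def normalized_mlp_forward(x, weights, biases):
    """Normalize x per feature, then apply a ReLU MLP with a linear output."""
    # Input normalization: subtract mean and divide by std. Here batch
    # statistics stand in for the model's stored normalization variables.
    mean = x.mean(axis=0, keepdims=True)
    std = x.std(axis=0, keepdims=True) + 1e-8
    h = (x - mean) / std

    # Hidden layers with ReLU (hidden_nonlinearity default).
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ w + b)

    # Linear output layer (output_nonlinearity=None default).
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
input_dim, output_dim, hidden_sizes = 4, 2, (32, 32)
dims = (input_dim, *hidden_sizes, output_dim)
weights = [rng.normal(size=(din, dout)) for din, dout in zip(dims[:-1], dims[1:])]
biases = [np.zeros(dout) for dout in dims[1:]]

x = rng.normal(size=(8, input_dim))
y_hat = normalized_mlp_forward(x, weights, biases)
print(y_hat.shape)  # (8, 2)
```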