garage.torch.modules.mlp_module module
MLP Module.
class MLPModule(input_dim, output_dim, hidden_sizes, hidden_nonlinearity=..., hidden_w_init=..., hidden_b_init=..., output_nonlinearity=None, output_w_init=..., output_b_init=..., layer_normalization=False)

Bases: garage.torch.modules.multi_headed_mlp_module.MultiHeadedMLPModule
MLP Model.
A PyTorch module composed only of a multi-layer perceptron (MLP), which maps real-valued inputs to real-valued outputs.
Parameters:
- input_dim (int) – Dimension of the network input.
- output_dim (int) – Dimension of the network output.
- hidden_sizes (list[int]) – Output dimension of dense layer(s). For example, (32, 32) means this MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable or torch.nn.Module) – Activation function for intermediate dense layer(s). It should return a torch.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a torch.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a torch.Tensor.
- output_nonlinearity (callable or torch.nn.Module) – Activation function for output dense layer. It should return a torch.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a torch.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a torch.Tensor.
- layer_normalization (bool) – Whether to apply layer normalization to the hidden layers.
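The following is a minimal usage sketch, assuming MLPModule is importable from garage.torch.modules and that PyTorch is installed; the dimensions, batch size, and the torch.tanh hidden nonlinearity are illustrative choices, not the module's documented defaults.

    import torch

    from garage.torch.modules import MLPModule

    # Build a 4 -> 32 -> 32 -> 2 network: two hidden layers of 32 units
    # with tanh activations, and a linear (identity) output layer.
    mlp = MLPModule(input_dim=4,
                    output_dim=2,
                    hidden_sizes=(32, 32),
                    hidden_nonlinearity=torch.tanh,  # illustrative, not the default
                    output_nonlinearity=None)

    x = torch.ones(8, 4)  # a batch of 8 four-dimensional inputs
    y = mlp(x)            # forward pass through hidden and output layers
    assert y.shape == (8, 2)

Because MLPModule is ultimately a torch.nn.Module subclass (via MultiHeadedMLPModule), it composes with other modules and trains with the usual torch.optim machinery.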