garage.torch.modules package

PyTorch modules.

class MLPModule(input_dim, output_dim, hidden_sizes, hidden_nonlinearity=<default>, hidden_w_init=<default>, hidden_b_init=<default>, output_nonlinearity=None, output_w_init=<default>, output_b_init=<default>, layer_normalization=False)[source]

Bases: garage.torch.modules.multi_headed_mlp_module.MultiHeadedMLPModule

MLP Model.

A PyTorch module composed only of a multi-layer perceptron (MLP), which maps real-valued inputs to real-valued outputs.

Parameters:
  • input_dim (int) – Dimension of the network input.
  • output_dim (int) – Dimension of the network output.
  • hidden_sizes (list[int]) – Output dimension of dense layer(s). For example, (32, 32) means this MLP consists of two hidden layers, each with 32 hidden units.
  • hidden_nonlinearity (callable or torch.nn.Module) – Activation function for intermediate dense layer(s). It should return a torch.Tensor. Set it to None to maintain a linear activation.
  • hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a torch.Tensor.
  • hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a torch.Tensor.
  • output_nonlinearity (callable or torch.nn.Module) – Activation function for output dense layer. It should return a torch.Tensor. Set it to None to maintain a linear activation.
  • output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a torch.Tensor.
  • output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a torch.Tensor.
  • layer_normalization (bool) – Bool for using layer normalization or not.
forward(input_value)[source]

Forward method.

Parameters: input_value (torch.Tensor) – Input values with (N, *, input_dim) shape.
Returns: Output value.
Return type: torch.Tensor
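Since MLPModule is essentially a stack of dense layers, its structure can be sketched in plain PyTorch. This is a hypothetical stand-in, not garage's implementation; the helper name and the ReLU default are illustrative:

```python
# Minimal sketch of an MLP like the one MLPModule builds:
# input -> hidden_sizes dense layers -> output, with optional nonlinearities.
import torch
from torch import nn

def make_mlp(input_dim, output_dim, hidden_sizes,
             hidden_nonlinearity=nn.ReLU, output_nonlinearity=None):
    layers, prev = [], input_dim
    for size in hidden_sizes:
        layers.append(nn.Linear(prev, size))
        if hidden_nonlinearity is not None:
            layers.append(hidden_nonlinearity())
        prev = size
    layers.append(nn.Linear(prev, output_dim))
    if output_nonlinearity is not None:
        layers.append(output_nonlinearity())
    return nn.Sequential(*layers)

mlp = make_mlp(input_dim=4, output_dim=2, hidden_sizes=(32, 32))
out = mlp(torch.zeros(8, 4))  # (N, input_dim) -> (N, output_dim)
print(out.shape)              # torch.Size([8, 2])
```

As in the forward method above, any leading batch dimensions are preserved; only the last dimension changes from input_dim to output_dim.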
class MultiHeadedMLPModule(n_heads, input_dim, output_dims, hidden_sizes, hidden_nonlinearity=<default>, hidden_w_init=<default>, hidden_b_init=<default>, output_nonlinearities=None, output_w_inits=<default>, output_b_inits=<default>, layer_normalization=False)[source]

Bases: torch.nn.Module

MultiHeadedMLPModule Model.

A PyTorch module composed only of a multi-layer perceptron (MLP) with multiple parallel output layers, which maps real-valued inputs to real-valued outputs. The output is a list of length n_heads, and the shape of each element depends on the corresponding output dimension.

Parameters:
  • n_heads (int) – Number of different output layers.
  • input_dim (int) – Dimension of the network input.
  • output_dims (int or list or tuple) – Dimension of the network output.
  • hidden_sizes (list[int]) – Output dimension of dense layer(s). For example, (32, 32) means this MLP consists of two hidden layers, each with 32 hidden units.
  • hidden_nonlinearity (callable or torch.nn.Module or list or tuple) – Activation function for intermediate dense layer(s). It should return a torch.Tensor. Set it to None to maintain a linear activation.
  • hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a torch.Tensor.
  • hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a torch.Tensor.
  • output_nonlinearities (callable or torch.nn.Module or list or tuple) – Activation function for output dense layer. It should return a torch.Tensor. Set it to None to maintain a linear activation. The size of this parameter should be 1 or equal to n_heads.
  • output_w_inits (callable or list or tuple) – Initializer function for the weight of output dense layer(s). The function should return a torch.Tensor. The size of this parameter should be 1 or equal to n_heads.
  • output_b_inits (callable or list or tuple) – Initializer function for the bias of output dense layer(s). The function should return a torch.Tensor. The size of this parameter should be 1 or equal to n_heads.
  • layer_normalization (bool) – Bool for using layer normalization or not.
forward(input_val)[source]

Forward method.

Parameters: input_val (torch.Tensor) – Input values with (N, *, input_dim) shape.
Returns: Output values.
Return type: List[torch.Tensor]
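The multi-headed layout can also be sketched in plain PyTorch: one shared trunk feeding n_heads parallel output layers, with a scalar output_dims broadcast across heads as the parameter docs allow. The class name and broadcasting helper are illustrative, not garage's implementation:

```python
# Sketch of a multi-headed MLP: shared hidden layers, n_heads parallel
# output layers, forward() returns a list of tensors (one per head).
import torch
from torch import nn

class MultiHeadedMLP(nn.Module):
    def __init__(self, n_heads, input_dim, output_dims, hidden_sizes):
        super().__init__()
        # A single int output dim is broadcast to every head.
        if isinstance(output_dims, int):
            output_dims = [output_dims] * n_heads
        layers, prev = [], input_dim
        for size in hidden_sizes:
            layers += [nn.Linear(prev, size), nn.ReLU()]
            prev = size
        self.trunk = nn.Sequential(*layers)
        self.heads = nn.ModuleList(nn.Linear(prev, d) for d in output_dims)

    def forward(self, x):
        h = self.trunk(x)
        return [head(h) for head in self.heads]

net = MultiHeadedMLP(n_heads=2, input_dim=4, output_dims=3, hidden_sizes=(32,))
outs = net(torch.zeros(5, 4))
print([o.shape for o in outs])  # [torch.Size([5, 3]), torch.Size([5, 3])]
```

Because the trunk is shared, the heads differ only in their final linear layer, which is what makes per-head output_nonlinearities and initializers useful.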
class GaussianMLPModule(input_dim, output_dim, hidden_sizes=(32, 32), hidden_nonlinearity=<default>, hidden_w_init=<default>, hidden_b_init=<default>, output_nonlinearity=None, output_w_init=<default>, output_b_init=<default>, learn_std=True, init_std=1.0, min_std=1e-06, max_std=None, std_parameterization='exp', layer_normalization=False)[source]

Bases: garage.torch.modules.gaussian_mlp_module.GaussianMLPBaseModule

GaussianMLPModule in which the mean and std share the same network.

class GaussianMLPIndependentStdModule(input_dim, output_dim, hidden_sizes=(32, 32), hidden_nonlinearity=<default>, hidden_w_init=<default>, hidden_b_init=<default>, output_nonlinearity=None, output_w_init=<default>, output_b_init=<default>, learn_std=True, init_std=1.0, min_std=1e-06, max_std=None, std_hidden_sizes=(32, 32), std_hidden_nonlinearity=<default>, std_hidden_w_init=<default>, std_hidden_b_init=<default>, std_output_nonlinearity=None, std_output_w_init=<default>, std_parameterization='exp', layer_normalization=False)[source]

Bases: garage.torch.modules.gaussian_mlp_module.GaussianMLPBaseModule

GaussianMLPModule which uses two separate networks for the mean and std.

class GaussianMLPTwoHeadedModule(input_dim, output_dim, hidden_sizes=(32, 32), hidden_nonlinearity=<default>, hidden_w_init=<default>, hidden_b_init=<default>, output_nonlinearity=None, output_w_init=<default>, output_b_init=<default>, learn_std=True, init_std=1.0, min_std=1e-06, max_std=None, std_parameterization='exp', layer_normalization=False)[source]

Bases: garage.torch.modules.gaussian_mlp_module.GaussianMLPBaseModule

GaussianMLPModule which shares one network, with separate output heads for the mean and std.
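The two-headed design can be sketched in plain PyTorch: a single shared trunk with separate mean and log-std heads, using the 'exp' std parameterization and a min_std floor as in the signature above. The class name and initialization details are illustrative, not garage's implementation:

```python
# Sketch of a two-headed Gaussian MLP: shared trunk, mean head, and a
# log-std head (std_parameterization='exp' means the head predicts log(std)).
import math
import torch
from torch import nn

class TwoHeadedGaussianMLP(nn.Module):
    def __init__(self, input_dim, output_dim, hidden_sizes=(32, 32),
                 init_std=1.0, min_std=1e-6):
        super().__init__()
        layers, prev = [], input_dim
        for size in hidden_sizes:
            layers += [nn.Linear(prev, size), nn.Tanh()]
            prev = size
        self.trunk = nn.Sequential(*layers)       # shared hidden layers
        self.mean_head = nn.Linear(prev, output_dim)
        self.log_std_head = nn.Linear(prev, output_dim)
        # Start the std head at init_std regardless of the input.
        nn.init.zeros_(self.log_std_head.weight)
        nn.init.constant_(self.log_std_head.bias, math.log(init_std))
        self.min_log_std = math.log(min_std)

    def forward(self, x):
        h = self.trunk(x)
        mean = self.mean_head(h)
        log_std = self.log_std_head(h).clamp(min=self.min_log_std)
        return torch.distributions.Normal(mean, log_std.exp())

dist = TwoHeadedGaussianMLP(input_dim=4, output_dim=2)(torch.zeros(3, 4))
print(dist.mean.shape)  # torch.Size([3, 2])
```

This sits between the other two variants: unlike GaussianMLPModule the std is state-dependent, and unlike GaussianMLPIndependentStdModule the mean and std heads share all hidden layers.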