garage.tf.models package
class CNNModel(filter_dims, num_filters, strides, padding, name=None, hidden_nonlinearity=<function relu>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>)
Bases: garage.tf.models.base.Model
CNN Model.
Parameters: - filter_dims (tuple[int]) – Dimension of the filters. For example, (3, 5) means there are two convolutional layers. The filter for the first layer is of dimension (3 x 3) and the second one is of dimension (5 x 5).
- num_filters (tuple[int]) – Number of filters. For example, (3, 32) means there are two convolutional layers. The filter for the first layer has 3 channels and the second one has 32 channels.
- strides (tuple[int]) – The stride of the sliding window. For example, (1, 2) means there are two convolutional layers. The stride of the filter for the first layer is 1 and that of the second layer is 2.
- name (str) – Model name, also the variable scope.
- padding (str) – The type of padding algorithm to use, either ‘SAME’ or ‘VALID’.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
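Example (a minimal sketch; it assumes TF 1.x graph mode, an NHWC image placeholder as the expected input layout, and a default tf.Session active when build() is called so the parameters can be initialized):

```python
import tensorflow as tf
from garage.tf.models import CNNModel

model = CNNModel(filter_dims=(3, 5),    # a 3x3 filter, then a 5x5 filter
                 num_filters=(16, 32),  # 16 channels, then 32 channels
                 strides=(1, 2),
                 padding='SAME',
                 name='cnn')

obs = tf.placeholder(tf.float32, shape=(None, 64, 64, 3))
with tf.Session():
    features = model.build(obs)  # convolutional features for the given input
```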
class CNNModelWithMaxPooling(filter_dims, num_filters, strides, name=None, padding='SAME', pool_strides=(2, 2), pool_shapes=(2, 2), hidden_nonlinearity=<function relu>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>)
Bases: garage.tf.models.base.Model
CNN Model with max pooling.
Parameters: - filter_dims (tuple[int]) – Dimension of the filters. For example, (3, 5) means there are two convolutional layers. The filter for the first layer is of dimension (3 x 3) and the second one is of dimension (5 x 5).
- num_filters (tuple[int]) – Number of filters. For example, (3, 32) means there are two convolutional layers. The filter for the first layer has 3 channels and the second one has 32 channels.
- strides (tuple[int]) – The stride of the sliding window. For example, (1, 2) means there are two convolutional layers. The stride of the filter for the first layer is 1 and that of the second layer is 2.
- name (str) – Model name, also the variable scope of the cnn.
- padding (str) – The type of padding algorithm to use, either ‘SAME’ or ‘VALID’.
- pool_strides (tuple[int]) – The strides of the pooling layer(s). For example, (2, 2) means that all the pooling layers have strides (2, 2).
- pool_shapes (tuple[int]) – Dimension of the pooling layer(s). For example, (2, 2) means that all the pooling layers have shape (2, 2).
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
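Example (a minimal sketch of the pooling variant, under the same assumptions as the CNNModel example above):

```python
import tensorflow as tf
from garage.tf.models import CNNModelWithMaxPooling

model = CNNModelWithMaxPooling(filter_dims=(3, 3),
                               num_filters=(16, 32),
                               strides=(1, 1),
                               padding='SAME',
                               pool_shapes=(2, 2),   # every pooling layer uses a 2x2 window
                               pool_strides=(2, 2),  # ... with stride (2, 2)
                               name='cnn_pool')

obs = tf.placeholder(tf.float32, shape=(None, 64, 64, 3))
with tf.Session():
    features = model.build(obs)
```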
class LSTMModel(output_dim, hidden_dim, name=None, hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, recurrent_nonlinearity=<function sigmoid>, recurrent_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, hidden_state_init=<tensorflow.python.ops.init_ops.Zeros object>, hidden_state_init_trainable=False, cell_state_init=<tensorflow.python.ops.init_ops.Zeros object>, cell_state_init_trainable=False, forget_bias=True, layer_normalization=False)
Bases: garage.tf.models.base.Model
LSTM Model.
Parameters: - output_dim (int) – Dimension of the network output.
- hidden_dim (int) – Hidden dimension for LSTM cell.
- name (str) – Model name, also the variable scope.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- recurrent_nonlinearity (callable) – Activation function for recurrent layers. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- recurrent_w_init (callable) – Initializer function for the weight of recurrent layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- hidden_state_init (callable) – Initializer function for the initial hidden state. The function should return a tf.Tensor.
- hidden_state_init_trainable (bool) – Bool for whether the initial hidden state is trainable.
- cell_state_init (callable) – Initializer function for the initial cell state. The function should return a tf.Tensor.
- cell_state_init_trainable (bool) – Bool for whether the initial cell state is trainable.
- forget_bias (bool) – If True, add 1 to the bias of the forget gate at initialization. It’s used to reduce the scale of forgetting at the beginning of the training.
- layer_normalization (bool) – Bool for using layer normalization or not.
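Recurrent models take several inputs (a full time-series input plus per-step input and state placeholders), so the exact wiring is easiest to read off the documented input/output specs. A minimal sketch, assuming TF 1.x graph mode:

```python
from garage.tf.models import LSTMModel

model = LSTMModel(output_dim=2, hidden_dim=32, name='lstm')

# Inspect which placeholders this model expects and which tensors it returns
# before calling build() with the matching inputs.
print(model.network_input_spec())
print(model.network_output_spec())
```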
class Model(name)
Bases: garage.tf.models.base.BaseModel
Model class for TensorFlow.
A TfModel only contains the structure/configuration of the underlying computation graphs. Connectivity information is kept in the Network class. A TfModel contains zero or more Networks.
When a Network is created, it reuses the parameters from the model and can be accessed by calling model.networks['network_name']. If a Network is built without a given name, the name 'default' will be used.
Note: do not call tf.global_variables_initializer() after building a model, as it will reassign random weights to the model. The parameters inside a model are initialized when build() is called.
Pickling is handled automatically. The target weights should be assigned to self._default_parameters before pickling, so that the newly created model can check whether target weights exist. When unpickled, the deserialized model will load the weights from self._default_parameters.
The design can be summarized as follows: a Model (TfModel) holds a single set of Parameters shared by all of its Networks. Building the model on input_1 creates the 'default' Network, and building it again on input_2 under the name 'Network2' creates a second Network that reuses the same Parameters. Their outputs are available as model.networks['default'].outputs and model.networks['Network2'].outputs.
Examples are also available in tests/garage/tf/models/test_model.
Parameters: - name (str) – Name of the model. It will also become the variable scope of the model. Every model should have a unique name.
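Example (a minimal sketch of the pattern described above; it assumes TF 1.x graph mode, that MLPModel is importable from garage.tf.models, and that build() is called with a default tf.Session active so the parameters can be initialized):

```python
import tensorflow as tf
from garage.tf.models import MLPModel  # any concrete Model subclass works here

model = MLPModel(output_dim=2, name='my_model', hidden_sizes=(32, 32))

input_1 = tf.placeholder(tf.float32, shape=(None, 4))
input_2 = tf.placeholder(tf.float32, shape=(None, 4))

with tf.Session() as sess:
    # The first build creates the 'default' Network and initializes the parameters.
    default_outputs = model.build(input_1)
    # A second build with a distinct name creates 'Network2'; it reuses the same
    # parameters because every Network is built in the same variable scope.
    network2_outputs = model.build(input_2, name='Network2')

    # The Networks and their tensors are also reachable through the networks dict.
    default_net = model.networks['default']
    network2 = model.networks['Network2']
```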
build(*inputs, name=None)
Build a Network with the given input(s).
Note: do not call tf.global_variables_initializer() after building a model, as it will reassign random weights to the model. The parameters inside a model are initialized when build() is called.
It uses the same, fixed variable scope for all Networks to ensure parameter sharing. Different Networks must have a unique name.
Raises: ValueError – When a Network with the same name is already built.
Returns: Output tensors of the model with the given inputs. Return type: list[tf.Tensor]
input
Default input (tf.Tensor) of the model.
When the model is built the first time, by default it creates the ‘default’ network. This property creates a reference to the input of the network.
inputs
Default inputs (tf.Tensor) of the model.
When the model is built the first time, by default it creates the ‘default’ network. This property creates a reference to the inputs of the network.
name
Name (str) of the model.
This is also the variable scope of the model.
network_input_spec()
Network input spec.
Returns: List of keys (str) for the network inputs. Return type: list[str]
network_output_spec()
Network output spec.
Returns: List of keys (str) for the network outputs. Return type: list[str]
networks
Networks of the model.
output
Default output (tf.Tensor) of the model.
When the model is built the first time, by default it creates the ‘default’ network. This property creates a reference to the output of the network.
outputs
Default outputs (tf.Tensor) of the model.
When the model is built the first time, by default it creates the ‘default’ network. This property creates a reference to the outputs of the network.
parameters
Parameters of the model.
class GaussianCNNModel(output_dim, filter_dims, num_filters, strides, padding, hidden_sizes, name=None, hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, learn_std=True, adaptive_std=False, std_share_network=False, init_std=1.0, min_std=1e-06, max_std=None, std_filter_dims=[], std_num_filters=[], std_strides=[], std_padding='SAME', std_hidden_sizes=(32, 32), std_hidden_nonlinearity=<function tanh>, std_hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, std_hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, std_output_nonlinearity=None, std_output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, std_parameterization='exp', layer_normalization=False)
Bases: garage.tf.models.base.Model
GaussianCNNModel.
Parameters: - filter_dims (tuple[int]) – Dimension of the filters. For example, (3, 5) means there are two convolutional layers. The filter for the first layer is of dimension (3 x 3) and the second one is of dimension (5 x 5).
- num_filters (tuple[int]) – Number of filters. For example, (3, 32) means there are two convolutional layers. The filter for the first layer has 3 channels and the second one has 32 channels.
- strides (tuple[int]) – The stride of the sliding window. For example, (1, 2) means there are two convolutional layers. The stride of the filter for the first layer is 1 and that of the second layer is 2.
- padding (str) – The type of padding algorithm to use, either ‘SAME’ or ‘VALID’.
- output_dim (int) – Output dimension of the model.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the convolutional model for the mean. For example, (32, 32) means the network consists of two dense layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- learn_std (bool) – Is std trainable.
- init_std (float) – Initial value for std.
- adaptive_std (bool) – Is std a neural network. If False, it will be a parameter.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- std_filter_dims (tuple[int]) – Dimension of the filters. For example, (3, 5) means there are two convolutional layers. The filter for the first layer is of dimension (3 x 3) and the second one is of dimension (5 x 5).
- std_num_filters (tuple[int]) – Number of filters. For example, (3, 32) means there are two convolutional layers. The filter for the first layer has 3 channels and the second one has 32 channels.
- std_strides (tuple[int]) – The stride of the sliding window. For example, (1, 2) means there are two convolutional layers. The stride of the filter for the first layer is 1 and that of the second layer is 2.
- std_padding (str) – The type of padding algorithm to use in std network, either ‘SAME’ or ‘VALID’.
- std_hidden_sizes (list[int]) – Output dimension of dense layer(s) for the convolutional network for std. For example, (32, 32) means the network consists of two hidden layers, each with 32 hidden units.
- min_std (float) – If not None, the std is at least the value of min_std, to avoid numerical issues.
- max_std (float) – If not None, the std is at most the value of max_std, to avoid numerical issues.
- std_hidden_nonlinearity – Nonlinearity for each hidden layer in the std network.
- std_output_nonlinearity (callable) – Activation function for output dense layer in the std network. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- std_output_w_init (callable) – Initializer function for the weight of output dense layer(s) in the std network.
- std_parameterization (str) – How the std should be parametrized. There are two options:
  - exp: the logarithm of the std will be stored, and an exponential transformation will be applied.
  - softplus: the std will be computed as log(1 + exp(x)).
- layer_normalization (bool) – Bool for using layer normalization or not.
class GaussianGRUModel(output_dim, hidden_dim=32, name=None, hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, recurrent_nonlinearity=<function sigmoid>, recurrent_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, hidden_state_init=<tensorflow.python.ops.init_ops.Zeros object>, hidden_state_init_trainable=False, learn_std=True, init_std=1.0, std_share_network=False, layer_normalization=False)
Bases: garage.tf.models.base.Model
GaussianGRUModel.
Parameters: - output_dim (int) – Output dimension of the model.
- hidden_dim (int) – Hidden dimension for GRU cell for mean.
- name (str) – Model name, also the variable scope.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- recurrent_nonlinearity (callable) – Activation function for recurrent layers. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- recurrent_w_init (callable) – Initializer function for the weight of recurrent layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- hidden_state_init (callable) – Initializer function for the initial hidden state. The function should return a tf.Tensor.
- hidden_state_init_trainable (bool) – Bool for whether the initial hidden state is trainable.
- learn_std (bool) – Is std trainable.
- init_std (float) – Initial value for std.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- layer_normalization (bool) – Bool for using layer normalization or not.
class GaussianLSTMModel(output_dim, hidden_dim=32, name=None, hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, recurrent_nonlinearity=<function sigmoid>, recurrent_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, hidden_state_init=<tensorflow.python.ops.init_ops.Zeros object>, hidden_state_init_trainable=False, cell_state_init=<tensorflow.python.ops.init_ops.Zeros object>, cell_state_init_trainable=False, forget_bias=True, learn_std=True, init_std=1.0, std_share_network=False, layer_normalization=False)
Bases: garage.tf.models.base.Model
GaussianLSTMModel.
Parameters: - output_dim (int) – Output dimension of the model.
- hidden_dim (int) – Hidden dimension for LSTM cell for mean.
- name (str) – Model name, also the variable scope.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- recurrent_nonlinearity (callable) – Activation function for recurrent layers. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- recurrent_w_init (callable) – Initializer function for the weight of recurrent layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- hidden_state_init (callable) – Initializer function for the initial hidden state. The function should return a tf.Tensor.
- hidden_state_init_trainable (bool) – Bool for whether the initial hidden state is trainable.
- cell_state_init (callable) – Initializer function for the initial cell state. The function should return a tf.Tensor.
- cell_state_init_trainable (bool) – Bool for whether the initial cell state is trainable.
- forget_bias (bool) – If True, add 1 to the bias of the forget gate at initialization. It’s used to reduce the scale of forgetting at the beginning of the training.
- learn_std (bool) – Is std trainable.
- init_std (float) – Initial value for std.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- layer_normalization (bool) – Bool for using layer normalization or not.
class GaussianMLPModel(output_dim, name=None, hidden_sizes=(32, 32), hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, learn_std=True, adaptive_std=False, std_share_network=False, init_std=1.0, min_std=1e-06, max_std=None, std_hidden_sizes=(32, 32), std_hidden_nonlinearity=<function tanh>, std_hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, std_hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, std_output_nonlinearity=None, std_output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, std_parameterization='exp', layer_normalization=False)
Bases: garage.tf.models.base.Model
GaussianMLPModel.
Parameters: - output_dim (int) – Output dimension of the model.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- learn_std (bool) – Is std trainable.
- init_std (float) – Initial value for std.
- adaptive_std (bool) – Is std a neural network. If False, it will be a parameter.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- std_hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for std. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- min_std (float) – If not None, the std is at least the value of min_std, to avoid numerical issues.
- max_std (float) – If not None, the std is at most the value of max_std, to avoid numerical issues.
- std_hidden_nonlinearity – Nonlinearity for each hidden layer in the std network.
- std_output_nonlinearity (callable) – Activation function for output dense layer in the std network. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- std_output_w_init (callable) – Initializer function for the weight of output dense layer(s) in the std network.
- std_parameterization (str) – How the std should be parametrized. There are two options:
  - exp: the logarithm of the std will be stored, and an exponential transformation will be applied.
  - softplus: the std will be computed as log(1 + exp(x)).
- layer_normalization (bool) – Bool for using layer normalization or not.
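Example (a minimal sketch; it assumes TF 1.x graph mode and a default tf.Session active during build(); the exact entries of the output spec are read from network_output_spec() rather than assumed here):

```python
import tensorflow as tf
from garage.tf.models import GaussianMLPModel

model = GaussianMLPModel(output_dim=4,
                         hidden_sizes=(64, 64),
                         learn_std=True,
                         init_std=1.0,
                         std_parameterization='exp',
                         name='gaussian_mlp')

obs = tf.placeholder(tf.float32, shape=(None, 10))
with tf.Session():
    outputs = model.build(obs)           # distribution-related tensors
    print(model.network_output_spec())   # names of the tensors in `outputs`
```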
class GRUModel(output_dim, hidden_dim, name=None, hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, recurrent_nonlinearity=<function sigmoid>, recurrent_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, hidden_state_init=<tensorflow.python.ops.init_ops.Zeros object>, hidden_state_init_trainable=False, layer_normalization=False)
Bases: garage.tf.models.base.Model
GRU Model.
Parameters: - output_dim (int) – Dimension of the network output.
- hidden_dim (int) – Hidden dimension for GRU cell.
- name (str) – Model name, also the variable scope.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- recurrent_nonlinearity (callable) – Activation function for recurrent layers. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- recurrent_w_init (callable) – Initializer function for the weight of recurrent layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- hidden_state_init (callable) – Initializer function for the initial hidden state. The function should return a tf.Tensor.
- hidden_state_init_trainable (bool) – Bool for whether the initial hidden state is trainable.
- layer_normalization (bool) – Bool for using layer normalization or not.
class MLPDuelingModel(output_dim, name=None, hidden_sizes=(32, 32), hidden_nonlinearity=<function relu>, hidden_w_init=<function xavier_initializer>, hidden_b_init=<class 'tensorflow.python.ops.init_ops.Zeros'>, output_nonlinearity=None, output_w_init=<function xavier_initializer>, output_b_init=<class 'tensorflow.python.ops.init_ops.Zeros'>, layer_normalization=False)
Bases: garage.tf.models.base.Model
MLP Model with dueling network structure.
Parameters: - output_dim (int) – Dimension of the network output.
- hidden_sizes (list[int]) – Output dimension of dense layer(s). For example, (32, 32) means this MLP consists of two hidden layers, each with 32 hidden units.
- name (str) – Model name, also the variable scope.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- layer_normalization (bool) – Bool for using layer normalization or not.
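Example (a minimal sketch of a dueling Q-network head; it assumes TF 1.x graph mode, a default tf.Session active during build(), and that output_dim is the number of discrete actions):

```python
import tensorflow as tf
from garage.tf.models import MLPDuelingModel

num_actions = 6
model = MLPDuelingModel(output_dim=num_actions,
                        hidden_sizes=(64, 64),
                        name='dueling_q')

obs = tf.placeholder(tf.float32, shape=(None, 8))
with tf.Session():
    q_values = model.build(obs)  # one Q-value per action
```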
class MLPMergeModel(output_dim, name='MLPMergeModel', hidden_sizes=(32, 32), concat_layer=-2, hidden_nonlinearity=<function relu>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, layer_normalization=False)
Bases: garage.tf.models.base.Model
MLP Merge Model.
Parameters: - output_dim (int) – Dimension of the network output.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s). For example, (32, 32) means this MLP consists of two hidden layers, each with 32 hidden units.
- concat_layer (int) – The index of layers at which to concatenate input_var2 with the network. The indexing works like standard python list indexing. Index of 0 refers to the input layer (input_var1) while an index of -1 points to the last hidden layer. Default parameter points to second layer from the end.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- layer_normalization (bool) – Bool for using layer normalization or not.
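Example (a minimal sketch of the merge pattern, e.g. for a Q-function over observations and actions; it assumes TF 1.x graph mode, a default tf.Session active during build(), and that the two input tensors are passed positionally as input_var1 and input_var2):

```python
import tensorflow as tf
from garage.tf.models import MLPMergeModel

model = MLPMergeModel(output_dim=1,
                      hidden_sizes=(64, 64),
                      concat_layer=-2,  # concatenate the second input at the
                                        # second-to-last hidden layer (default)
                      name='q_function')

obs = tf.placeholder(tf.float32, shape=(None, 8))
act = tf.placeholder(tf.float32, shape=(None, 2))
with tf.Session():
    q_value = model.build(obs, act)
```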
class MLPModel(output_dim, name='MLPModel', hidden_sizes=(32, 32), hidden_nonlinearity=<function relu>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, layer_normalization=False)
Bases: garage.tf.models.base.Model
MLP Model.
Parameters: - output_dim (int) – Dimension of the network output.
- hidden_sizes (list[int]) – Output dimension of dense layer(s). For example, (32, 32) means this MLP consists of two hidden layers, each with 32 hidden units.
- name (str) – Model name, also the variable scope.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- layer_normalization (bool) – Bool for using layer normalization or not.
class NormalizedInputMLPModel(input_shape, output_dim, name='NormalizedInputMLPModel', hidden_sizes=(32, 32), hidden_nonlinearity=<function relu>, hidden_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops.Zeros object>, layer_normalization=False)
Bases: garage.tf.models.mlp_model.MLPModel
NormalizedInputMLPModel based on the garage.tf.models.Model class.
This class normalizes the inputs and passes the normalized input to an MLP model, which can be used to perform linear regression on the outputs.
Parameters: - input_shape (tuple[int]) – Input shape of the training data.
- output_dim (int) – Output dimension of the model.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- layer_normalization (bool) – Bool for using layer normalization or not.
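Example (a minimal sketch; it assumes TF 1.x graph mode, a default tf.Session active during build(), and that input_shape describes a single sample without the batch dimension):

```python
import tensorflow as tf
from garage.tf.models import NormalizedInputMLPModel

model = NormalizedInputMLPModel(input_shape=(10,),
                                output_dim=1,
                                hidden_sizes=(32, 32),
                                name='normalized_mlp')

x = tf.placeholder(tf.float32, shape=(None, 10))
with tf.Session():
    y_hat = model.build(x)  # prediction computed on the normalized input
```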
class Sequential(*models, name=None)
Bases: garage.tf.models.base.Model
Sequential Model.
Parameters: - name (str) – Model name, also the variable scope.
- models (list[garage.tf.models.Model]) – The models to be connected in sequential order.
input
tf.Tensor input of the model by default.
inputs
tf.Tensor inputs of the model by default.
output
tf.Tensor output of the model by default.
outputs
tf.Tensor outputs of the model by default.
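Example (a minimal sketch chaining two models; it assumes TF 1.x graph mode, a default tf.Session active during build(), and that each model's output feeds the next model's input in the given order):

```python
import tensorflow as tf
from garage.tf.models import MLPModel, Sequential

seq = Sequential(MLPModel(output_dim=32, name='encoder'),
                 MLPModel(output_dim=2, name='head'),
                 name='encoder_head')

x = tf.placeholder(tf.float32, shape=(None, 8))
with tf.Session():
    out = seq.build(x)
```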
Submodules
- garage.tf.models.base module
- garage.tf.models.cnn module
- garage.tf.models.cnn_model module
- garage.tf.models.cnn_model_max_pooling module
- garage.tf.models.gaussian_cnn_model module
- garage.tf.models.gaussian_gru_model module
- garage.tf.models.gaussian_lstm_model module
- garage.tf.models.gaussian_mlp_model module
- garage.tf.models.gru module
- garage.tf.models.gru_model module
- garage.tf.models.lstm module
- garage.tf.models.lstm_model module
- garage.tf.models.mlp module
- garage.tf.models.mlp_dueling_model module
- garage.tf.models.mlp_merge_model module
- garage.tf.models.mlp_model module
- garage.tf.models.normalized_input_mlp_model module
- garage.tf.models.parameter module
- garage.tf.models.sequential module