garage.tf.regressors package¶
Regressors for TensorFlow-based algorithms.
-
class
BernoulliMLPRegressor
(input_shape, output_dim, name='BernoulliMLPRegressor', hidden_sizes=(32, 32), hidden_nonlinearity=<function relu>, hidden_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, output_nonlinearity=<function sigmoid>, output_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, optimizer=None, optimizer_args=None, tr_optimizer=None, tr_optimizer_args=None, use_trust_region=True, max_kl_step=0.01, normalize_inputs=True, layer_normalization=False)[source]¶ Bases:
garage.tf.regressors.regressor.StochasticRegressor
Fits data to a Bernoulli distribution, parameterized by an MLP.
Parameters: - input_shape (tuple[int]) – Input shape of the training data. Since an MLP model is used, implementation assumes flattened inputs. The input shape of each data point should thus be of shape (x, ).
- output_dim (int) – Output dimension of the model.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for the network. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (Callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (Callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor. Default is Glorot uniform initializer.
- hidden_b_init (Callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor. Default is zero initializer.
- output_nonlinearity (Callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (Callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor. Default is Glorot uniform initializer.
- output_b_init (Callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor. Default is zero initializer.
- optimizer (garage.tf.Optimizer) – Optimizer for minimizing the negative log-likelihood. Defaults to LbfgsOptimizer.
- optimizer_args (dict) – Arguments for the optimizer. Default is None, which means no arguments.
- tr_optimizer (garage.tf.Optimizer) – Optimizer for trust region approximation. Defaults to ConjugateGradientOptimizer.
- tr_optimizer_args (dict) – Arguments for the trust region optimizer. Default is None, which means no arguments.
- use_trust_region (bool) – Whether to use trust region constraint.
- max_kl_step (float) – KL divergence constraint for each iteration.
- normalize_inputs (bool) – Bool for normalizing inputs or not.
- layer_normalization (bool) – Bool for using layer normalization or not.
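A minimal usage sketch (not part of the original reference): it builds the regressor on synthetic binary labels and exercises fit, predict, and predict_log_likelihood. It assumes graph-mode TensorFlow (tf.compat.v1) with an active default session, which garage's TF optimizers are expected to use; the data shapes and hyperparameters are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

from garage.tf.regressors import BernoulliMLPRegressor

tf.compat.v1.disable_eager_execution()  # garage.tf uses graph-mode TF APIs

# Illustrative data: 100 samples, 4-dim inputs, 2 independent binary targets.
xs = np.random.randn(100, 4).astype(np.float32)
ys = np.stack([xs[:, 0] > 0, xs[:, 1] > 0], axis=1).astype(np.float32)

with tf.compat.v1.Session() as sess:
    regressor = BernoulliMLPRegressor(input_shape=(4, ), output_dim=2)
    sess.run(tf.compat.v1.global_variables_initializer())

    regressor.fit(xs, ys)
    preds = regressor.predict(xs)                    # shape (100, 2)
    lls = regressor.predict_log_likelihood(xs, ys)   # shape (100, )
```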
-
dist_info_sym
(input_var, state_info_vars=None, name=None)[source]¶ Build a symbolic graph of the distribution parameters.
Parameters: - input_var (tf.Tensor) – Input tf.Tensor for the input data.
- state_info_vars (dict) – Additional state information variables. Default is None.
- name (str) – Name of the new graph.
Returns: Output of the symbolic graph of the distribution parameters.
Return type: dict[tf.Tensor]
-
distribution
¶ Distribution.
Type: garage.tf.distributions.Bernoulli
-
fit
(xs, ys)[source]¶ Fit with input data xs and label ys.
Parameters: - xs (numpy.ndarray) – Input data.
- ys (numpy.ndarray) – Label of input data.
-
log_likelihood_sym
(x_var, y_var, name=None)[source]¶ Build a symbolic graph of the log-likelihood.
Parameters: - x_var (tf.Tensor) – Input tf.Tensor for the input data.
- y_var (tf.Tensor) – Input tf.Tensor for the one-hot label of the data.
- name (str) – Name of the new graph.
Returns: Output of the symbolic log-likelihood graph.
Return type: tf.Tensor
-
predict
(xs)[source]¶ Predict ys based on input xs.
Parameters: xs (numpy.ndarray) – Input data of shape (samples, input_dim)
Returns: The deterministic predicted ys (one-hot vectors) of shape (samples, output_dim)
Return type: numpy.ndarray
-
predict_log_likelihood
(xs, ys)[source]¶ Log likelihood of ys given input xs.
Parameters: - xs (numpy.ndarray) – Input data of shape (samples, input_dim)
- ys (numpy.ndarray) – Output data of shape (samples, output_dim)
Returns: The log likelihood of shape (samples, )
Return type: numpy.ndarray
-
class
CategoricalMLPRegressor
(input_shape, output_dim, name='CategoricalMLPRegressor', hidden_sizes=(32, 32), hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, output_nonlinearity=<function softmax_v2>, output_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, optimizer=None, optimizer_args=None, tr_optimizer=None, tr_optimizer_args=None, use_trust_region=True, max_kl_step=0.01, normalize_inputs=True, layer_normalization=False)[source]¶ Bases:
garage.tf.regressors.regressor.StochasticRegressor
Fits data to a Categorical distribution whose parameters are the output of an MLP.
A class for performing regression (or classification, really) by fitting a Categorical distribution to the outputs. Assumes that the output will always be a one-hot vector.
Parameters: - input_shape (tuple[int]) – Input shape of the training data. Since an MLP model is used, implementation assumes flattened inputs. The input shape of each data point should thus be of shape (x, ).
- output_dim (int) – Output dimension of the model.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for the network. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (Callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Defaults to a tanh activation; set it to None for a linear activation.
- hidden_w_init (Callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor. Default is Glorot uniform initializer.
- hidden_b_init (Callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor. Default is zero initializer.
- output_nonlinearity (Callable) – Activation function for output dense layer. It should return a tf.Tensor. Defaults to a softmax activation; set it to None for a linear activation.
- output_w_init (Callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor. Default is Glorot uniform initializer.
- output_b_init (Callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor. Default is zero initializer.
- optimizer (garage.tf.Optimizer) – Optimizer for minimizing the negative log-likelihood. Defaults to LbfgsOptimizer.
- optimizer_args (dict) – Arguments for the optimizer. Default is None, which means no arguments.
- tr_optimizer (garage.tf.Optimizer) – Optimizer for trust region approximation. Defaults to ConjugateGradientOptimizer.
- tr_optimizer_args (dict) – Arguments for the trust region optimizer. Default is None, which means no arguments.
- use_trust_region (bool) – Whether to use trust region constraint.
- max_kl_step (float) – KL divergence constraint for each iteration.
- normalize_inputs (bool) – Bool for normalizing inputs or not.
- layer_normalization (bool) – Bool for using layer normalization or not.
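A minimal usage sketch (illustrative, not from the original reference), assuming graph-mode TensorFlow with an active default session; the predict call is an assumption based on the other regressors in this package, since it is not listed above.

```python
import numpy as np
import tensorflow as tf

from garage.tf.regressors import CategoricalMLPRegressor

tf.compat.v1.disable_eager_execution()  # garage.tf uses graph-mode TF APIs

# Illustrative 3-class data with one-hot labels.
xs = np.random.randn(200, 5).astype(np.float32)
labels = np.random.randint(0, 3, size=200)
ys = np.eye(3, dtype=np.float32)[labels]   # one-hot, shape (200, 3)

with tf.compat.v1.Session() as sess:
    regressor = CategoricalMLPRegressor(input_shape=(5, ), output_dim=3)
    sess.run(tf.compat.v1.global_variables_initializer())

    regressor.fit(xs, ys)
    preds = regressor.predict(xs)  # assumed: one-hot predictions, shape (200, 3)
```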
-
distribution
¶ Distribution.
Type: garage.tf.distributions.Categorical
-
fit
(xs, ys)[source]¶ Fit with input data xs and label ys.
Parameters: - xs (numpy.ndarray) – Input data.
- ys (numpy.ndarray) – Label of input data.
-
class
ContinuousMLPRegressor
(input_shape, output_dim, name='ContinuousMLPRegressor', hidden_sizes=(32, 32), hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, optimizer=None, optimizer_args=None, normalize_inputs=True)[source]¶ Bases:
garage.tf.regressors.regressor.Regressor
Fits continuously-valued data to an MLP model.
Parameters: - input_shape (tuple[int]) – Input shape of the training data.
- output_dim (int) – Output dimension of the model.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (Callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (Callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (Callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (Callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (Callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (Callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- optimizer (garage.tf.Optimizer) – Optimizer for minimizing the negative log-likelihood.
- optimizer_args (dict) – Arguments for the optimizer. Default is None, which means no arguments.
- normalize_inputs (bool) – Bool for normalizing inputs or not.
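A minimal usage sketch (illustrative data and shapes; graph-mode TensorFlow with an active default session is assumed):

```python
import numpy as np
import tensorflow as tf

from garage.tf.regressors import ContinuousMLPRegressor

tf.compat.v1.disable_eager_execution()  # garage.tf uses graph-mode TF APIs

# Illustrative regression targets: the sum of the inputs.
xs = np.random.randn(500, 3).astype(np.float32)
ys = xs.sum(axis=1, keepdims=True)

with tf.compat.v1.Session() as sess:
    regressor = ContinuousMLPRegressor(input_shape=(3, ), output_dim=1)
    sess.run(tf.compat.v1.global_variables_initializer())

    regressor.fit(xs, ys)
    y_hat = regressor.predict(xs)  # shape (500, 1)
```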
-
fit
(xs, ys)[source]¶ Fit with input data xs and label ys.
Parameters: - xs (numpy.ndarray) – Input data.
- ys (numpy.ndarray) – Output labels.
-
predict
(xs)[source]¶ Predict y based on input xs.
Parameters: xs (numpy.ndarray) – Input data.
Returns: The predicted ys.
Return type: numpy.ndarray
-
class
GaussianCNNRegressor
(input_shape, output_dim, filters, strides, padding, hidden_sizes, hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, name='GaussianCNNRegressor', learn_std=True, init_std=1.0, adaptive_std=False, std_share_network=False, std_filters=(), std_strides=(), std_padding='SAME', std_hidden_sizes=(), std_hidden_nonlinearity=None, std_output_nonlinearity=None, layer_normalization=False, normalize_inputs=True, normalize_outputs=True, subsample_factor=1.0, optimizer=None, optimizer_args=None, use_trust_region=True, max_kl_step=0.01)[source]¶ Bases:
garage.tf.regressors.regressor.StochasticRegressor
Fits a Gaussian distribution to the outputs of a CNN.
Parameters: - input_shape (tuple[int]) – Input shape of the model (without the batch dimension).
- output_dim (int) – Output dimension of the model.
- filters (Tuple[Tuple[int, Tuple[int, int]], ..]) – Number and dimension of filters. For example, ((3, (3, 5)), (32, (3, 3))) means there are two convolutional layers. The filter for the first layer has 3 channels and its shape is (3 x 5), while the filter for the second layer has 32 channels and its shape is (3 x 3).
- strides (tuple[int]) – The stride of the sliding window. For example, (1, 2) means there are two convolutional layers. The stride of the filter for the first layer is 1 and that of the second layer is 2.
- padding (str) – The type of padding algorithm to use, either ‘SAME’ or ‘VALID’.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the Convolutional model for mean. For example, (32, 32) means the network consists of two dense layers, each with 32 hidden units.
- hidden_nonlinearity (Callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (Callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (Callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (Callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (Callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (Callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- name – Name of this model (also used as its scope).
- learn_std (bool) – Whether to train the standard deviation parameter of the Gaussian distribution.
- init_std (float) – Initial standard deviation for the Gaussian distribution.
- adaptive_std (bool) – Whether to use a neural network to learn the standard deviation of the Gaussian distribution. If False, the standard deviation is learned as a parameter that is not conditioned on the inputs.
- std_share_network (bool) – Boolean for whether the mean and standard deviation models share a CNN network. If True, each is a head from a single body network. Otherwise, the parameters are estimated using the outputs of two independent networks.
- std_filters (Tuple[Tuple[int, Tuple[int, int]], ..]) – Number and dimension of filters. For example, ((3, (3, 5)), (32, (3, 3))) means there are two convolutional layers. The filter for the first layer has 3 channels and its shape is (3 x 5), while the filter for the second layer has 32 channels and its shape is (3 x 3).
- std_strides (tuple[int]) – The stride of the sliding window. For example, (1, 2) means there are two convolutional layers. The stride of the filter for the first layer is 1 and that of the second layer is 2.
- std_padding (str) – The type of padding algorithm to use in std network, either ‘SAME’ or ‘VALID’.
- std_hidden_sizes (list[int]) – Output dimension of dense layer(s) for the Conv for std. For example, (32, 32) means the Conv consists of two hidden layers, each with 32 hidden units.
- std_hidden_nonlinearity (callable) – Nonlinearity for each hidden layer in the std network.
- std_output_nonlinearity (Callable) – Activation function for output dense layer in the std network. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- layer_normalization (bool) – Bool for using layer normalization or not.
- normalize_inputs (bool) – Bool for normalizing inputs or not.
- normalize_outputs (bool) – Bool for normalizing outputs or not.
- subsample_factor (float) – The factor to subsample the data. By default it is 1.0, which means using all the data.
- optimizer (garage.tf.Optimizer) – Optimizer used for fitting the model.
- optimizer_args (dict) – Arguments for the optimizer. Default is None, which means no arguments.
- use_trust_region (bool) – Whether to use a KL-divergence constraint.
- max_kl_step (float) – KL divergence constraint for each iteration, if use_trust_region is active.
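A minimal construction-and-fit sketch (illustrative; the image shape, filter/stride values, and targets are assumptions, and graph-mode TensorFlow with an active default session is assumed):

```python
import numpy as np
import tensorflow as tf

from garage.tf.regressors import GaussianCNNRegressor

tf.compat.v1.disable_eager_execution()  # garage.tf uses graph-mode TF APIs

# Illustrative image-shaped inputs: 64 samples of 10x10 RGB images,
# scalar target equal to the mean pixel intensity.
xs = np.random.rand(64, 10, 10, 3).astype(np.float32)
ys = xs.mean(axis=(1, 2, 3)).reshape(-1, 1).astype(np.float32)

with tf.compat.v1.Session() as sess:
    regressor = GaussianCNNRegressor(
        input_shape=(10, 10, 3),
        output_dim=1,
        filters=((8, (3, 3)), (16, (3, 3))),  # two conv layers: 8 and 16 filters
        strides=(1, 1),
        padding='SAME',
        hidden_sizes=(32, ))
    sess.run(tf.compat.v1.global_variables_initializer())

    regressor.fit(xs, ys)
```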
-
dist_info_sym
(input_var, state_info_vars=None, name=None)[source]¶ Create a symbolic graph of the distribution parameters.
Parameters: - input_var (tf.Tensor) – Input tf.Tensor for the input data.
- state_info_vars (dict) – Additional state information variables. Default is None.
- name (str) – Name of the new graph.
Returns: Outputs of the symbolic distribution parameter graph.
Return type: dict[tf.Tensor]
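A sketch of consuming the returned dictionary. It assumes the regressor built in the previous sketch is still on the default graph, and that the dictionary uses 'mean' and 'log_std' keys, as is typical for a DiagonalGaussian parameterization; both are assumptions, not guarantees from this reference.

```python
import tensorflow as tf

# Assumes `regressor` is the GaussianCNNRegressor from the earlier sketch.
input_var = tf.compat.v1.placeholder(
    tf.float32, shape=(None, 10, 10, 3), name='input')
dist_info = regressor.dist_info_sym(input_var, name='dist_info')

mean_sym = dist_info['mean']        # assumed key for the Gaussian mean
log_std_sym = dist_info['log_std']  # assumed key for the Gaussian log-std
```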
-
distribution
¶ Distribution.
Type: garage.tf.distributions.DiagonalGaussian
-
fit
(xs, ys)[source]¶ Fit with input data xs and label ys.
Parameters: - xs (numpy.ndarray) – Input data.
- ys (numpy.ndarray) – Label of input data.
-
log_likelihood_sym
(x_var, y_var, name=None)[source]¶ Create a symbolic graph of the log likelihood.
Parameters: - x_var (tf.Tensor) – Input tf.Tensor for the input data.
- y_var (tf.Tensor) – Input tf.Tensor for the label of data.
- name (str) – Name of the new graph.
Returns: Output of the symbolic log-likelihood graph.
Return type: tf.Tensor
-
class
GaussianCNNRegressorModel
(input_shape, output_dim, filters, strides, padding, hidden_sizes, name='GaussianCNNRegressorModel', hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, learn_std=True, adaptive_std=False, std_share_network=False, init_std=1.0, min_std=1e-06, max_std=None, std_filters=(), std_strides=(), std_padding='SAME', std_hidden_sizes=(32, 32), std_hidden_nonlinearity=<function tanh>, std_hidden_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, std_hidden_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, std_output_nonlinearity=None, std_output_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, std_parameterization='exp', layer_normalization=False)[source]¶ Bases:
garage.tf.models.gaussian_cnn_model.GaussianCNNModel
GaussianCNNRegressor model based on the garage.tf.models.Model class.
This class can be used to perform regression by fitting a Gaussian distribution to the outputs.
Parameters: - input_shape (tuple[int]) – Input shape of the model (without the batch dimension).
- filters (Tuple[Tuple[int, Tuple[int, int]], ..]) – Number and dimension of filters. For example, ((3, (3, 5)), (32, (3, 3))) means there are two convolutional layers. The filter for the first layer has 3 channels and its shape is (3 x 5), while the filter for the second layer has 32 channels and its shape is (3 x 3).
- strides (tuple[int]) – The stride of the sliding window. For example, (1, 2) means there are two convolutional layers. The stride of the filter for the first layer is 1 and that of the second layer is 2.
- padding (str) – The type of padding algorithm to use, either ‘SAME’ or ‘VALID’.
- output_dim (int) – Output dimension of the model.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the Convolutional model for mean. For example, (32, 32) means the network consists of two dense layers, each with 32 hidden units.
- hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- learn_std (bool) – Is std trainable.
- init_std (float) – Initial value for std.
- adaptive_std (bool) – Is std a neural network. If False, it will be a parameter.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- std_filters (Tuple[Tuple[int, Tuple[int, int]], ..]) – Number and dimension of filters. For example, ((3, (3, 5)), (32, (3, 3))) means there are two convolutional layers. The filter for the first layer has 3 channels and its shape is (3 x 5), while the filter for the second layer has 32 channels and its shape is (3 x 3).
- std_strides (tuple[int]) – The stride of the sliding window. For example, (1, 2) means there are two convolutional layers. The stride of the filter for the first layer is 1 and that of the second layer is 2.
- std_padding (str) – The type of padding algorithm to use in std network, either ‘SAME’ or ‘VALID’.
- std_hidden_sizes (list[int]) – Output dimension of dense layer(s) for the Conv for std. For example, (32, 32) means the Conv consists of two hidden layers, each with 32 hidden units.
- min_std (float) – If not None, the std is at least the value of min_std, to avoid numerical issues.
- max_std (float) – If not None, the std is at most the value of max_std, to avoid numerical issues.
- std_hidden_nonlinearity (callable) – Nonlinearity for each hidden layer in the std network.
- std_hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s) in the std network.
- std_hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s) in the std network.
- std_output_nonlinearity (callable) – Activation function for output dense layer in the std network. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- std_output_w_init (callable) – Initializer function for the weight of output dense layer(s) in the std network.
- std_parameterization (str) – How the std should be parametrized. There are two options (see the numeric sketch after this parameter list):
  - exp: the logarithm of the std is stored, and an exponential transformation is applied to recover the std.
  - softplus: the std is computed as log(1 + exp(x)).
- layer_normalization (bool) – Bool for using layer normalization or not.
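A tiny NumPy illustration of the two std parameterizations referenced above (x stands for the unconstrained value the network stores; purely illustrative):

```python
import numpy as np

x = np.array([-2.0, 0.0, 2.0])        # unconstrained parameter values

std_exp = np.exp(x)                   # 'exp': the parameter stores log(std)
std_softplus = np.log1p(np.exp(x))    # 'softplus': std = log(1 + exp(x))

# Both map any real-valued x to a strictly positive standard deviation.
```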
-
class
GaussianMLPRegressor
(input_shape, output_dim, name='GaussianMLPRegressor', hidden_sizes=(32, 32), hidden_nonlinearity=<function tanh>, hidden_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, hidden_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, output_nonlinearity=None, output_w_init=<tensorflow.python.ops.init_ops_v2.GlorotUniform object>, output_b_init=<tensorflow.python.ops.init_ops_v2.Zeros object>, optimizer=None, optimizer_args=None, use_trust_region=True, max_kl_step=0.01, learn_std=True, init_std=1.0, adaptive_std=False, std_share_network=False, std_hidden_sizes=(32, 32), std_nonlinearity=None, layer_normalization=False, normalize_inputs=True, normalize_outputs=True, subsample_factor=1.0)[source]¶ Bases:
garage.tf.regressors.regressor.StochasticRegressor
Fits data to a Gaussian whose parameters are estimated by an MLP.
Parameters: - input_shape (tuple[int]) – Input shape of the training data.
- output_dim (int) – Output dimension of the model.
- name (str) – Model name, also the variable scope.
- hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for mean. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- hidden_nonlinearity (Callable) – Activation function for intermediate dense layer(s). It should return a tf.Tensor. Set it to None to maintain a linear activation.
- hidden_w_init (Callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a tf.Tensor.
- hidden_b_init (Callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a tf.Tensor.
- output_nonlinearity (Callable) – Activation function for output dense layer. It should return a tf.Tensor. Set it to None to maintain a linear activation.
- output_w_init (Callable) – Initializer function for the weight of output dense layer(s). The function should return a tf.Tensor.
- output_b_init (Callable) – Initializer function for the bias of output dense layer(s). The function should return a tf.Tensor.
- optimizer (garage.tf.Optimizer) – Optimizer for minimizing the negative log-likelihood.
- optimizer_args (dict) – Arguments for the optimizer. Default is None, which means no arguments.
- use_trust_region (bool) – Whether to use trust region constraint.
- max_kl_step (float) – KL divergence constraint for each iteration.
- learn_std (bool) – Is std trainable.
- init_std (float) – Initial value for std.
- adaptive_std (bool) – Is std a neural network. If False, it will be a parameter.
- std_share_network (bool) – Boolean for whether mean and std share the same network.
- std_hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP for std. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
- std_nonlinearity (Callable) – Nonlinearity for each hidden layer in the std network.
- layer_normalization (bool) – Bool for using layer normalization or not.
- normalize_inputs (bool) – Bool for normalizing inputs or not.
- normalize_outputs (bool) – Bool for normalizing outputs or not.
- subsample_factor (float) – The factor to subsample the data. By default it is 1.0, which means using all the data.
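A minimal usage sketch (illustrative data and hyperparameters; graph-mode TensorFlow with an active default session is assumed, and predict is assumed to return the predicted means, as with the other regressors in this package):

```python
import numpy as np
import tensorflow as tf

from garage.tf.regressors import GaussianMLPRegressor

tf.compat.v1.disable_eager_execution()  # garage.tf uses graph-mode TF APIs

# Illustrative data: noisy linear targets.
xs = np.random.randn(256, 6).astype(np.float32)
ys = (xs @ np.ones((6, 1), dtype=np.float32)
      + 0.05 * np.random.randn(256, 1).astype(np.float32))

with tf.compat.v1.Session() as sess:
    regressor = GaussianMLPRegressor(
        input_shape=(6, ),
        output_dim=1,
        hidden_sizes=(32, 32),
        use_trust_region=True)
    sess.run(tf.compat.v1.global_variables_initializer())

    regressor.fit(xs, ys)
    y_hat = regressor.predict(xs)  # assumed: predicted means, shape (256, 1)
```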
-
distribution
¶ Distribution.
Type: garage.tf.distributions.DiagonalGaussian
-
fit
(xs, ys)[source]¶ Fit with input data xs and label ys.
Parameters: - xs (numpy.ndarray) – Input data.
- ys (numpy.ndarray) – Label of input data.
-
class
Regressor
(input_shape, output_dim, name)[source]¶ Bases:
garage.tf.models.module.Module
Regressor base class.
Parameters: - input_shape (tuple[int]) – Input shape of the training data.
- output_dim (int) – Output dimension of the model.
- name (str) – Name of this regressor, also the variable scope.
-
class
StochasticRegressor
(input_shape, output_dim, name)[source]¶ Bases:
garage.tf.regressors.regressor.Regressor, garage.tf.models.module.StochasticModule
StochasticRegressor base class.
-
log_likelihood_sym
(x_var, y_var, name=None)[source]¶ Symbolic graph of the log likelihood.
Parameters: - x_var (tf.Tensor) – Input tf.Tensor for the input data.
- y_var (tf.Tensor) – Input tf.Tensor for the label of data.
- name (str) – Name of the new graph.
Returns: tf.Tensor output of the symbolic log likelihood.
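A sketch of wiring the symbolic log-likelihood into a graph (placeholder shapes are illustrative and assume a regressor like the GaussianMLPRegressor sketch above):

```python
import tensorflow as tf

# Assumes `regressor` is an already-constructed StochasticRegressor subclass,
# e.g. the GaussianMLPRegressor from the earlier sketch (input_shape=(6, ),
# output_dim=1), and that the same default graph is still active.
x_var = tf.compat.v1.placeholder(tf.float32, shape=(None, 6), name='xs')
y_var = tf.compat.v1.placeholder(tf.float32, shape=(None, 1), name='ys')

ll_sym = regressor.log_likelihood_sym(x_var, y_var, name='log_likelihood')
# ll_sym is a tf.Tensor; it can feed a downstream loss or be evaluated with
# sess.run(ll_sym, feed_dict={x_var: xs, y_var: ys}).
```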
-
Submodules¶
- garage.tf.regressors.bernoulli_mlp_regressor module
- garage.tf.regressors.categorical_mlp_regressor module
- garage.tf.regressors.categorical_mlp_regressor_model module
- garage.tf.regressors.continuous_mlp_regressor module
- garage.tf.regressors.gaussian_cnn_regressor module
- garage.tf.regressors.gaussian_cnn_regressor_model module
- garage.tf.regressors.gaussian_mlp_regressor module
- garage.tf.regressors.gaussian_mlp_regressor_model module
- garage.tf.regressors.regressor module