garage.torch.policies.categorical_cnn_policy

CategoricalCNNPolicy.

class CategoricalCNNPolicy(env, kernel_sizes, hidden_channels, strides=1, hidden_sizes=(32, 32), hidden_nonlinearity=torch.tanh, hidden_w_init=nn.init.xavier_uniform_, hidden_b_init=nn.init.zeros_, paddings=0, padding_mode='zeros', max_pool=False, pool_shape=None, pool_stride=1, output_nonlinearity=None, output_w_init=nn.init.xavier_uniform_, output_b_init=nn.init.zeros_, layer_normalization=False, name='CategoricalCNNPolicy')

Bases: garage.torch.policies.stochastic_policy.StochasticPolicy

A policy that contains a CNN and an MLP to make predictions based on a categorical distribution.

It only works with an akro.Discrete action space.

Parameters:
  • env (garage.envs) – Environment.
  • kernel_sizes (tuple[int]) – Dimension of the conv filters. For example, (3, 5) means there are two convolutional layers. The filter for the first layer is of dimension (3 x 3) and the second one is of dimension (5 x 5).
  • strides (tuple[int]) – The stride of the sliding window. For example, (1, 2) means there are two convolutional layers. The stride of the filter for the first layer is 1 and that of the second layer is 2.
  • hidden_channels (tuple[int]) – Number of output channels for the CNN. For example, (3, 32) means there are two convolutional layers. The filter for the first conv layer outputs 3 channels and the second one outputs 32 channels.
  • hidden_sizes (list[int]) – Output dimension of dense layer(s) for the MLP. For example, (32, 32) means the MLP consists of two hidden layers, each with 32 hidden units.
  • hidden_nonlinearity (callable) – Activation function for intermediate dense layer(s). It should return a torch.Tensor. Set it to None to maintain a linear activation.
  • hidden_w_init (callable) – Initializer function for the weight of intermediate dense layer(s). The function should return a torch.Tensor.
  • hidden_b_init (callable) – Initializer function for the bias of intermediate dense layer(s). The function should return a torch.Tensor.
  • paddings (tuple[int]) – Zero-padding added to both sides of the input.
  • padding_mode (str) – The type of padding algorithm to use, either ‘SAME’ or ‘VALID’.
  • max_pool (bool) – Bool for using max-pooling or not.
  • pool_shape (tuple[int]) – Dimension of the pooling layer(s). For example, (2, 2) means that all the pooling layers have shape (2, 2).
  • pool_stride (tuple[int]) – The strides of the pooling layer(s). For example, (2, 2) means that all the pooling layers have strides (2, 2).
  • output_nonlinearity (callable) – Activation function for output dense layer. It should return a torch.Tensor. Set it to None to maintain a linear activation.
  • output_w_init (callable) – Initializer function for the weight of output dense layer(s). The function should return a torch.Tensor.
  • output_b_init (callable) – Initializer function for the bias of output dense layer(s). The function should return a torch.Tensor.
  • layer_normalization (bool) – Bool for using layer normalization or not.
  • name (str) – Name of policy.
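
A minimal construction sketch (not part of the original reference); the environment name, wrapper import, and hyperparameter values are illustrative assumptions, and the environment wrapper class differs between garage releases (e.g. GarageEnv in older versions):

    from garage.envs import GymEnv
    from garage.torch.policies import CategoricalCNNPolicy

    # Hypothetical image-observation environment with a discrete action space.
    env = GymEnv('MsPacman-v0')

    policy = CategoricalCNNPolicy(
        env=env,                   # environment, per the signature above
        kernel_sizes=(5, 3),       # 5x5 filters in layer 1, 3x3 in layer 2
        hidden_channels=(16, 32),  # CNN output channels per conv layer
        strides=(2, 2),            # stride 2 in both conv layers
        hidden_sizes=(64, 64),     # two dense layers of 64 units for the MLP
    )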
name

Name of policy.

Returns: Name of policy.
Return type: str
env_spec

Policy environment specification.

Returns: Environment specification.
Return type: garage.EnvSpec
observation_space

Observation space.

Returns: The observation space of the environment.
Return type: akro.Space
action_space

Action space.

Returns: The action space of the environment.
Return type: akro.Space
forward(self, observations)

Compute the action distributions from the observations.

Parameters: observations (torch.Tensor) – Batch of observations on the default torch device.
Returns:
  • torch.distributions.Distribution: Batch distribution of actions.
  • dict[str, torch.Tensor]: Additional agent_info, as torch Tensors. These do not need to be detached, and can be on any device.
Return type: tuple
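
A rough sketch of calling forward() on a batch of observations; it assumes the policy and env from the construction sketch above, and uses random tensors in place of real frames:

    import torch

    # Random batch of 8 observations with the environment's observation shape.
    obs = torch.rand((8, ) + env.observation_space.shape)

    dist, info = policy(obs)            # calls forward() through nn.Module
    actions = dist.sample()             # one action index per observation
    log_probs = dist.log_prob(actions)  # log-probabilities under the policy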
get_action(self, observation)

Get a single action given an observation.

Parameters: observation (np.ndarray) – Observation from the environment. Shape is env_spec.observation_space.
Returns:
  • np.ndarray: Predicted action. Shape is env_spec.action_space.
  • dict:
    • np.ndarray[float]: Mean of the distribution.
    • np.ndarray[float]: Standard deviation of logarithmic values of the distribution.
Return type: tuple
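
A usage sketch for get_action(); a random array stands in for a real observation, and the surrounding rollout loop is omitted:

    import numpy as np

    obs = np.random.rand(*env.observation_space.shape).astype(np.float32)
    action, agent_info = policy.get_action(obs)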
get_actions(self, observations)

Get actions given observations.

Parameters: observations (np.ndarray) – Observations from the environment. Shape is batch_dim · env_spec.observation_space.
Returns:
  • np.ndarray: Predicted actions. Shape is batch_dim · env_spec.action_space.
  • dict:
    • np.ndarray[float]: Mean of the distribution.
    • np.ndarray[float]: Standard deviation of logarithmic values of the distribution.
Return type: tuple
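
The batched counterpart, again sketched with random arrays in place of real observations:

    import numpy as np

    obs_batch = np.random.rand(8, *env.observation_space.shape).astype(np.float32)
    actions, agent_infos = policy.get_actions(obs_batch)  # 8 predicted actions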
get_param_values(self)

Get the parameters of the policy.

This method is included to ensure consistency with TF policies.

Returns: The parameters (in the form of the state dictionary).
Return type: dict
set_param_values(self, state_dict)

Set the parameters of the policy.

This method is included to ensure consistency with TF policies.

Parameters: state_dict (dict) – State dictionary.
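
A sketch of synchronizing two policies through the state dictionary; target_policy is a hypothetical second policy assumed to share the same architecture:

    params = policy.get_param_values()      # parameters as a state dictionary
    target_policy.set_param_values(params)  # hypothetical policy of identical shape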
reset(self, do_resets=None)

Reset the policy.

This is effective only for recurrent policies.

do_resets is an array of booleans indicating which internal states should be reset. The length of do_resets should be equal to the length of inputs, i.e. the batch size.

Parameters: do_resets (numpy.ndarray) – Bool array indicating which states should be reset.
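
Since this policy is feed-forward, reset() has no effect, but calling it at episode boundaries keeps rollout code uniform with recurrent policies; a brief sketch:

    import numpy as np

    policy.reset()                            # reset all internal states (none here)
    policy.reset(do_resets=np.array([True]))  # per-environment reset mask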