garage.np.policies

Policies which use NumPy as a numerical backend.

class FixedPolicy(env_spec, scripted_actions, agent_infos=None)[source]

Bases: garage.np.policies.policy.Policy

Inheritance diagram of garage.np.policies.FixedPolicy

Policy that performs a fixed sequence of actions.

Parameters
  • env_spec (garage.envs.env_spec.EnvSpec) – Environment specification.

  • scripted_actions (list[np.ndarray] or np.ndarray) – Sequence of actions to perform.

  • agent_infos (list[dict[str, np.ndarray]] or None) – Sequence of agent_infos to produce.
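The replay behavior described above can be sketched without garage installed. The class below is an illustrative stand-in, not the garage implementation; it mirrors the documented contract (actions replayed in order, observation ignored, vectorization rejected):

```python
class FixedPolicySketch:
    """Replays a fixed sequence of actions, one per get_action call."""

    def __init__(self, scripted_actions, agent_infos=None):
        if agent_infos is None:
            agent_infos = [{}] * len(scripted_actions)
        self._actions = list(scripted_actions)
        self._agent_infos = list(agent_infos)
        self._t = 0  # index of the next scripted action to emit

    def reset(self, do_resets=None):
        if do_resets is None:
            do_resets = [True]
        if len(do_resets) > 1:
            # Matches the documented behavior: no vectorization support.
            raise ValueError('This policy does not support vectorization.')
        self._t = 0

    def get_action(self, observation):
        # The observation is ignored: the script alone decides the action.
        action = self._actions[self._t]
        info = self._agent_infos[self._t]
        self._t += 1
        return action, info


policy = FixedPolicySketch(scripted_actions=[0, 1, 1])
policy.reset()
actions = [policy.get_action(observation=None)[0] for _ in range(3)]
# actions == [0, 1, 1]
```

Calling reset() rewinds the script to the first action, which is what makes the policy useful for deterministic integration tests.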

reset(self, do_resets=None)[source]

Reset policy.

Parameters

do_resets (None or list[bool]) – Vectorized policy states to reset.

Raises

ValueError – If do_resets has length greater than 1.

set_param_values(self, params)[source]

Set param values of policy.

Parameters

params (object) – Ignored.

get_param_values(self)[source]

Return policy params (there are none).

Returns

Empty tuple.

Return type

tuple

get_action(self, observation)[source]

Get next action.

Parameters

observation (np.ndarray) – Ignored.

Raises

ValueError – If policy is currently vectorized (reset was called with more than one done value).

Returns

The action and agent_info for this time step.

Return type

tuple[np.ndarray, dict[str, np.ndarray]]

get_actions(self, observations)[source]

Get next actions.

Parameters

observations (np.ndarray) – Ignored.

Raises

ValueError – If observations has length greater than 1.

Returns

The action and agent_info for this time step.

Return type

tuple[np.ndarray, dict[str, np.ndarray]]

property env_spec(self)

Policy environment specification.

Returns

Environment specification.

Return type

garage.EnvSpec

property name(self)

Name of policy.

Returns

Name of policy

Return type

str

property observation_space(self)

Observation space.

Returns

The observation space of the environment.

Return type

akro.Space

property action_space(self)

Action space.

Returns

The action space of the environment.

Return type

akro.Space

class Policy[source]

Bases: abc.ABC

Inheritance diagram of garage.np.policies.Policy

Base class for policies based on numpy.

abstract get_action(self, observation)[source]

Get action sampled from the policy.

Parameters

observation (np.ndarray) – Observation from the environment.

Returns

The action and extra agent info.

Return type

tuple[np.ndarray, dict[str, np.ndarray]]

abstract get_actions(self, observations)[source]

Get actions given observations.

Parameters

observations (np.ndarray) – Observations from the environment.

Returns

Actions and extra agent infos.

Return type

tuple[np.ndarray, dict[str, np.ndarray]]

reset(self, do_resets=None)[source]

Reset the policy.

This only has an effect on recurrent policies.

do_resets is an array of booleans indicating which internal states should be reset. The length of do_resets should equal the length of the inputs, i.e. the batch size.

Parameters

do_resets (numpy.ndarray) – Bool array indicating which states to be reset.
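A concrete policy implements get_action and get_actions against this interface. The sketch below uses a stand-in base class rather than importing garage; names are illustrative:

```python
import abc

import numpy as np


class PolicyBase(abc.ABC):
    """Stand-in for garage.np.policies.Policy (illustrative only)."""

    @abc.abstractmethod
    def get_action(self, observation):
        """Return (action, agent_info) for a single observation."""

    @abc.abstractmethod
    def get_actions(self, observations):
        """Return (actions, agent_infos) for a batch of observations."""

    def reset(self, do_resets=None):
        """No-op for stateless policies; recurrent ones reset state here."""


class ConstantPolicy(PolicyBase):
    """Always emits the same action, regardless of observation."""

    def __init__(self, action):
        self._action = np.asarray(action)

    def get_action(self, observation):
        return self._action, {}

    def get_actions(self, observations):
        # Repeat the single action once per observation in the batch.
        batch = np.tile(self._action, (len(observations), 1))
        return batch, {}


policy = ConstantPolicy(action=[0.5, -0.5])
action, info = policy.get_action(np.zeros(3))
actions, _ = policy.get_actions(np.zeros((4, 3)))
```

Because reset() has a default no-op body, only the two abstract methods must be overridden; recurrent policies additionally override reset() to clear their hidden state.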

property name(self)

Name of policy.

Returns

Name of policy

Return type

str

property env_spec(self)

Policy environment specification.

Returns

Environment specification.

Return type

garage.EnvSpec

property observation_space(self)

Observation space.

Returns

The observation space of the environment.

Return type

akro.Space

property action_space(self)

Action space.

Returns

The action space of the environment.

Return type

akro.Space

class ScriptedPolicy(scripted_actions, agent_env_infos=None)[source]

Bases: garage.np.policies.policy.Policy

Inheritance diagram of garage.np.policies.ScriptedPolicy

Simulates a garage policy object.

Parameters
  • scripted_actions (list or dictionary) – Data structure indexed by observation; returns the corresponding action.

  • agent_env_infos (list or dictionary) – Data structure indexed by observation; returns the corresponding agent_env_info.
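The observation-indexed lookup can be sketched in plain Python (this mirrors, but does not reproduce, the garage class; names are illustrative):

```python
class ScriptedPolicySketch:
    """Maps each observation to an action via a lookup table."""

    def __init__(self, scripted_actions, agent_env_infos=None):
        self._actions = scripted_actions      # indexed by observation
        self._infos = agent_env_infos or {}

    def get_action(self, observation):
        # Unlike FixedPolicy, the observation itself selects the action.
        return self._actions[observation], self._infos.get(observation, {})

    def get_actions(self, observations):
        actions = [self._actions[obs] for obs in observations]
        infos = {obs: self._infos.get(obs, {}) for obs in observations}
        return actions, infos


policy = ScriptedPolicySketch(scripted_actions={'left': 0, 'right': 1})
action, info = policy.get_action('right')
actions, infos = policy.get_actions(['left', 'right'])
```

A dict works for hashable observations; a list works when observations are small integer indices.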

set_param_values(self, params)[source]

Set param values.

Parameters

params (np.ndarray) – A numpy array of parameter values.

get_param_values(self)[source]

Get param values.

Returns

Values of the parameters evaluated in the current session.

Return type

np.ndarray

get_action(self, observation)[source]

Return a single action.

Parameters

observation (numpy.ndarray) – Observations.

Returns

Action given the input observation. dict[dict]: Agent infos indexed by observation.

Return type

int

get_actions(self, observations)[source]

Return multiple actions.

Parameters

observations (numpy.ndarray) – Observations.

Returns

Actions given the input observations. dict[dict]: Agent infos indexed by observation.

Return type

list[int]

reset(self, do_resets=None)

Reset the policy.

This only has an effect on recurrent policies.

do_resets is an array of booleans indicating which internal states should be reset. The length of do_resets should equal the length of the inputs, i.e. the batch size.

Parameters

do_resets (numpy.ndarray) – Bool array indicating which states to be reset.

property name(self)

Name of policy.

Returns

Name of policy

Return type

str

property env_spec(self)

Policy environment specification.

Returns

Environment specification.

Return type

garage.EnvSpec

property observation_space(self)

Observation space.

Returns

The observation space of the environment.

Return type

akro.Space

property action_space(self)

Action space.

Returns

The action space of the environment.

Return type

akro.Space

class UniformRandomPolicy(env_spec)[source]

Bases: garage.np.policies.policy.Policy

Inheritance diagram of garage.np.policies.UniformRandomPolicy

Action taken is uniformly random.

Parameters

env_spec (EnvSpec) – Environment spec to explore.
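Uniform sampling over a bounded (Box-like) action space can be sketched directly with NumPy. The bounds below are illustrative; the real class reads them from env_spec.action_space:

```python
import numpy as np


class UniformRandomPolicySketch:
    """Samples actions uniformly from box bounds, ignoring observations."""

    def __init__(self, low, high, rng=None):
        self._low = np.asarray(low, dtype=float)
        self._high = np.asarray(high, dtype=float)
        self._rng = rng or np.random.default_rng(0)

    def get_action(self, observation):
        # Observation is ignored: every action is an independent draw.
        return self._rng.uniform(self._low, self._high), {}

    def get_actions(self, observations):
        n = len(observations)
        actions = self._rng.uniform(self._low, self._high,
                                    size=(n,) + self._low.shape)
        return actions, [{} for _ in range(n)]


policy = UniformRandomPolicySketch(low=[-1.0, -1.0], high=[1.0, 1.0])
actions, infos = policy.get_actions(np.zeros((5, 3)))
```

Such a policy is mainly useful for warm-up data collection or as a baseline, since it never conditions on the observation.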

reset(self, do_resets=None)[source]

Reset the state of the exploration.

Parameters

do_resets (List[bool] or numpy.ndarray or None) – Which vectorization states to reset.

get_action(self, observation)[source]

Get action from this policy for the input observation.

Parameters

observation (numpy.ndarray) – Observation from the environment.

Returns

Action sampled uniformly from the action space. List[dict]: Arbitrary policy state information (agent_info).

Return type

np.ndarray

get_actions(self, observations)[source]

Get actions from this policy given the input observations.

Parameters

observations (list) – Observations from the environment.

Returns

Actions sampled uniformly from the action space. List[dict]: Arbitrary policy state information (agent_info).

Return type

np.ndarray

property name(self)

Name of policy.

Returns

Name of policy

Return type

str

property env_spec(self)

Policy environment specification.

Returns

Environment specification.

Return type

garage.EnvSpec

property observation_space(self)

Observation space.

Returns

The observation space of the environment.

Return type

akro.Space

property action_space(self)

Action space.

Returns

The action space of the environment.

Return type

akro.Space