garage.np.policies

Policies which use NumPy as a numerical backend.
class FixedPolicy(env_spec, scripted_actions, agent_infos=None)

    Bases: garage.np.policies.policy.Policy

    Policy that performs a fixed sequence of actions.

    env_spec
        Policy environment specification.
        Returns: Environment specification.
        Return type: garage.EnvSpec
    observation_space
        Observation space.
        Returns: The observation space of the environment.
        Return type: akro.Space
    action_space
        Action space.
        Returns: The action space of the environment.
        Return type: akro.Space
    reset(self, do_resets=None)
        Reset policy.
        Parameters: do_resets (None or list[bool]) – Vectorized policy states to reset.
        Raises: ValueError – If do_resets has length greater than 1.
    get_param_values(self)
        Return policy params (there are none).
        Returns: Empty tuple.
        Return type: tuple
    get_action(self, observation)
        Get next action.
        Parameters: observation (np.ndarray) – Ignored.
        Raises: ValueError – If the policy is currently vectorized (reset was called with more than one done value).
        Returns: The action and agent_info for this time step.
        Return type: tuple[np.ndarray, dict[str, np.ndarray]]
    get_actions(self, observations)
        Get next actions.
        Parameters: observations (np.ndarray) – Ignored.
        Raises: ValueError – If observations has length greater than 1.
        Returns: The actions and agent_info for this time step.
        Return type: tuple[np.ndarray, dict[str, np.ndarray]]
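FixedPolicy replays its scripted actions in order and ignores observations entirely. The following is a minimal standalone sketch of that behavior under the documented contract, not the garage implementation itself; the class name FixedPolicySketch is made up for illustration:

```python
import numpy as np


class FixedPolicySketch:
    """Standalone sketch of FixedPolicy's documented behavior (not garage)."""

    def __init__(self, scripted_actions, agent_infos=None):
        self._scripted_actions = scripted_actions
        self._agent_infos = agent_infos or [{}] * len(scripted_actions)
        self._index = 0  # position in the fixed action sequence

    def reset(self, do_resets=None):
        # The docs say a vectorized reset (length > 1) raises ValueError.
        if do_resets is None:
            do_resets = [True]
        if len(do_resets) > 1:
            raise ValueError('FixedPolicy does not support vectorization.')
        self._index = 0

    def get_action(self, observation):
        # The observation is ignored; the next scripted action is returned.
        action = self._scripted_actions[self._index]
        info = self._agent_infos[self._index]
        self._index += 1
        return action, info


policy = FixedPolicySketch([np.array([0.1]), np.array([0.2])])
policy.reset()
action, info = policy.get_action(np.zeros(3))  # observation is ignored
```

Calling get_action again would return the next scripted action, and reset rewinds the script to the beginning.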
class Policy

    Bases: abc.ABC

    Base class for policies based on NumPy.
    env_spec
        Policy environment specification.
        Returns: Environment specification.
        Return type: garage.EnvSpec
    observation_space
        Observation space.
        Returns: The observation space of the environment.
        Return type: akro.Space
    action_space
        Action space.
        Returns: The action space of the environment.
        Return type: akro.Space
    get_action(self, observation)
        Get an action sampled from the policy.
        Parameters: observation (np.ndarray) – Observation from the environment.
        Returns: Action and extra agent info.
        Return type: tuple[np.ndarray, dict[str, np.ndarray]]
    get_actions(self, observations)
        Get actions given observations.
        Parameters: observations (np.ndarray) – Observations from the environment.
        Returns: Actions and extra agent infos.
        Return type: tuple[np.ndarray, dict[str, np.ndarray]]
    reset(self, do_resets=None)
        Reset the policy.
        This has an effect only for recurrent policies. do_resets is a boolean array indicating which internal states to reset; its length should equal the length of the inputs, i.e. the batch size.
        Parameters: do_resets (numpy.ndarray) – Boolean array indicating which states to reset.
class ScriptedPolicy(scripted_actions, agent_env_infos=None)

    Bases: garage.np.policies.policy.Policy

    Simulates a garage policy object.

    env_spec
        Policy environment specification.
        Returns: Environment specification.
        Return type: garage.EnvSpec
    observation_space
        Observation space.
        Returns: The observation space of the environment.
        Return type: akro.Space
    action_space
        Action space.
        Returns: The action space of the environment.
        Return type: akro.Space
    set_param_values(self, params)
        Set param values.
        Parameters: params (np.ndarray) – A numpy array of parameter values.
    get_param_values(self)
        Get param values.
        Returns: Current parameter values.
        Return type: np.ndarray
    get_action(self, observation)
        Return a single action.
        Parameters: observation (numpy.ndarray) – Observation.
        Returns:
            int – Action given the input observation.
            dict[dict] – Agent infos indexed by observation.
    get_actions(self, observations)
        Return multiple actions.
        Parameters: observations (numpy.ndarray) – Observations.
        Returns:
            list[int] – Actions given the input observations.
            dict[dict] – Agent infos indexed by observation.
    reset(self, do_resets=None)
        Reset the policy.
        This has an effect only for recurrent policies. do_resets is a boolean array indicating which internal states to reset; its length should equal the length of the inputs, i.e. the batch size.
        Parameters: do_resets (numpy.ndarray) – Boolean array indicating which states to reset.
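Unlike FixedPolicy, which steps through its script in order, ScriptedPolicy's documented contract looks actions up by observation. The following is a standalone sketch of that lookup behavior under the documented return types, not the garage implementation; ScriptedPolicySketch is a made-up name:

```python
class ScriptedPolicySketch:
    """Standalone sketch of ScriptedPolicy's lookup contract (not garage)."""

    def __init__(self, scripted_actions, agent_env_infos=None):
        # scripted_actions maps an (integer) observation to an action.
        self._scripted_actions = scripted_actions
        self._agent_env_infos = (
            agent_env_infos if agent_env_infos is not None else {})

    def get_action(self, observation):
        # The action is looked up by the observation, not sampled.
        return self._scripted_actions[observation], self._agent_env_infos

    def get_actions(self, observations):
        return ([self._scripted_actions[obs] for obs in observations],
                self._agent_env_infos)


policy = ScriptedPolicySketch({0: 3, 1: 4})
action, infos = policy.get_action(1)   # the action scripted for observation 1
actions, _ = policy.get_actions([0, 1])
```

This matches the documented return types: get_action yields a single int, get_actions a list[int], each paired with the agent-info mapping.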