garage.envs.normalized_env module

An environment wrapper that normalizes actions, observations, and rewards.

class NormalizedEnv(env, scale_reward=1.0, normalize_obs=False, normalize_reward=False, expected_action_scale=1.0, flatten_obs=True, obs_alpha=0.001, reward_alpha=0.001)[source]

Bases: gym.core.Wrapper

An environment wrapper for normalization.

This wrapper always normalizes actions, and optionally normalizes observations and rewards; a usage sketch follows the parameter list below.

Parameters:
  • env (garage.envs.GarageEnv) – An environment instance.
  • scale_reward (float) – Scale of environment reward.
  • normalize_obs (bool) – If True, normalize observation.
  • normalize_reward (bool) – If True, normalize reward. scale_reward is applied after normalization.
  • expected_action_scale (float) – Actions are assumed to lie in the range [-expected_action_scale, expected_action_scale] when they are normalized.
  • flatten_obs (bool) – Flatten observation if True.
  • obs_alpha (float) – Update rate of moving average when estimating the mean and variance of observations.
  • reward_alpha (float) – Update rate of moving average when estimating the mean and variance of rewards.
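A minimal construction sketch follows. The environment id 'Pendulum-v0' and the keyword values are illustrative assumptions, not defaults:

    import gym

    from garage.envs import GarageEnv, normalize

    # Wrap a continuous-control task. Actions in [-1, 1] are rescaled to
    # the task's true action bounds; observations and rewards are
    # standardized with running mean/variance estimates, and the reward
    # is then multiplied by scale_reward.
    env = normalize(GarageEnv(gym.make('Pendulum-v0')),
                    normalize_obs=True,
                    normalize_reward=True,
                    scale_reward=0.1)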
reset(**kwargs)[source]

Reset environment.

Parameters:**kwargs – Additional parameters for reset.
Returns:The initial observation of the environment, normalized if normalize_obs is True.
Return type:np.ndarray
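When normalize_obs is True, the wrapper standardizes observations using exponential moving averages of their mean and variance, updated at rate obs_alpha. The following is a standalone sketch of that update rule; the variable names, the initialization, and the 1e-8 epsilon are assumptions for illustration:

    import numpy as np

    obs_alpha = 0.001             # update rate, as in the obs_alpha parameter
    obs_dim = 3                   # hypothetical flattened observation size
    obs_mean = np.zeros(obs_dim)  # running mean estimate
    obs_var = np.ones(obs_dim)    # running variance estimate

    def normalize_obs(obs):
        global obs_mean, obs_var
        # Exponential moving averages: new data enters at rate obs_alpha.
        obs_mean = (1 - obs_alpha) * obs_mean + obs_alpha * obs
        obs_var = ((1 - obs_alpha) * obs_var
                   + obs_alpha * np.square(obs - obs_mean))
        # Standardize; the epsilon guards against division by zero.
        return (obs - obs_mean) / (np.sqrt(obs_var) + 1e-8)

Reward normalization follows the same pattern with reward_alpha, after which scale_reward is applied (see the normalize_reward and scale_reward parameters above).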
step(action)[source]

Step the environment with the given action and return the result.

Parameters:action (np.ndarray) – An action fed to the environment.
Returns:
  • observation (np.ndarray): The observation of the environment.
  • reward (float): The reward acquired at this time step.
  • done (boolean): Whether the episode terminated at this time step.
  • infos (dict): Environment-dependent additional information.
Return type:tuple
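A short interaction sketch, continuing from the construction example above. Sampling uniformly from [-1, 1] rather than from env.action_space reflects the expected_action_scale convention; the horizon of 100 steps is arbitrary:

    import numpy as np

    obs = env.reset()
    for _ in range(100):
        # The wrapper expects actions in [-1, 1] (the default
        # expected_action_scale) and rescales them to the task's bounds.
        action = np.random.uniform(-1.0, 1.0, size=env.action_space.shape)
        obs, reward, done, info = env.step(action)
        if done:
            obs = env.reset()

Because the action rescaling happens inside the wrapper, a policy can emit actions in a fixed [-1, 1] range regardless of the underlying task's true action bounds.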
normalize

alias of garage.envs.normalized_env.NormalizedEnv