An environment wrapper that normalizes action, observation and reward.
NormalizedEnv(env, scale_reward=1.0, normalize_obs=False, normalize_reward=False, expected_action_scale=1.0, flatten_obs=True, obs_alpha=0.001, reward_alpha=0.001)
An environment wrapper for normalization.
This wrapper normalizes action, and optionally observation and reward.
env (Environment) – An environment instance.
scale_reward (float) – Scale of environment reward.
normalize_obs (bool) – If True, normalize observation.
normalize_reward (bool) – If True, normalize reward. scale_reward is applied after normalization.
expected_action_scale (float) – The action is assumed to fall in the range [-expected_action_scale, expected_action_scale] when it is normalized.
flatten_obs (bool) – Flatten observation if True.
obs_alpha (float) – Update rate of moving average when estimating the mean and variance of observations.
reward_alpha (float) – Update rate of moving average when estimating the mean and variance of rewards.
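For intuition, the rescaling that expected_action_scale controls can be sketched as follows. This is an illustrative stand-alone helper, not garage's internal code; the function name and the box bounds lower/upper are assumptions.

```python
import numpy as np

def denormalize_action(action, lower, upper, expected_action_scale=1.0):
    """Map an action in [-expected_action_scale, expected_action_scale]
    onto the wrapped environment's box bounds [lower, upper].

    Hypothetical helper for illustration; garage performs an equivalent
    rescaling inside the wrapper.
    """
    lb = np.asarray(lower, dtype=float)
    ub = np.asarray(upper, dtype=float)
    # Shift the normalized action from [-scale, scale] into [0, 1].
    frac = (np.asarray(action, dtype=float) / expected_action_scale + 1.0) / 2.0
    scaled = lb + frac * (ub - lb)
    # Clip in case the policy emitted a value slightly out of range.
    return np.clip(scaled, lb, ub)
```

For example, with bounds [-2, 2], a normalized action of 1.0 maps to the upper bound 2.0, and 0.0 maps to the midpoint 0.0.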
Call reset on wrapped env.
- numpy.ndarray: The first observation, conforming to observation_space.
- dict: The episode-level information. Note that this is not part of env_info provided in step(). It contains information about the entire episode, which could be needed to determine the first action (e.g. in the case of goal-conditioned tasks or MTRL).
Call step on wrapped env.
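When normalize_obs or normalize_reward is enabled, each step updates exponential moving averages of the mean and variance (governed by obs_alpha / reward_alpha) and standardizes values with them. A minimal sketch of such an update; the class name and numerical details are assumptions, not garage's exact implementation:

```python
import numpy as np

class RunningNormalizer:
    """Illustrative running mean/variance estimator, updated with an
    exponential moving average as controlled by an alpha rate."""

    def __init__(self, shape, alpha=0.001):
        self.alpha = alpha
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)

    def update(self, x):
        # Exponential moving averages of mean and variance.
        self.mean = (1.0 - self.alpha) * self.mean + self.alpha * x
        self.var = (1.0 - self.alpha) * self.var \
            + self.alpha * np.square(x - self.mean)

    def normalize(self, x):
        # Standardize with current estimates; epsilon avoids division by zero.
        return (x - self.mean) / (np.sqrt(self.var) + 1e-8)
```

A small alpha (the default 0.001) makes the statistics change slowly, so early observations do not dominate the estimates.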
akro.Space: The action space specification.
akro.Space: The observation space specification.
EnvSpec: The environment specification.
list: A list of strings representing the supported render modes.
Render the wrapped environment.
Creates a visualization of the wrapped environment.
Close the wrapped env.
garage.Environment: The inner environment.