An environment wrapper that normalizes action, observation and reward.
- class NormalizedEnv(env, scale_reward=1.0, normalize_obs=False, normalize_reward=False, expected_action_scale=1.0, flatten_obs=True, obs_alpha=0.001, reward_alpha=0.001)¶
An environment wrapper for normalization.
This wrapper normalizes action, and optionally observation and reward.
env (Environment) – An environment instance.
scale_reward (float) – Scale of environment reward.
normalize_obs (bool) – If True, normalize observation.
normalize_reward (bool) – If True, normalize reward. scale_reward is applied after normalization.
expected_action_scale (float) – Assumed range of actions, [-expected_action_scale, expected_action_scale], used when normalizing them.
flatten_obs (bool) – Flatten observation if True.
obs_alpha (float) – Update rate of the moving average used to estimate the mean and variance of observations.
reward_alpha (float) – Update rate of the moving average used to estimate the mean and variance of rewards.
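The `obs_alpha` and `reward_alpha` parameters control an exponential moving average of the running mean and variance. A minimal sketch of that scheme is below; `RunningNormalizer` is an illustrative name, not the library's implementation, and the small epsilon in the denominator is an assumption to avoid division by zero.

```python
import numpy as np


class RunningNormalizer:
    """Illustrative exponential-moving-average normalizer (not library code)."""

    def __init__(self, alpha=0.001):
        self.alpha = alpha
        self.mean = 0.0
        self.var = 1.0

    def update(self, x):
        # Move the running mean a small step toward the new sample,
        # then update the running variance against the new mean.
        self.alpha_step = self.alpha
        self.mean = (1.0 - self.alpha) * self.mean + self.alpha * x
        self.var = (1.0 - self.alpha) * self.var + self.alpha * (x - self.mean) ** 2

    def normalize(self, x):
        # Standardize using the current running statistics.
        return (x - self.mean) / (np.sqrt(self.var) + 1e-8)
```

With a small `alpha` such as the default 0.001, the statistics drift slowly, so early observations dominate only briefly and the estimates stabilize over many steps.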
- property action_space¶
The action space specification.
- property observation_space¶
The observation space specification.
- property unwrapped¶
The inner environment.
Call reset on wrapped env.
- The first observation, conforming to observation_space.
- dict: The episode-level information.
Note that this is not part of env_info provided in step(). It contains information on the entire episode, which could be needed to determine the first action (e.g. in the case of goal-conditioned tasks or MTRL).
Call step on wrapped env.
action (np.ndarray) – An action provided by the agent.
The environment step resulting from the action.
RuntimeError – if step() is called after the environment has been constructed and reset() has not been called.
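Before the wrapped env's step() is invoked, an action normalization wrapper of this kind typically rescales the agent's action from the assumed [-expected_action_scale, expected_action_scale] range into the environment's own action bounds. The helper below is a hedged sketch of that mapping; `denormalize_action` and its parameters are illustrative names, not the library's API.

```python
import numpy as np


def denormalize_action(action, low, high, expected_action_scale=1.0):
    """Map an action assumed to lie in [-scale, scale] to the env's
    [low, high] bounds (illustrative sketch, not library code)."""
    # Rescale from the symmetric expected range to a [0, 1] fraction...
    fraction = (action / expected_action_scale + 1.0) / 2.0
    # ...then to the environment's bounds, clipping for safety.
    return np.clip(low + fraction * (high - low), low, high)
```

For example, an agent action of 0.0 maps to the midpoint of the bounds, and +1.0 (with the default scale) maps to the upper bound.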
Render the wrapped environment.
Creates a visualization of the wrapped environment.
Close the wrapped env.