
mlagents_envs.envs.pettingzoo_env_factory

PettingZooEnvFactory Objects

class PettingZooEnvFactory()

env

 | env(seed: Optional[int] = None, **kwargs: Union[List, int, bool, None]) -> UnityAECEnv

Creates the environment with env_id from Unity's default_registry and wraps it in a UnityToPettingZooWrapper.

Arguments:

  • seed: The seed for the action spaces of the agents.
  • kwargs: Any argument accepted by the UnityEnvironment class, except file_name
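The stand-in below is a sketch of the factory's call shape, not the real class: the actual PettingZooEnvFactory downloads and launches a Unity build from the default registry, which cannot run here. FakePettingZooEnvFactory, its dict return value, and the no_graphics keyword are illustrative assumptions; the one documented constraint it enforces is that file_name is reserved.

```python
from typing import Any, Dict, List, Optional, Union

# Hypothetical stand-in mirroring the factory's signature. The real
# PettingZooEnvFactory.env() launches a Unity build from the default
# registry and returns a UnityAECEnv.
class FakePettingZooEnvFactory:
    def __init__(self, env_id: str) -> None:
        self.env_id = env_id

    def env(
        self, seed: Optional[int] = None, **kwargs: Union[List, int, bool, None]
    ) -> Dict[str, Any]:
        # kwargs are forwarded to UnityEnvironment; file_name is reserved
        # because the binary comes from the default registry.
        if "file_name" in kwargs:
            raise ValueError("file_name is set by the registry and cannot be overridden")
        return {"env_id": self.env_id, "seed": seed, "kwargs": kwargs}

factory = FakePettingZooEnvFactory("StrikersVsGoalie")
cfg = factory.env(seed=42, no_graphics=True)
```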

mlagents_envs.envs.unity_aec_env

UnityAECEnv Objects

class UnityAECEnv(UnityPettingzooBaseEnv, AECEnv)

Unity AEC (PettingZoo) environment wrapper.

__init__

 | __init__(env: BaseEnv, seed: Optional[int] = None)

Initializes a Unity AEC environment wrapper.

Arguments:

  • env: The UnityEnvironment that is being wrapped.
  • seed: The seed for the action spaces of the agents.

step

 | step(action: Any) -> None

Sets the action of the active agent and gets the observation, reward, done, and info of the next agent.

Arguments:

  • action: The action for the active agent

observe

 | observe(agent_id)

Returns the observation the agent can currently make. last() calls this function.

last

 | last(observe=True)

Returns the observation, cumulative reward, done, and info for the current agent (specified by self.agent_selection).
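The step/last/agent_selection trio above follows PettingZoo's AEC interaction pattern. The sketch below exercises that loop against a minimal mock (MockAECEnv and its episode bookkeeping are assumptions, not the real wrapper, which needs a live Unity process); only the interface shape matches the documentation above.

```python
import random
from typing import Any, Dict, Optional, Tuple

# Minimal mock of the AEC interface (agent_selection, last, step).
# A real UnityAECEnv wraps a running UnityEnvironment instead.
class MockAECEnv:
    def __init__(self, agents, episode_len: int = 3) -> None:
        self.agents = list(agents)
        self._idx = 0
        self._steps = 0
        self._limit = episode_len * len(self.agents)
        self.agent_selection = self.agents[0]

    def last(self, observe: bool = True) -> Tuple[Optional[float], float, bool, Dict[str, Any]]:
        # Observation, cumulative reward, done, info for the current agent.
        obs = 0.0 if observe else None
        done = self._steps >= self._limit
        return obs, 0.0, done, {}

    def step(self, action: Any) -> None:
        # Consume the active agent's action and advance agent_selection.
        self._steps += 1
        self._idx = (self._idx + 1) % len(self.agents)
        self.agent_selection = self.agents[self._idx]

env = MockAECEnv(["agent_0", "agent_1"])
while True:
    obs, reward, done, info = env.last()
    if done:
        break
    env.step(random.random())  # substitute a policy's action here
```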

mlagents_envs.envs.unity_parallel_env

UnityParallelEnv Objects

class UnityParallelEnv(UnityPettingzooBaseEnv, ParallelEnv)

Unity Parallel (PettingZoo) environment wrapper.

__init__

 | __init__(env: BaseEnv, seed: Optional[int] = None)

Initializes a Unity Parallel environment wrapper.

Arguments:

  • env: The UnityEnvironment that is being wrapped.
  • seed: The seed for the action spaces of the agents.

reset

 | reset() -> Dict[str, Any]

Resets the environment.
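In PettingZoo's parallel API, reset() returns a dict of observations keyed by agent name, and every agent acts each step. The mock below is a sketch of that loop under that assumption (MockParallelEnv and its fixed episode length are hypothetical, not the real wrapper):

```python
from typing import Any, Dict, List, Tuple

# Mock of the parallel interface: reset() returns observations keyed by
# agent name; step() takes a dict of actions for all agents at once.
class MockParallelEnv:
    def __init__(self, agents: List[str], episode_len: int = 2) -> None:
        self.agents = list(agents)
        self._t = 0
        self._len = episode_len

    def reset(self) -> Dict[str, Any]:
        self._t = 0
        return {a: 0.0 for a in self.agents}

    def step(self, actions: Dict[str, Any]) -> Tuple[Dict, Dict, Dict, Dict]:
        self._t += 1
        done = self._t >= self._len
        observations = {a: float(self._t) for a in self.agents}
        rewards = {a: 0.0 for a in self.agents}
        dones = {a: done for a in self.agents}
        infos: Dict[str, Dict] = {a: {} for a in self.agents}
        return observations, rewards, dones, infos

env = MockParallelEnv(["agent_0", "agent_1"])
observations = env.reset()
while True:
    actions = {a: 0 for a in observations}  # one action per agent per step
    observations, rewards, dones, infos = env.step(actions)
    if all(dones.values()):
        break
```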

mlagents_envs.envs.unity_pettingzoo_base_env

UnityPettingzooBaseEnv Objects

class UnityPettingzooBaseEnv()

Unity PettingZoo base environment.

observation_spaces

 | @property
 | observation_spaces() -> Dict[str, spaces.Space]

Returns the observation spaces of all the agents.

observation_space

 | observation_space(agent: str) -> Optional[spaces.Space]

The observation space of the current agent.

action_spaces

 | @property
 | action_spaces() -> Dict[str, spaces.Space]

Returns the action spaces of all the agents.

action_space

 | action_space(agent: str) -> Optional[spaces.Space]

The action space of the current agent.

side_channel

 | @property
 | side_channel() -> Dict[str, Any]

The side channels of the environment. You can access the side channels of an environment with env.side_channel[<name-of-channel>].

reset

 | reset()

Resets the environment.

seed

 | seed(seed=None)

Reseeds the environment (making the resulting environment deterministic). reset() must be called after seed(), and before step().

render

 | render(mode="human")

NOT SUPPORTED.

Displays a rendered frame from the environment, if supported. Alternate render modes in the default environments are 'rgb_array' which returns a numpy array and is supported by all environments outside of classic, and 'ansi' which returns the strings printed (specific to classic environments).

close

 | close() -> None

Close the environment.