PettingZoo

PettingZoo is a Python library for conducting research in multi-agent reinforcement learning. It's akin to a multi-agent version of OpenAI's Gym library.

We model environments as Agent Environment Cycle (AEC) games in order to support all types of multi-agent RL environments under one API.

Our website with comprehensive documentation is pettingzoo.ml

Environment Types and Installation

PettingZoo includes the following sets of games:

  • Atari: multi-player Atari 2600 games, both cooperative and competitive
  • Butterfly: cooperative graphical games developed by us, requiring a high degree of coordination
  • Classic: classical games, including card games, board games, and more
  • MAgent: configurable environments with massive numbers of particle agents
  • MPE: a set of simple, non-graphical communication tasks
  • SISL: three cooperative environments

To install, use pip install pettingzoo

We support Python 3.6, 3.7 and 3.8, on Linux and macOS.

API

Using environments in PettingZoo is very similar to using Gym: you initialize an environment via:

from pettingzoo.butterfly import pistonball_v0
env = pistonball_v0.env()

Environments can be interacted with in a manner very similar to Gym:

observation = env.reset()
for agent in env.agent_iter():
    reward, done, info = env.last()   # reward, done, and info for the agent selected to act
    action = policy(observation)      # policy() is a user-supplied function mapping observations to actions
    observation = env.step(action)    # step returns the observation for the next agent to act
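
As a concrete illustration (a minimal sketch, not taken from the official docs), the same loop can be run end to end with random actions; this assumes env.action_spaces is a dict mapping agent names to Gym spaces, and omits any special handling of finished agents:

from pettingzoo.butterfly import pistonball_v0

env = pistonball_v0.env()
observation = env.reset()
for agent in env.agent_iter():
    reward, done, info = env.last()
    # Sample a random action in place of a learned policy.
    action = env.action_spaces[agent].sample()
    observation = env.step(action)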

For the complete API documentation, please see http://www.pettingzoo.ml/api

SuperSuit

SuperSuit is a library of all the commonly used wrappers in RL (frame stacking, observation normalization, etc.) for PettingZoo and Gym environments, with a clean API. We developed it in lieu of building wrappers into PettingZoo. It's available at https://github.com/PettingZoo-Team/SuperSuit
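
For illustration, here is a minimal sketch of wrapping a PettingZoo environment with a SuperSuit frame-stacking wrapper; the wrapper name and signature are assumed and may differ between SuperSuit versions, so check the SuperSuit README for the exact API:

import supersuit
from pettingzoo.butterfly import pistonball_v0

env = pistonball_v0.env()
# Stack the last 4 observations for each agent (assumed wrapper name).
env = supersuit.frame_stack_v1(env, 4)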

Release History

Version 1.0.1 (August 12, 2020):

Fixes to continuous mode in the pistonball and prison environments, along with a fix to a bad test that let the problems slip through. Versions bumped on both games.

Version 1.0.0 (August 5th, 2020):

This is the first official stable release of PettingZoo. Any changes to environments after this point will result in incrementing the environment version number. Beyond general maintenance, we currently plan to do three more things for PettingZoo: write a paper and put it on arXiv, add Shogi as a classic environment using python-shogi, and add "Colosseum", an online tool for benchmarking competitive environments.

Citation

To cite this project in publication, please use

@misc{pettingZoo2020,
  author = {Terry, Justin K and Black, Benjamin and Jayakumar, Mario and Hari, Ananth and Santos, Luis and Dieffendahl, Clemens and Williams, Niall and Ravi, Praveen and Lokesh, Yashas and Horsch, Caroline and Patel, Dipam},
  title = {Petting{Z}oo},
  year = {2020},
  publisher = {GitHub},
  note = {GitHub repository},
  howpublished = {\url{https://github.com/PettingZoo-Team/PettingZoo}}
}

OS Support

We support Linux and macOS, and conduct CI testing on both. We will accept PRs related to Windows, but do not officially support it. We're open to help with properly supporting Windows.

Reward Program

We have a sort of bug/documentation error bounty program, inspired by Donald Knuth's reward checks. People who make mergeable PRs that properly address meaningful problems in the code, or that make meaningful improvements to the documentation, can receive a negotiable check for one "hexadecimal dollar" ($2.56) mailed to them, or the same amount sent via PayPal. To redeem this, just send an email to justinkterry@gmail.com with your mailing address or PayPal address. We also pay out 32 cents for small fixes. This reward extends to libraries maintained by the PettingZoo team that PettingZoo depends on.
