Status: Archive (code is provided as-is, no updates expected)

RoboSumo

This repository contains a set of competitive multi-agent environments used in the paper Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments.

Installation

RoboSumo depends on numpy, gym, and mujoco_py>=1.5 (if you haven't used MuJoCo before, please refer to the mujoco_py installation guide). Running demos with pre-trained policies additionally requires tensorflow>=1.1.0 and click.

The requirements can be installed via pip as follows:

$ pip install -r requirements.txt
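
Alternatively, the dependencies listed above can be installed directly. The version pins below are a sketch based on the constraints mentioned in this README, not the exact contents of requirements.txt:

$ pip install numpy gym "mujoco_py>=1.5" "tensorflow>=1.1.0" click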

To install RoboSumo, clone the repository and run pip install:

$ git clone https://github.com/openai/robosumo
$ cd robosumo
$ pip install -e .
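
Once installed, you can sanity-check the environments with a short random-action rollout. The snippet below is a minimal sketch: it assumes that importing robosumo.envs registers Gym environment IDs such as RoboSumo-Ant-vs-Ant-v0 (the default used by demos/play.py) and that the action space and the values returned by step() are per-agent tuples.

import gym
import robosumo.envs  # assumed to register the RoboSumo-* environments on import

env = gym.make("RoboSumo-Ant-vs-Ant-v0")
obs = env.reset()

for _ in range(500):
    # Sample one random action per agent (assumes a gym.spaces.Tuple action space).
    actions = tuple(space.sample() for space in env.action_space.spaces)
    obs, rewards, dones, infos = env.step(actions)
    env.render()
    if any(dones):
        obs = env.reset()

env.close()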

Demos

You can run demos of the environments using the demos/play.py script:

$ python demos/play.py

The script allows you to select different opponents as well as different policy architectures and versions for the agents. For details, please refer to the help:

$ python demos/play.py --help

Usage: play.py [OPTIONS]

Options:
  --env TEXT                    Name of the environment.  [default: RoboSumo-Ant-vs-Ant-v0]
  --policy-names [mlp|lstm]...  Policy names.  [default: mlp, mlp]
  --param-versions INTEGER...   Policy parameter versions.  [default: 1, 1]
  --max_episodes INTEGER        Number of episodes.  [default: 20]
  --help                        Show this message and exit.
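
For example, the following command pits an LSTM policy against an MLP policy for five episodes (the space-separated values for the multi-valued option are a sketch inferred from the defaults shown above):

$ python demos/play.py --env RoboSumo-Ant-vs-Ant-v0 --policy-names lstm mlp --max_episodes 5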
