MARL-code-pytorch

Concise PyTorch implementations of MARL algorithms, including MAPPO, MADDPG, MATD3, QMIX, and VDN.

Requirements

python==3.7.9
numpy==1.19.4
pytorch==1.5.0
tensorboard==0.6.0
gym==0.10.5
Multi-Agent Particle-World Environment (MPE)
SMAC (StarCraft Multi-Agent Challenge)

Training results

1. MAPPO in MPE (discrete action space)

(training-curve figure)

2. MAPPO in StarCraft II (SMAC)

(training-curve figure)

3. QMIX and VDN in StarCraft II (SMAC)

(training-curve figure)

4. MADDPG and MATD3 in MPE (continuous action space)

(training-curve figure)

Some Details

To make it easy to switch between the discrete and continuous action space modes in the MPE environments, we made some small modifications to the MPE source code.

1. make_env.py

We add a boolean argument named 'discrete' to 'make_env.py'.
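As a minimal sketch of how such a flag can be threaded through a `make_env`-style factory (illustrative only; `DummyMultiAgentEnv` is a stand-in for the real `MultiAgentEnv`, and the real `make_env.py` also loads the scenario and builds the world):

```python
# Illustrative sketch only -- not the repo's actual make_env.py.
class DummyMultiAgentEnv:
    """Placeholder for multiagent.environment.MultiAgentEnv."""
    def __init__(self, scenario_name, discrete):
        self.scenario_name = scenario_name
        # In the real environment.py, the flag would select between
        # discrete and continuous action spaces; here we only record it.
        self.action_mode = "discrete" if discrete else "continuous"

def make_env(scenario_name, discrete=True):
    # The point shown here is just that 'discrete' is passed through
    # from the factory to the environment constructor.
    return DummyMultiAgentEnv(scenario_name, discrete=discrete)

env = make_env("simple_spread", discrete=True)
print(env.action_mode)  # discrete
```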

2. environment.py

We also add the 'discrete' argument to 'environment.py'.
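A hedged sketch of what the change inside the environment might look like (assumed shape, not the repo's exact code): the flag selects the per-agent action space. `Discrete` and `Box` below are simplified stand-ins for `gym.spaces.Discrete` and `gym.spaces.Box`.

```python
# Simplified stand-ins for gym.spaces classes (illustration only).
class Discrete:
    def __init__(self, n):
        self.n = n  # number of atomic actions

class Box:
    def __init__(self, low, high, shape):
        self.low, self.high, self.shape = low, high, shape

def build_action_space(discrete, dim_act=5):
    # discrete=True  -> an index into dim_act atomic actions
    # discrete=False -> a bounded real-valued action vector
    if discrete:
        return Discrete(dim_act)
    return Box(low=-1.0, high=1.0, shape=(dim_act,))

print(type(build_action_space(True)).__name__)   # Discrete
print(type(build_action_space(False)).__name__)  # Box
```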

3. How to create a MPE environment?

If you want to use the discrete action space mode, you can use 'env = make_env(scenario_name, discrete=True)'.
If you want to use the continuous action space mode, you can use 'env = make_env(scenario_name, discrete=False)'.
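The two modes also change what an agent's action looks like. The sketch below illustrates the assumed conventions (an integer index in discrete mode, a bounded real-valued vector in continuous mode); it is not taken from the repo's code.

```python
import random

def sample_action(discrete, dim_act=5):
    # discrete=True: one of dim_act atomic actions, as an integer index.
    if discrete:
        return random.randrange(dim_act)
    # discrete=False: a bounded continuous action, one component per
    # actuated dimension.
    return [random.uniform(-1.0, 1.0) for _ in range(dim_act)]

a_disc = sample_action(discrete=True)
a_cont = sample_action(discrete=False)
print(type(a_disc).__name__, len(a_cont))  # int 5
```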
