Using multi-agent deep Q-learning with LSTM cells (DRQN) to train multiple cognitive-radio users to learn to share a scarce resource (channels) equally without communicating

Deep-Reinforcement-Learning-for-Dynamic-Spectrum-Access

Dependencies

  1. python
  2. matplotlib
  3. tensorflow > 1.0
  4. numpy
  5. jupyter

We recommend installing with Anaconda.
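As a setup sketch, the dependencies above could be installed into a fresh Anaconda environment like this (the environment name and Python version are assumptions; the package names are the standard conda/PyPI ones):

```shell
conda create -n dsa python
conda activate dsa
pip install "tensorflow>1.0" matplotlib numpy jupyter
```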

To train the DQN, run in a terminal:

git clone https://github.com/shkrwnd/Deep-Reinforcement-Learning-for-Dynamic-Spectrum-Access.git
cd Deep-Reinforcement-Learning-for-Dynamic-Spectrum-Access
python train.py
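To illustrate the "deep Q-learning with LSTM cells" idea that `train.py` trains, here is a minimal NumPy sketch of a DRQN-style forward pass: an LSTM cell unrolled over a short observation history, followed by a linear layer that maps the final hidden state to one Q-value per action. All shapes, names, and sizes are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step. W: (input+hidden, 4*hidden), b: (4*hidden,)."""
    z = np.concatenate([x, h]) @ W + b
    H = h.size
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2 * H])     # forget gate
    g = np.tanh(z[2 * H:3 * H]) # candidate cell state
    o = sigmoid(z[3 * H:])      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy sizes: 2 channels -> 3 actions (stay idle, or transmit on channel 1/2).
obs_dim, hidden, n_actions = 6, 8, 3
W = rng.normal(scale=0.1, size=(obs_dim + hidden, 4 * hidden))
b = np.zeros(4 * hidden)
W_q = rng.normal(scale=0.1, size=(hidden, n_actions))

# Unroll the LSTM over a short history of observations, as a DRQN does
# to cope with partial observability (each user only sees its own feedback).
h, c = np.zeros(hidden), np.zeros(hidden)
for _ in range(5):
    obs = rng.normal(size=obs_dim)
    h, c = lstm_step(obs, h, c, W, b)

q_values = h @ W_q                 # one Q-value per action
action = int(np.argmax(q_values))  # greedy action selection
```

In training, each user would pick actions epsilon-greedily from these Q-values and update the weights from replayed experience; the sketch above only shows the recurrent Q-value computation.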

To understand the code, I have provided Jupyter notebooks:

  1. How to use environment.ipynb
  2. How to generate states.ipynb
  3. How_to_create_cluster.ipynb
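The environment the first two notebooks walk through follows the usual multichannel-access rule: each user either stays idle or transmits on one channel, and a transmission succeeds only if no other user picked the same channel. The function below is a hypothetical sketch of that reward rule; the name `step` and the action encoding are assumptions, not the repository's actual environment API.

```python
import numpy as np

def step(actions, n_channels):
    """actions[i] in {0 (idle), 1..n_channels}. Returns per-user rewards."""
    actions = np.asarray(actions)
    rewards = np.zeros(len(actions), dtype=int)
    for ch in range(1, n_channels + 1):
        users = np.flatnonzero(actions == ch)
        if len(users) == 1:       # sole user on the channel: success
            rewards[users[0]] = 1
        # 0 or >1 users on the channel: idle or collision, reward stays 0
    return rewards

# Three users, two channels: users 0 and 1 collide on channel 1, user 2 succeeds.
print(step([1, 1, 2], n_channels=2))  # -> [0 0 1]
```

Sharing the channels "equally without communication" then amounts to the users learning, from rewards alone, to spread themselves across channels so collisions vanish.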

To run the notebooks, run in a terminal:

jupyter notebook

Your default browser will open; open an .ipynb file and run the cells one by one.

This work is inspired by the paper:

O. Naparstek and K. Cohen, "Deep multi-user reinforcement learning for dynamic spectrum access in multichannel wireless networks," in Proc. of the IEEE Global Communications Conference (GLOBECOM), Dec. 2017.
