This is a TensorFlow implementation of the paper Homogeneous Learning: Self-Attention Decentralized Deep Learning (IEEE Access, 2022).

Table of Contents

General information
Setup instructions
Running the systems
Making changes
Citation
Further reading

General information

Homogeneous Learning (HL) is a decentralized deep learning framework inspired by the Global Workspace Theory, aimed at fast learning of novel tasks by leveraging the knowledge of many expert models. Unlike a conventional attention mechanism, we leverage reinforcement learning (RL) to generate the meta agent's policy from observations of its inner state and the surrounding environment's states, so that the system can quickly adapt to a given task. This is a preliminary study of how the human brain might learn new things very quickly by drawing on many models of the world.
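
For intuition, here is a minimal sketch (with hypothetical names and a simplified state encoding, not the repository's actual API) of how such a meta agent could form its observation from the local model and its peers' models, and then pick the next node to act:

import numpy as np

def build_state(own_weights, peer_weights_list):
    # Concatenate flattened summaries of the local model and the peer models
    # into one observation vector (a simplified, hypothetical encoding).
    own = np.concatenate([w.ravel() for w in own_weights])
    peers = [np.concatenate([w.ravel() for w in ws]) for ws in peer_weights_list]
    return np.concatenate([own] + peers)

def select_next_node(q_values, epsilon=0.1):
    # Epsilon-greedy choice of which node should train/communicate next.
    if np.random.rand() < epsilon:
        return int(np.random.randint(len(q_values)))
    return int(np.argmax(q_values))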

Setup instructions

This is a quick guide to getting started with the source code.

Dependencies

You will need Python 3 and TensorFlow 2 to run the system.

To upgrade pip to the latest version, use:

python -m pip install --upgrade pip

To install the remaining module and library dependencies, use:

pip install -r requirements.txt

Running the systems

HL has two components: the decentralized learning system in "environment.py" and the DQN-based RL agent in "node.py". More detailed information can be found in Section 3.3 of the Homogeneous Learning paper.

"environment.py" includes the decentralized learning algorithm, which allows the systems to envolve based on the decisions made by RL agents.

"node.py" includes the reinforcement learning algorithm for learning an optimized communication policy based on observations of model parameters and the correlated rewards.

The HL system can be run from the terminal by typing:

python main.py

Note that "main.py" runs a total of 120 episodes in which the agent learns how to train a local foundation model to reach a desired goal in as few steps as possible while keeping communication cost low; each episode comprises a complete run of the decentralized learning procedure.
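
Conceptually, the episode loop looks roughly like the following sketch (the interface names and step budget are illustrative assumptions; the real main.py differs in its details):

def run_training(env, agent, episodes=120, max_steps=50):
    # `env` is assumed to wrap one decentralized training run (reset/step),
    # and `agent` a DQN-style interface (act/remember/train); both are
    # hypothetical stand-ins, not the repository's actual API.
    for episode in range(episodes):
        state = env.reset()                        # start a fresh decentralized run
        for step in range(max_steps):
            action = agent.act(state)              # choose the next node to train/communicate
            next_state, reward, done = env.step(action)
            agent.remember(state, action, reward, next_state, done)
            agent.train()                          # DQN update from replay memory
            state = next_state
            if done:                               # training goal reached in this episode
                break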

Making changes

If you want to make changes to the source, such as the total number of episodes or the training goal, refer to Sections 4.1, 4.2.1, and A.2 of the paper for more information on how these components interact.
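
For example, the kinds of knobs you would typically adjust look like the following (variable names and values here are purely illustrative; check main.py and environment.py for the actual definitions):

EPISODES = 120        # total number of RL episodes
GOAL_ACCURACY = 0.90  # training goal that ends an episode (illustrative value)
NUM_NODES = 10        # number of participating nodes (illustrative value)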

Citation

If this repository is helpful for your research, or you want to refer to the results provided in this work, you can cite it with the following BibTeX entry:

@article{sun2022homolearn,
  author    = {Yuwei Sun and
               Hideya Ochiai},
  title     = {Homogeneous Learning: Self-Attention Decentralized Deep Learning},
  journal   = {IEEE Access},
  year      = {2022}
}

Further reading

Global Workspace Theory

Decentralized ML
