
Caching of previous_state incompatible with Multi-Agent #80

Open
TibiGG opened this issue May 28, 2022 · 1 comment

TibiGG commented May 28, 2022

While digging through the code, I discovered that an attribute called previous_state of the DQN algorithm (and possibly some others) is being cached in the act() and action_distribution() methods of the class.

From the little digging that I did, it seems to be related to the side panel of the rendering, which showcases extra information about the attention heads of the controlled vehicles.

However, when there is more than one controlled vehicle, previous_state gets reassigned n+1 times during each act() call, where n is the number of vehicles: once as the tuple of all agents' observations, and once per individual agent's observation, so it ends up holding only the observation of the last controlled vehicle (a minimal illustration follows the snippet below).

Snippet from rl_agents/agents/deep_q_network/abstract.py:

    def act(self, state, step_exploration_time=True):
        """
            Act according to the state-action value model and an exploration policy
        :param state: current state
        :param step_exploration_time: step the exploration schedule
        :return: an action
        """
        self.previous_state = state    #<==========HERE=============
        if step_exploration_time:
            self.exploration_policy.step_time()
        # Handle multi-agent observations
        # TODO: it would be more efficient to forward a batch of states
        if isinstance(state, tuple):
            return tuple(self.act(agent_state, step_exploration_time=False) for agent_state in state)

        # Single-agent setting
        values = self.get_state_action_values(state)
        self.exploration_policy.update(values)
        return self.exploration_policy.sample()
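
To make the overwriting concrete, here is a minimal, self-contained sketch that mimics only the caching pattern of the method above. FakeAgent and its placeholder action are made up for illustration and are not part of rl_agents:

    class FakeAgent:
        """Stand-in reproducing only the previous_state caching pattern of act()."""
        def __init__(self):
            self.previous_state = None

        def act(self, state):
            self.previous_state = state  # assigned on every call, including the per-agent recursive ones
            if isinstance(state, tuple):
                return tuple(self.act(agent_state) for agent_state in state)
            return 0  # placeholder action

    agent = FakeAgent()
    agent.act(("obs_0", "obs_1", "obs_2"))
    print(agent.previous_state)  # prints 'obs_2': only the last controlled vehicle's observation survives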

It does not seem like the most pressing issue, but I am putting it here in case anyone has a good idea on how to deal with it, or can give a clearer explanation of why this variable is important, since I only gave one example of its use.

Thanks!

eleurent (Owner) commented

Yes, everything you said is absolutely correct

  • previous_state is currently only used to render information about the agent's decision-making process. In particular, when we need access to internal information rather than the mere output Q-values / action probs, e.g. the attention scores. It's then easier to forward the model again than to store and retrieve these internal outputs
  • we typically expect a single state, so as to render the decision of a single agent
  • when I introduced a multi-agent mode, I decided to keep rendering a single agent for simplicity and readability
  • so the act() method is called iteratively for all agents, and only the last one is used for rendering (one possible adjustment is sketched below)
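
One possible way to handle this, sketched under the assumption that rendering the last controlled vehicle should be preserved, is to cache previous_state only once per top-level call. The extra cache_state keyword below is hypothetical and not part of the current API:

    def act(self, state, step_exploration_time=True, cache_state=True):
        # cache_state is a hypothetical flag: only the top-level call stores the state
        # used for rendering, so recursive per-agent calls no longer overwrite it.
        if cache_state:
            # Keep the last agent's observation to preserve the current rendering behaviour.
            self.previous_state = state[-1] if isinstance(state, tuple) else state
        if step_exploration_time:
            self.exploration_policy.step_time()
        # Handle multi-agent observations
        if isinstance(state, tuple):
            return tuple(self.act(agent_state, step_exploration_time=False, cache_state=False)
                         for agent_state in state)

        # Single-agent setting
        values = self.get_state_action_values(state)
        self.exploration_policy.update(values)
        return self.exploration_policy.sample()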
