tictactoe compete() plays 1000 almost identical games #145
Hi there, thanks for providing this great RL resource!
I have a comment / suggestion for the tictactoe.py code:
The tictactoe.py code uses a compete() function to test whether the AI players are sufficiently well trained. If they play well enough, every game should end in a tie.
With the default settings in the code, all 1000 games end up in a tie.
However, this is not very informative about whether the AI has actually learned to play the game well.
Why? Because epsilon is zero for both players, both follow the learned Q-table greedily and therefore make identical choices in every state where one move has a dominant Q-value. This is the case for the first six turns. The only variation comes after six turns, when three moves share the same Q-value, so one of them is chosen at random.
I think an improvement would be to let one player follow the Q-table greedily while the other selects moves at random.
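To illustrate the suggestion, here is a minimal, self-contained sketch of such an evaluation. It does not use the repo's actual Player/Judger classes; the board encoding, the `q_table` lookup format (a dict keyed by (state, move, symbol) with a hypothetical default value of 0.5), and the function names `greedy_move` and `compete_vs_random` are all illustrative assumptions, not the repository's API.

```python
import random

# The eight winning lines of a 3x3 board, indexed 0..8 row-major.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return +1 or -1 if that symbol has three in a row, else 0.
    for a, b, c in LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def legal_moves(board):
    return [i for i in range(9) if board[i] == 0]

def greedy_move(board, symbol, q_table):
    # Pick the move with the highest learned value; break ties randomly.
    # The (state, move, symbol) key and 0.5 default are assumptions.
    moves = legal_moves(board)
    values = [(q_table.get((tuple(board), m, symbol), 0.5), m) for m in moves]
    best = max(v for v, _ in values)
    return random.choice([m for v, m in values if v == best])

def compete_vs_random(q_table, turns=1000):
    # Greedy player is +1 and moves first; the opponent moves at random.
    results = {1: 0, -1: 0, 0: 0}  # greedy wins, random wins, ties
    for _ in range(turns):
        board = [0] * 9
        player = 1
        while legal_moves(board) and winner(board) == 0:
            if player == 1:
                move = greedy_move(board, player, q_table)
            else:
                move = random.choice(legal_moves(board))
            board[move] = player
            player = -player
        results[winner(board)] += 1
    return results
```

Against a random opponent the tie rate is no longer trivially 100%, so the win/loss/tie counts give a more informative signal of how well the Q-table generalizes across the many game trajectories a random player can produce.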
Regards,
Gertjan