
Unstable Results Over 500 Round Experiments #4

Open
AndrewMerrow opened this issue Apr 22, 2024 · 3 comments

Comments

@AndrewMerrow

I have tried running tests on the fedemnist dataset with the default parameters from the runner.sh file. In my 500-round tests, the model's accuracy starts to degrade after approximately round 100.

[Figure: UTD 500 round graph]

I have run experiments in two separate environments and have tried tweaking some parameters, but the results all show the same issue. Here are the library versions I am using:

NVIDIA PyTorch Container version 22.12
PyTorch version 1.14.0+410ce96
Python3 version 3.8.10

@TinfoilHat0
Owner

Hey Andrew - the plot makes me think this is a learning rate problem. Are you decaying the learning rate?

@AndrewMerrow
Author

I have not decayed the learning rate. Here are the values I have used:
server_lr: 1
client_lr: 0.1
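
(For reference, the per-round decay TinfoilHat0 suggests could look like the sketch below. The function and parameter names, and the decay factor of 0.99, are illustrative assumptions, not values from runner.sh or the repository.)

```python
# Hypothetical sketch of exponential learning-rate decay per federated round.
# round_lr, initial_lr, decay_rate, and 0.99 are illustrative choices.
def round_lr(initial_lr: float, decay_rate: float, round_idx: int) -> float:
    """Return the client learning rate to use at a given training round."""
    return initial_lr * (decay_rate ** round_idx)

# Starting from client_lr = 0.1 with a 0.99 per-round decay, the rate shrinks
# gradually, which often stabilizes late-round training.
lrs = [round_lr(0.1, 0.99, r) for r in (0, 100, 500)]
```

With these assumed numbers, the client rate at round 100 would be roughly a third of the initial rate, so later rounds take smaller, less destabilizing steps.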

@AndrewMerrow
Author

We used all the parameters straight from the provided runner.sh file.
