Unstable Results Over 500 Round Experiments #4
I have tried running tests on the fedemnist dataset with the default parameters from the runner.sh file. In my 500-round tests, the model's accuracy starts to degrade after approximately round 100.

I have run experiments in two separate environments and have tried tweaking some parameters, but the results I am getting all show the same issue. Here are the library versions I am using:

NVIDIA PyTorch Container version 22.12
PyTorch version 1.14.0+410ce96
Python3 version 3.8.10

Comments

Hey Andrew - the plot makes me think this is a learning rate problem. Are you decaying the learning rate?

I have not decayed the learning rate. Here are the values I have used:

We used all the parameters straight from the provided runner.sh file.
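For anyone hitting the same instability: the decay suggested above can be applied per communication round. A minimal sketch, assuming a simple exponential schedule (the function name, signature, and the 0.998 decay rate are illustrative, not taken from runner.sh):

```python
def decayed_lr(initial_lr, round_num, decay_rate=0.998):
    """Exponentially decay the client learning rate each federated round.

    initial_lr: starting client learning rate (e.g. the runner.sh default)
    round_num:  current communication round (0-indexed)
    decay_rate: multiplicative decay per round (hypothetical value)
    """
    return initial_lr * (decay_rate ** round_num)

# With an initial LR of 0.1, by round 500 the LR has shrunk to roughly
# a third of its starting value, so late rounds make smaller updates
# and are less likely to undo earlier progress.
lr_start = decayed_lr(0.1, 0)
lr_late = decayed_lr(0.1, 500)
```

Whether this schedule matches the paper's setup is an assumption; step-wise decay (multiplying the LR by a fixed factor every N rounds) is an equally common choice in federated learning codebases.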