While Reproducing the results #108
Hello @AubreyCH, I am currently trying to understand the process of training LightGlue, but I ran into a problem at the fine-tuning stage because I don't have enough storage space for the downloaded MegaDepth dataset. Could you briefly describe the structure and content of the MegaDepth dataset you used? I would greatly appreciate your feedback.
Hi! Thank you for your excellent work!
I've been trying to reproduce the results reported in the paper recently. Here's what I got:
Using two RTX 4090s and following the official config, these are the results I got from pretraining on the homography dataset:
Fine-tuning on MegaDepth
Then I followed the settings described in the paper: an lr of 1e-5, decayed by 0.8 after 10 epochs, and got the checkpoint_best with loss:
Epoch 48: New best val: loss/total=0.4931303240458171
The test results are as follows:
I also tried the settings in the official config: an lr of 1e-4, decayed by 0.95 after 30 epochs. This is what I got:
Epoch 49: New best val: loss/total=0.3537058917681376
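To make sure we are comparing the same schedules, here is a minimal sketch of how I interpret the two settings, assuming a multiplicative exponential decay applied once per epoch after the decay start epoch (the helper name `lr_at_epoch` is mine, not from the codebase):

```python
def lr_at_epoch(base_lr: float, gamma: float, start_epoch: int, epoch: int) -> float:
    """Return the learning rate at a given epoch under multiplicative decay.

    The lr stays at base_lr until start_epoch, then is multiplied by gamma
    once per subsequent epoch: base_lr * gamma ** (epoch - start_epoch).
    """
    if epoch < start_epoch:
        return base_lr
    return base_lr * gamma ** (epoch - start_epoch)


# Paper setting: lr 1e-5, decayed by 0.8 after 10 epochs.
paper = [lr_at_epoch(1e-5, 0.8, 10, e) for e in range(50)]
# Official config: lr 1e-4, decayed by 0.95 after 30 epochs.
official = [lr_at_epoch(1e-4, 0.95, 30, e) for e in range(50)]

print(f"paper schedule    @ epoch 48: {paper[48]:.2e}")
print(f"official schedule @ epoch 49: {official[49]:.2e}")
```

One thing this makes visible: with gamma=0.8 the paper schedule shrinks the lr by several orders of magnitude by epoch ~50, while the official gamma=0.95 schedule stays within the same order of magnitude, so the two runs end in quite different effective-lr regimes.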
I've also tried some other possible settings and still couldn't reach the results reported in the paper.
Could you please give more details on how you fine-tuned the model on the MegaDepth dataset? Or do you have any other suggestions for improving the performance?