
Both tf1.14 and tf2.0 version hang during training #2

Open
yuanli12139 opened this issue Dec 3, 2019 · 3 comments

@yuanli12139

Hi @610265158, I tried to train FaceBoxes detectors using your implementation. However, both tf1.14 and tf2.0 version hang after training started several hundred iterations. No error messages but losses stopped updating. Do you have any idea what might be the cause? Thank you so much!
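
A minimal sketch of one way to see where a hung run is stuck, using Python's built-in faulthandler module (the signal choice and the timeout below are arbitrary examples, not part of this repository):

```python
import faulthandler
import signal

# Print the Python stack of every thread to stderr when the process
# receives SIGUSR1, so a hung run can be inspected with `kill -USR1 <pid>`
# without killing it.
faulthandler.register(signal.SIGUSR1)

# Optional watchdog while debugging: also dump all tracebacks every 600 seconds.
faulthandler.dump_traceback_later(600, repeat=True)
```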

@610265158
Owner

It is probably a problem in the data provider; I guess some error happened there.
Which dataset do you use?

@yuanli12139
Author

I am using Wider Face, although I am training 1-channel grayscale models.

@610265158
Owner

610265158 commented Dec 5, 2019

Do you have the same problem with the raw 3-channel data?
You could catch exceptions in the data provider to find out the problem.
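
A minimal sketch of that suggestion, assuming a generic sample-reading function; `read_one_sample` and `annotations` are placeholders, not names from this repository:

```python
import logging
import traceback

def safe_read(read_one_sample, annotation):
    """Call the provider's sample reader, logging failures instead of
    letting a worker die silently (which can stall the whole pipeline)."""
    try:
        return read_one_sample(annotation)
    except Exception:
        logging.error("data provider failed on %s\n%s",
                      annotation, traceback.format_exc())
        return None

def iterate_samples(read_one_sample, annotations):
    for annotation in annotations:
        sample = safe_read(read_one_sample, annotation)
        if sample is None:
            continue  # skip broken samples rather than blocking the queue
        yield sample
```

Logging the failing annotation usually points directly at the bad image or label, for example a grayscale conversion that produces an unexpected shape.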
