Huge error on prediction (NER) #81
Hi @rodmandi, how do you get the 0.38 score? It could be that your model is not actually reloading the weights from disk and is instead being re-initialized to random values.
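One quick way to check for this failure mode, before involving TensorFlow at all, is to verify that the `model_dir` used for prediction actually contains checkpoint files. This is a minimal sketch with standard-library calls only; the `results/model` path is taken from the training log below, and `checkpoint_files` is a hypothetical helper, not part of the repo:

```python
# Sanity check: an estimator pointed at an empty or wrong model_dir will
# silently initialize fresh random weights instead of restoring training.
import glob
import os

def checkpoint_files(model_dir):
    """Return the checkpoint prefixes (e.g. 'model.ckpt-9534') found in model_dir."""
    prefixes = set()
    for path in glob.glob(os.path.join(model_dir, "model.ckpt-*.index")):
        prefixes.add(os.path.basename(path)[: -len(".index")])
    return sorted(prefixes)

model_dir = "results/model"  # must be the same directory used for training
found = checkpoint_files(model_dir)
if not found:
    print("No checkpoints in %s -- predictions would use random weights!" % model_dir)
else:
    print("Checkpoints found:", found)
```

If the list is empty, or the step numbers don't match the ones from the training log, the predictor is not loading the trained model.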
I created my own vocabulary and tags, and I ran the code with these parameters:
```json
{
  "batch_size": 2,
  "buffer": 15000,
  "chars": "vocab.chars.txt",
  "dim": 300,
  "dim_chars": 100,
  "dropout": 0.3,
  "epochs": 25,
  "filters": 50,
  "glove": "glove.npz",
  "kernel_size": 3,
  "lstm_size": 100,
  "num_oov_buckets": 1,
  "tags": "vocab.tags.txt",
  "words": "vocab.words.txt"
}
```
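A related thing worth ruling out: if any of the vocab or embedding files named in these parameters is missing or stale at prediction time, words and tags get mapped differently than during training. This is a minimal sketch (the `check_params` helper is hypothetical, and the filenames are just the ones from the params above):

```python
# Load the params and verify every referenced file actually exists on disk,
# since a missing or stale vocab file is a common train/predict mismatch.
import json
import os

def check_params(params):
    """Return the list of referenced files that are missing on disk."""
    file_keys = ("chars", "glove", "tags", "words")
    return [params[k] for k in file_keys if not os.path.isfile(params[k])]

params = json.loads("""{
  "batch_size": 2, "buffer": 15000, "chars": "vocab.chars.txt",
  "dim": 300, "dim_chars": 100, "dropout": 0.3, "epochs": 25,
  "filters": 50, "glove": "glove.npz", "kernel_size": 3,
  "lstm_size": 100, "num_oov_buckets": 1,
  "tags": "vocab.tags.txt", "words": "vocab.words.txt"
}""")
missing = check_params(params)
if missing:
    print("Missing files:", missing)
```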
After 8 hours of training I got my results:
```
Saving dict for global step 9534: acc = 0.96229786, f1 = 0.8532131, global_step = 9534, loss = 36.943344, precision = 0.8494137$
Saving 'checkpoint_path' summary for global step 9534: results/model/model.ckpt-9534
```
But when I tried to predict on the same test dataset, the model is not predicting. For example, the test set contains `Bora B-NAME B-NAME`, yet when I pass the same sentence, the algorithm does not predict that entity. The problem is that the algorithm reports accuracy 0.962297 and f1 0.8532131, but in reality it only detects about 38% of the entities, so the real score is not 0.85 but only 0.38.
What could be the problem?
Do you have an idea?
Thank you!
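Part of the gap described above can be explained even without a bug: in NER, the `O` tag dominates, so token-level accuracy can look high while most entities are missed. A toy illustration with hypothetical tags and helper names (not entity-level CoNLL F1, just per-token scores):

```python
# Token-level accuracy stays high even when every entity is missed,
# because the 'O' tag accounts for most tokens in NER data.

def token_accuracy(gold, pred):
    """Fraction of all tokens tagged correctly."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def entity_recall(gold, pred):
    """Fraction of gold entity tokens (non-'O') tagged correctly."""
    entity_positions = [i for i, g in enumerate(gold) if g != "O"]
    hit = sum(pred[i] == gold[i] for i in entity_positions)
    return hit / len(entity_positions)

# 20 tokens, 2 of them entity tokens; the model predicts everything as 'O'.
gold = ["O"] * 18 + ["B-NAME", "I-NAME"]
pred = ["O"] * 20

print(token_accuracy(gold, pred))  # 0.9 -- looks fine
print(entity_recall(gold, pred))   # 0.0 -- but no entity was found
```

That said, a 0.85 evaluation f1 against 0.38 observed detection is too large a gap for class imbalance alone, which is why checking that prediction actually restores the trained checkpoint is the first thing to rule out.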