TF Serving loading error of self trained model (TFv2.x) #1217
Comments
Hello, what commands did you use to train?
I also ran into the same problem. Have you solved it?
This problem has been solved. The fix is as follows: you need to add tf.compat.v1.disable_v2_behavior() to train_tripletloss.py.
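The fix above can be sketched as follows. This is a hedged illustration of where the call belongs in TF1-style training code, not the actual contents of train_tripletloss.py; the placeholder graph below is only there to make the snippet self-contained:

```python
# Minimal sketch: TF1-style graph code running under TF2 needs v2 behavior
# (eager execution, resource variables) disabled before any graph is built.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # call this first, before constructing the graph

# The rest of the script can then use the TF1 graph/session APIs, e.g.:
x = tf.placeholder(tf.float32, shape=[None, 3], name="input")
y = tf.reduce_sum(x, axis=1)
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # prints [6.]
```

The important detail is that `disable_v2_behavior()` runs before any other TensorFlow call, otherwise parts of the graph may still be built with v2 semantics.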
Hi,
I trained my own model as described in the wiki page "Training using the VGGFace2 dataset". I used a recent TFv2.x version with Python 3.8 and replaced all v1 calls with tensorflow.compat.v1 imports. The training was successful and everything works using the resulting model checkpoints.
My main goal, however, is to use the model in TF Serving, so I first froze the model using "freeze_graph.py" and then converted it to the SavedModel format using the snippet provided in #1055. This all worked without errors, but when I try to load the SavedModel, TF Serving throws this error:
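For reference, the frozen-graph-to-SavedModel conversion can be sketched roughly as below. This is a hedged sketch, not the exact #1055 snippet: the tensor names "input:0" and "embeddings:0" and the stand-in graph are assumptions made so the snippet runs on its own; substitute the names from your actual frozen facenet graph:

```python
import tempfile
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# Stand-in GraphDef so the sketch is self-contained; in practice, parse your
# frozen .pb instead:
#   graph_def = tf.GraphDef()
#   graph_def.ParseFromString(open("frozen_model.pb", "rb").read())
with tf.Graph().as_default():
    images = tf.placeholder(tf.float32, [None, 160, 160, 3], name="input")
    tf.identity(tf.reduce_mean(images, axis=[1, 2, 3]), name="embeddings")
    graph_def = tf.get_default_graph().as_graph_def()

export_dir = tempfile.mkdtemp() + "/1"  # TF Serving expects a version subdir
with tf.Graph().as_default() as g, tf.Session() as sess:
    tf.import_graph_def(graph_def, name="")
    # Declare which tensors are the serving inputs and outputs.
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={"images": g.get_tensor_by_name("input:0")},
        outputs={"embeddings": g.get_tensor_by_name("embeddings:0")},
    )
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                signature,
        },
    )
    builder.save()
```

Checking the exported directory with `saved_model_cli show --dir <export_dir> --all` before handing it to TF Serving makes dtype or signature mismatches visible early.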
It seems some tensor type is not what is expected, but I'm not sure what the reason for this is.
The same process as above using the 20180402-114759 checkpoints works, and the result can be loaded by TF Serving.
I've used the latest TF Serving version via the docker image tensorflow/serving:latest-gpu.
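A typical way to point that docker image at the exported model, as a reference for reproducing the error (the model name `facenet` and the host path are placeholders; the `-gpu` image additionally requires the NVIDIA container runtime):

```shell
# /path/to/export must contain version subdirectories such as 1/
docker run --rm -p 8501:8501 \
  -v /path/to/export:/models/facenet \
  -e MODEL_NAME=facenet \
  tensorflow/serving:latest-gpu
```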
Any ideas? Could this be caused by the fact that I trained the model with a recent TFv2.x version instead of v1.7?