Error "A configuration of type donut cannot be instantiated because not both `encoder` and `decoder` sub-configurations are passed" when running inference on a model fine-tuned on DocVQA without pushing it to Hugging Face
#289 · Open · phuchm opened this issue on Feb 20, 2024 · 0 comments
After I fine-tuned Donut on DocVQA following the guideline:
![Train](https://private-user-images.githubusercontent.com/10350878/306191807-db44f43f-d730-4ff6-ab75-8a18068f1940.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTg4Mjc1NzgsIm5iZiI6MTcxODgyNzI3OCwicGF0aCI6Ii8xMDM1MDg3OC8zMDYxOTE4MDctZGI0NGY0M2YtZDczMC00ZmY2LWFiNzUtOGExODA2OGYxOTQwLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MTklMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjE5VDIwMDExOFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWQ5ODU1OThhMTU2MmI2M2NiZTVkMGNlMTAyYWZjMjgzZTk4MjQxMmIxZTMxMzkzMzQwZWMwODE3OWM5ODQ4MDImWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.AfRaravntm89mGxKXpV7YtdfCm8OaXcfBn52eUOcCQc)
I looked in the "result" folder and found files like:
![Data](https://private-user-images.githubusercontent.com/10350878/306192130-1bab18ae-7c87-4433-a68f-bd49d0ff3038.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTg4Mjc1NzgsIm5iZiI6MTcxODgyNzI3OCwicGF0aCI6Ii8xMDM1MDg3OC8zMDYxOTIxMzAtMWJhYjE4YWUtN2M4Ny00NDMzLWE2OGYtYmQ0OWQwZmYzMDM4LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MTklMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjE5VDIwMDExOFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTI4OGI1MDk3ZjAzMWExMjVmNGVjYWFlOWU2NTI4YmY1MWQxNzY0MDhhMjAxODdjZjFlZGEzZjBiYjVhMjZlZDgmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.UZYMx8Rf3AMfAQX6gFn9ptSwSzFTxfnpuZ0JjpDXu4k)
These are the same files you get when downloading the base model from Hugging Face, but when I run inference with the newly fine-tuned model, I get an error message like:
![Error](https://private-user-images.githubusercontent.com/10350878/306192402-37bd6bab-3180-495c-87ab-c2e719b157b8.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTg4Mjc1NzgsIm5iZiI6MTcxODgyNzI3OCwicGF0aCI6Ii8xMDM1MDg3OC8zMDYxOTI0MDItMzdiZDZiYWItMzE4MC00OTVjLTg3YWItYzJlNzE5YjE1N2I4LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MTklMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjE5VDIwMDExOFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWZiMTNmMzUxODA4OTg3NmI2M2FkNzQ3NWM2OWZmMWYzYTE1YWYyZmRjYTE0MDU4MWRkMDBmMGMxYTI0OTQ3NjgmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.1sgOBIxngwze9HOaaq7Gwqn-Wh2eRPqnQX14LuUC-wk)
Then I compared `config.json` between the base model and the fine-tuned model, and they are different.
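The error text suggests the fine-tuned `config.json` may be missing the `encoder` and `decoder` sub-configurations that a vision-encoder-decoder config expects. As a quick diagnostic, a minimal stdlib sketch (file paths are illustrative, not from the issue) that reports top-level keys present in the base config but absent from the fine-tuned one:

```python
import json


def missing_keys(base_config_path: str, finetuned_config_path: str):
    """Return top-level keys present in the base config.json but
    absent from the fine-tuned one (e.g. 'encoder', 'decoder')."""
    with open(base_config_path) as f:
        base = json.load(f)
    with open(finetuned_config_path) as f:
        tuned = json.load(f)
    return sorted(set(base) - set(tuned))
```

If `missing_keys("base/config.json", "result/config.json")` reports `encoder` or `decoder`, the saved config is likely the cause of the instantiation error.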
How can I load the model after fine-tuning?
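For reference, a minimal sketch of loading the fine-tuned checkpoint directly from the local folder, assuming the "result" directory shown above was written with `save_pretrained()` for both the model and the processor (the path is an assumption, not confirmed by the issue):

```python
def load_finetuned(checkpoint_dir: str):
    """Load a fine-tuned Donut checkpoint from a local directory.

    Assumes `checkpoint_dir` contains the files produced by
    model.save_pretrained(...) and processor.save_pretrained(...).
    """
    # Imported inside the function so this sketch can be defined
    # even where transformers is not installed.
    from transformers import DonutProcessor, VisionEncoderDecoderModel

    processor = DonutProcessor.from_pretrained(checkpoint_dir)
    model = VisionEncoderDecoderModel.from_pretrained(checkpoint_dir)
    return processor, model
```

Usage would be `processor, model = load_finetuned("./result")`; if the saved `config.json` lacks the `encoder`/`decoder` sub-configurations, this call is where the error in the screenshot would surface.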