
Issue Encountered When Loading the Model: "pretrain_videomae_base_patch16_224" #105

Open
bbbdbbb opened this issue Jun 18, 2023 · 5 comments

Comments

@bbbdbbb

bbbdbbb commented Jun 18, 2023

Dear [Support Team],

I hope this message finds you well. First of all, I would like to express my appreciation for your work. I have encountered a problem when trying to reproduce your code and load a model using the following line: model = get_model(args).

The default model specified in args is pretrain_videomae_base_patch16_224. However, when running the code, I received an error stating that the model could not be found: RuntimeError: Unknown model (pretrain_videomae_base_patch16_224).

I am currently using timm version 0.4.12. When I attempt to search for the model using model_vit = timm.list_models('*videomae*'), it does not appear in the results.
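
For completeness, this is roughly the check I ran (a minimal sketch with timm 0.4.12 installed; the exact output may differ on other versions):

import timm

print(timm.__version__)                # 0.4.12 in my environment
print(timm.list_models('*videomae*'))  # -> []  (no VideoMAE models are registered in timm itself)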

I kindly request your assistance in resolving this issue. Any guidance or suggestions would be greatly appreciated.

Thank you very much for your attention and support.

@otmalom

otmalom commented Jun 24, 2023

Hello, I have encountered the same issue. Indeed, the model pretrain_videomae_base_patch16_224 is not available in timm, but I found it defined in modeling_pretrain.py. I'm curious whether the author imports the model from there, and why the default create_model function from the timm library is used in run_mae_pretraining.py.
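
If I understand the mechanism correctly, create_model only resolves names that have already been added to timm's model registry, and modeling_pretrain.py is supposed to do that registration through timm's @register_model decorator. This is only a sketch of my understanding, assuming the repo follows the upstream VideoMAE layout:

import timm

# Importing the module runs the @register_model decorators as a side effect,
# which adds the VideoMAE entry points to timm's model registry.
import modeling_pretrain  # noqa: F401

print(timm.list_models('*videomae*'))
# should now include 'pretrain_videomae_base_patch16_224' among others

# Once registered, the name resolves through the normal timm factory:
model = timm.create_model('pretrain_videomae_base_patch16_224', pretrained=False)

If the "Unknown model" error still appears with that import in place, the registration is presumably not happening in your environment, and importing the builder directly from modeling_pretrain sidesteps the registry entirely.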

@tsw123678

Hello, did you solve the problem?

@tsw123678

#9

@bbbdbbb
Author

bbbdbbb commented Jan 20, 2024

It has been quite some time, and I apologize for the delay in my response. I recently came across your question on GitHub, and after reviewing my previous work, I have found some information that might be helpful to you.

Based on my recollection, it seems that I made modifications to the code responsible for loading the model in the "run_mae_pretraining.py" file. Specifically, I commented out the line model = get_model(args) and instead used the following code:

# model = get_model(args)
# Import the builder from modeling_pretrain and call it directly, bypassing timm's registry
from modeling_pretrain import pretrain_videomae_large_patch16_224
model = pretrain_videomae_large_patch16_224()

These adjustments allowed me to successfully run the VideoMAE code, and I achieved promising experimental results. I hope this information proves useful to you in your own work.
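
One caveat: the snippet above hard-codes the large variant, while the default in args is pretrain_videomae_base_patch16_224, and calling the builder directly skips any extra arguments get_model would otherwise forward. If you need the base model, the same idea should work with the corresponding builder (name assumed to match modeling_pretrain.py in this repo):

# model = get_model(args)
from modeling_pretrain import pretrain_videomae_base_patch16_224
model = pretrain_videomae_base_patch16_224()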

@fusu80

fusu80 commented Apr 24, 2024

#9

This doesn't seem to solve it. Are there any other workarounds?
