
All the SSD methods are based on 300; can it be changed to 500 or 512? #40

lucasjinreal opened this issue Nov 4, 2018 · 12 comments

Comments

@lucasjinreal

Can the input image size be changed to another value?

@1453042287

There is no FC layer in the model, so you can feed any size.

@burhanmudassar

You do need to change the STEPS and SIZES variables in the config file so that you have the correct anchor sizes for 512-sized images.
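
For example, a minimal sketch of rescaling 300-based anchor settings to a 512 input; the STEPS/SIZES values below follow the common SSD300 defaults and are assumptions, not values copied from this repo's config:

# Hypothetical rescaling of SSD300 anchor settings for a 512x512 input.
OLD_IMAGE_SIZE = 300
NEW_IMAGE_SIZE = 512

OLD_STEPS = [8, 16, 32, 64, 100, 300]           # common SSD300 strides (assumed)
OLD_SIZES = [30, 60, 111, 162, 213, 264, 315]   # common SSD300 anchor sizes (assumed)

scale = NEW_IMAGE_SIZE / OLD_IMAGE_SIZE
NEW_STEPS = [round(s * scale) for s in OLD_STEPS]
NEW_SIZES = [round(s * scale) for s in OLD_SIZES]

print(NEW_STEPS)  # [14, 27, 55, 109, 171, 512]
print(NEW_SIZES)  # [51, 102, 189, 276, 364, 451, 538]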

@kamauz

kamauz commented Mar 21, 2019

@burhanmudassar How exactly do you change the STEPS and SIZES variables?

You do need to change the STEPS and SIZES variables in the config file so that you have the correct anchor sizes for 512-sized images.

I changed the STEPS so that they still cover the entire image, then I rescaled the previous SIZES by dividing by 300 and multiplying by the dimension of my dataset. It still shows the same poor improvement, going from 0 mAP to 0.04 mAP in 400 epochs.
Is there something wrong with this procedure?

Thanks

@1453042287

@kamauz can you get the desired result with the default settings?

@kamauz

kamauz commented Mar 25, 2019

@1453042287 @burhanmudassar

I can't get good results even with the default settings.
Is it possible to make it work with rectangular images (no resizing), like 640x480 or 1920x1080?

@Damon2019

@jinfagang @burhanmudassar @1453042287 @kamauz
I also want to change 300 to 512 or 500. Did you succeed?

@kamauz

kamauz commented Sep 17, 2019

@jinfagang @burhanmudassar @1453042287 @kamauz
I also want to change 300 to 512 or 500. Did you succeed?

@Damon2019
I stopped working with this repository about 5 months ago. But as far as I remember, it worked with square images like 300x300, 512x512, 500x500, and so on. Increasing the dimension should give better accuracy but slower execution, and if you are interested, I think I found a way to make it work with rectangular sizes too:

  • when it computes the feature map sizes, it uses the same dimension twice because it assumes square images by default. I didn't try to change the code, but I suggest you try (see the sketch below).
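
A rough sketch of the idea (illustrative only; the function and stride values are made up, not taken from this repo): compute the feature-map grid separately for height and width instead of reusing one scalar twice.

import math

# Illustrative only: per-dimension feature-map sizes instead of assuming a
# square input. The strides are hypothetical.
def feature_map_sizes(img_h, img_w, strides=(8, 16, 32, 64, 128)):
    return [(math.ceil(img_h / s), math.ceil(img_w / s)) for s in strides]

print(feature_map_sizes(512, 512))  # square:      [(64, 64), (32, 32), (16, 16), (8, 8), (4, 4)]
print(feature_map_sizes(480, 640))  # rectangular: [(60, 80), (30, 40), (15, 20), (8, 10), (4, 5)]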

@Damon2019


OK, I will try it.

@Damon2019

@kamauz Hi, I have a problem about using pre-trained models versus not using pre-trained models.
I don't know how the following parameters should be set.

TRAINABLE_SCOPE: 'base,norm,extras,loc,conf'
TRAINABLE_SCOPE: 'norm,extras,loc,conf'
Do you remember how you set these parameters?
I would be very happy with any suggestions.
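
My current understanding (just a guess) is that TRAINABLE_SCOPE lists the sub-modules whose weights are updated while everything else stays frozen; a minimal sketch of that assumption, not this repo's actual implementation:

import torch.nn as nn

# Assumption: TRAINABLE_SCOPE names the sub-modules to train; the rest is frozen.
def apply_trainable_scope(model: nn.Module, trainable_scope: str):
    scopes = {s.strip() for s in trainable_scope.split(',')}
    for name, module in model.named_children():
        requires_grad = name in scopes
        for p in module.parameters():
            p.requires_grad = requires_grad

# 'norm,extras,loc,conf' would leave the backbone ('base') frozen,
# while 'base,norm,extras,loc,conf' would train everything.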

@QZ-cmd

QZ-cmd commented Jul 10, 2020


@kamauz Hi, I have a problem about using pre-trained models versus not using pre-trained models.
I don't know how the following parameters should be set.

TRAINABLE_SCOPE: 'base,norm,extras,loc,conf'
TRAINABLE_SCOPE: 'norm,extras,loc,conf'
Do you remember how you set these parameters?
I would be very happy with any suggestions.

I have the same problem. Can you tell me how to solve it? Thanks.

@kamauz

kamauz commented Jul 10, 2020


I left this repository and used the official TensorFlow object detection repo for the training phase of my project.
I'm sorry, but I only have an idea of how to solve it: the method that generates the feature maps seems to generate a square map by default.

@foreverYoungGitHub
Collaborator

I'm not sure about the older version on master, but the dev branch code can definitely use variable image sizes with different aspect ratios.

However, for some detector heads, since the upsample size is forced to be exactly 2x larger to make the conversion to ONNX and TensorRT easier, there are some limitations on the input image size. For example, with the YOLOv3 or FPN detection heads, a 1920x1080 input causes a dimension-mismatch error in the concat layer. In this case, the input size needs to be adjusted to 1920x1088.
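
To illustrate the constraint, a sketch that rounds each dimension up to a multiple of the head's overall stride; the stride of 32 is an assumption (five 2x stages), not a value from the repo:

import math

# Round each input dimension up to a multiple of the overall stride so the
# repeated 2x down/upsampling lines up again in the concat layers.
def pad_to_stride(height, width, stride=32):
    return (math.ceil(height / stride) * stride,
            math.ceil(width / stride) * stride)

print(pad_to_stride(1080, 1920))  # (1088, 1920) -> feed a 1920x1088 input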

Please try the dev branch and let me know if you still have this issue. Thanks.
