How to compress my object detection model #533

Open

lrh454830526 opened this issue Aug 12, 2020 · 1 comment

Comments

lrh454830526 commented Aug 12, 2020

Hi,
Thank you to your team for doing such nice work!
My team has trained a model with torchvision's Faster R-CNN, and now we need to compress it. After some struggle we decided to use Distiller for the job. We are now facing the question of how to compress or accelerate the model: should we prune or quantize? Either approach would be acceptable, but for various reasons we do not have much time for this work.
1. Can you give me some advice on how to do this in less time? Between pruning and quantization, which takes less time?
2. I see that Distiller provides an API/example for pruning torchvision's Faster R-CNN, and I want to know how to prune with a different dataset.
I'm new to Distiller, so some of my expressions may not be professional.
Thank you for your reply.
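
Regarding question 2, adapting torchvision's Faster R-CNN to a different dataset usually starts with replacing the detection head so it matches the new number of classes, then fine-tuning before (or during) compression. A minimal sketch using the standard torchvision pattern; the value of num_classes here is just a placeholder for illustration:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from the same detector the Distiller example uses.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Replace the box predictor head so it matches the new dataset's classes
# (num_classes includes the background class; 5 is a placeholder).
num_classes = 5
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# The model can now be fine-tuned on the new dataset and then passed to
# the pruning/quantization flow.
```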

lrh454830526 changed the title from "How to compress my object detection with torchvision" to "How to compress my object detection model" on Aug 12, 2020

sungh66 commented Sep 16, 2021

I have compressed my Faster R-CNN model with my own dataset. You can use the Faster R-CNN API that PyTorch itself offers and add the option --model fasterrcnn_resnet50_fpn. However, I ran into a problem with the compressed model's size: does compression_scheduler.state_dict() save the small model's parameters or not? Why does my model have the same size with different sparse pruning settings?
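
The size behaviour is expected with mask-based pruning: the pruned weights are only set to zero, and the checkpoint still stores them as dense FP32 tensors, so the file size does not change with the sparsity level. Physically shrinking the model requires removing the pruned structures (Distiller refers to this as network "thinning") or storing the tensors in a sparse/compressed format. A minimal sketch in plain PyTorch, not Distiller's own API, that demonstrates the effect:

```python
import io
import torch
import torch.nn as nn

def checkpoint_bytes(module: nn.Module) -> int:
    """Serialize a state_dict in memory and return its size in bytes."""
    buf = io.BytesIO()
    torch.save(module.state_dict(), buf)
    return buf.getbuffer().nbytes

model = nn.Linear(1024, 1024, bias=False)
print("dense size:", checkpoint_bytes(model))

# "Prune" roughly 90% of the weights by zeroing them (mask-style pruning).
with torch.no_grad():
    mask = torch.rand_like(model.weight) > 0.9
    model.weight.mul_(mask)

sparsity = (model.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.2%}")
print("masked size:", checkpoint_bytes(model))  # same size as the dense model
```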
