Query about the preprocessing of ScanNet++ #13
Hi, for ScanNet++, I just load the PonderV2 pretrained weights (the official ones) and use PPT to finetune. Finetuning is almost the same as for ScanNet, except that I now jointly finetune on 4 datasets. I use the same sampling weight for ScanNet++ as for ScanNet (loop=2). The config file is almost the same as this config, with a few modifications: generally, I just add a new joint dataset for finetuning and reuse exactly the same dataset config as ScanNet (e.g. augmentations, loop, etc.). No hyperparameters have been searched yet. It does take a long time to train since the dataset is larger; it took me nearly 4 days on 8 A100 GPUs.
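The joint-dataset setup described above could look roughly like the following Pointcept-style Python config. This is a hedged sketch, not the released config: the dataset type names (`ScanNetDataset`, `ScanNetPPDataset`), data roots, and the `ConcatDataset` wrapper are assumptions based on common Pointcept conventions.

```python
# Hypothetical sketch of a joint finetuning dataset config.
# Dataset type names and paths are assumptions, not the official release.
data = dict(
    train=dict(
        type="ConcatDataset",  # combines several datasets for joint finetuning
        datasets=[
            dict(
                type="ScanNetDataset",
                split="train",
                data_root="data/scannet",
                loop=2,  # sampling weight via dataset looping
            ),
            dict(
                type="ScanNetPPDataset",
                split="train",
                data_root="data/scannetpp",
                loop=2,  # same sampling weight as ScanNet, per the comment above
            ),
            # ... two more datasets would go here for the 4-dataset joint setup
        ],
    ),
)
```

The key point is that ScanNet++ simply reuses the ScanNet dataset config (augmentations, loop, etc.) rather than introducing new hyperparameters.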
I just followed the ScanNet++ official code for preprocessing, including splitting the training pth files into chunks. BTW, when I tried to support ScanNet++ for PonderV2 pretraining, I found a lot of bugs in ScanNet++'s official code... I am still working on it and have not finished debugging the preprocessing. After fully checking the whole preprocessing - pretraining - finetuning pipeline, I will open-source the ScanNet++ part in this codebase. I am busy with other projects these days, so it may take me some time (maybe 2 or 3 weeks?) to clean up and release the ScanNet++ preprocessing, PonderV2 pretraining and finetuning code and weights. Stay tuned :)
Hi, Haoyi, thanks for your kind reply, and I'm looking forward to the configs and weights for ScanNet++. It seems to be a common experience that training is time-consuming. BTW, have you ever seen low GPU utilisation during training? It's not tied to one specific card; the low utilisation just rotates between different cards. I use 4 RTX 4090 GPUs with the PointGroup config.
I'm not sure about that. But I did notice that Pointcept spends a lot of time on data loading. I'm not sure whether it's a data-loading bottleneck or something in PointGroup itself.
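One way to tell the two apart is to time how long each iteration waits on the DataLoader versus how long the compute step takes. A minimal sketch (the toy `TensorDataset` is a stand-in for Pointcept's real point-cloud datasets, which I don't reproduce here):

```python
import time

import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in dataset: 256 samples of 3-D points.
dataset = TensorDataset(torch.randn(256, 3))
# num_workers=0 keeps the sketch simple; raising num_workers is the
# usual first fix if the loader turns out to be the bottleneck.
loader = DataLoader(dataset, batch_size=32, num_workers=0)

data_time = 0.0
n_batches = 0
step_end = time.perf_counter()
for (batch,) in loader:
    # Time elapsed since the last step finished = time spent waiting on data.
    data_time += time.perf_counter() - step_end
    _ = batch.sum()  # stand-in for the forward/backward pass
    n_batches += 1
    step_end = time.perf_counter()

print(f"{n_batches} batches, {data_time:.4f}s spent waiting on data")
```

If the accumulated data-wait time dominates the total, the rotating low-utilisation pattern across GPUs is consistent with ranks stalling on the loader rather than on the model.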
Hi, Haoyi. Thanks for your great work. Recently, I found that PonderV2 achieves better semantic segmentation performance on ScanNet++. I have also tried using the official toolkit to process the data, but found that training is very slow in Pointcept. Would it be possible to share the code for ScanNet++?