
normalization at inference time vs training time #130

Open
fabricecarles opened this issue Sep 21, 2023 · 2 comments


fabricecarles commented Sep 21, 2023

Thank you for this second excellent paper on PoinTr.

As I was examining your code, a question arose concerning the normalization process during training compared to inference time.

During training, you generate a partial point cloud online after applying normalization, as seen at https://github.com/yuxumin/PoinTr/blob/master/datasets/ShapeNet55Dataset.py#L45.

This implies that the centroid and min-max values are calculated prior to cropping and resampling, which seems reasonable.

However, during inference, the centroid and min-max values are calculated on the partially cropped point cloud. From my perspective, this suggests that the shape isn't placed at the origin and scaled according to the training-time procedure.

https://github.com/yuxumin/PoinTr/blob/master/tools/inference.py#L60
Have you considered training and performing inference with the point cloud normalization consistently based on the partial point cloud? In other words, could results be improved by directly applying the real-world inference procedure during training?
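
To make the difference concrete, here is a minimal sketch. The pc_norm below follows the centroid/max-distance convention of the linked dataset code, and plain slicing stands in for the online crop/resample, so the exact numbers are illustrative only:

```python
import numpy as np

def pc_norm(pc):
    # Center at the centroid and scale by the max distance from it,
    # along the lines of ShapeNet55Dataset.pc_norm.
    centroid = pc.mean(axis=0)
    pc = pc - centroid
    scale = np.sqrt((pc ** 2).sum(axis=1)).max()
    return pc / scale

# Stand-in for a complete ground-truth shape (8192 x 3).
complete = np.random.rand(8192, 3).astype(np.float32)

# Training pipeline: normalize the complete cloud first, then crop a partial view
# (slicing here is only a stand-in for the actual crop/resample).
partial_train = pc_norm(complete)[:2048]

# Inference pipeline (tools/inference.py): only the partial scan is available,
# so the normalization statistics come from the partial cloud itself.
partial_infer = pc_norm(complete[:2048])

# The same points land at different positions and scales in the two pipelines.
print(np.abs(partial_train - partial_infer).max())
```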

Thank you for your time and insights.

Best regards

@fabricecarles changed the title from "normalization at inference time" to "normalization at inference time vs training time" on Sep 21, 2023
kaali-billi commented

I have proposed a solution for the training process in issue #142; feel free to have a look and let me know if it works for you. My training and accuracy have improved since applying this change, and I'm getting better results than the paper in fewer than 250 epochs.

fabricecarles (Author) commented

Totally agree @kaali-billi, the current implementation is not correct. I also made some changes to train with a normalization based only on the input data and not the ground truth, and I found that the results are far better in a real-world scenario (i.e., when you run inference without any ground truth).
I think the paper should mention this, and the authors need to fix the code and recompute the accuracy benchmarks.
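
For reference, a minimal sketch of what I mean by input-only normalization (the helper name is hypothetical; the centroid/max-distance convention mirrors pc_norm):

```python
import numpy as np

def partial_based_norm(partial, gt):
    # Hypothetical helper: derive centroid and scale from the partial input
    # only, then apply the same transform to both partial and ground truth,
    # so training sees exactly the normalization available at inference time.
    centroid = partial.mean(axis=0)
    scale = np.sqrt(((partial - centroid) ** 2).sum(axis=1)).max()
    return (partial - centroid) / scale, (gt - centroid) / scale
```

At test time the completed cloud can then be mapped back to the original frame with pred * scale + centroid.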
@yuxumin could you let me know your point of view please ?
