
Artifacts on removed/black_background data #180

Open
ZirongChan opened this issue Apr 3, 2024 · 10 comments

@ZirongChan

First of all, thanks for the great work and for sharing it with the community!

I have a 360-degree capture of a human in the center, so I was expecting a noisy reconstruction in the background area, since there is not enough coverage for it. I ran SuGaR on the original images and got what I expected.

So, in order to get a clean reconstructed mesh of my "target", I segmented the foreground human and set the background to empty, i.e. [0,0,0] for the background pixels. I then got a result similar to the one described by @ansj11 in issue #72 (comment), with black artifacts in the background, mostly around the head and the feet, like below:
[image]
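For reference, the preprocessing I describe is roughly the following (a minimal sketch with placeholder paths, assuming one binary foreground mask per image):

```python
import numpy as np
from PIL import Image

# Minimal sketch of the preprocessing described above (placeholder paths; assuming
# one binary foreground mask per image): keep the segmented human and set every
# background pixel to [0, 0, 0].
image = np.array(Image.open("images/0001.png").convert("RGB"))
mask = np.array(Image.open("masks/0001.png").convert("L")) > 127  # True on the foreground

image[~mask] = 0  # background pixels become pure black
Image.fromarray(image).save("images_black_bg/0001.png")
```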

Then I tried changing vertices_density_quantile = 0.1 at line 42 of sugar_extractors/coarse_mesh.py to vertices_density_quantile = 0, since data with an empty background is similar to the synthetic scenes, but the result was not much better:
[image]
Although the holes in the back and the head were filled to some extent, the artifacts seem to be worse.
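For readers unfamiliar with that parameter: as far as I understand, it controls a quantile-based filtering of low-density surface points before Poisson meshing, roughly as in this paraphrase (an illustration with placeholder tensors, not the actual coarse_mesh.py code):

```python
import torch

# Rough paraphrase of the filtering controlled by vertices_density_quantile
# (placeholder tensors; not the actual sugar_extractors/coarse_mesh.py code).
points = torch.rand(10_000, 3)      # surface points sampled from the Gaussians
densities = torch.rand(10_000)      # per-point density values

vertices_density_quantile = 0.1     # set to 0. to keep every point, as for synthetic scenes
threshold = torch.quantile(densities, vertices_density_quantile)
keep = densities >= threshold
points, densities = points[keep], densities[keep]
```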

I also tried reducing the Poisson depth from 10 to 7, but it did not help.

I understand that this is mainly because SuGaR does not support masked data yet. But I remember that in another issue, @Anttwo, you mentioned that you had already implemented this and would update the code soon. This is definitely not a "chasing-up" :p haha. Maybe you can give me some advice about how to do this.

Thanks again for the great work; I'm looking forward to your reply.

@ZirongChan
Author

Hello, @Anttwo, I've noticed that you updated the repository with a white_background option.
I tried it out by doing the following:
a) segmenting my data and casting the background to white;
b) training the original GS model using white_bg by setting the corresponding parameter in the ModelParams of GS;
c) training SuGaR with an extra --white_background True argument on the command line.
Here is what I achieved:
[image]

So did I do something wrong using the white_bg mode? Looking forward to your reply; thanks in advance.

@Anttwo
Owner

Anttwo commented Apr 8, 2024

Hello @ZirongChan,

Thank you so much for your nice words, and sorry for the late answer, I'm sooo busy right now!
I will also try to answer issues from other users in the coming hours/days.

Indeed, I added a white-background functionality that should be able to handle masked images.
I tried it on synthetic scenes (especially the Shelly dataset from the Adaptive Shells paper, as it was useful for Gaussian Frosting), and it worked really well.

So, let's investigate why it does not work in your case!

I see two simple differences between the Shelly scenes and your scene:

  1. Shelly scenes are synthetic. But I don't think that the domain gap synthetic/real explains all the background artifacts that you have, so I suppose the main problem is somewhere else.
  2. Shelly images actually do not have a white background, but a transparent background, as they are RGBA images (in PNG format). The code loads the images and replaces the alpha mask with white. I agree that this should be equivalent to having images with a white background right from the beginning... But who knows, maybe a part of the code does something weird, and that could be the origin of your problem. Could you try segmenting your images with a transparency mask rather than white, and then rerun the code? (A sketch of this compositing is given after this list.)
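For reference, the compositing in question is essentially the following (a minimal sketch with placeholder paths, not the exact loading code of the repository):

```python
import numpy as np
from PIL import Image

# Minimal sketch of compositing an RGBA image over a white background
# (placeholder paths; not the exact loading code of the repository).
rgba = np.array(Image.open("images/0001.png").convert("RGBA")).astype(np.float32) / 255.0
rgb, alpha = rgba[..., :3], rgba[..., 3:4]

composited = alpha * rgb + (1.0 - alpha) * np.ones_like(rgb)  # white wherever alpha is 0
Image.fromarray((composited * 255).astype(np.uint8)).save("images_white_bg/0001.png")
```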

Looking forward to your reply! 😃

@ZirongChan
Author

Hi @Anttwo, thanks for your kind answer.

Actually, I tried commenting out the following lines, starting from line 102 in sugar_scene/camera.py:
[image]
since my images are 3-channel (RGB) with a white background, so I did not have to generate new images; I thought it would be the same :p

Here is the result I got:
[image]

Better, but still with more artifacts than I expected.

@Anttwo
Owner

Anttwo commented Apr 8, 2024

Hi @ZirongChan,

Great, so the results are much better now!
I suppose the remaining artifacts come from the fact that the scene is not synthetic but real: because the calibration is not as perfect as in the synthetic case, some small white artifacts are still produced during the optimization.

Thanks for your experiment, this is very useful, as I learned the following: for synthetic scenes, just optimizing with a white background in the blending process is enough to remove background artifacts, since no Gaussians are needed anymore for reconstructing this part of the scene; but in a real scenario, some artifacts remain.

Then, the simplest way to deal with such artifacts is probably just to remove Gaussians (or vertices) based on the mask.
Indeed, since you have masks, it is possible to "carve" the representation by removing Gaussians (or vertices) whose projections are located in the masked-out area in at least one image (or in a few images, for example, as it could be a bit smoother).

There are several ways to do it:

  1. Remove "masked out" Gaussians before or during optimization (but this would require some adjustments, so this is not the simplest strategy)
  2. Remove "masked out" Gaussians after coarse optimization, just before extracting the mesh
  3. Remove "masked out" vertices after extracting the coarse mesh

This should be really straightforward to implement. I think strategy 3 is the fastest/simplest to implement (a rough sketch is given below).
I will try to do it this week, maybe in the coming 1 or 2 days, as it can be very useful.
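To make strategy 3 concrete, here is a rough, self-contained sketch of the idea (assumed conventions for the camera matrices and masks; not code from the repository):

```python
import torch

def carve_vertices(vertices, masks, world_to_cam, intrinsics, min_violations=3):
    """Rough sketch of strategy 3 (assumed conventions, not the repository's API):
    drop vertices whose projections land in the masked-out region of at least
    `min_violations` images.

    vertices:      (V, 3) coarse-mesh vertices in world space
    masks:         (N, H, W) boolean tensors, True on the foreground
    world_to_cam:  (N, 4, 4) world-to-camera transforms
    intrinsics:    (N, 3, 3) pinhole intrinsics
    """
    num_vertices = vertices.shape[0]
    violations = torch.zeros(num_vertices, dtype=torch.long)
    homog = torch.cat([vertices, torch.ones(num_vertices, 1)], dim=1)  # (V, 4)

    for mask, w2c, K in zip(masks, world_to_cam, intrinsics):
        cam = (w2c @ homog.T).T[:, :3]                      # camera-space coordinates
        in_front = cam[:, 2] > 1e-6
        pix = (K @ cam.T).T
        pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)      # pixel coordinates (u, v)

        H, W = mask.shape
        u = pix[:, 0].round().long()
        v = pix[:, 1].round().long()
        in_image = (u >= 0) & (u < W) & (v >= 0) & (v < H) & in_front

        # A vertex that is visible in this view but projects outside the foreground
        # mask counts as one violation.
        masked_out = torch.zeros(num_vertices, dtype=torch.bool)
        masked_out[in_image] = ~mask[v[in_image], u[in_image]]
        violations += masked_out.long()

    return vertices[violations < min_violations]
```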

@Anttwo
Owner

Anttwo commented Apr 8, 2024

Also, as you can see, you have many noisy white lines on your mesh right now, but this is very easy to fix (no re-training needed, just a setting in your rendering software)!
This does not come from SuGaR, but from the interpolation method used for rendering textures in your rendering engine/software.

Please take a look at this issue, in which I give explanations as well as a simple solution to this problem that makes the texture much cleaner:

#119

Basically, you should look for a "Closest pixel" interpolation method (or something similar) rather than "linear interpolation".
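For example, if the mesh happens to be viewed in Blender, the setting can be switched for all materials with a few lines of its Python API (just an illustrative sketch; other engines expose an equivalent nearest/closest filtering option):

```python
import bpy

# Switch every Image Texture node to nearest-neighbor ("Closest") sampling so that
# texels are not linearly blended across UV seams (Blender example only).
for material in bpy.data.materials:
    if material.use_nodes:
        for node in material.node_tree.nodes:
            if node.type == 'TEX_IMAGE':
                node.interpolation = 'Closest'
```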

@ZirongChan
Author

(quoting @Anttwo's advice above about switching to a "Closest pixel" texture interpolation)

Hi @Anttwo,

Thanks for your kind advice. I switched to a different viewer for visualizing the refined_mesh, and the result is amazing.
[image]

@ZirongChan
Author

ZirongChan commented Apr 9, 2024

(quoting @Anttwo's comment above about carving the representation with the masks)

Quick questions:

1. If you take a closer look at my result using the segmented-foreground, black-background data, would it be possible to rule out the black artifacts by filtering them based on the projection? I think it should have the same effect as in the white-background case. Which background would you prefer: a black background with no extra operations, or a white background?

2. Regarding the artifacts, I think they could be caused by the lack of coverage of the top of the head and the shoes. What is your opinion?

3. I wonder if I can focus on the face of the human in SuGaR. In NeRF I can achieve this by taking denser samples in the ROI, which in the human reconstruction case is the face, but what should I do in GS or SuGaR? Do you have any ideas?

@autumn999999

(quoting @ZirongChan's comment above about commenting out the compositing lines in sugar_scene/camera.py)

Hi, thank you for conducting the experiment. I have a question and would appreciate your response. Could you please explain why commenting out those lines of code improved the effect? I was under the impression that the images before and after should be identical.

@ZirongChan
Author

(quoting @autumn999999's question above)

Those lines actually take a 4-channel image, use the alpha channel as a mask, and generate a 3-channel image whose background color is [1,1,1]. My own preprocessed data is already a 3-channel image of exactly that kind, so there is no need to repeat the operation. On the contrary, applying the operation again to such an image may well produce a wrong result; you can run an experiment to check.

@autumn999999

(quoting @ZirongChan's answer above)

OK, thank you very much!
