Artifacts on removed/black_background data #180
Hello @Anttwo, I've noticed that you updated the repository with a white_background option. Did I do something wrong when using the white_bg mode? Looking forward to your reply, thanks in advance.
Hello @ZirongChan, Thank you so much for your nice words, and sorry for the late answer, I'm sooo busy right now! Indeed, I added a white-background functionality that should be able to handle masked images. So, let's investigate why it does not work in your case! I see two simple differences between the Shelly scenes and your scene:
Looking forward to your reply! 😃
Hi @Anttwo, thanks for your kind answer. I actually tried commenting out the following lines, starting from line 102 in sugar_scene/camera.py. The result is shown below: better, but there is still more than I expected.
Hi @ZirongChan, Great, the results are much better now! Thanks for your experiment, this is very useful, as I learned the following: for synthetic scenes, just optimizing with a white background in the blending process is enough to remove background artifacts, since no Gaussians are needed anymore to reconstruct that part of the scene; but in a real scenario, some artifacts remain. The simplest way to deal with such artifacts is probably just to remove Gaussians (or vertices) based on the mask. There are several ways to do it:
This should be really straightforward to implement. I think strategy 3 is the fastest/simplest to implement.
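(For readers who want to try this themselves: a minimal sketch of mask-based vertex removal, written from scratch with NumPy and a standard pinhole-camera model. The function name `cull_vertices_by_mask` and the exact projection conventions are my own assumptions, not code from the SuGaR repository; you would call it once per training camera and combine the keep-masks.)

```python
import numpy as np

def cull_vertices_by_mask(vertices, K, R, t, mask):
    """Keep only vertices whose projection lands on a foreground mask pixel.

    vertices: (N, 3) world-space points
    K: (3, 3) camera intrinsics; R: (3, 3), t: (3,) world-to-camera extrinsics
    mask: (H, W) boolean foreground mask for this camera
    Returns a boolean keep-array of shape (N,).
    """
    cam = vertices @ R.T + t                # world -> camera coordinates
    in_front = cam[:, 2] > 1e-6             # only points in front of the camera
    proj = cam @ K.T                        # pinhole projection
    uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-6, None)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    H, W = mask.shape
    in_image = (u >= 0) & (u < W) & (v >= 0) & (v < H) & in_front
    keep = np.zeros(len(vertices), dtype=bool)
    idx = np.where(in_image)[0]
    keep[idx] = mask[v[idx], u[idx]]        # sample the mask at the projection
    return keep
```

Vertices that project outside the image, behind the camera, or onto a background pixel are dropped; mesh faces touching a dropped vertex would then be removed as well.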
Also, as you can see, you currently have many noisy white lines on your mesh; but this is very easy to fix (no re-training needed, just a setting in your rendering software)! Please take a look at this issue, in which I give explanations as well as a simple solution to solve this problem and make the texture much cleaner: basically, you should look for a "Closest pixel" interpolation method (or something similar) rather than linear interpolation.
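(To illustrate why the interpolation mode matters: with bilinear filtering, a texture lookup near the boundary between a foreground texel and a white background texel averages the two, producing the pale seam lines; nearest-texel lookup never mixes in the background color. A self-contained NumPy sketch of the two sampling modes, not tied to any particular renderer:)

```python
import numpy as np

def sample_nearest(tex, u, v):
    """Nearest-texel lookup ("Closest pixel" mode)."""
    h, w = tex.shape[:2]
    return tex[int(round(v * (h - 1))), int(round(u * (w - 1)))]

def sample_bilinear(tex, u, v):
    """Bilinear lookup: averages the four surrounding texels."""
    h, w = tex.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy

# A 1x2 texture: a red foreground texel next to a white background texel.
tex = np.array([[[1.0, 0.0, 0.0], [1.0, 1.0, 1.0]]])
```

Sampling near the seam (u = 0.4), nearest returns pure red, while bilinear returns a washed-out pink, which is exactly the white-line artifact on the mesh.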
Hi @Anttwo, thanks for your kind advice. I switched to a different viewer for visualizing the refined mesh, and the result was amazing.
Quick questions: 2. Regarding the artifacts, I think they may be caused by the lack of coverage of the top of the head and the shoes. What is your opinion? 3. I wonder if I can focus on the face of the human in SuGaR. In NeRF I can achieve this by taking denser samples in the region of interest, which in the human-reconstruction case is the face; what should I do in GS or SuGaR? Do you have any ideas?
Hi, thank you for conducting the experiment. I have a question and would appreciate your response: could you explain why commenting out those lines of code improved the result? I was under the impression that the images before and after should be identical.
Those lines actually take a 4-channel image, use the alpha channel as a mask, and produce a 3-channel image with a [1, 1, 1] background color. My own preprocessed data already consists of 3-channel images like that, so there was no need to repeat the operation. On the contrary, applying the operation to such images may give incorrect results; you can run an experiment to verify this.
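(The operation described above is standard alpha compositing onto a white background. This is a generic NumPy sketch of that step, not the repository's exact code in sugar_scene/camera.py:)

```python
import numpy as np

def composite_on_white(rgba):
    """Blend a float RGBA image (values in [0, 1]) onto a white background,
    using the alpha channel as the foreground mask.

    rgba: (H, W, 4) array; returns an (H, W, 3) RGB array.
    """
    rgb, alpha = rgba[..., :3], rgba[..., 3:4]
    # Classic "over" compositing: foreground * alpha + white * (1 - alpha).
    return rgb * alpha + (1.0 - alpha) * np.ones(3)
```

If the input is already a 3-channel image (no alpha channel), this step is unnecessary, and applying it to data that merely *looks* 4-channel (e.g. an extra channel that is not a valid alpha mask) would corrupt the colors, which matches the observation above.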
Great, thank you very much!
First of all, thanks for the great work and for sharing it with the community!!!
I have a 360-degree capture of a human in the center, so I was expecting a noisy reconstruction in the background area, since there is not enough coverage for it. I ran SuGaR on the original images and got what I expected.
So, in order to get a clean reconstructed mesh of my "target", I segmented the foreground human and set the background to empty, namely [0, 0, 0] for the background pixels. I then got a result similar to the one described by @ansj11 in the #72 (comment) issue, with black artifacts in the background, mostly around the head and the feet, like below:
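(The masking step described here, zeroing out background pixels with a segmentation mask, can be done in a couple of lines; this is a generic NumPy sketch, not code from the repository:)

```python
import numpy as np

def black_out_background(image, mask):
    """Set background pixels to [0, 0, 0].

    image: (H, W, 3) float RGB image
    mask:  (H, W) boolean array, True on the foreground subject
    """
    # Broadcasting the (H, W, 1) mask over the color axis zeroes
    # every channel of each background pixel.
    return image * mask[..., None]
```
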
Then I tried changing vertices_density_quantile = 0.1 on line 42 of sugar_extractors/coarse_mesh.py to vertices_density_quantile = 0, since the empty-background data is similar to the synthetic scenes. The result was not much better, like:
Though the holes in the back and the head were filled to some extent, the artifacts seem to be worse.
I also tried lowering the Poisson depth from 10 to 7, but it did not help.
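(For context on what the vertices_density_quantile parameter does: my understanding, sketched from scratch rather than copied from the repository, is that it discards the lowest-density fraction of vertices, so setting it to 0 keeps all vertices, which is why the holes get filled but low-density background artifacts survive too:)

```python
import numpy as np

def filter_by_density_quantile(densities, quantile):
    """Keep vertices whose estimated density exceeds the given quantile.

    densities: (N,) per-vertex density values
    quantile:  fraction in [0, 1); 0 keeps every vertex
    Returns a boolean keep-array of shape (N,).
    """
    if quantile <= 0:
        return np.ones_like(densities, dtype=bool)
    threshold = np.quantile(densities, quantile)
    return densities > threshold
```

With quantile = 0.1 the sparsest ~10% of vertices (often floaters, but also thin or poorly covered surface regions like the back of the head) are culled; with quantile = 0 nothing is culled.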
I understand that this is mainly because SuGaR does not support masked data yet. But I remember that in another issue, @Anttwo, you mentioned that you had already implemented this and would update the code soon. This is definitely not a chase-up :p haha. Maybe you can give me some advice on how to do this.
Thanks again for the great work; I'm looking forward to your reply.