Windows Implement, Success! #182
Open · 3051617781 opened this issue Apr 4, 2024 · 11 comments
@3051617781 commented Apr 4, 2024

1. gaussian_splatting

1.1 run train.py (gaussian_splatting):

This should be easy:
run: python gaussian_splatting/train.py -s <path to COLMAP dataset> --iterations 7000 -m <path to the desired output directory>
for example, mine was: python gaussian_splatting/train.py -s gaussian_splatting\sherioc_dataset\train --iterations 7000 -m gaussian_splatting\sherioc_output\train\

1.2 output example

sherioc_dataset is the input folder
sherioc_output is the output folder

2. sugar

2.1 change all the paths for Windows (change every '/' to '\'); see the sketch below for an alternative

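As an alternative to replacing every separator by hand, the paths could be built with pathlib, which picks the right separator for the current OS. This is only a minimal sketch of the idea, not how the SuGaR scripts are actually written:

    from pathlib import Path

    # Path joins parts with the correct separator for the current OS,
    # so the same code works on Windows ('\') and Linux ('/')
    checkpoint_dir = Path("output") / "coarse" / "train"
    print(checkpoint_dir)  # prints output\coarse\train on Windows

    # forward-slash strings are also accepted and normalized
    dataset = Path("gaussian_splatting/sherioc_dataset/train")
    print(dataset.resolve())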

2.2 train.py contains four parts:

----- Optimize coarse SuGaR -----
----- Extract mesh from coarse SuGaR -----
----- Refine SuGaR -----
----- Extract mesh and texture from refined SuGaR -----

you can choose to run the whole script (Ⅰ) or run the stages separately (Ⅱ)
Ⅰ. run whole script (train.py)
python train.py -s gaussian_splatting\sherioc_dataset\train -c gaussian_splatting\sherioc_output\train\ -r "density"

Ⅱ. run separately

----- Optimize coarse SuGaR -----

suppose you chose "density" like I did
python train_coarse_density.py -s gaussian_splatting\sherioc_dataset\train -c gaussian_splatting\sherioc_output\train\

----- Extract mesh from coarse SuGaR -----

python extract_mesh.py -s gaussian_splatting\sherioc_dataset\train -c gaussian_splatting\sherioc_output\train\ -m output\coarse\train\sugarcoarse_3Dgs7000_densityestim02_sdfnorm02\15000.pt

----- Refine SuGaR -----

then run
python train_refined.py -s gaussian_splatting\sherioc_dataset\train -c gaussian_splatting\sherioc_output\train\ -m output\coarse_mesh\train\sugarmesh_3Dgs7000_densityestim02_sdfnorm02_level05_decim1000000.ply

----- Extract mesh and texture from refined SuGaR (generate the obj model)-----

python extract_refined_mesh_with_texture.py -s gaussian_splatting\sherioc_dataset\train -c gaussian_splatting\sherioc_output\train\ -m output\refined\train\sugarfine_3Dgs7000_densityestim02_sdfnorm02_level05_decim200000_normalconsistency01_gaussperface1\15000.pt
Finally, you should get the textured obj model as the result.
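For convenience, the four separate stages above could also be chained from one small wrapper script. This is only a sketch assuming the same dataset, checkpoint, and mesh paths shown in this walkthrough; the actual file names under output\ depend on your settings:

    import subprocess

    src = r"gaussian_splatting\sherioc_dataset\train"
    ckpt = "gaussian_splatting\\sherioc_output\\train\\"  # trailing slash as in the commands above

    def run(script, *extra):
        # run one SuGaR stage and stop the pipeline if it fails
        subprocess.run(["python", script, "-s", src, "-c", ckpt, *extra], check=True)

    run("train_coarse_density.py")
    run("extract_mesh.py", "-m",
        r"output\coarse\train\sugarcoarse_3Dgs7000_densityestim02_sdfnorm02\15000.pt")
    run("train_refined.py", "-m",
        r"output\coarse_mesh\train\sugarmesh_3Dgs7000_densityestim02_sdfnorm02_level05_decim1000000.ply")
    run("extract_refined_mesh_with_texture.py", "-m",
        r"output\refined\train\sugarfine_3Dgs7000_densityestim02_sdfnorm02_level05_decim200000_normalconsistency01_gaussperface1\15000.pt")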

3. viewer on Windows

3.1 obj model

you can view the ply and obj models online or in Blender; this doesn't require the viewer.
This is the rendered mesh; of course there is still a problem: a large amount of debris.

3.2 viewer

just run conda install -c conda-forge nodejs
after that, change to the sugar_viewer directory: cd sugar_viewer
run npm install
then change back to the root directory: cd ..
then run python run_viewer.py -p output\refined_ply\train\<ply model name>
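If the directory switching is confusing, the npm and viewer steps can also be driven from one small script run from the SuGaR repo root. A minimal sketch, assuming nodejs has already been installed with conda as above; replace the <ply model name> placeholder with your own file:

    import subprocess

    # install the viewer's node dependencies inside sugar_viewer/
    subprocess.run("npm install", cwd="sugar_viewer", shell=True, check=True)

    # launch the viewer from the repo root (fill in your own refined ply name)
    ply = r"output\refined_ply\train\<ply model name>"
    subprocess.run(["python", "run_viewer.py", "-p", ply], check=True)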
Viewer result:

@3051617781 reopened this Apr 5, 2024

@3051617781 (Author)

4. Addendum: for my own dataset

4.1 prepare dataset (pics)


4.2 run the COLMAP script (contained in convert.py) and train.py for gaussian_splatting

just run python gaussian_splatting\convert.py -s gaussian_splatting\sherioc_dataset\rongrong

then just run the same script as before:
python gaussian_splatting/train.py -s <path to COLMAP dataset> --iterations 7000 -m <path to the desired output directory>
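The two steps could also be run back to back from a tiny script; this is just a sketch, and gaussian_splatting\sherioc_output\rongrong is a made-up output directory:

    import subprocess

    dataset = r"gaussian_splatting\sherioc_dataset\rongrong"
    output = r"gaussian_splatting\sherioc_output\rongrong"  # hypothetical output directory

    # run COLMAP on the raw pictures, then train the Gaussian Splatting model
    subprocess.run(["python", r"gaussian_splatting\convert.py", "-s", dataset], check=True)
    subprocess.run(["python", "gaussian_splatting/train.py", "-s", dataset,
                    "--iterations", "7000", "-m", output], check=True)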

4.3 result

obj mesh: seems okay
hybrid representation

@ruim commented Apr 8, 2024

I was able to successfully use this on Windows because of this post! thank you so much :)

@ruim commented Apr 9, 2024

after following your guides I was able to extract a mesh and textures... but like your example images, the quality of the mesh is very low. Do you know of any way to improve mesh quality? The images in the SuGaR repo look amazing, but mine don't look like it at all...
Thanks

@max-well-d

Hello, I'm having trouble deploying the environment using Windows. It seems that simply using conda env create -f environment.yml isn't working. What should I do?

@ruim commented Apr 11, 2024

> Hello, I'm having trouble deploying the environment using Windows. It seems that simply using conda env create -f environment.yml isn't working. What should I do?

The installation instructions also explain how to install all the dependencies if that command fails. You have to do it that way.

@3051617781 (Author)

> after following your guides I was able to extract a mesh and textures... but like your example images, the quality of the mesh is very low. Do you know of any way to improve mesh quality? The images in the SuGaR repo look amazing, but mine don't look like it at all... Thanks

To be honest, I don't know exactly how to get a high-quality mesh. /cry
I'm not very familiar with the SuGaR project's source code and method either (I'm also a bit of a beginner in the CV field). Very sorry.
The author mentioned this in the post below; I guess you have already read it:
#141 (comment)

I'm studying the project hard, and I hope that some day I can give you some advice about it.

@3051617781 (Author)

> Hello, I'm having trouble deploying the environment using Windows. It seems that simply using conda env create -f environment.yml isn't working. What should I do?

You could install the packages separately.

In addition, here are some problems I encountered during the installation that may be helpful. I failed to install the pytorch3d package at first.

  1. clone the pytorch3d repo
    https://github.com/facebookresearch/pytorch3d/
  2. open the VS2019 prompt
  3. cd to the pytorch3d directory, then run:
    set DISTUTILS_USE_SDK=1
    set PYTORCH3D_NO_NINJA=1
    python setup.py install
    after a few minutes, you can check whether pytorch3d is installed using the following commands.

python
import pytorch3d
print(pytorch3d.__version__)
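One extra check that is not part of the original instructions but can save time: pytorch3d is compiled against the PyTorch/CUDA combination already installed in the environment, so before running setup.py it is worth confirming they are visible:

python
import torch
print(torch.__version__)          # PyTorch version pytorch3d will build against
print(torch.version.cuda)         # CUDA version this PyTorch was built with (None = CPU-only)
print(torch.cuda.is_available())  # True if a CUDA GPU is visible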

@max-well-d

Thank you for your help. I did encounter an issue with installing the package pytorch3d, but I successfully resolved it following your method.

@max-well-d
> The installation instructions also explain how to install all the dependencies if that command fails. You have to do it that way.

You are right, I didn't notice that. Thank you for your help. I have successfully deployed the environment.

@Falkonar
Is there a way to proceed with digitally generated content, not video recorded on a camera? I am creating Gaussian splats in Lucid Dreamer for backgrounds and searching for a way to add a mesh. So I don't have a COLMAP dataset, only a .ply file or a recording from the viewer.

@hxj2580 commented Apr 27, 2024


Hi, I have a result from 3D Gaussian Splatting, including iteration_7000 and iteration_30000. What can I do to extract a mesh from the ply?
