
can i use this in windows 10? #21

Open
vincentchen2017 opened this issue Sep 22, 2023 · 9 comments

Comments

@vincentchen2017

vincentchen2017 commented Sep 22, 2023

I run this on Windows 10 with conda and Python 3.10. `pip install -r requirements.txt` works fine, but when I run `python app.py` this error occurs:
Traceback (most recent call last):
File "D:\simon\project\One-2-3-45\demo\app.py", line 639, in
fire.Fire(run_demo)
File "C:\Users\u.conda\envs\one2345\lib\site-packages\fire\core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "C:\Users\u.conda\envs\one2345\lib\site-packages\fire\core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "C:\Users\u.conda\envs\one2345\lib\site-packages\fire\core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "D:\simon\project\One-2-3-45\demo\app.py", line 449, in run_demo
models = init_model(device, os.path.join(code_dir, 'zero123-xl.ckpt'), half_precision=HALF_PRECISION)
File "..\utils\zero123_utils.py", line 45, in init_model
models['turncam'] = torch.compile(load_model_from_config(config, ckpt, device=device)).half()
File "C:\Users\u.conda\envs\one2345\lib\site-packages\torch\__init__.py", line 1441, in compile
return torch._dynamo.optimize(backend=backend, nopython=fullgraph, dynamic=dynamic, disable=disable)(model)
File "C:\Users\u.conda\envs\one2345\lib\site-packages\torch\_dynamo\eval_frame.py", line 413, in optimize
check_if_dynamo_supported()
File "C:\Users\u.conda\envs\one2345\lib\site-packages\torch\_dynamo\eval_frame.py", line 375, in check_if_dynamo_supported
raise RuntimeError("Windows not yet supported for torch.compile")
RuntimeError: Windows not yet supported for torch.compile
What should I do? My torch is 2.0.1+cu118.
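As the traceback shows, torch 2.0.x raises a `RuntimeError` from `torch.compile` on Windows. One general workaround (a hypothetical helper for illustration, not part of this repo) is to guard the compile call by platform so the same code runs everywhere:

```python
import platform

def maybe_compile(model, compile_fn):
    """Apply compile_fn (e.g. torch.compile) only where it is supported.

    torch 2.0.x raises "Windows not yet supported for torch.compile",
    so on Windows we simply return the uncompiled model.
    """
    if platform.system() == "Windows":
        return model
    return compile_fn(model)
```

In `zero123_utils.py` this would mean calling `maybe_compile(load_model_from_config(...), torch.compile)` instead of `torch.compile(load_model_from_config(...))`.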

@vincentchen2017
Author

vincentchen2017 commented Sep 22, 2023

I changed the Python version to 3.9 and then 3.8 and reinstalled, but more errors occur.

vincentchen2017 changed the title "windows环境能用这个吗" ("Can this be used on Windows?") to "can i use this in windows 10?" Sep 22, 2023
@Dustinpro
Collaborator

You may delete the torch.compile().

models['turncam'] = load_model_from_config(config, ckpt, device=device).half()

@vincentchen2017
Author

Thanks, I will try it later. For now I run this in Docker and it works well.

@MHassan1122

MHassan1122 commented Jan 26, 2024

@vincentchen2017
Can you please tell me about the format of the input image? I tried the obj and glb formats but they don't work; it only works for png and jpg 2D images.

@vincentchen2017
Author

I use images in PNG format.

@vincentchen2017
Author

If you use this on Windows, do the following:

1. Modify One-2-3-45\demo\app.py: change

       with open('instructions_12345.md', 'r') as f:

   to

       with open('instructions_12345.md', 'r', encoding='utf-8') as f:

2. Modify One-2-3-45\utils\zero123_utils.py: change

       if half_precision:
           models['turncam'] = torch.compile(load_model_from_config(config, ckpt, device=device)).half()
       else:
           models['turncam'] = torch.compile(load_model_from_config(config, ckpt, device=device))

   to

       if half_precision:
           models['turncam'] = load_model_from_config(config, ckpt, device=device).half()
       else:
           models['turncam'] = load_model_from_config(config, ckpt, device=device)

3. Download https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net.onnx and put it at C:\Users\<your user>\.u2net
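Step 3 can also be scripted. Below is a sketch assuming rembg's default model directory of `~/.u2net` (which resolves to `C:\Users\<user>\.u2net` on Windows); the helper names are illustrative, not part of the repo:

```python
import os
import urllib.request

U2NET_URL = ("https://github.com/danielgatis/rembg/releases/"
             "download/v0.0.0/u2net.onnx")

def u2net_dest():
    # rembg looks for its weights under ~/.u2net by default
    return os.path.join(os.path.expanduser("~"), ".u2net", "u2net.onnx")

def ensure_u2net():
    dest = u2net_dest()
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    if not os.path.exists(dest):
        # the model file is large (~170 MB), so this may take a while
        urllib.request.urlretrieve(U2NET_URL, dest)
    return dest
```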

@MHassan1122

Dear, I followed your steps but it gives me this. As you can see, the last line says the mesh was saved to the directory, but there is no mesh.ply file in the mentioned directory. The other thing is that 'CUDA_VISIBLE_DEVICES' is not recognized as an internal or external command, operable program or batch file. I have the CUDA toolkit installed, so I'm confused why CUDA is not recognized...

C:\Users\admin\One-2-3-45-master>python run.py --img_path ./building.jpg --half_precision
Instantiating LatentDiffusion...
Loading model from zero123-xl.ckpt
C:\Users\admin\anaconda3\envs\flexicubes\lib\site-packages\torchaudio\backend\utils.py:74: UserWarning: No audio backend is available.
warnings.warn("No audio backend is available.")
Global Step: 122000
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.53 M params.
Keeping EMAs of 688.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Instantiating StableDiffusionSafetyChecker...
C:\Users\admin\anaconda3\envs\flexicubes\lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
SAM Time: 3.513s
Data shape for DDIM sampling is (4, 4, 32, 32), eta 1.0
Running DDIM Sampling with 76 timesteps
DDIM Sampler: 100%|████████████████████████████████████████████████████████████████████| 76/76 [00:11<00:00, 6.84it/s]
Data shape for DDIM sampling is (4, 4, 32, 32), eta 1.0
Running DDIM Sampling with 49 timesteps
DDIM Sampler: 100%|████████████████████████████████████████████████████████████████████| 49/49 [00:34<00:00, 1.41it/s]
2024-01-26 20:03:17.630 | INFO | elevation_estimate.utils.elev_est_api:get_feature_matcher:25 - Loading feature matcher...
2024-01-26 20:03:17.787 | INFO | elevation_estimate.utils.elev_est_api:mask_out_bkgd:48 - Image has no alpha channel, using thresholding to mask out background
2024-01-26 20:03:17.789 | INFO | elevation_estimate.utils.elev_est_api:mask_out_bkgd:48 - Image has no alpha channel, using thresholding to mask out background
2024-01-26 20:03:17.791 | INFO | elevation_estimate.utils.elev_est_api:mask_out_bkgd:48 - Image has no alpha channel, using thresholding to mask out background
2024-01-26 20:03:17.794 | INFO | elevation_estimate.utils.elev_est_api:mask_out_bkgd:48 - Image has no alpha channel, using thresholding to mask out background
2024-01-26 20:03:19.368 | WARNING | elevation_estimate.utils.elev_est_api:elev_est_api:199 - K is not provided, using default K
Estimated polar angle: 99
Data shape for DDIM sampling is (4, 4, 32, 32), eta 1.0
Running DDIM Sampling with 76 timesteps
DDIM Sampler: 100%|████████████████████████████████████████████████████████████████████| 76/76 [01:34<00:00, 1.25s/it]
Data shape for DDIM sampling is (4, 4, 32, 32), eta 1.0
Running DDIM Sampling with 49 timesteps
DDIM Sampler: 100%|████████████████████████████████████████████████████████████████████| 49/49 [00:31<00:00, 1.56it/s]
Data shape for DDIM sampling is (4, 4, 32, 32), eta 1.0
Running DDIM Sampling with 49 timesteps
DDIM Sampler: 100%|████████████████████████████████████████████████████████████████████| 49/49 [01:14<00:00, 1.52s/it]
Data shape for DDIM sampling is (4, 4, 32, 32), eta 1.0
Running DDIM Sampling with 49 timesteps
DDIM Sampler: 100%|████████████████████████████████████████████████████████████████████| 49/49 [01:37<00:00, 1.99s/it]
Data shape for DDIM sampling is (4, 4, 32, 32), eta 1.0
Running DDIM Sampling with 49 timesteps
DDIM Sampler: 100%|████████████████████████████████████████████████████████████████████| 49/49 [01:02<00:00, 1.28s/it]
Data shape for DDIM sampling is (4, 4, 32, 32), eta 1.0
Running DDIM Sampling with 49 timesteps
DDIM Sampler: 100%|████████████████████████████████████████████████████████████████████| 49/49 [01:42<00:00, 2.08s/it]
Data shape for DDIM sampling is (4, 4, 32, 32), eta 1.0
Running DDIM Sampling with 49 timesteps
DDIM Sampler: 100%|████████████████████████████████████████████████████████████████████| 49/49 [00:59<00:00, 1.21s/it]
Data shape for DDIM sampling is (4, 4, 32, 32), eta 1.0
Running DDIM Sampling with 49 timesteps
DDIM Sampler: 100%|████████████████████████████████████████████████████████████████████| 49/49 [01:37<00:00, 1.99s/it]
CUDA_VISIBLE_DEVICES=0 python exp_runner_generic_blender_val.py --specific_dataset_name C:\Users\admin\One-2-3-45-master\exp\building --mode export_mesh --conf confs/one2345_lod0_val_demo.conf --resolution 256
'CUDA_VISIBLE_DEVICES' is not recognized as an internal or external command,
operable program or batch file.
Mesh saved to: C:\Users\admin\One-2-3-45-master\exp\building\mesh.ply

@MHassan1122

[screenshot: 1-2-3-4-5]
Here you can see in the picture that it gives me the different 2D images, but mesh.ply has some problem in saving.

@Dustinpro
Collaborator

`CUDA_VISIBLE_DEVICES=idx` led to an error. You may delete it.
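The `VAR=value command` prefix is POSIX-shell syntax, which cmd.exe does not understand; that is why Windows reports 'CUDA_VISIBLE_DEVICES' is not recognized. If you want to keep the variable rather than delete it, a portable alternative (a sketch, not the repo's actual code) is to set it in the environment passed to the subprocess:

```python
import os
import subprocess
import sys

def run_with_gpu(cmd_args, gpu="0"):
    """Run cmd_args with CUDA_VISIBLE_DEVICES set, portably across OSes."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpu)
    return subprocess.run(cmd_args, env=env)

# e.g. run_with_gpu([sys.executable, "exp_runner_generic_blender_val.py",
#                    "--mode", "export_mesh"])
```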
