
[BUG] Problems getting latest beta to run #488

Open
vredesbyyrd opened this issue Mar 15, 2024 · 1 comment

vredesbyyrd commented Mar 15, 2024

Hi, thanks for sharing this awesome tool.

I'm struggling to get the latest beta to run. I installed iopaint in a virtual environment on Arch Linux. The models were originally installed in /.cache, but I set the --model-dir option in the command below, hoping that re-downloading them would help, but no dice. Any thoughts on what could be causing this? Thanks for your time.

System Info

  • lama-cleaner: 1.2.2
  • pytorch: 2.2.1

EDIT 2:

After downgrading one release, I no longer see any errors: pip install --force-reinstall -v "iopaint==1.2.0"
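For reference, the full downgrade inside a virtual environment would look roughly like this (a sketch; the venv path is illustrative, and --force-reinstall tells pip to reinstall even when some version of the package is already present):

```shell
# Activate the virtual environment iopaint was installed into
# (path is illustrative, not from the report).
source ~/venvs/iopaint/bin/activate

# Pin to the previous release; --force-reinstall reinstalls even if
# a version is already installed, and -v prints verbose output.
pip install --force-reinstall -v "iopaint==1.2.0"

# Confirm which version is now installed.
pip show iopaint | grep Version
```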

EDIT:

After rebooting my system I am still seeing errors, but they're different. I can open the UI, but the model menu is empty.

iopaint start --model=lama --device=cpu --port=8080 --model-dir='/home/clu/build/python-iopaint/iopaint/models'
2024-03-15 18:17:36.514 | INFO     | iopaint.runtime:setup_model_dir:82 - Model directory: /home/clu/build/python-iopaint/iopaint/models
- Platform: Linux-6.7.9-arch1-1-x86_64-with-glibc2.39
- Python version: 3.11.8
- torch: 2.2.1
- torchvision: 0.17.1
- Pillow: 9.5.0
- diffusers: 0.26.3
- transformers: 4.38.2
- opencv-python: 4.9.0.80
- accelerate: 0.28.0
- iopaint: 1.2.2
- rembg: N/A
- realesrgan: N/A
- gfpgan: N/A

[W init.cpp:767] Warning: nvfuser is no longer supported in torch script, use _jit_set_nvfuser_enabled is deprecated and a no-op (function operator())
{
    "host": "127.0.0.1",
    "port": 8080,
    "inbrowser": false,
    "model": "lama",
    "no_half": false,
    "low_mem": false,
    "cpu_offload": false,
    "disable_nsfw_checker": false,
    "local_files_only": false,
    "cpu_textencoder": false,
    "device": "cpu",
    "input": null,
    "output_dir": null,
    "quality": 95,
    "enable_interactive_seg": false,
    "interactive_seg_model": "vit_b",
    "interactive_seg_device": "cpu",
    "enable_remove_bg": false,
    "remove_bg_model": "briaai/RMBG-1.4",
    "enable_anime_seg": false,
    "enable_realesrgan": false,
    "realesrgan_device": "cpu",
    "realesrgan_model": "realesr-general-x4v3",
    "enable_gfpgan": false,
    "gfpgan_device": "cpu",
    "enable_restoreformer": false,
    "restoreformer_device": "cpu"
}
2024-03-15 18:17:44.999 | INFO     | iopaint.model_manager:init_model:38 - Loading model: lama
2024-03-15 18:17:45.000 | INFO     | iopaint.helper:load_jit_model:107 - Loading model from: /home/clu/build/python-iopaint/iopaint/models/torch/hub/checkpoints/big-lama.pt
INFO:     Started server process [17912]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8080 (Press CTRL+C to quit)
INFO:     127.0.0.1:36072 - "GET /socket.io/?EIO=4&transport=polling&t=Ov4Q6J0 HTTP/1.1" 200 OK
INFO:     127.0.0.1:36096 - "GET /api/v1/inputimage HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:36080 - "GET /api/v1/model HTTP/1.1" 200 OK
INFO:     127.0.0.1:36072 - "GET /api/v1/models HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:36102 - "GET /api/v1/server-config HTTP/1.1" 200 OK
INFO:     127.0.0.1:36102 - "POST /socket.io/?EIO=4&transport=polling&t=Ov4Q6Kw&sid=WiQ9_QRPc1b-CQpHAAAA HTTP/1.1" 200 OK
INFO:     127.0.0.1:36080 - "GET /socket.io/?EIO=4&transport=polling&t=Ov4Q6Kx&sid=WiQ9_QRPc1b-CQpHAAAA HTTP/1.1" 200 OK
INFO:     ('127.0.0.1', 36112) - "WebSocket /socket.io/?EIO=4&transport=websocket&sid=WiQ9_QRPc1b-CQpHAAAA" [accepted]
INFO:     connection open
INFO:     127.0.0.1:36102 - "GET /socket.io/?EIO=4&transport=polling&t=Ov4Q6L5&sid=WiQ9_QRPc1b-CQpHAAAA HTTP/1.1" 200 OK
INFO:     127.0.0.1:36102 - "GET /socket.io/?EIO=4&transport=polling&t=Ov4Q6LH&sid=WiQ9_QRPc1b-CQpHAAAA HTTP/1.1" 200 OK
INFO:     127.0.0.1:36102 - "GET /socket.io/?EIO=4&transport=polling&t=Ov4Q6La&sid=WiQ9_QRPc1b-CQpHAAAA HTTP/1.1" 200 OK
INFO:     127.0.0.1:36102 - "GET /api/v1/models HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:36102 - "GET /api/v1/models HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:36102 - "GET /api/v1/models HTTP/1.1" 404 Not Found

Sanster (Owner) commented Mar 18, 2024

The model menu only displays models that have already been downloaded. Specifying a model when starting the service will download it automatically, for example --model runwayml/stable-diffusion-inpainting. More supported models are listed here: https://www.iopaint.com/models.
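Putting that together with the flags from the log above, a start command that downloads a diffusion model on first run (so it then appears in the model menu) might look like this (a sketch; the model id is the one named in the comment, and the model-dir path is copied from the original command):

```shell
# Download (on first run) and serve the specified inpainting model;
# downloaded models then show up in the web UI's model menu.
iopaint start --model=runwayml/stable-diffusion-inpainting \
  --device=cpu --port=8080 \
  --model-dir='/home/clu/build/python-iopaint/iopaint/models'
```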
