
inference.py: error: unrecognized arguments: #62

Open
ruoxuer opened this issue Jan 28, 2024 · 2 comments

Comments


ruoxuer commented Jan 28, 2024

(screenshot of the error output)
Why does running `sh scripts/run_image2video.sh` report so many unrecognized arguments? I have already downloaded the model and placed it in the checkpoints folder as required by the docs.

@catoapowell

I am getting the same issue on WSL.

/mnt/d/Projects/AI/VideoCrafter$ sh scripts/run_text2video.sh --ckpt_path checkpoints\base_512_v2\model.ckpt
scripts/run_text2video.sh: 2: : not found
scripts/run_text2video.sh: 5: : not found
scripts/run_text2video.sh: 8: : not found
@CoLVDM Inference: 2024-02-03-23-41-25
usage: inference.py [-h] [--seed SEED] [--mode MODE] [--ckpt_path CKPT_PATH] [--config CONFIG]
[--prompt_file PROMPT_FILE] [--savedir SAVEDIR] [--savefps SAVEFPS] [--n_samples N_SAMPLES]
[--ddim_steps DDIM_STEPS] [--ddim_eta DDIM_ETA] [--bs BS] [--height HEIGHT] [--width WIDTH]
[--frames FRAMES] [--fps FPS] [--unconditional_guidance_scale UNCONDITIONAL_GUIDANCE_SCALE]
[--unconditional_guidance_scale_temporal UNCONDITIONAL_GUIDANCE_SCALE_TEMPORAL]
[--cond_input COND_INPUT]
inference.py: error: unrecognized arguments:
scripts/run_text2video.sh: 10: --seed: not found
scripts/run_text2video.sh: 11: --mode: not found
scripts/run_text2video.sh: 12: --ckpt_path: not found
scripts/run_text2video.sh: 13: --config: not found
scripts/run_text2video.sh: 14: --savedir: not found
scripts/run_text2video.sh: 15: --n_samples: not found
scripts/run_text2video.sh: 16: --bs: not found
scripts/run_text2video.sh: 17: --unconditional_guidance_scale: not found
scripts/run_text2video.sh: 18: --ddim_steps: not found
scripts/run_text2video.sh: 19: --ddim_eta: not found
scripts/run_text2video.sh: 20: --prompt_file: not found
scripts/run_text2video.sh: 21: --fps: not found

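The `: not found` errors that name line numbers in the .sh file are the classic symptom of a script saved with Windows (CRLF) line endings and run under `sh`: every line carries a trailing carriage return, so command names and flags like `--seed` are looked up with an invisible `\r` attached and fail. A minimal reproduction and fix, assuming GNU `sed` is available (`dos2unix` works too):

```shell
# Reproduce the suspected failure: a script with CRLF line endings fails
# under sh because each command name carries a trailing carriage return.
printf 'true\r\n' > crlf_demo.sh        # one-line script, CRLF-terminated
sh crlf_demo.sh 2>&1 || echo "fails with CRLF endings"

# Strip the carriage returns (the same fix would apply to
# scripts/run_text2video.sh; dos2unix also works):
sed -i 's/\r$//' crlf_demo.sh
sh crlf_demo.sh && echo "runs after stripping CRLF"
```

Separately, under WSL the checkpoint path should use forward slashes (`checkpoints/base_512_v2/model.ckpt`), since the backslashes in `checkpoints\base_512_v2\model.ckpt` are consumed by the shell before the script ever sees them.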

catoapowell commented Feb 4, 2024

I potentially found some temporary fixes for this.

Open the file scripts/evaluation/inference.py in a text editor.

Change:
args = parser.parse_args()
to:
args, unknown = parser.parse_known_args()

But now I am getting a new error:

Traceback (most recent call last):
  File "/mnt/d/Projects/AI/VideoCrafter/scripts/evaluation/inference.py", line 132, in <module>
    run_inference(args, gpu_num, rank)
  File "/mnt/d/Projects/AI/VideoCrafter/scripts/evaluation/inference.py", line 21, in run_inference
    config = OmegaConf.load(args.config)
  File "/home/echo/.local/lib/python3.10/site-packages/omegaconf/omegaconf.py", line 188, in load
    raise TypeError("Unexpected file type")

Edit: I solved the new issue; the main .sh file is failing to pass its settings to the program.

Open scripts/evaluation/inference.py in a text editor again, and replace all of the parser.add_argument calls with:

parser.add_argument("--seed", type=int, default=20230211, help="seed for seed_everything")
parser.add_argument("--mode", default="base", type=str, help="which kind of inference mode: {'base', 'i2v'}")
parser.add_argument("--ckpt_path", type=str, default="checkpoints/base_512_v2/model.ckpt", help="checkpoint path")
parser.add_argument("--config", type=str, default="configs/inference_t2v_512_v2.0.yaml", help="config (yaml) path")
parser.add_argument("--prompt_file", type=str, default="prompts/test_prompts.txt", help="a text file containing many prompts")
parser.add_argument("--savedir", type=str, default="results/base_512_v2", help="results saving path")
parser.add_argument("--savefps", type=int, default=10, help="video fps to generate")
parser.add_argument("--n_samples", type=int, default=1, help="num of samples per prompt",)
parser.add_argument("--ddim_steps", type=int, default=50, help="steps of ddim if positive, otherwise use DDPM",)
parser.add_argument("--ddim_eta", type=float, default=1.0, help="eta for ddim sampling (0.0 yields deterministic sampling)",)
parser.add_argument("--bs", type=int, default=1, help="batch size for inference")
parser.add_argument("--height", type=int, default=512, help="image height, in pixel space")
parser.add_argument("--width", type=int, default=512, help="image width, in pixel space")
parser.add_argument("--frames", type=int, default=-1, help="frames num to inference")
parser.add_argument("--fps", type=int, default=24)
parser.add_argument("--unconditional_guidance_scale", type=float, default=1.0, help="prompt classifier-free guidance")
parser.add_argument("--unconditional_guidance_scale_temporal", type=float, default=None, help="temporal consistency guidance")
parser.add_argument("--cond_input", type=str, default=None, help="data dir of conditional input")

Once again I have gotten a new error, but this could just be because I am using WSL:

DDIM scale True
ddim device cuda:0
Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory
Aborted

Edit:
It's a CUDA error, but I tried installing CUDA on WSL and that didn't fix the issue.
I added:

export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH

to the top of scripts/run_text2video.sh

That fixed that issue, but now I am getting:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 8.00 GiB total capacity; 7.10 GiB already allocated; 0 bytes free; 7.26 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
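The error message's own suggestion can be tried before lowering any settings: PyTorch reads the allocator configuration from a documented environment variable, so it can be set in the shell (or at the top of the .sh script) before launching. The value 128 MiB below is only an illustrative starting point, not a recommendation from this thread:

```shell
# Cap the allocator's split size to reduce fragmentation when reserved
# memory is much larger than allocated memory (value in MiB; 128 is an
# illustrative choice to tune for your GPU).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# then rerun: sh scripts/run_text2video.sh
```

If that is not enough, lowering `--height`, `--width`, or `--bs` in the defaults pasted above also directly reduces memory use.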

(Maybe your GPU can handle it and you didn't get this issue, but I have two GPUs and want to split the load between them.)
Anyways I'll keep working on this for other people who have the same issue.

Edit: I didn't find a permanent solution; however, if you have multiple GPUs like me, I recommend going into Device Manager and disabling your weaker GPUs so it defaults to your more powerful GPU. That seemed to fully fix the issue for me. You can also reduce some of the settings you pasted into scripts/evaluation/inference.py to lower GPU memory usage.
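A per-run alternative to disabling a GPU in Device Manager is to hide the weaker card from CUDA with the standard `CUDA_VISIBLE_DEVICES` environment variable. The index `0` below is an assumption; check `nvidia-smi` to see which index corresponds to the stronger card:

```shell
# Expose only GPU index 0 to the process (assumed here to be the
# stronger card; verify the index with nvidia-smi first).
export CUDA_VISIBLE_DEVICES=0
# then rerun: sh scripts/run_text2video.sh
```

This avoids touching system settings and can be put at the top of the launcher script alongside the LD_LIBRARY_PATH export.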

I hope this helps you figure it out!
