
AssertionError - assert conversion_footprint and optimizer_footprint when converting a model with size>512 #842

Open
gilsegev opened this issue Dec 21, 2023 · 2 comments

Describe the bug
I tried converting epicphotogasm_lastUnicorn with 768x768 or 1024x1024, and the conversion fails. The model converts successfully with 512x512.

To Reproduce
Convert the epicphotogasm model with a size larger than 512x512.

Expected behavior
The model converts successfully.

Olive config
Add Olive configurations here.

Olive logs
Optimizing unet
[2023-12-20 22:46:50,424] [INFO] [engine.py:179:setup_accelerators] Running workflow on accelerator specs: gpu-dml
[2023-12-20 22:46:50,437] [INFO] [engine.py:929:_run_pass] Running pass convert:OnnxConversion
[2023-12-20 22:46:56,836] [ERROR] [engine.py:1002:_run_pass] Pass run failed.
Traceback (most recent call last):
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\olive\engine\engine.py", line 990, in _run_pass
output_model_config = host.run_pass(p, input_model_config, data_root, output_model_path, pass_search_point)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\olive\systems\local.py", line 32, in run_pass
output_model = the_pass.run(model, data_root, output_model_path, point)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\olive\passes\olive_pass.py", line 371, in run
output_model = self._run_for_config(model, data_root, config, output_model_path)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\olive\passes\onnx\conversion.py", line 121, in _run_for_config
return self._convert_model_on_device(model, data_root, config, output_model_path, device, torch_dtype)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\olive\passes\onnx\conversion.py", line 343, in _convert_model_on_device
converted_onnx_model = OnnxConversion._export_pytorch_model(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\olive\passes\onnx\conversion.py", line 216, in _export_pytorch_model
torch.onnx.export(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\onnx\utils.py", line 504, in export
_export(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\onnx\utils.py", line 1529, in _export
graph, params_dict, torch_out = _model_to_graph(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\onnx\utils.py", line 1111, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\onnx\utils.py", line 987, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\onnx\utils.py", line 891, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\jit\_trace.py", line 1184, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\jit\_trace.py", line 127, in forward
graph, out = torch._C._create_graph_by_tracing(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\jit\_trace.py", line 118, in wrapper
outs.append(self.inner(*trace_inputs))
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 1075, in forward
sample, res_samples = downsample_block(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1160, in forward
hidden_states = attn(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\transformer_2d.py", line 392, in forward
hidden_states = block(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\attention.py", line 323, in forward
attn_output = self.attn2(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\attention_processor.py", line 522, in forward
return self.processor(
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\attention_processor.py", line 736, in __call__
key = attn.to_k(encoder_hidden_states, *args)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\lora.py", line 430, in forward
out = super().forward(hidden_states)
File "F:\Olive\stable-diffusion-webui-directml\extensions-builtin\Lora\networks.py", line 429, in network_Linear_forward
return originals.Linear_forward(self, input)
File "F:\Olive\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
File "F:\Olive\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 39, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: forward(op, args, kwargs))
File "F:\Olive\stable-diffusion-webui-directml\modules\dml\amp\autocast_mode.py", line 13, in forward
return op(*args, **kwargs)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x1280 and 768x320)
[2023-12-20 22:46:57,434] [WARNING] [engine.py:912:_run_passes] Skipping evaluation as model was pruned
[2023-12-20 22:46:57,438] [INFO] [engine.py:610:create_pareto_frontier_footprints] Output all 0 models
[2023-12-20 22:46:57,438] [INFO] [engine.py:357:run] Run history for gpu-dml:
[2023-12-20 22:46:57,440] [INFO] [engine.py:636:dump_run_history] Please install tabulate for better run history output
[2023-12-20 22:46:57,441] [INFO] [engine.py:372:run] No packaging config provided, skip packaging artifacts
*** Error completing request
*** Arguments: ('F:\SD new\stable-diffusion-webui-directml\models\Stable-diffusion\epicphotogasm_lastUnicorn.safetensors', '', 'vae', 'epicphotogasm_lastUnicorn', 'epicphotogasm_lastUnicorn', 'runwayml/stable-diffusion-v1-5', '', 'vae', 'stable-diffusion-v1-5', 'stable-diffusion-v1-5', True, True, True, True, True, True, True, True, True, True, 'euler', True, 1024, False, '', '', '') {}
Traceback (most recent call last):
File "F:\Olive\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "F:\Olive\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "F:\Olive\stable-diffusion-webui-directml\modules\ui.py", line 2098, in optimize
return optimize_sd_from_ckpt(
File "F:\Olive\stable-diffusion-webui-directml\modules\sd_olive_ui.py", line 94, in optimize_sd_from_ckpt
optimize(
File "F:\Olive\stable-diffusion-webui-directml\modules\sd_olive_ui.py", line 363, in optimize
assert conversion_footprint and optimizer_footprint
AssertionError
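For context, the `RuntimeError` in the log can be reproduced mechanically outside Olive. In an SD 1.5 checkpoint the UNet's first cross-attention `to_k` projection is a `Linear(768, 320)` (cross_attention_dim 768), but the traced model received 77x1280 encoder hidden states, so `F.linear` fails during export; Olive then produces zero output models and the `assert conversion_footprint and optimizer_footprint` fires. A minimal sketch (the dimensions are taken from the error message; the layer below is a stand-in, not Olive or diffusers code):

```python
import torch

# Stand-in for the SD 1.5 UNet cross-attention key projection:
# 768-dim text embeddings in, 320-dim attention features out.
to_k = torch.nn.Linear(768, 320, bias=False)

# The traced UNet instead received 77x1280 encoder hidden states
# (77 tokens, 1280-dim embeddings), as reported in the traceback.
encoder_hidden_states = torch.randn(77, 1280)

try:
    to_k(encoder_hidden_states)
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (77x1280 and 768x320)
```

This suggests the >512 optimization path pairs the v1.5 UNet with text-encoder output of the wrong hidden size, rather than a resolution limit in the UNet itself.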

Other information
config_safety_checker.json
config_text_encoder.json
config_unet.json
config_vae_decoder.json
config_vae_encoder.json

  • OS: win10
  • Olive version:
  • ONNXRuntime package and version: [e.g. onnxruntime-gpu: 1.16.1]

Additional context
Add any other context about the problem here.

@trajepl
Contributor

trajepl commented Dec 25, 2023

Quick question: what do you mean by 512x512, 768x768 or 1024x1024? Are these input dimensions?

@gilsegev
Author

Quick question: what do you mean by 512x512, 768x768 or 1024x1024? Are these input dimensions?

Sorry, I should have been clearer. I meant the image-size selection in the checkpoint optimization screen. I am able to optimize the model with 512x512 and then generate 512x512 at ~20 it/s, 768x768 at 8.6 it/s, and 1024x1024 at 5 it/s.
I assumed that optimizing the model for 768x768 or 1024x1024 would improve performance at those resolutions, but the optimization fails.
