
TypeError: LoraInjectedLinear.forward() got an unexpected keyword argument 'scale' #259

Open
ethvedbitdesjan opened this issue Sep 15, 2023 · 8 comments


@ethvedbitdesjan

ethvedbitdesjan commented Sep 15, 2023

I got this error while running two different scripts: run_lora_db_unet_only.sh and use_face_conditioning_example.sh.

The only changes I made were to point at my own data and output directories. For the second script I also set use_template to "style"; everything else remained the same.

I was running this on Kaggle notebooks, so I am not sure whether that is the problem.

Below is the full error output from running use_face_conditioning_example.sh:

PTI : has 288 lora
PTI : Before training:
Steps:   0%|                                           | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
  File "/opt/conda/bin/lora_pti", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/cli_lora_pti.py", line 1040, in main
    fire.Fire(train)
  File "/opt/conda/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/opt/conda/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/opt/conda/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/cli_lora_pti.py", line 1012, in train
    perform_tuning(
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/cli_lora_pti.py", line 591, in perform_tuning
    loss = loss_step(
  File "/opt/conda/lib/python3.10/site-packages/lora_diffusion/cli_lora_pti.py", line 322, in loss_step
    model_pred = unet(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 956, in forward
    sample, res_samples = downsample_block(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py", line 1086, in forward
    hidden_states = attn(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/diffusers/models/transformer_2d.py", line 315, in forward
    hidden_states = block(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/diffusers/models/attention.py", line 197, in forward
    attn_output = self.attn1(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 420, in forward
    return self.processor(
  File "/opt/conda/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 1019, in __call__
    query = attn.to_q(hidden_states, scale=scale)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
TypeError: LoraInjectedLinear.forward() got an unexpected keyword argument 'scale'

The error is similar to the one I get when running run_lora_db_unet_only.sh.

I am not sure how to resolve this. Thank you for your time; I am just a beginner, so sorry for the trouble.

@shohog

shohog commented Sep 18, 2023

Build the environment from source.

@ethvedbitdesjan
Author

Build the environment from source.

I did run

!pip install git+https://github.com/cloneofsimo/lora.git

Is this not building from source?

@xwhzz

xwhzz commented Sep 24, 2023

You may need to downgrade the Python packages, e.g. to diffusers==0.11.0 and transformers==4.25.1.
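If you are not sure which versions a Kaggle environment already ships, a quick check before pinning can help; this sketch uses only the standard library's importlib.metadata:

```python
# Report the installed versions of the relevant packages before downgrading.
# Packages that are absent are reported as None instead of raising.
from importlib.metadata import version, PackageNotFoundError

versions = {}
for pkg in ("diffusers", "transformers"):
    try:
        versions[pkg] = version(pkg)
    except PackageNotFoundError:
        versions[pkg] = None

print(versions)
```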

@justin-prnd

query = attn.to_q(hidden_states, scale=scale)

The latest diffusers uses the LoRACompatibleLinear module rather than nn.Linear.

The forward method of LoRACompatibleLinear, however, takes an additional keyword argument, scale, which LoraInjectedLinear.forward() does not accept.

Since LoraInjectedLinear already stores a scale as an instance variable, simply modifying the signature of LoraInjectedLinear.forward resolves the issue.

class LoraInjectedLinear(nn.Module):
  ...
  def forward(self, input, scale: float = 1.0):  # FIXED: accept (and ignore) the scale kwarg
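For context, here is a minimal, self-contained sketch of that fix. This is not the exact lora_diffusion implementation (which carries more options); the only part that matters for this issue is the forward signature, which now absorbs the scale keyword that newer diffusers attention processors pass:

```python
import torch
import torch.nn as nn

class LoraInjectedLinear(nn.Module):
    """Simplified sketch of a LoRA-wrapped linear layer."""

    def __init__(self, in_features, out_features, r=4, lora_scale=1.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.lora_down = nn.Linear(in_features, r, bias=False)
        self.lora_up = nn.Linear(r, out_features, bias=False)
        self.scale = lora_scale  # the instance-level scale mentioned above
        nn.init.zeros_(self.lora_up.weight)  # LoRA branch starts as a no-op

    def forward(self, input, scale: float = 1.0):
        # The dummy `scale` kwarg absorbs calls like
        # `attn.to_q(hidden_states, scale=scale)` from newer diffusers,
        # which would otherwise raise the TypeError in this issue.
        return self.linear(input) + self.scale * self.lora_up(self.lora_down(input))

layer = LoraInjectedLinear(8, 8)
x = torch.randn(2, 8)
out = layer(x, scale=1.0)  # accepted instead of raising TypeError
```

Whether the diffusers-side scale should actually multiply the LoRA branch, rather than being ignored, is a design choice; ignoring it preserves the old behaviour.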

@haodong2000

thanks @justin-prnd

@justin-prnd

Modification for inference

After training, I faced a similar issue at inference time.

The patch_pipe function would fail with the following error:

...
line 784, in monkeypatch_or_replace_lora_extended
    _module._modules[name] = _tmp
UnboundLocalError: local variable '_tmp' referenced before assignment

I figured out that the error occurs because monkeypatch_or_replace_lora_extended does not handle the LoRACompatibleLinear module: none of the branches match, so _tmp is never assigned.

There may be a better solution, but I fixed the issue by simply adding LoRACompatibleLinear to the search targets:

def monkeypatch_or_replace_lora_extended(
    model,
    loras,
    target_replace_module=DEFAULT_TARGET_REPLACE,
    r: Union[int, List[int]] = 4,
):
    for _module, name, _child_module in _find_modules(
        model,
        target_replace_module,
        search_class=[nn.Linear, LoraInjectedLinear, LoRACompatibleLinear, nn.Conv2d, LoraInjectedConv2d],
    ):
        _tmp = None

        if _child_module.__class__ in {nn.Linear, LoraInjectedLinear, LoRACompatibleLinear}:
           ...
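The failure mode is easy to reproduce with a toy version of the search loop. This is a hypothetical, simplified stand-in for lora_diffusion's _find_modules (and the LoRACompatibleLinear here is just an nn.Linear subclass standing in for the real diffusers class): because the match is on the exact class, a subclass missing from the search list is silently skipped, which is why _tmp stays unbound.

```python
import torch.nn as nn

class LoRACompatibleLinear(nn.Linear):
    """Stand-in for diffusers.models.lora.LoRACompatibleLinear."""

def find_modules(model, search_classes):
    # Simplified sketch of _find_modules: yield (parent, name, child) for
    # every direct child whose class is *exactly* one of search_classes.
    for parent in model.modules():
        for name, child in parent.named_children():
            if child.__class__ in search_classes:
                yield parent, name, child

model = nn.Sequential(nn.Linear(4, 4), LoRACompatibleLinear(4, 4))

# Exact-class matching means the subclass is invisible to [nn.Linear] alone:
missed = list(find_modules(model, [nn.Linear]))
found = list(find_modules(model, [nn.Linear, LoRACompatibleLinear]))
print(len(missed), len(found))  # 1 2
```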

@ethvedbitdesjan
Author

@justin-prnd where is LoRACompatibleLinear defined? I get a name-not-defined error.

@justin-prnd

@justin-prnd where is LoRACompatibleLinear defined? I get a name-not-defined error.

It is defined at
https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/lora.py#L181
and, if diffusers >= v0.16.0, can be imported as

from diffusers.models.lora import LoRACompatibleLinear
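Since that import path only exists on newer diffusers, a guarded import keeps the patch working on both sides of the version boundary; a sketch, where the fallback simply drops the extra search class:

```python
import torch.nn as nn

try:
    # Available in diffusers >= v0.16.0; the except branch also covers
    # environments where diffusers is not installed at all.
    from diffusers.models.lora import LoRACompatibleLinear
except ImportError:
    LoRACompatibleLinear = None

search_class = [nn.Linear]
if LoRACompatibleLinear is not None:
    search_class.append(LoRACompatibleLinear)

print(len(search_class))
```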
