
[BUG/Help] line 316, in extract_weight_to_float: func( TypeError: 'NoneType' object is not callable #1467

Open
xiaoming521 opened this issue Mar 22, 2024 · 0 comments


Is there an existing issue for this?

- [x] I have searched the existing issues

Current Behavior

```
File "C:\Users\19027387/.cache\huggingface\modules\transformers_modules\chatglm-6b-4\quantization.py", line 80, in forward
    weight = extract_weight_to_float(quant_w, scale_w, weight_bit_width, quantization_cache=quantization_cache)
File "C:\Users\19027387/.cache\huggingface\modules\transformers_modules\chatglm-6b-4\quantization.py", line 316, in extract_weight_to_float
    func(
TypeError: 'NoneType' object is not callable
```
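This kind of traceback typically means the quantization kernel lookup failed at import time, so the variable holding the kernel function is `None` when it is later called. A minimal sketch of that failure mode, assuming (not confirmed from this report) that `quantization.py` resolves its dequantization kernel once at load time; `_load_dequant_kernel` and the error message are illustrative, not the library's real API:

```python
# Hypothetical reproduction of the failure mode: the dequantization
# kernel is resolved once at import time, and if no backend loads
# (e.g. no CUDA, no working CPU extension), the lookup yields None.
# Calling None later raises exactly the reported TypeError.

def _load_dequant_kernel():
    """Stand-in for the kernel lookup; returns None when loading fails."""
    return None  # simulates: no CUDA available, CPU kernel failed to build

func = _load_dequant_kernel()

def extract_weight_to_float(quant_w, scale_w, weight_bit_width):
    if func is None:
        # A defensive check like this turns the cryptic
        # "'NoneType' object is not callable" into an actionable error.
        raise RuntimeError(
            "quantization kernel failed to load; check that the kernel "
            "dependencies are installed or load the model unquantized"
        )
    return func(quant_w, scale_w, weight_bit_width)
```

Without the `if func is None` guard, the call site reproduces the reported `TypeError: 'NoneType' object is not callable`.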

Expected Behavior


Steps To Reproduce


Environment

- OS: Windows 7
- Python: 3.8
- Transformers: 4.27.1
- PyTorch: 1.10
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): no

Anything else?

