
Dependency issue #3980

Open
robhheise opened this issue Apr 2, 2024 · 2 comments
@robhheise
Describe the bug
When running the example here, I get the following exception (full message restored from the logs below):

ImportError: Using bitsandbytes 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
To Reproduce

Warnings and other logs:
Loading large language model...
Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes, retrying in 1 seconds...
Loading large language model...
Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes, retrying in 2.1122417637009163 seconds...
Loading large language model...
Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes, retrying in 5.089839704704617 seconds...
Loading large language model...
Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes, retrying in 11.00884440709334 seconds...
Loading large language model...
Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes, retrying in 22.967673403551867 seconds...
Loading large language model...
Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes, retrying in 46.62241237591277 seconds...
Loading large language model...
Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes, retrying in 93.39312220005493 seconds...
Loading large language model...
Traceback (most recent call last):
  File "/Users/robertheise/Documents/SD/kh-accel/ge-llm.py", line 287, in <module>
    results = model_ft_v1.train(dataset=birdi_df)
  File "/Users/robertheise/Documents/SD/kh-accel/venv/lib/python3.10/site-packages/ludwig/api.py", line 634, in train
    self.model = LudwigModel.create_model(self.config_obj, random_seed=random_seed)
  File "/Users/robertheise/Documents/SD/kh-accel/venv/lib/python3.10/site-packages/ludwig/api.py", line 2083, in create_model
    return model_type(config_obj, random_seed=random_seed)
  File "/Users/robertheise/Documents/SD/kh-accel/venv/lib/python3.10/site-packages/ludwig/models/llm.py", line 100, in __init__
    self.model = load_pretrained_from_config(self.config_obj, model_config=self.model_config)
  File "/Users/robertheise/Documents/SD/kh-accel/venv/lib/python3.10/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/Users/robertheise/Documents/SD/kh-accel/venv/lib/python3.10/site-packages/retry/api.py", line 73, in retry_decorator
    return __retry_internal(partial(f, *args, **kwargs), exceptions, tries, delay, max_delay, backoff, jitter,
  File "/Users/robertheise/Documents/SD/kh-accel/venv/lib/python3.10/site-packages/retry/api.py", line 33, in __retry_internal
    return f()
  File "/Users/robertheise/Documents/SD/kh-accel/venv/lib/python3.10/site-packages/ludwig/utils/llm_utils.py", line 76, in load_pretrained_from_config
    model: PreTrainedModel = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path, **load_kwargs)
  File "/Users/robertheise/Documents/SD/kh-accel/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
  File "/Users/robertheise/Documents/SD/kh-accel/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3049, in from_pretrained
    hf_quantizer.validate_environment(
  File "/Users/robertheise/Documents/SD/kh-accel/venv/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 62, in validate_environment
    raise ImportError(
ImportError: Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes
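The traceback ends inside transformers' quantizer environment check, which fires when either of the two packages named in the error cannot be imported. As a quick way to confirm what the active virtualenv actually sees, here is a minimal sketch; the helper name is mine, not part of any library:

```python
import importlib.util

def missing_quantization_deps(required=("accelerate", "bitsandbytes")):
    # Return the subset of `required` package names that cannot be
    # found on the current interpreter's import path.
    return [name for name in required if importlib.util.find_spec(name) is None]

# If this prints a non-empty list, installing the listed packages
# (e.g. `pip install accelerate bitsandbytes`) is the first thing to try.
print(missing_quantization_deps())
```

Note that even with both packages installed, bitsandbytes has historically required a CUDA GPU, so behavior may differ between a MacBook and a GPU-backed environment such as Colab.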

Expected behavior
I was hoping that the example would work.


Environment (please complete the following information):
I have tried on a MacBook and on JupyterHub.

  • Python version: 3.10.11
  • Ludwig version: 0.10.2
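To fill in the version fields above in one pass, a small sketch using only the standard library; the package list is my assumption, based on the libraries named in the error and traceback:

```python
import sys
from importlib import metadata

# Report the interpreter version plus each relevant package,
# marking any that are absent from the environment.
print("Python:", sys.version.split()[0])
for pkg in ("ludwig", "transformers", "accelerate", "bitsandbytes"):
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```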

Additional context
It would be great if the examples provided on the Predibase website were actually working: https://predibase.com/blog/ludwig-10k-stars-llm-fine-tuning-hackathon-winners

@alexsherstinsky alexsherstinsky self-assigned this Apr 2, 2024
@alexsherstinsky
Collaborator

Hi @robhheise -- I am not seeing any error stack traces in the attached notebook. Could you please run it and include one with the error you are seeing, so we can take the next steps to troubleshoot? Happy to pair on it together, if that makes sense. Also, while we include only the code that runs at the time of publication, new issues sometimes come up as the various dependency libraries evolve. If you find such situations, please do file an issue, and we would be happy to take a look (ideally together!). Thank you very much.

@alexsherstinsky
Collaborator

@robhheise I just ran this notebook in my personal Google Colab account with the T4 GPU. There were no issues. I stopped the first and second fine-tuning jobs and cleared the memory (you can see those cells), and ran the third (best) one fully. Please note that I did not uncomment any cells -- I ran them exactly as written. I did this previously as well (when the competition results were judged), and there were no issues then, just as there are none now. Could you please try running it again under the same conditions I described and see if you can replicate my results? Thank you very much!
