
IndexError: tuple index out of range #532

Open
jaskirat8 opened this issue Oct 25, 2023 · 7 comments

Comments

@jaskirat8

To get bootstrapped, I tried to use the example from the README:

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
import torch 

# Choose any model available at https://health.petals.dev
model_name = "petals-team/StableBeluga2"  # This one is fine-tuned Llama 2 (70B)

# Connect to a distributed network hosting model layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)

# Run the model as if it were on your computer
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(input_ids=inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...

The torch_dtype=torch.float32 argument was added because of a CPU support warning; apart from that, the code is the same as the original example, yet I am hitting the error and unable to complete inference.

(Screenshot of the IndexError traceback attached, 2023-10-25.)

OS: Ubuntu 22.04
CPU: i7-7700K
GPU: Nvidia 1070

Please advise if I am missing something here.

@AIAnytime

Same error... this doesn't work anymore. The creators are also not active.

@daspartho

+1

Running into the same error while trying to generate by calling the model.generate() method in the getting started Colab notebook.

(Screenshot of the same error attached, 2023-11-05.)

@daspartho

Found a relevant issue: huggingface/transformers#10160

@daspartho

This seems to be an issue with the petals library itself rather than the transformers library, since replacing AutoDistributedModelForCausalLM with AutoModelForCausalLM works fine.
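
For reference, here is a minimal sketch of that local sanity check, with petals taken out of the picture entirely. The model name below is only a small placeholder (loading the full 70B StableBeluga2 locally is not practical on most machines); substitute whatever model you are testing.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder model purely for the sanity check; swap in the model under test
model_name = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)

# Same generate() call as in the Petals example above
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(input_ids=inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))

If this runs cleanly while the AutoDistributedModelForCausalLM version raises the IndexError, that points at the petals-side generation path rather than transformers.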

@jaskirat8
Author

@daspartho, I have the same thoughts, as I have been using these models directly for months and they work. I just wanted to validate that it is not my misconfiguration or an overlooked setting. We need to isolate the culprit and work towards a PR, since other folks are facing this too.

@daspartho

Yes, I agree!

Also gently pinging @borzunov here.

@justheuristic
Copy link
Collaborator

[working on it]
