
decapoda-research/llama-13b-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' #310

Open
sleepwalker2017 opened this issue Mar 7, 2024 · 7 comments
Labels
question Further information is requested

Comments

@sleepwalker2017

sleepwalker2017 commented Mar 7, 2024

Here is my code:

model=/data/vicuna-13b/vicuna-13b-v1.5/

docker run --gpus all --shm-size 1g -p 8080:80 -v /data/:/data \
    ghcr.io/predibase/lorax:latest --model-id $model --sharded true --num-shard 2 \
    --adapter-id baruga/alpaca-lora-13b

I need some help, thank you! Here is the error message:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 270, in hf_raise_for_status
    response.raise_for_status()
  File "/opt/conda/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/decapoda-research/llama-13b-hf/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/transformers/utils/hub.py", line 398, in cached_file
    resolved_file = hf_hub_download(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1374, in hf_hub_download
    raise head_call_error
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1247, in hf_hub_download
    metadata = get_hf_file_metadata(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1624, in get_hf_file_metadata
    r = _request_wrapper(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 402, in _request_wrapper
    response = _request_wrapper(
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 426, in _request_wrapper
    hf_raise_for_status(response)
  File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 320, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-65e963a4-3554a4166f7d0bb863c8811c;ee0307fe-42dd-4029-9740-003ce45080e2)

Repository Not Found for url: https://huggingface.co/decapoda-research/llama-13b-hf/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

Why does it need to access huggingface.co even though I supplied a local folder path?

@sleepwalker2017
Author

When I try this:

model=codellama/CodeLlama-13b-hf

docker run --gpus all --shm-size 1g -p 8080:80 -v /data/:/data \
    ghcr.io/predibase/lorax:latest --model-id $model --sharded true --num-shard 2 \
    --adapter-id shibing624/llama-13b-belle-zh-lora

The error becomes this:

OSError: decapoda-research/llama-13b-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
 rank=1

I wonder where this decapoda-research/llama-13b-hf comes from. I didn't specify it.

Finally, I want to know: how can I serve my local vicuna-13B model? Is that supported?

@sleepwalker2017
Author

Another question: I want to specify multiple local LoRAs when launching the server. How can I achieve this?

@thincal
Contributor

thincal commented Mar 7, 2024

how can I serve my local vicuna-13B model? Is that supported?

Yes, it's supported. You can pass an additional parameter, --adapter-source local, which indicates that --adapter-id is a local file path pointing to your local model.
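For example, combining the command from the original post with --adapter-source local (a sketch only; the adapter directory below is a placeholder and assumes the adapter files live under the mounted /data volume):

model=/data/vicuna-13b/vicuna-13b-v1.5/

# Serve a local base model and load a LoRA adapter from a local path.
# /data/loras/alpaca-lora-13b is a placeholder for wherever your adapter files actually live.
docker run --gpus all --shm-size 1g -p 8080:80 -v /data/:/data \
    ghcr.io/predibase/lorax:latest --model-id $model --sharded true --num-shard 2 \
    --adapter-source local --adapter-id /data/loras/alpaca-lora-13b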

@thincal
Contributor

thincal commented Mar 7, 2024

specify multiple local LoRAs when launching the server

It seems this feature is already on the roadmap (LoRA blending): #57

@tgaddair
Contributor

Hi @sleepwalker2017, I think the issue with loading the adapter is the same as the issue in #311, which should be fixed by #317. There should be no issue with serving your local vicuna-13b model following this change, but let me know if you run into more issues.
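(For context: the decapoda-research/llama-13b-hf reference most likely comes from the adapter itself. PEFT-style adapters ship an adapter_config.json whose base_model_name_or_path field records the base model they were trained against. A quick way to check, sketched here with curl and jq:)

# Inspect which base model the adapter declares (sketch; requires curl and jq).
curl -s https://huggingface.co/baruga/alpaca-lora-13b/resolve/main/adapter_config.json \
  | jq -r '.base_model_name_or_path'

If this prints decapoda-research/llama-13b-hf, that is where the unexpected repo id enters the picture.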

Regarding your second question: when you say you want to specify multiple local LoRAs, do you mean you wish to merge them together? If so, we support this, just not during initialization. We have some docs on LoRA merging here. But if you mean you want to be able to call different adapters for each request, you can do so by specifying the adapter_id (can be a local file) in the request instead of during initialization.
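For example, a per-request adapter against the server started above might look like this (a sketch; it assumes the standard /generate endpoint on the mapped port 8080 and uses one of the Hub adapters mentioned later in this thread):

curl 127.0.0.1:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "What is deep learning?", "parameters": {"adapter_id": "mattreid/alpaca-lora-13b", "max_new_tokens": 64}}'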

@tgaddair tgaddair added the question Further information is requested label Mar 10, 2024
@sleepwalker2017
Author


Thank you, I checked this; the first issue has been solved.

About the second question: I launch the server with this:
lorax-launcher --model-id /data/vicuna-13b/vicuna-13b-v1.5/ --sharded true --num-shard 2

Then I send requests with different LoRA ids like this:

adapters = [
    "mattreid/alpaca-lora-13b",
    "merror/llama_13b_lora_beauty",
    "shibing624/llama-13b-belle-zh-lora",
    "shibing624/ziya-llama-13b-medical-lora",
]

# req_cnt (a global request counter) and test_data (a list of prompts) are
# defined elsewhere in the benchmark script.
def build_request(output_len):
    global req_cnt
    idx = req_cnt % len(test_data)
    lora_id = idx % len(adapters)  # rotate through the four adapters

    input_dict = {
        "inputs": test_data[idx],
        "parameters": {
            "adapter_id": adapters[lora_id],
            "max_new_tokens": 256,
            "top_p": 0.7,
        },
    }
    req_cnt += 1
    return input_dict

I want to know whether the adapter computations for these requests are merged.

@sleepwalker2017
Author


When I use my code to benchmark LoRAX on two A30 GPUs, I get poor benchmark results.

[screenshot: benchmark results]

There must be something wrong. Please take a look, thank you.
