Error in Prompt.load(from_hf) : model_card (NoneType) is not iterable #613
Comments
Same issue here: I get the same error message when I pass in other Hugging Face models as well.
@remiconnesson and @ajarcik - thanks for reporting this issue so we can fix it. For the example above, please remove the 'from_hf' flag and everything should work fine, e.g.:

prompter = Prompt().load_model(model_name)

with model_name = "llmware/bling-1b-0.1". (We will update the example code too.)

When 'from_hf=True' is set, we pass the model name directly to the HF/transformers Auto classes, take the instantiated HF object, and wrap the HFGenerative class around it. In the course of doing that, we missed a safety check on a null config setting. We are fixing that in parallel, and the fix should be merged later today.

As an alternative to the 'from_hf' approach, you can register any custom model (PyTorch or GGUF) in the llmware ModelCatalog with a one-line registration process, and then load it directly with .load_model, without the 'from_hf' flag.

We will keep the issue open until you confirm that all is good.
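For illustration, the kind of null-safety check described above can be sketched in plain Python. This is not the actual llmware patch; the function and parameter names below are hypothetical, standing in for the config/model-card object pulled from HF/transformers, which can come back as None and then blow up when iterated:

```python
# Illustrative sketch only (not the actual llmware fix): the reported
# TypeError happens when code iterates over a model_card that is None,
# e.g. `for key in model_card: ...` raises
# "argument of type 'NoneType' is not iterable".

def safe_model_card(model_card):
    """Return an iterable dict even when model_card is None.

    'model_card' is a hypothetical parameter name standing in for the
    config object returned by the HF/transformers auto-loading path.
    """
    return model_card if model_card is not None else {}

# With the guard in place, a missing model card degrades to a no-op
# instead of raising the NoneType error from the issue title:
card = safe_model_card(None)
keys = [k for k in card]  # no TypeError; keys == []
```

The same effect can be had inline with `(model_card or {})` at each iteration site; a single guard function just keeps the check in one place.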
No activity, so closing as stale. This appears to have been resolved by the April 20 message above, and the example code has been updated as well. If there are any ongoing problems, please open a new issue.
The snippet from the video https://www.youtube.com/watch?v=JjgqOZ2v5oU yields the 'model_card (NoneType) is not iterable' error in the title.