
Querying Llama3 70b using BedrockChat returns empty response if prompt is long #21037

Closed
5 tasks done
fedor-intercom opened this issue Apr 29, 2024 · 1 comment
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

fedor-intercom commented Apr 29, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

from langchain.chains import LLMChain
from langchain.prompts import HumanMessagePromptTemplate
from langchain.prompts.chat import ChatPromptTemplate
from langchain_community.chat_models import BedrockChat
import langchain

langchain.debug = True

def get_llama3_bedrock(
    model_id="meta.llama3-70b-instruct-v1:0",
    max_gen_len=2048,
    top_p=0.0,
    temperature=0.0,
):
    model_kwargs = {
        "top_p": top_p,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
    }
    return BedrockChat(model_id=model_id, model_kwargs=model_kwargs)

prompt_poem = """
This is a poem by William Blake

============
Never seek to tell thy love
Love that never told can be 
For the gentle wind does move
Silently invisibly

I told my love I told my love 
I told her all my heart 
Trembling cold in ghastly fears
Ah she doth depart

Soon as she was gone from me
A traveller came by
Silently invisibly 
O was no deny 
============

What did the lady do?

"""
langchain_prompt = ChatPromptTemplate.from_messages([
        HumanMessagePromptTemplate.from_template(prompt_poem)
        ]
)
print("Response 1:", LLMChain(llm=get_llama3_bedrock(), prompt=langchain_prompt).run(dict()))
#Responds: ''

prompt_simple_question = """What is the capital of China?"""
langchain_prompt = ChatPromptTemplate.from_messages([
        HumanMessagePromptTemplate.from_template(prompt_simple_question)
        ]
)
print("Response 2:", LLMChain(llm=get_llama3_bedrock(), prompt=langchain_prompt).run(dict()))
#Responds: 'Beijing.'

Error Message and Stack Trace (if applicable)

No response

Description

I am trying to use BedrockChat to call Llama3 on our AWS account.

Here is the issue:

  • When I pass a long-ish (multiline) prompt, it returns an empty string.
  • Passing the same long-ish prompt directly in the AWS Console generates the expected answer.
  • Passing a single-line question like `What is the capital of China?` returns the expected answer, `Beijing.`
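One plausible cause (consistent with the linked langchain-aws fix, though not confirmed here) is that the Bedrock integration at this version did not wrap the message in Llama 3's instruct chat template before invoking the model, so raw multiline text confuses the model. A minimal sketch of the wrapping that Llama 3 instruct models expect; `format_llama3_prompt` is a hypothetical helper for illustration, not a LangChain API, and the special tokens follow Meta's published Llama 3 prompt format:

```python
def format_llama3_prompt(user_message: str) -> str:
    """Wrap a single user message in the Llama 3 instruct chat template.

    Hypothetical helper, not part of LangChain; the special tokens are
    from Meta's Llama 3 instruct format. Without this wrapping, a raw
    multiline prompt may yield an empty completion.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# A multiline prompt passes through the template unchanged in the middle.
wrapped = format_llama3_prompt("This is a poem by William Blake\n...\nWhat did the lady do?")
```

If the integration is passing the unwrapped text as the Bedrock `prompt`, applying a template like this before invocation would be the fix the linked PR appears to target.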

System Info

platform: mac
python version: 3.11.7

langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.1.46
langchain-text-splitters==0.0.1

@dosubot dosubot bot added the 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature label Apr 29, 2024
@fedor-intercom fedor-intercom changed the title Querying Llama3 70b using LangchainChatBedrock returns empty response if prompt is long Querying Llama3 70b using BedrockChat returns empty response if prompt is long Apr 30, 2024

fedor-intercom commented Apr 30, 2024

Raised an issue and PR in langchain-aws repo: langchain-ai/langchain-aws#31
