Azure OpenAI gpt-4 version turbo-2024-04-09 responses are cut off for vision queries #2607
What happened?
I recently upgraded, and it looks like vision responses in particular are getting cut off.

Steps to Reproduce

What browsers are you seeing the problem on?
No response

Relevant log output
2024-05-02T18:25:48.885Z debug: [BaseClient] Loading history:
{
conversationId: "809e7cf2-108a-4145-9786-4793fe347742",
parentMessageId: "00000000-0000-0000-0000-000000000000",
}
2024-05-02T18:25:49.464Z debug: [BaseClient] Context Count (1/2)
{
remainingContextTokens: 7408,
maxContextTokens: 8187,
}
2024-05-02T18:25:49.466Z debug: [BaseClient] Context Count (2/2)
{
remainingContextTokens: 7408,
maxContextTokens: 8187,
}
2024-05-02T18:25:49.466Z debug: [BaseClient] tokenCountMap:
{
ce7b73ac-8a6c-4afb-b428-3c5eb155e8a7: 776,
}
2024-05-02T18:25:49.468Z debug: [BaseClient]
{
promptTokens: 779,
remainingContextTokens: 7408,
payloadSize: 1,
maxContextTokens: 8187,
}
2024-05-02T18:25:49.469Z debug: [BaseClient] tokenCountMap
{
ce7b73ac-8a6c-4afb-b428-3c5eb155e8a7: 776,
instructions: undefined,
}
2024-05-02T18:25:49.502Z debug: [BaseClient] userMessage
{
messageId: "ce7b73ac-8a6c-4afb-b428-3c5eb155e8a7",
parentMessageId: "00000000-0000-0000-0000-000000000000",
conversationId: "809e7cf2-108a-4145-9786-4793fe347742",
sender: "User",
text: "Describe this image in great detail.",
isCreatedByUser: true,
// 1 image_url(s)
image_urls: [{"type":"image_url","image_url":{"url":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAwAAAAP/CAIAAA... [truncated]],
tokenCount: 776,
}
2024-05-02T18:25:51.450Z debug: [OpenAIClient] chatCompletion
{
baseURL: "https://instancename.openai.azure.com/openai/deployments/gpt-4/chat/completions?api-version=202... [truncated]",
modelOptions.model: "gpt-4",
modelOptions.temperature: 1,
modelOptions.top_p: 1,
modelOptions.presence_penalty: 0,
modelOptions.frequency_penalty: 0,
modelOptions.stop: undefined,
modelOptions.user: "660ce7e4eca51f37c251afe0",
modelOptions.stream: true,
// 1 message(s)
modelOptions.messages: [{"role":"user","content":[{"type":"text","text":"Describe this image in great detail."},{"type":"ima... [truncated]],
}
2024-05-02T18:25:56.872Z debug: [OpenAIClient] chatCompletion response
{
object: "chat.completion",
// 2 prompt_filter_result(s)
prompt_filter_results: [{"prompt_index":0,"content_filter_result":{"jailbreak":{"filtered":false,"detected":false},"custom_b... [truncated],{"prompt_index":1,"content_filter_result":{"sexual":{"filtered":false,"severity":"safe"},"violence":... [truncated]],
id: "chatcmpl-9KVPv1j18ZTCYZ8VsMh1jsg9wbcV7",
// 1 choice(s)
choices: [{"content_filter_results":{},"message":{"role":"assistant","content":"This image captures a tranquil... [truncated]],
created: 1714674355,
model: "gpt-4-turbo-2024-04-09",
}
2024-05-02T18:25:56.874Z debug: [spendTokens] conversationId: 809e7cf2-108a-4145-9786-4793fe347742 | Context: message | Token usage:
{
promptTokens: 779,
completionTokens: 16,
}
2024-05-02T18:25:57.383Z debug: [AskController] Request closed
2024-05-02T18:25:57.461Z debug: [OpenAIClient] chatCompletion
{
baseURL: "https://openai-gpt4-canada-instance.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completion... [truncated]",
modelOptions.model: "gpt-35-turbo",
modelOptions.temperature: 0.2,
modelOptions.top_p: 1,
modelOptions.presence_penalty: 0,
modelOptions.frequency_penalty: 0,
modelOptions.stop: undefined,
modelOptions.user: "660ce7e4eca51f37c251afe0",
modelOptions.max_tokens: 16,
// 1 message(s)
modelOptions.messages: [{"role":"system","content":"Please generate a concise, 5-word-or-less title for the conversation, us... [truncated]],
}
2024-05-02T18:25:58.605Z debug: [OpenAIClient] chatCompletion response
{
// 1 choice(s)
choices: [{"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false... [truncated]],
created: 1714674358,
id: "chatcmpl-9KVPyWhpjQ4q9ymQm4UgHUFi2hKJt",
model: "gpt-35-turbo",
object: "chat.completion",
// 1 prompt_filter_result(s)
prompt_filter_results: [{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":... [truncated]],
system_fingerprint: "fp_2f57f81c11",
usage.completion_tokens: 8,
usage.prompt_tokens: 101,
usage.total_tokens: 109,
}
2024-05-02T18:25:58.606Z debug: [spendTokens] conversationId: 809e7cf2-108a-4145-9786-4793fe347742 | Context: title | Token usage:
{
promptTokens: 98,
completionTokens: 8,
}
2024-05-02T18:25:58.608Z debug: [OpenAIClient] Convo Title: Tranquil Night Scene with Palm Trees
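As an aside, the token accounting in the log is internally consistent: `remainingContextTokens` is just `maxContextTokens` minus the prompt's token count. A minimal sketch of that arithmetic (the table and helper names are hypothetical, not LibreChat's actual API; the window values are taken from the log):

```python
# Minimal sketch of the token accounting shown in the log above.
# MAX_CONTEXT and remaining_context_tokens are hypothetical names;
# the window values come directly from the log output.

MAX_CONTEXT = {
    "gpt-4": 8187,          # the ~8k window the log reports
    "gpt-4-turbo": 127990,  # the ~128k window expected after the fix below
}

def remaining_context_tokens(model: str, prompt_tokens: int) -> int:
    """Tokens left for history and the completion after this prompt."""
    return MAX_CONTEXT[model] - prompt_tokens

# Matches the log: maxContextTokens 8187 - promptTokens 779 = 7408
print(remaining_context_tokens("gpt-4", 779))  # → 7408
```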
Answered by danny-avila on May 2, 2024
For the Azure config, the identifying name of the model is important for getting the right context length. Your config probably looks like this:

```yaml
- group: "some-region"
  apiKey: "${some_api_key_var}"
  instanceName: "some-instance-name"
  version: "2024-03-01-preview"
  deploymentName: "gpt-35-turbo"
  # the keys of the models object are the identifying names
  models:
    gpt-4:
      deploymentName: "gpt-4"
  # or, if a boolean value:
  models:
    gpt-4: true
```

What you should do, keeping everything else the same, is change the identifying key under `models` to `gpt-4-turbo`:

```yaml
- group: "some-region"
  apiKey: "${some_api_key_var}"
  instanceName: "some-instance-name"
  version: "2024-03-01-preview"
  deploymentName: "gpt-35-turbo"
  # the keys of the models object are the identifying names
  models:
    gpt-4-turbo: # only this changed
      deploymentName: "gpt-4"
  # or, if a boolean value:
  models:
    gpt-4-turbo: true # or only this changed
```

From your logs:

```
maxContextTokens: 8187,
# ...
modelOptions.model: "gpt-4",
```

It correctly matches the model name to `gpt-4`, which has the smaller ~8k context window. Once you make the changes above, you should see this:

```
maxContextTokens: 127990,
# ...
modelOptions.model: "gpt-4-turbo",
```
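To make the mechanism concrete, here is a sketch of why the identifying key (rather than `deploymentName`) drives the context length: the configured name is matched against a table of known models. The matching logic, names, and table below are illustrative assumptions, not LibreChat's actual implementation:

```python
# Hypothetical sketch: resolve a context window from the identifying
# model name by longest-prefix match against a known-models table.
# Values are illustrative; only 8187 and 127990 appear in the logs above.

CONTEXT_WINDOWS = {
    "gpt-4": 8187,
    "gpt-4-turbo": 127990,
}

def resolve_context(model_name: str, default: int = 4095) -> int:
    """Return the context window for the longest matching known name."""
    matches = [k for k in CONTEXT_WINDOWS if model_name.startswith(k)]
    if not matches:
        return default
    return CONTEXT_WINDOWS[max(matches, key=len)]

print(resolve_context("gpt-4"))        # → 8187, responses capped early
print(resolve_context("gpt-4-turbo"))  # → 127990, the full turbo window
```

With the key `gpt-4`, the lookup can only resolve the 8k window even though the underlying Azure deployment serves `turbo-2024-04-09`, which is why renaming the key fixes the truncation.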
Answer selected by illgitthat