ConversationChain does not support the GPT-4 Turbo with Vision model (gpt-4-turbo-2024-04-09) with image inputs #5326
Comments
To address the issue of passing an image along with text to the GPT-4 Turbo with Vision model using LangChain.js, you'll need to ensure your input is correctly formatted. The model expects a message whose content is an array of parts, combining a text part with an image part (the image supplied as a URL or a base64-encoded data URL).

For converting images to base64 in Node.js, you can use the `fs` promises API:

```javascript
import fs from "node:fs/promises";

const image = await fs.readFile("path_to_your_image.jpg");
const base64Image = image.toString("base64");
```

This method allows you to leverage the multimodal capabilities of the GPT-4 Turbo with Vision model within LangChain.js. For further details, refer to the LangChain.js documentation and examples.
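As a concrete illustration of the content-array shape the vision model expects, here is a minimal sketch. The `{ type: "text" }` / `{ type: "image_url" }` field names follow the OpenAI chat-completions multimodal format (which LangChain.js message content also accepts); the helper name `buildVisionMessage` is made up for this example:

```javascript
// Build a multimodal user message: one text part plus one
// base64-encoded image part, using the OpenAI content-array shape.
function buildVisionMessage(question, base64Image) {
  return {
    role: "user",
    content: [
      { type: "text", text: question },
      {
        type: "image_url",
        image_url: { url: `data:image/jpeg;base64,${base64Image}` },
      },
    ],
  };
}

// Example: pair a question with an (abbreviated) base64 payload.
const msg = buildVisionMessage("What is in this picture?", "iVBORw0KGgo...");
console.log(msg.content.length); // 2: one text part, one image part
```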
@dosu Your example does not use ConversationChain; I want to use ConversationChain.
To effectively use ConversationChain with image inputs, you need to work around the fact that ConversationChain currently accepts only string input. Since it cannot consume a multimodal content array directly, the key is to handle the image processing separately and then feed the resulting text into the chain. For processing images, you might use a separate model instance or method specifically designed for vision tasks (for example, calling the vision model directly to obtain a textual description of the image), then integrate that output into your conversation flow. This approach lets you leverage the capabilities of GPT-4 Turbo with Vision by manually bridging the gap between vision and text processing, within the constraints of ConversationChain.
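The two-step flow above can be sketched as follows. `describeImage` and `chain` are hypothetical stand-ins for a vision-model call and a ConversationChain instance; they are passed in as parameters so the bridging logic stays independent of any particular client:

```javascript
// Bridge a vision model into a text-only conversation chain:
// 1) ask the vision model for a textual description of the image,
// 2) feed that description plus the user's question to the chain as a string.
async function askAboutImage(describeImage, chain, base64Image, question) {
  const description = await describeImage(base64Image);
  const input = `Image description: ${description}\nQuestion: ${question}`;
  return chain.call({ input }); // ConversationChain expects a plain string here
}
```

In a real setup, `describeImage` would invoke gpt-4-turbo-2024-04-09 with an `image_url` content part, and `chain` would be a ConversationChain built on a text model.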
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
No response
Description
When executing this code, an error is reported.
Looking through the langchainjs source, I found that the input only supports strings, but the input in the code above is an array.
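To illustrate the mismatch, here is a minimal sketch; the check below mimics, rather than reproduces, the actual langchainjs behavior, which stringifies the chain's `{ input }` value:

```javascript
// ConversationChain's prompt treats its { input } value as a string,
// so a multimodal content array is not accepted as-is.
function isAcceptedInput(input) {
  return typeof input === "string";
}

console.log(isAcceptedInput("Describe this image")); // true: plain text works
console.log(isAcceptedInput([{ type: "text", text: "hi" }])); // false: arrays fail
```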
System Info
platform: mac
node: v20.11.1