Changes while integrating the latest Dexter back into Dexa #10
Conversation
I removed this from the ChatModel because it's not really calling the model any differently; it's just adding more logic before and after the call. I think we should add a more general solution:

```ts
export function createChain<
  Args extends Record<string, any>,
  Result extends any
>(args: {
  model: Model.Chat.IModel;
  prompt: Prompt.Template<Args>;
  functions?: Prompt.AiFunction[];
  validator: Prompt.Validator<Result>;
  retries?: number;
}): Prompt.Chain<Args, Result> {
  return async (promptArgs: Args): Promise<Result> => {
    const { model, prompt, validator, retries = 0 } = args;
    let attempts = 0;
    const messages = await prompt(promptArgs);
    while (attempts <= retries) {
      attempts++;
      const response = await model.run({
        messages,
        functions: args.functions?.map((func) => func.spec) || undefined,
      });
      const message = response.choices[0].message;
      try {
        if (args.functions && Msg.isFuncCall(message)) {
          const { name, arguments: callArgs } = message.function_call;
          const func = args.functions.find((func) => func.name === name);
          if (!func) {
            throw new Error(`There is no function named "${name}"`);
          }
          // TODO: validate function
          return func.parseArgs(callArgs);
        } else {
          return await validator(message);
        }
      } catch (error) {
        const VAL_MSG = `There was an error validating the response. Please read the error message and try again.\nError:\n`;
        const errMessage = getErrorMsg(error);
        messages.push(message);
        messages.push(Msg.user(`${VAL_MSG}${errMessage}`));
      }
    }
    const lastError = messages[messages.length - 1].content!;
    throw new Error(
      `Validation failed after ${attempts} attempts: ${lastError}`
    );
  };
}
```
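For reference, the retry/validation loop at the heart of this could be exercised in isolation with a sketch like the following. All names here (`runWithRetries`, the stub `call`) are hypothetical and not part of the PR; the async model call is replaced by a plain function so the loop itself is easy to follow:

```typescript
// Hypothetical, simplified sketch of the retry-with-validation pattern used by
// createChain. The LLM call is replaced by a plain function; the feedback
// argument plays the role of the Msg.user(VAL_MSG) retry message.
type Validator<T> = (raw: string) => T; // throws on invalid output

function runWithRetries<T>(
  call: (feedback?: string) => string, // stands in for model.run
  validate: Validator<T>,
  retries = 0
): T {
  let lastError = '';
  for (let attempt = 1; attempt <= retries + 1; attempt++) {
    const raw = call(lastError || undefined);
    try {
      return validate(raw);
    } catch (error) {
      lastError = error instanceof Error ? error.message : String(error);
    }
  }
  throw new Error(`Validation failed after ${retries + 1} attempts: ${lastError}`);
}

// A stub "model" that fails validation once, then succeeds on the retry.
let calls = 0;
const result = runWithRetries(
  () => (++calls === 1 ? 'not json' : '{"ok":true}'),
  (raw) => JSON.parse(raw) as { ok: boolean },
  2
);
```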
@rileytomasek I took your example, sprinkled some sauce, and ended up with See
```ts
  prompt: ({ location }: { location: string }) => [
    Msg.user(`What is the weather in ${location}?`),
  ],
});
```
Isn't this example kind of pointless because it could be done with just the single function? Why not add another tool, or make the user request clearly require two calls of the function.
hmmm there's nothing about a chain which requires the use of functions or multiple functions. it just wraps one or more LLM calls with optional validation and optional tool invocation.

it's quite a bit simpler than the ai-function.ts example if you compare them side-by-side, as it handles all the tool invocation and LLM back & forth for you. I'd rather keep the example as simple as possible.
```ts
cached: boolean;
latency?: number;
```
is this still optional with the changes above?
I'm not sure; the app-level fields vs intrinsic response fields are a bit of a mess atm
```ts
} else if (Msg.isAssistant(message)) {
  if (schema) {
    // TODO: we're passing a schema, but we're not actually telling the
    // chat model anything about the expected output format
    if (schema instanceof z.ZodObject) {
      return extractZodObject({ json: message.content, schema });
    } else {
      // TODO: support arrays, boolean, number, etc.
      // validate string output
      return schema.parse(message.content);
    }
  } else {
    return message.content as Result;
  }
}
```
Given that we aren't actually telling the model about this, and most of these chains will terminate in many different ways, I don't think it makes sense for this to take on the complexity of parsing the response. It will be much cleaner if this always returns a string and parsing is done on the chain's output. Am I missing something here?
hmmm you wanted this to replace runWithValidation, and the only thing that really did was the output validation / parsing w/ retries, so that's why I have this in here.
I could go either way with this for createAIChain, but we need some built-in ability to parse structured output with retries.
One option would be to tell the model about the expected output if a schema is passed, via an additional system message.
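As a rough sketch of that option (every name here is hypothetical; a real version would likely derive the shape from the zod schema itself, e.g. via a JSON Schema conversion, rather than from a hand-written field list):

```typescript
// Hypothetical sketch: build an extra system message describing the expected
// JSON output shape, so the model is told about the schema before parsing.
// The hand-written FieldSpec list keeps this snippet self-contained.
type FieldSpec = { name: string; type: string };

function schemaSystemMessage(
  fields: FieldSpec[]
): { role: 'system'; content: string } {
  const shape = fields.map((f) => `  "${f.name}": <${f.type}>`).join(',\n');
  return {
    role: 'system',
    content: `Respond ONLY with a JSON object of the form:\n{\n${shape}\n}`,
  };
}

const msg = schemaSystemMessage([
  { name: 'city', type: 'string' },
  { name: 'tempC', type: 'number' },
]);
```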
i didn't realize this was to replace runWithValidation. i don't think the validation component is the same as this, which i see as being responsible for making the calls and validating the function arguments.

what do you think of having a separate validator that is meant to run after the model generates a response?
an added benefit of validating in the loop like this is that it's much easier to rerun with the messages for context than if you fully break out of the loop.
This looks awesome :) left a few comments; i think we can simplify a bit.
```ts
  return '';
}
// …
return JSON.stringify(jsonObject, null, 0);
```
i commented elsewhere, but i think this is too restrictive given our confidence in the impact.
```ts
// TODO: ideally we'd differentiate between tool argument validation
// errors versus errors thrown from the tool implementation. Errors
// from the underlying tool could be things like network errors, which
// should be retried locally without re-calling the LLM.
```
i'm in favor of adding a custom ZodValidationError that serves as both an indicator that it was a validation error and a way to narrow to the zod error type.
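A sketch of what that could look like (the issue shape is simplified here so the snippet stands alone; the real class would wrap zod's `ZodError` and its `issues` array):

```typescript
// Hypothetical ZodValidationError. The real version would store a ZodError;
// the Issue interface below is a simplified stand-in to keep this self-contained.
interface Issue {
  path: (string | number)[];
  message: string;
}

class ZodValidationError extends Error {
  constructor(public readonly issues: Issue[]) {
    super(issues.map((i) => `${i.path.join('.')}: ${i.message}`).join('; '));
    this.name = 'ZodValidationError';
  }
}

// Narrowing helper: lets the retry loop distinguish validation failures
// (re-prompt the LLM) from tool implementation errors (retry locally).
function isValidationError(error: unknown): error is ZodValidationError {
  return error instanceof ZodValidationError;
}

const err = new ZodValidationError([{ path: ['location'], message: 'Required' }]);
```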
```ts
  Prettify<Model.Chat.Run & Model.Chat.Config>,
  'functions' | 'tools' | 'function_call' | 'tool_choice'
>;
prompt?: Prompt.Template<Params>;
```
what do you think of something like input or content? from my use of this in other code, it's usually a message directly from the user, or raw data that you are extracting things from.

edit: i understand what's going on now; i was confused. wouldn't it be simpler for this to just accept messages and let the caller create their own message first? it seems like this function is trying to do so much that it's a bit confusing what it's actually for, and it will become coupled to many other concepts.
Closing in favor of #13
- createAIChain, which mirrors createAIFunction
- examples/ai-function.ts and examples/ai-chain.ts
- stringifyForModel