I'd like to report some issues I encountered with batch processing and request support.
Problems with context collection during batch processing:
In the task at hand, I need to read some frontmatter fields and Dataview properties so that the templating engine can compile a prompt for a subsequent AI request. I could also use other data during prompt compilation, e.g., the file title or the text under a specific heading.
To be more specific, I need the AI to search publicly available sources and return the email addresses of certain people (professionals, or their secretaries or personal assistants), so that I can establish personal communication. For each person, I have a separate Obsidian note (file) that holds their name and aliases with variations of that name. It also holds a short list of titles of scientific articles the person co-authored in the previous year. With all this information, I could simply google each person and find the publicly available information, but since I plan to contact hundreds of such people, I definitely need automation.
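To illustrate, the prompt compilation I have in mind could look roughly like this. This is only a sketch: the field names (`name`, `aliases`, `articles`) mirror my note frontmatter, and the `compilePrompt` helper is illustrative, not part of any plugin API.

```javascript
// Hypothetical sketch: build a per-person prompt from frontmatter-like
// data, instead of sending the whole note content.
function compilePrompt(frontmatter) {
  const aliases = (frontmatter.aliases || []).join(", ");
  const articles = (frontmatter.articles || [])
    .map((title) => `- ${title}`)
    .join("\n");
  return [
    `Find a publicly available email address for ${frontmatter.name}`,
    aliases ? `(also published as: ${aliases})` : "",
    "or for their secretary or personal assistant.",
    "Recent co-authored articles:",
    articles,
  ].filter(Boolean).join("\n");
}
```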
I need to avoid sending the whole file content as context for the AI request during batch requests. Only the compiled template should be sent in the request to the AI.
The response (hopefully containing the expected email addresses) should be placed into the same note, for example under a certain heading, e.g. "## Emails". For this, the QuickAdd plugin could be used.
However, there should be a clearly documented integration between Text-Gen and QuickAdd, since QuickAdd can operate on pre-defined variables, e.g. "{{VALUE:output}}". It would therefore be desirable for Text-Gen to be able to put its received response into a variable accessible to QuickAdd for further processing.
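For concreteness, here is how I imagine the QuickAdd side could look, assuming Text-Gen has stored its response in a shared variable named "output". The variable name and the heading are my own choices, and the user-script wrapper is a sketch, not tested against the actual plugin APIs.

```javascript
// Pure helper: insert text directly under a heading, or append the
// heading (with the text) at the end if the heading is missing.
function insertUnderHeading(content, heading, text) {
  return content.includes(heading)
    ? content.replace(heading, `${heading}\n${text}`)
    : `${content}\n\n${heading}\n${text}`;
}

// Sketch of a QuickAdd user script (macro step) using the helper above;
// "output" is the assumed variable name filled by Text-Gen:
//
// module.exports = async (params) => {
//   const response = params.variables["output"];
//   const file = params.app.workspace.getActiveFile();
//   const content = await params.app.vault.read(file);
//   await params.app.vault.modify(
//     file, insertUnderHeading(content, "## Emails", response));
// };
```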
Finally, a free OpenAI account limits the request rate. Thus, a configurable rate limiter should be available, so the plugin can match the requests-per-minute rules (3 requests per minute, as I understand).
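The rate limiting itself could be as simple as spacing requests evenly, something like the following sketch (the 3-per-minute figure is my understanding of the free tier, and the limiter is illustrative, not an existing plugin setting):

```javascript
// Minimal rate-limiter sketch: space calls evenly to respect an
// N-requests-per-minute cap. Returns a function that, given the
// current time in ms, yields how long to wait before the next call.
function makeRateLimiter(requestsPerMinute) {
  const intervalMs = 60000 / requestsPerMinute;
  let nextSlot = 0; // earliest time (ms) the next request may fire
  return function delayFor(now) {
    const wait = Math.max(0, nextSlot - now);
    nextSlot = Math.max(now, nextSlot) + intervalMs;
    return wait;
  };
}
```

In a batch loop, each iteration would `await` a sleep of `delayFor(Date.now())` milliseconds before issuing its request.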