
Issues with batch processing #228

Open
voltel opened this issue Dec 20, 2023 · 0 comments
Labels
bug Something isn't working

Comments


voltel commented Dec 20, 2023

I'd like to report some issues I encountered with batch processing and to request support.

  1. Problems with context collection during batch processing:
    For the task at hand, I need to read certain frontmatter fields and Dataview properties so the templating engine can compile a prompt for a subsequent AI request. I could also use other data during prompt compilation, e.g. the file title or the text under a specific header.

To be more specific, I need the AI to find, in publicly available sources, the email addresses of certain people (professionals, or their secretaries or personal assistants) in order to establish personal communication. For each person I have a separate Obsidian note (file) that holds their name and aliases (variations of the name). It also holds a short list of titles of scientific articles the person co-authored in the previous year. With all this information I could simply google each person and find the publicly available details myself, but since I plan to contact hundreds of such people, I definitely need automation.

During batch requests I need to avoid sending the whole file content as context to the AI; only the compiled template should be sent in the request.
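A minimal sketch of the behavior I'm asking for (the field names "name" and "aliases" and the function name are illustrative, not the plugin's actual schema); only the selected frontmatter fields, never the note body, end up in the prompt:

```typescript
// Hedged sketch: compile a prompt from selected frontmatter fields only.
// Field names and the NoteContext shape are illustrative assumptions,
// not Text-Gen's real API.
interface NoteContext {
  title: string;
  frontmatter: Record<string, unknown>;
  body: string; // deliberately unused: the note body must not be sent
}

function compilePrompt(note: NoteContext, fields: string[]): string {
  const lines = fields
    .filter((f) => note.frontmatter[f] !== undefined)
    .map((f) => `${f}: ${JSON.stringify(note.frontmatter[f])}`);
  return [
    `Find a publicly listed professional email address for: ${note.title}`,
    ...lines,
  ].join("\n");
}
```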

  2. The response (hopefully with the expected email addresses) should be placed into the same note, for example under a certain heading, e.g. "## Emails". The QuickAdd plugin could be used for this.
    However, there should be a clearly described integration between Text-Gen and QuickAdd, since QuickAdd can operate on pre-defined variables, e.g. "{{VALUE:output}}". It would therefore be desirable for Text-Gen to place the received response into a variable accessible to QuickAdd for further processing.
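As an illustration of the hand-off I have in mind: QuickAdd macros can run user scripts that write into `params.variables`, which later capture steps read as `{{VALUE:...}}`. The `generateWithTextGen` function below is purely hypothetical; it stands in for whatever API Text-Gen would expose:

```typescript
// Sketch of a QuickAdd macro user script body. In QuickAdd this function
// would be assigned to module.exports; params.variables is QuickAdd's
// variable store, while everything Text-Gen-related here is hypothetical.
type QuickAddParams = { variables: Record<string, string> };

async function captureTextGenOutput(params: QuickAddParams): Promise<void> {
  const response = await generateWithTextGen(params.variables["prompt"]);
  // A later QuickAdd capture step can now insert this as {{VALUE:output}}.
  params.variables["output"] = response;
}

// Stub standing in for the requested (not yet existing) Text-Gen API.
async function generateWithTextGen(prompt: string): Promise<string> {
  return `response for: ${prompt}`;
}
```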

  3. Finally, a free OpenAI account has rate limits on requests. A configurable rate limiter should therefore be available to match the allowed number of requests per minute (3 per minute, as I understand).
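The behavior I'd expect from such a limiter can be sketched as a small sliding-window scheduler (my own sketch, not Text-Gen code): each request asks how long it must wait so that no more than `limit` requests fall inside any `windowMs` window.

```typescript
// Minimal sliding-window rate limiter sketch (an assumption about how the
// feature could work, not Text-Gen's actual implementation).
class RateLimiter {
  private scheduled: number[] = []; // send times of recent requests, ms

  constructor(private limit: number, private windowMs: number) {}

  // Returns how many ms the caller should wait before sending its request.
  reserve(now: number): number {
    // Forget requests whose window has fully elapsed.
    this.scheduled = this.scheduled.filter((t) => now - t < this.windowMs);
    if (this.scheduled.length < this.limit) {
      this.scheduled.push(now);
      return 0;
    }
    // The next free slot opens when the request `limit` places back expires.
    const wait =
      this.scheduled[this.scheduled.length - this.limit] + this.windowMs - now;
    this.scheduled.push(now + wait);
    return wait;
  }
}
```

With `new RateLimiter(3, 60_000)`, a batch of notes would fire three requests immediately and delay the rest, matching the free-tier limit as I understand it.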

@haouarihk haouarihk added the bug Something isn't working label Jan 8, 2024