Using Open-Sesame To Parse Multiple Inputs? #24
I'm currently on a project that attempts to use open-sesame to parse many text inputs at once. Calling open-sesame from the command line to predict on these sentences one by one has proved prohibitively slow, but most of the time per sentence seems to go to model loading rather than to actual prediction. Is there any way to keep open-sesame loaded and continuously running somewhere, so it can be queried for predictions without the models being loaded on every call? I believe this was done for another SRL package, SEMAFOR: its website (currently down) appeared to run SEMAFOR on a separate server where the models stayed loaded, and queries returned parses without the startup delay that a command-line invocation of open-sesame incurs. Is it possible to replicate that here?

Yes, I have noticed this as well. I think most of the startup time goes to loading the models into memory; once that is done, prediction should be quick. My plan was to wrap the models in a simple TCP/HTTP server to query against.
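The "load once, serve many" approach suggested in this thread can be sketched with the Python standard library alone. Note that `load_model` and `predict` below are hypothetical stand-ins, not actual open-sesame APIs; the point of the sketch is only that the expensive model load happens once at server startup rather than once per invocation:

```python
# Minimal sketch of wrapping a slow-loading model in an HTTP server.
# `load_model` and `predict` are hypothetical placeholders for the real
# (slow) open-sesame model loading and frame prediction steps.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def load_model():
    # Stand-in for the slow step: loading models/embeddings into memory.
    return {"name": "stub-frame-parser"}


def predict(model, sentence):
    # Stand-in for parsing a single sentence with the loaded model.
    return {"sentence": sentence, "frames": []}


MODEL = load_model()  # paid once, at startup, not per request


class ParseHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw sentence from the request body.
        length = int(self.headers.get("Content-Length", 0))
        sentence = self.rfile.read(length).decode("utf-8")
        body = json.dumps(predict(MODEL, sentence)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet


def serve(port=8000):
    # Blocks forever; clients POST sentences and get JSON parses back.
    HTTPServer(("127.0.0.1", port), ParseHandler).serve_forever()
```

A client would then POST each sentence (e.g. with `curl` or `urllib`) and pay only the per-sentence prediction cost, since the model stays resident in the server process.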