Python read_sql() crash: Too many open files when used with multiprocessing #565
Labels: bug
Comments
I rewrote the loop to not do
What language are you using?
Python
What version are you using?
What database are you using?
PostgreSQL, on Ubuntu Linux 22.04.
What dataframe are you using?
Arrow2
Can you describe your bug?
I am running a loop that exports data from the database in slices.
The query I am using looks like:
The loop uses the `multiprocessing` module, but that does not touch ConnectorX directly, so I suspect some kind of interaction between the two. After running the script for a while I get:
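The slicing loop described above can be sketched roughly as follows. The table name `events`, key column `id`, slice size, and connection string are all assumptions for illustration, not details from the issue:

```python
# Hypothetical sketch of the slicing export loop; "events", "id",
# the slice size, and the DSN are placeholders, not from the report.
DSN = "postgresql://user:password@localhost:5432/mydb"

def slice_query(offset: int, size: int) -> str:
    # Build the SQL for one slice of the export.
    return f"SELECT * FROM events WHERE id >= {offset} AND id < {offset + size}"

def export_slice(offset: int, size: int = 100_000):
    import connectorx as cx  # third-party: pip install connectorx
    # Each read_sql call manages its own connection internally; the reported
    # leak suggests some descriptors survive past the call's return.
    return cx.read_sql(DSN, slice_query(offset, size), return_type="arrow2")
```

Calling `export_slice` repeatedly with increasing offsets reproduces the pattern the reporter describes: many short-lived `read_sql` invocations inside one long-running process.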
What are the steps to reproduce the behavior?
Run the export script so that it issues `read_sql` many times over a long period. I checked with `lsof`, and it looks like (nameless?) FIFO pipes accumulate with each loop iteration. If there is a way to "reset" ConnectorX's Python bindings and internals, I can test whether that helps, e.g. by manually purging/deleting any OS resources ConnectorX might hold.
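The `lsof` observation can also be confirmed in-process by counting entries under `/proc` (Linux-only; the helper name is mine). Logging this count once per loop iteration should show it climbing steadily if the pipes really leak:

```python
import os

def open_fd_count() -> int:
    # Each entry under /proc/self/fd is one descriptor currently open in
    # this process (Linux-specific; roughly what `lsof -p <pid>` reports).
    return len(os.listdir("/proc/self/fd"))
```

For example, an unclosed `os.pipe()` raises the count by exactly two (read end plus write end), which mirrors the nameless FIFO pipes accumulating per iteration.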
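As a possible stopgap until the leak itself is fixed, each `read_sql` call could run in a short-lived child process, so the OS reclaims any leaked descriptors when the child exits. This is a hedged sketch, not a documented ConnectorX facility; the worker below echoes the query so the sketch runs without a database, with the real call shown commented out:

```python
# Workaround sketch: isolate each read_sql call in a child process so any
# descriptors left open by the call die with the child. The placeholder
# worker echoes the query; the real ConnectorX call is commented out.
import multiprocessing as mp

def _worker(query: str, out: mp.Queue) -> None:
    # import connectorx as cx  # import in the child, not the parent
    # out.put(cx.read_sql(DSN, query, return_type="arrow2"))
    out.put(query)  # placeholder result so the sketch runs standalone

def read_sql_isolated(query: str):
    out = mp.Queue()
    child = mp.Process(target=_worker, args=(query, out))
    child.start()
    result = out.get()  # fetch before join to avoid a pipe-buffer deadlock
    child.join()
    return result
```

Spawning a process per slice adds startup and serialization overhead, but it caps descriptor growth in the parent: the parent's fd count stays flat no matter how many slices are exported.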