Memory leak in Aurora Postgres since 3.9.1 #1435
Comments
Have you tried using …? It would help to have some details about what is consuming memory; can you get that from your DBA? In the 3.9.1 release notes, there is a link to #577
Thanks for looking into this! Haven't tried … yet.

The statement wasn't closed indeed. We were doing:

```kotlin
flow<Row> {
    val preparedStatement = prepare(sql).coAwait()
    try {
        val cursor = preparedStatement.cursor(args)
        do cursor.read(pageSize).coAwait().forEach { emit(it) }
        while (cursor.hasMore())
    } finally {
        preparedStatement.close().coAwait()
    }
}
```

It looks promising so far; we will see if we can keep connections open indefinitely. Based on the docs, I'm assuming the cursor doesn't have to be closed since we read it to the end.
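The essence of the fix above can be shown with a plain-Kotlin sketch: as long as every `prepare()` is paired with a `close()` in a `finally` block, the server-side statement is released even if reading throws. `FakePreparedStatement` and `readAll` below are hypothetical stand-ins for illustration only, not the vertx-sql-client API.

```kotlin
// Hypothetical stand-in (NOT the vertx-sql-client API) for a server-side
// prepared statement whose memory is held until close() is called.
class FakePreparedStatement {
    var closed = false
        private set
    fun close() { closed = true }
}

// Read all pages, emitting each row, and close the statement no matter
// what. Without the finally block, a failure mid-read would leak the
// statement -- the behavior suspected in this issue.
fun <T> readAll(stmt: FakePreparedStatement, pages: List<List<T>>, emit: (T) -> Unit) {
    try {
        for (page in pages) page.forEach(emit)
    } finally {
        stmt.close()
    }
}
```

The point of the pattern is that close-on-completion is structural, not best-effort: it does not depend on the happy path being taken.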
Good news, please keep us posted.
Fully removed max lifetime today; DB memory looks good. Thanks for your help!
Version
3.9.1, 4.5.7
Context
Our app had been on vertx-pg-client 3.9.0 for a while until we recently migrated to version 4.5.7.

Shortly after the upgrade we noticed our AWS Aurora PG cluster running out of freeable memory and rebooting. It is directly correlated with the load on the service: during peak times the DB can run out of memory (60 GB) in 45 minutes. We employed the maxLifetime parameter for the pool, which helps but is not ideal, as it causes latency spikes: we prewarm the pool on deployment, so all the connections expire at about the same time.

On 3.9.0 there was no issue with DB memory consumption; we rolled back the upgrade to confirm this. On 3.9.1, we confirmed the issue is present.
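The latency spikes from a prewarmed pool expiring all at once can be softened by jittering each connection's lifetime. The helper below is a hypothetical plain-Kotlin sketch of that idea; `jitteredLifetime` is not part of vertx-sql-client, and how a lifetime gets applied per connection depends on the pool implementation.

```kotlin
import kotlin.random.Random

// Hypothetical helper: spread connection lifetimes so a prewarmed pool
// does not retire every connection at the same moment. baseMillis is the
// nominal maxLifetime; jitterFraction widens it by up to +/- that fraction.
fun jitteredLifetime(baseMillis: Long, jitterFraction: Double, rng: Random = Random.Default): Long {
    require(jitterFraction in 0.0..1.0) { "jitterFraction must be in [0, 1]" }
    val delta = (baseMillis * jitterFraction).toLong()
    // Uniform pick in [baseMillis - delta, baseMillis + delta].
    return baseMillis - delta + rng.nextLong(2 * delta + 1)
}
```

With, say, a 10-minute base lifetime and 10% jitter, connections retire over a two-minute window instead of a single spike.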
This thread has a report of the same issue on 3.9.13 as well as 4.x branch.
Do you have a reproducer?
I don't have a reproducer; this might be AWS/Aurora-specific. I can test potential solutions if that helps.
Extra
Pretty much all our queries are preparedQuery + execute or executeBatch. We also use standalone prepare with a cursor for paginated reads.

Pool configuration: https://gist.github.com/al-kudryavtsev/3e6eeb3cfd200afc66df5a12932bba25