Trino Query getting Hung #21974
I have a cluster of 5 worker nodes (8 vCPU / 64 GB each) and a coordinator (6 vCPU / 32 GB) running Trino 389. When I run a query that reads from a table with 3B rows, the query hangs after reading roughly 300–400M records. I captured a JFR recording, and in JProfiler I can see that many threads are in the BLOCKED state. A screenshot of the blocked threads and the JFR recording are attached below. Could you please help me understand which configuration properties I should tune, or which new ones I should add? Thanks in advance.
myrecording.txt (change the extension to jfr)
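As a quick first pass before opening the recording in JProfiler, it can help to count thread states from a plain-text thread dump taken on a hung worker (e.g. with `jstack <pid>` or `jcmd <pid> Thread.print`). Below is a minimal sketch of my own (the helper name and the sample dump text are illustrative, not part of Trino):

```python
import re

def thread_states(dump: str) -> dict:
    """Count JVM threads per state in a plain-text thread dump."""
    states: dict = {}
    # jstack/jcmd dumps report each thread's state on a line like:
    #    java.lang.Thread.State: BLOCKED (on object monitor)
    for m in re.finditer(r"java\.lang\.Thread\.State: (\w+)", dump):
        states[m.group(1)] = states.get(m.group(1), 0) + 1
    return states

# Illustrative two-thread dump fragment, not from the attached recording.
sample = """\
"SplitRunner-0" #42 daemon
   java.lang.Thread.State: BLOCKED (on object monitor)
"SplitRunner-1" #43 daemon
   java.lang.Thread.State: RUNNABLE
"""
print(thread_states(sample))  # → {'BLOCKED': 1, 'RUNNABLE': 1}
```

If most `SplitRunner` threads show up as BLOCKED on the same monitor, the stack frames under that monitor are the place to look.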
Below is the JVM config:
-server
-Xmx28G
-XX:-UseBiasedLocking
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+ExplicitGCInvokesConcurrent
-XX:+ExitOnOutOfMemoryError
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-XX:ReservedCodeCacheSize=512M
-XX:PerMethodRecompilationCutoff=10000
-XX:PerBytecodeRecompilationCutoff=10000
-Djdk.attach.allowAttachSelf=true
-Djdk.nio.maxCachedBufferSize=2000000
-Dlogback.configurationFile=/etc/trino/conf/trino-ranger-plugin-logback.xml
Trino config below:
query.max-memory=110GB
log.max-total-size=20GB
http-server.http.port=8285
memory.heap-headroom-per-node=6GB
log.max-size=10GB
node-scheduler.include-coordinator=false
query.execution-policy=phased
task.concurrency=32
query.max-total-memory=120GB
task.max-worker-threads=128
query.client.timeout=4h
http-server.https.enabled=false
query.max-memory-per-node=22GB
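For what it's worth, a quick arithmetic check of the settings above (a sketch using my own variable names, values copied from the configs; Trino requires per-node query memory plus heap headroom to fit inside the JVM heap):

```python
# Values taken from the jvm.config and config.properties above.
XMX_GB = 28                 # -Xmx28G
HEADROOM_GB = 6             # memory.heap-headroom-per-node
QUERY_MAX_PER_NODE_GB = 22  # query.max-memory-per-node
QUERY_MAX_GB = 110          # query.max-memory (cluster-wide)
WORKERS = 5

# Per-node: query memory + headroom must not exceed the heap.
# 22 + 6 = 28, which sits exactly at the -Xmx limit with no slack.
assert QUERY_MAX_PER_NODE_GB + HEADROOM_GB <= XMX_GB

# Cluster-wide: 5 workers x 22 GB per node covers query.max-memory.
print(WORKERS * QUERY_MAX_PER_NODE_GB)  # → 110
assert WORKERS * QUERY_MAX_PER_NODE_GB >= QUERY_MAX_GB
```

So the limits are internally consistent, but per-node memory is budgeted right up to the heap ceiling, which is worth keeping in mind while investigating the hang.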