[QST] Additional GPU mem reservation when creating a Dataset causes OOM when allocating all GPU mem to the LocalCUDACluster #1863
Comments
@piojanu What helps with OOM issues in NVTabular is the `part_size` and the row-group memory size of your parquet file(s). You can also repartition your dataset and save it back to disk, which may help with OOM. If you have a single GPU, you can try setting the row-group size of your files, which helps even without LocalCUDACluster. There is a LocalCUDACluster example here:
Hi! I have follow-up questions:

Thanks for the help :)
By accident, I've found out that
Hi!
I've run into the following problem:
I run this code in JupyterLab on the GCP VM with NVIDIA V100 16GB GPU.
I've also tried `nvtabular.utils.set_dask_client` and it didn't solve the problem.

Questions:
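For reference, the cluster-plus-client setup mentioned above can be sketched as follows. This is a configuration sketch, not a verified fix: it assumes `dask-cuda` and NVTabular are installed, that `nvtabular.utils.set_dask_client` accepts the client, and the 12GB figures are illustrative headroom for a 16GB V100, not recommended values.

```python
from dask_cuda import LocalCUDACluster
from dask.distributed import Client

import nvtabular as nvt
from nvtabular.utils import set_dask_client

# Leave headroom on the device: capping device_memory_limit and the RMM
# pool below the full 16GB leaves room for the extra allocation that
# happens when the Dataset is created.
cluster = LocalCUDACluster(
    n_workers=1,
    device_memory_limit="12GB",  # spill to host memory above this
    rmm_pool_size="12GB",        # don't pre-reserve the whole card
)
client = Client(cluster)

# Make NVTabular use this client rather than its own default resources.
set_dask_client(client)

# part_size bounds how much of each file a single task reads at once.
dataset = nvt.Dataset("data/*.parquet", engine="parquet", part_size="256MB")
```

The design choice here is simply to never hand the entire 16GB to the cluster, so the Dataset's additional reservation has somewhere to go.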