NeurIPS2018: no speedup with increasing batch size #48
Labels: question (further information is requested)
It seems that SB3 is not optimized for GPU. If you are dealing with compute-intensive cases, ElegantRL may be a good choice.
Thanks, I will try it.
I tested the NeurIPS2018 demo with Stable-Baselines3, using a SAC agent trained on a GPU. When I increased the batch size from 128 to 512, I saw no change in GPU memory usage or utilization rate.
The versions I used are below:
stable-baselines3==1.5.0
torch==1.10.0
Training time also does not change when I change the batch size. What could be the problem?
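A plausible explanation, consistent with the maintainer's reply, is that the per-step cost in off-policy training is dominated by CPU-side environment stepping and fixed per-update overhead (Python, kernel launches), so the marginal GPU cost of a larger batch barely registers in wall-clock time or utilization. The toy timing model below sketches this effect; all constants are illustrative assumptions, not measurements from SB3 or this demo.

```python
# Toy timing model: why a larger batch size may not reduce wall-clock
# training time in off-policy RL (e.g. SAC) when environment stepping
# and fixed per-update overhead dominate.
# All constants are illustrative assumptions, not measured values.

ENV_STEP_MS = 2.0           # assumed cost of one environment step (CPU-bound)
GRAD_FIXED_MS = 1.5         # assumed fixed overhead per gradient update
GRAD_PER_SAMPLE_MS = 0.001  # assumed marginal GPU cost per sample in the batch


def step_time_ms(batch_size: int, grad_updates_per_env_step: int = 1) -> float:
    """Estimated wall-clock time for one training step under this model."""
    grad_ms = grad_updates_per_env_step * (
        GRAD_FIXED_MS + GRAD_PER_SAMPLE_MS * batch_size
    )
    return ENV_STEP_MS + grad_ms


for bs in (128, 256, 512):
    print(f"batch_size={bs:4d}: ~{step_time_ms(bs):.3f} ms per env step")
```

Under these assumptions, going from batch size 128 to 512 changes the estimated per-step time by well under a millisecond, which matches the observation that batch size had no visible effect. Profiling the actual run (e.g. watching GPU utilization while training) would confirm whether the bottleneck is really outside the GPU.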