
NeurIPS2018 no speedup with increasing batch size #48

Open
charliedream1 opened this issue Mar 20, 2023 · 2 comments
Labels
question Further information is requested

Comments

@charliedream1

I tested the NeurIPS2018 demo with stable-baselines3, using a SAC agent trained on GPU. When I increase the batch size from 128 to 512, I see no change in GPU memory usage or utilization.

The versions I used are below:
stable-baselines3==1.5.0
torch==1.10.0

Training time does not change when I change the batch size. What could be the problem?
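One way to narrow this down is to time the gradient updates directly rather than watching total training time, since environment stepping can dominate and hide the update cost. Below is a minimal, library-free sketch of that diagnostic; `time_updates` and `dummy_update` are hypothetical names, and the dummy update is only a stand-in whose cost scales with batch size (in a real check you would time SB3's `model.train()` call the same way).

```python
import time

def time_updates(update_fn, batch_size, n_updates=50):
    """Return average wall-clock seconds per call to update_fn(batch_size)."""
    start = time.perf_counter()
    for _ in range(n_updates):
        update_fn(batch_size)
    return (time.perf_counter() - start) / n_updates

def dummy_update(batch_size):
    """Stand-in gradient step whose cost grows with the batch size."""
    total = 0.0
    for i in range(batch_size * 100):
        total += i * 0.5
    return total

t_small = time_updates(dummy_update, 128)
t_large = time_updates(dummy_update, 512)
print(f"batch 128: {t_small:.6f} s/update, batch 512: {t_large:.6f} s/update")
```

If the per-update time for the real agent is flat while this kind of harness shows the expected scaling, the bottleneck is likely elsewhere (environment rollouts, replay sampling, or CPU-GPU transfer) rather than the batched network computation itself.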

@YangletLiu
Contributor

It seems that SB3 is not optimized for GPU utilization. If you are dealing with compute-intensive cases, ElegantRL may be a good choice.

@YangletLiu YangletLiu added the question Further information is requested label Mar 27, 2023
@charliedream1
Author

charliedream1 commented Mar 27, 2023 via email
