
Add support for fp8 (H100) #387

Open
tgaddair opened this issue Apr 4, 2024 · 2 comments
Labels: enhancement (New feature or request)

Comments

tgaddair (Contributor) commented Apr 4, 2024

No description provided.

tgaddair added the enhancement label Apr 4, 2024
tgaddair (Contributor, Author) commented Apr 4, 2024

Native support in PyTorch is experimental:

https://github.com/pytorch-labs/float8_experimental

We could consider adding this now, or waiting for official support.
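
For context, a minimal sketch of what adopting the experimental API might look like, based on the float8_experimental README at the time. The import paths, `Float8Linear`, and `swap_linear_with_float8_linear` are assumptions about that experimental package and may have changed since:

```python
import torch
import torch.nn as nn

# Assumed imports per the float8_experimental README; the repo is
# experimental, so these names are not guaranteed to be stable.
from float8_experimental.float8_linear import Float8Linear
from float8_experimental.float8_linear_utils import swap_linear_with_float8_linear

# Toy stand-in for a transformer block's MLP.
model = nn.Sequential(
    nn.Linear(4096, 11008),
    nn.SiLU(),
    nn.Linear(11008, 4096),
).to("cuda", dtype=torch.bfloat16)

# Swap every nn.Linear for a Float8Linear so the forward/backward
# matmuls run in float8 (per the repo's README).
swap_linear_with_float8_linear(model, Float8Linear)

x = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16)
y = model(x)
```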

tgaddair (Contributor, Author) commented Apr 9, 2024

Initial results using the PyTorch codebase are not good: roughly a 10x decrease in throughput vs. fp16 on an H100.

https://github.com/predibase/lorax/tree/fp8

Will need to investigate Transformer Engine or dig into the PyTorch implementation in more detail. It would appear that there is too much conversion between dtypes happening at the moment (as opposed to everything running natively in fp8).
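
To illustrate the suspected failure mode (a hypothetical sketch, not code from the fp8 branch): if each "fp8" matmul merely round-trips tensors through fp8 and back, every cast is pure overhead on top of an ordinary fp16 GEMM, which would explain a large throughput regression:

```python
import torch

# Slow pattern sketch: quantize to float8_e4m3fn, immediately dequantize
# back to fp16, then run the GEMM in fp16 anyway. All four casts are
# overhead; nothing actually executes in fp8.
E4M3_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0

def quantize_fp8(x: torch.Tensor):
    # Dynamic per-tensor scaling: map the max magnitude onto the fp8 range.
    scale = E4M3_MAX / x.abs().max().clamp(min=1e-12)
    return (x * scale).to(torch.float8_e4m3fn), scale

def roundtrip_fp8_matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    a_q, a_s = quantize_fp8(a)
    b_q, b_s = quantize_fp8(b)
    # Dequantize before the matmul, so throughput can only be worse
    # than plain fp16.
    return (a_q.to(torch.float16) / a_s) @ (b_q.to(torch.float16) / b_s)

a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
out = roundtrip_fp8_matmul(a, b)
# The fast path would instead feed the fp8 tensors straight into an
# fp8 GEMM (e.g. torch._scaled_mm on H100) and dequantize only the
# output once, which is what fused fp8 kernels aim to do.
```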
