Update README.md
Nvidia's FasterTransformer now supports fast Swin V2 inference on A100 and T4 GPUs
ancientmooner committed Dec 29, 2022
1 parent ad1c947 commit f92123a
Showing 1 changed file with 6 additions and 0 deletions.
@@ -27,6 +27,10 @@ This repo is the official implementation of ["Swin Transformer: Hierarchical Vis
## Updates

***12/29/2022***

1. **Nvidia**'s [FasterTransformer](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/swin_guide.md) now supports Swin Transformer V2 inference, which delivers significant speed improvements on T4 and A100 GPUs.

***11/30/2022***

1. Models and code of **Feature Distillation** are released. Please refer to [Feature-Distillation](https://github.com/SwinTransformer/Feature-Distillation) for details and for the checkpoints (FD-EsViT-Swin-B, FD-DeiT-ViT-B, FD-DINO-ViT-B, FD-CLIP-ViT-B, FD-CLIP-ViT-L).
@@ -258,6 +262,8 @@ Note: <sup>*</sup> indicates multi-scale testing.

(Note: please report accuracy numbers and provide trained models in your new repository to help others get a sense of correctness and model behavior.)

[12/29/2022] Swin Transformers (V2) inference implemented in FasterTransformer: [FasterTransformer](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/swin_guide.md)

[06/30/2022] Swin Transformers (V1) inference implemented in FasterTransformer: [FasterTransformer](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/swin_guide.md)

[05/12/2022] Swin Transformers (V1) implemented in TensorFlow with the pre-trained parameters ported into them. Find the implementation,
