
Add GigaSpeech dataset to SpeechBrain #2405

Draft
wants to merge 40 commits into develop

Conversation

@Adel-Moumen (Collaborator) commented Feb 10, 2024

What does this PR do?

Work in progress. Do not review/merge.

This PR adds the GigaSpeech dataset to SpeechBrain.

WebDataset support?

At first, I designed the data prep script to support WebDataset out of the box. However, my experiments showed that the way we were using WebDataset was not optimal. As described in the Google Colab tutorial, we first create shards and then apply our traditional padding function, etc. The issue is that a shard may mix very long and very short audio files, resulting in a lot of padding. Furthermore, many SpeechBrain features do not work easily on shards: training a simple label encoder, for instance, requires engineering tricks, and so does padding. WebDataset has also evolved, so we would have to pin the dependency to a very old version.

I had some time to look at other toolkit implementations and found that NeMo, for instance, first sorts the audio files and THEN creates shards. I also found that Lhotse has its own WebDataset implementation, which seems more tailored to the speech modality (maybe we should take a closer look).

I therefore decided to temporarily remove WebDataset from this PR. I think most people who will use the GigaSpeech XL split will have access to more than 1 TB of storage, so this should not be an issue at all. I am open to discussion, but it will require some design discussion.
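For illustration, here is a minimal sketch of the sort-then-shard idea described above. The manifest format, field names, and helper function are hypothetical, not actual SpeechBrain or NeMo code:

```python
# Hypothetical sketch of "sort first, THEN shard": sorting utterances by
# duration before sharding keeps similar-length audio in the same shard,
# which greatly reduces padding at training time. The manifest format and
# field names ("id", "wav", "duration") are illustrative only.
import json
import tarfile


def write_sorted_shards(manifest_path, shard_prefix, shard_size=1000):
    """Group duration-sorted utterances into tar shards."""
    with open(manifest_path) as f:
        # One JSON object per line, e.g. {"id": ..., "wav": ..., "duration": ...}
        entries = [json.loads(line) for line in f]

    # The key step: sort by duration so each shard holds similar lengths.
    entries.sort(key=lambda e: e["duration"])

    for start in range(0, len(entries), shard_size):
        shard = entries[start : start + shard_size]
        tar_name = f"{shard_prefix}-{start // shard_size:06d}.tar"
        with tarfile.open(tar_name, "w") as tar:
            for entry in shard:
                tar.add(entry["wav"], arcname=f"{entry['id']}.wav")
```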

General Todo

To do:

  • add an opus -> wav conversion function for users who want to change codec (see the sketch after this list)
  • data prep
  • use parallel_map so that data prep is very fast
  • whisper training and yaml recipe
  • train whisper and report back results
  • add recipe tests
  • M and XL subsets
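
As a rough sketch of the opus -> wav item and the parallel mapping above, the following uses ffmpeg with Python's standard `concurrent.futures` as a stand-in for SpeechBrain's `parallel_map`. The input directory and the 16 kHz mono target are assumptions, not part of this PR:

```python
# Illustrative only: decode GigaSpeech .opus files to 16 kHz mono wav in
# parallel with ffmpeg. ProcessPoolExecutor stands in for parallel_map;
# the input directory and target sampling rate are assumptions.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path


def opus_to_wav(opus_path: Path) -> Path:
    """Decode one .opus file to a 16 kHz mono wav alongside it."""
    wav_path = opus_path.with_suffix(".wav")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(opus_path),
         "-ar", "16000", "-ac", "1", str(wav_path)],
        check=True,
        capture_output=True,
    )
    return wav_path


if __name__ == "__main__":
    opus_files = sorted(Path("GigaSpeech/audio").rglob("*.opus"))
    with ProcessPoolExecutor() as pool:
        wav_files = list(pool.map(opus_to_wav, opus_files))
    print(f"Converted {len(wav_files)} files")
```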

CTC

To do:

Maybe one day:

  • WebDataset for data prep
  • WebDataset for training

Reference

k2 Icefall
| Model | Dev WER (%) | Test WER (%) |
| --- | --- | --- |
| zipformer | 10.25 | 10.38 |
| conformer_ctc | 10.47 | 10.58 |
| pruned_transducer_stateless2 | 10.40 | 10.51 |

Before submitting
  • Did you read the contributor guideline?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Does your code adhere to project-specific code style and conventions?

PR review

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified
  • Confirm that the changes adhere to compatibility requirements (e.g., Python version, platform)
  • Review the self-review checklist to ensure the code is ready for review

@Adel-Moumen self-assigned this on Feb 10, 2024
@Adel-Moumen added the labels enhancement (New feature or request), work in progress (Not ready for merge), and recipes (Changes to recipes only) on Feb 10, 2024
@Adel-Moumen added this to the v1.0.2 milestone on May 2, 2024