Kaggle Competitions

TensorFlow Speech Recognition Challenge

Data

  • The dataset is designed to let you build basic but useful voice interfaces for applications, with common words like “Yes”, “No”, digits, and directions included.

Data File Descriptions

  • train.7z - Contains a few informational files and a folder of audio files. The audio folder contains subfolders with 1-second clips of voice commands, with the folder name being the label of the audio clip. There are more labels in the training data than you will need to predict. The labels you will need to predict in Test are yes, no, up, down, left, right, on, off, stop, go. Everything else should be considered either unknown or silence (a minimal mapping sketch follows this list). The folder _background_noise_ contains longer clips of "silence" that you can break up and use as training input.
    • The files contained in the training audio are not uniquely named across labels, but they are unique if you include the label folder. For example, 00f0204f_nohash_0.wav is found in 14 folders, but that file is a different speech command in each folder.
    • The files are named so that the first element is the subject id of the person who gave the voice command, and the last element indicates repeated commands. Repeated commands are when the subject repeats the same word multiple times. Subject id is not provided for the test data, and you can assume that the majority of commands in the test data were from subjects not seen in train.
    • You can expect some inconsistencies in the properties of the training data (e.g., length of the audio).
  • test.7z - Contains an audio folder with 150,000+ files in the format clip_000044442.wav. The task is to predict the correct label. Not all of the files are evaluated for the leaderboard score.
  • sample_submission.csv - A sample submission file in the correct format.
  • link_to_gcp_credits_form.txt - Provides the URL to request $500 in GCP credits, provided to the first 500 requestors.
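
As a concrete illustration of the folder-to-label rule above, here is a minimal sketch; the function name is hypothetical, and only the label set comes from the competition description:

    WANTED_WORDS = {"yes", "no", "up", "down", "left",
                    "right", "on", "off", "stop", "go"}

    def folder_to_label(folder_name):
        if folder_name == "_background_noise_":
            return "silence"    # long background clips stand in for silence
        if folder_name in WANTED_WORDS:
            return folder_name  # one of the 10 scored command words
        return "unknown"        # every other training word maps to unknown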

Evaluation

  • Submissions are evaluated on multiclass accuracy, which is simply the fraction of observations assigned the correct label.
    • There are only 12 possible labels for the Test set: yes, no, up, down, left, right, on, off, stop, go, silence, unknown.
    • The unknown label should be used for a command that is neither one of the first 10 labels nor silence.
  • For each audio clip in the test set, you must predict the correct label.
  • The submission file should contain a header and have the following format:
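
A sketch of the expected format, assuming the fname,label header from sample_submission.csv (the label value is illustrative):

    fname,label
    clip_000044442.wav,silence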

Prerequisites

Questions

  • How can we classify voice data?
  • If we use a CNN, a typical image network, how should we preprocess the voice data? (main idea: waveform -> spectrogram; see the sketch after this list)
  • Is an RNN a better fit, since the data has a time axis?
  • What else is there? (WaveNet)
  • How can we effectively reduce the amount of training data without losing information?
  • How can we extract only the voice from real-life recordings?
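
For the waveform -> spectrogram idea, a minimal sketch using TensorFlow's tf.signal; the frame sizes are assumptions, not the repo's settings:

    import tensorflow as tf

    def waveform_to_spectrogram(wav):
        # wav: float32 tensor holding one second of 16 kHz audio (16000 samples)
        stft = tf.signal.stft(wav, frame_length=256, frame_step=128)
        return tf.abs(stft)  # magnitude spectrogram, shape [time frames, freq bins]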

Solutions

  • Customized small ConvNet models
  • Data pre-processing (prepare the best spectrogram image for training), as sketched after this list
    • wav volume normalization
    • efficiently find the word segment based on the volume (dB) level
    • create the spectrogram PNG using wav_to_spectrogram
    • resize each spectrogram PNG to the same size
    • data augmentation
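
A minimal sketch of this pre-processing pipeline, assuming librosa and scipy in place of the repo's actual wav_to_spectrogram helper; the target shape, FFT sizes, and dB threshold are assumptions, not the repo's settings:

    import numpy as np
    import librosa
    from scipy.ndimage import zoom

    TARGET_SHAPE = (128, 128)  # assumed fixed ConvNet input size

    def wav_to_fixed_spectrogram(path, sr=16000, top_db=30):
        y, _ = librosa.load(path, sr=sr)               # 16 kHz, as in the dataset
        y = y / (np.max(np.abs(y)) + 1e-9)             # volume (peak) normalization
        y, _ = librosa.effects.trim(y, top_db=top_db)  # keep the word segment by dB level
        spec = librosa.amplitude_to_db(
            np.abs(librosa.stft(y, n_fft=512, hop_length=128)), ref=np.max)
        # resize every spectrogram to the same shape for the ConvNet input
        factors = (TARGET_SHAPE[0] / spec.shape[0], TARGET_SHAPE[1] / spec.shape[1])
        return zoom(spec, factors, order=1)

For data augmentation, small random time shifts and mixing in slices of the _background_noise_ clips are common choices for this dataset.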

Result

Quick Start

  • It's simple:
    • set your own parameters
    • run python new_train.py

Additional work

After reaching a satisfactory level, we might try other spectrogram resolutions.
