Backdoor attack on HuggingFace Automatic Speech Recognition model via ART HuggingFaceClassifierPyTorch #2377
Comments
In my case, for example, I'm using Hugging Face's Wav2Vec2ForCTC model, which expects input in a specific format. However, I'm providing input with shape (124, 129, 1) to match my data, which I think is causing the mismatch. To solve this, I guess I need to adjust the input shape to what the model expects. According to the Hugging Face documentation, Wav2Vec2ForCTC expects input of shape (batch_size, sequence_length). I've already tried this but still get the same error.
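As a minimal sketch of the shape mismatch described above (using NumPy only, with a random array standing in for the real feature batch, which is an assumption for illustration): Wav2Vec2ForCTC consumes raw audio of shape (batch_size, sequence_length), so a trailing channel axis like the one in (124, 129, 1) has to be dropped before the data can even be the right rank.

```python
import numpy as np

# Hypothetical feature batch matching the shape reported in the thread:
# 124 samples, 129 frames, 1 channel.
features = np.random.rand(124, 129, 1).astype(np.float32)

# Wav2Vec2ForCTC expects (batch_size, sequence_length), i.e. a 2-D array,
# so the singleton channel axis must be squeezed away.
flattened = features.squeeze(-1)
print(flattened.shape)  # (124, 129)
```

Note this only fixes the rank of the input; whether 129 feature frames are a meaningful "sequence_length" for a model trained on raw 16 kHz waveforms is a separate question.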
Hi guys, thanks! I just had to customize this ART classifier and transpose my data to 3 channels. I'll be making the notebook public soon, once I've reorganized it. :)
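The "transpose my data to 3 channels" workaround above might look like the following (a hedged NumPy sketch, not the author's actual notebook code; the shapes are taken from the (124, 129, 1) batch mentioned earlier): the single channel is repeated three times and then moved to the channels-first position that PyTorch-style classifiers expect.

```python
import numpy as np

# Single-channel batch as reported in the thread: (N=124, length=129, C=1).
x = np.random.rand(124, 129, 1).astype(np.float32)

# Repeat the channel axis to get 3 channels, mimicking an RGB-like input.
x3 = np.repeat(x, 3, axis=-1)        # (124, 129, 3)

# Move channels first, since PyTorch models conventionally take (N, C, ...).
x3 = np.transpose(x3, (0, 2, 1))     # (124, 3, 129)
print(x3.shape)
```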
Hi @OrsonTyphanel93, thanks for bringing this up! HuggingFaceClassifierPyTorch will try to perform a forward pass to determine the model structure, something that is often needed for poisoning attacks and defences. To do so, a dummy input sample is created based on the supplied `input_shape`. The code snippet below should work. What's the motivation in your case that requires the alternate input format?
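The maintainer's original snippet did not survive in this thread. As a hedged illustration only (NumPy stand-in, not ART's actual implementation), the dummy batch for the structural forward pass is conceptually derived from the supplied `input_shape` by prepending a batch dimension; the (16000,) raw-audio shape here is an assumed example, not taken from the thread:

```python
import numpy as np

# Assumed per-sample input shape: one second of 16 kHz raw audio.
input_shape = (16000,)

# Conceptually, the classifier builds a dummy batch of this shape to trace
# the model structure before running attacks or defences.
dummy = np.zeros((1,) + input_shape, dtype=np.float32)
print(dummy.shape)  # (1, 16000), i.e. (batch_size, sequence_length)
```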
Notebook: HuggingFace backdoor link (HuggingFace backdoor attack). Hi guys @beat-buesser! Here is the final notebook; you can now test it with codecov please. I think it has a very fast optimization. I've tested it with all the audio models available on HuggingFace, and they've all been "backdoored"! As far as I know, you can keep the classifier as it is. I've customized the classifier in this code so that users who work with audio data won't have any trouble using your classifier. Thanks again guys!
Thank you very much, dear @GiulioZizzo, for your intervention! Particular requirements, such as the following, may be the reason for having HuggingFaceClassifierPyTorch accept an alternate input format instead of the normal Wav2Vec2ForCTC format:
Hello, dear @f4str, @GiulioZizzo, @beat-buesser! Is it possible to dynamically parameterize the input shape of HuggingFaceClassifierPyTorch? Otherwise it doesn't seem as dynamic as the other ART classifiers, because it assumes a fixed number of channels. I'd like to use this classifier to launch backdoor attacks on Hubert, Wav2Vec2, etc.
I've already managed to poison them (see the attached image for the Wav2Vec2 model); now I'd like to train them on this poisoned data, but I'm having problems reshaping the data to fit the classifier (see the attached error).