
Question about the special token map #274

Open
RAY-RaY-R opened this issue Nov 16, 2023 · 1 comment

Comments

@RAY-RaY-R

Hello, I just have a quick question about the "special_tokens_map.json" file. After I fine-tuned the rvlcdip task (classification) on my own dataset, the additional_special_tokens key only shows one value:
{"additional_special_tokens": ["<s_rvlcdip>"]}

When I check the same file in the official pretrained rvlcdip model, it has all the custom class names as well:
{"additional_special_tokens": ["</s_class>", "<advertisement/>", "<budget/>", "<email/>", "<file_folder/>", "<form/>", "<handwritten/>", "<invoice/>", "<letter/>", "<memo/>", "<news_article/>", "<presentation/>", "<questionnaire/>", "<resume/>", "<s_class>", "<s_iitcdip>", "<s_rvlcdip>", "<s_synthdog>", "<scientific_publication/>", "<scientific_report/>", "<specification/>"]}

The model shows great accuracy, but I'm a bit concerned. Is this a problem? If I add those tokens manually, the model's accuracy drops a lot.

@felixvor

Hey, I did not develop the rvlcdip model, but from working with Donut for a bit, my understanding is that the authors add the classes as their own tokens so the model learns them from scratch and assigns each one a unique ID, instead of piecing them together from wordpiece tokens.

In a bit more detail: when generating an output, the decoder uses the vocabulary to emit one token after the other, like any other generative transformer. Say the model should predict "scientific_publication" as a class; it then needs to look up vocabulary tokens such as ["<", "scientific", "_", "publication", "/>"] and piece them together. Each of these tokens has its own ID and is part of the predicted output sequence. This works, but the model needs to "forget and relearn" what these tokens mean during fine-tuning, and it also needs to predict a different number of tokens for each class. So what you could do is add a new token ID for "<scientific_publication/>". Then the model can learn what this new token means and only has to predict a single token ID to make a classification. You can do this by calling add_tokens on the tokenizer and resize_token_embeddings on the decoder before training, but for more information you should check out the example notebooks from NielsRogge.
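The mechanism above can be sketched with a toy vocabulary (purely illustrative; Donut's actual tokenizer is much larger, and the token names here are made up for the example):

```python
# Toy illustration of why adding "<scientific_publication/>" as its own
# token helps: before adding it, the class tag must be spelled out as
# several subword tokens; afterwards, it is a single token ID.

# Minimal subword vocabulary (illustrative, not Donut's real vocab).
vocab = {"<": 0, "scientific": 1, "_": 2, "publication": 3, "/>": 4}

def encode(vocab, pieces):
    """Map each token piece to its ID, like a tokenizer's encode step."""
    return [vocab[p] for p in pieces]

# Without a dedicated token: five subword IDs for one class label.
before = encode(vocab, ["<", "scientific", "_", "publication", "/>"])

# Mimic tokenizer.add_tokens: append the full class tag as one new entry
# at the end of the vocabulary.
vocab["<scientific_publication/>"] = len(vocab)

# Now the same class label is a single ID the model can learn directly.
after = encode(vocab, ["<scientific_publication/>"])

print(before)  # [0, 1, 2, 3, 4]
print(after)   # [5]
```

In `transformers` this corresponds to calling `tokenizer.add_tokens([...])` and then `model.decoder.resize_token_embeddings(len(tokenizer))` before training, so the decoder's embedding matrix grows to cover the new IDs; the new rows start out randomly initialized, which is why the tokens are "learned from scratch".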

Hope this helps and good luck with your experiments!
