How to adjust hyperparameters when fine-tuning LLM Embedder #743
Comments
This script uses the Hugging Face Trainer to do the fine-tuning, so you can use any of the hyperparameters listed on this page: https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments
How can I create a new task like the LLM Embedder tasks (with a separate instruction for the query and a separate instruction for the key)?
LLM Embedder has the following training script, but I don't know how to adjust hyperparameters like the train batch size, learning rate, warmup_ratio, and so on.
```bash
torchrun --nproc_per_node=8 run_dense.py \
  --output_dir data/outputs/tool \
  --train_data llm-embedder:tool/toolbench/train.json \
  --eval_data llm-embedder:tool/toolbench/test.json \
  --corpus llm-embedder:tool/toolbench/corpus.json \
  --key_template {text} \
  --metrics ndcg \
  --eval_steps 2000 \
  --save_steps 2000 \
  --max_steps 2000 \
  --data_root /data/llm-embedder
```
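Since the script hands its arguments to the Hugging Face Trainer, hyperparameters can usually be passed straight on the command line alongside the existing flags. Here is a minimal sketch, assuming the script exposes the standard `TrainingArguments` fields through `HfArgumentParser` (the usual Hugging Face pattern); the flag names below come from `TrainingArguments` itself, not from this repository's docs, and the values are placeholders:

```bash
torchrun --nproc_per_node=8 run_dense.py \
  --output_dir data/outputs/tool \
  --train_data llm-embedder:tool/toolbench/train.json \
  --eval_data llm-embedder:tool/toolbench/test.json \
  --corpus llm-embedder:tool/toolbench/corpus.json \
  --key_template {text} \
  --metrics ndcg \
  --eval_steps 2000 \
  --save_steps 2000 \
  --max_steps 2000 \
  --data_root /data/llm-embedder \
  --per_device_train_batch_size 16 \
  --gradient_accumulation_steps 2 \
  --learning_rate 5e-5 \
  --warmup_ratio 0.1 \
  --weight_decay 0.01
```

The full list of accepted fields (LR scheduler type, mixed precision, logging cadence, etc.) is in the `TrainingArguments` documentation linked above.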