Issues: pytorch/torchtune
#1001 — Sanity check: How to run inference after fine-tuning Llama-3 on chat data? (opened May 17, 2024 by julianstastny)
#987 — Is it possible to fully fine-tune Llama-3 70B with torchtune on a machine with 8×A100 80GB? (opened May 16, 2024 by jaywongs)
#981 — (Windows 11) cross_entropy_loss(): RuntimeError: expected scalar type Long but found Int (opened May 14, 2024 by Joshua-Yu)
#959 — Generate with KV-cache enabled vs. not enabled gives different results (opened May 10, 2024 by joecummings)
#949 — EOS tokens wrongly masked out during training [bug] (opened May 7, 2024 by jxmsML)
#939 — Duplicate results in output generated by a LoRA fine-tuned model (opened May 5, 2024 by gulizhoutao)
#922 — How can I find all the checkpoints and merge them manually? (LoRA) (opened May 2, 2024 by monk1337)