# Japanese-LLM-LoRA

## References

This project is based on LLaMA, Stanford Alpaca, Alpaca LoRA, and cabrita.

## Data

## Finetuning

We followed the Alpaca LoRA and cabrita recipes. The finetuning step ran on Google Colab Pro+ and took 6.5 hours.

## Example outputs

### Good Examples

### Bad Examples
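For readers unfamiliar with why the Alpaca LoRA approach finetunes so cheaply, here is a minimal NumPy sketch of the core LoRA idea: the frozen pretrained weight is adapted through a trainable low-rank update, so only a small fraction of the parameters are trained. The layer sizes and rank below are hypothetical, purely for illustration; this is not the project's actual training code.

```python
import numpy as np

# LoRA in a nutshell: instead of updating the full frozen weight W
# (d_out x d_in), train a low-rank update B @ A with rank r, so only
# r * (d_in + d_out) parameters are trainable.

d_in, d_out, r = 512, 512, 8              # hypothetical layer size and rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01 # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection (zero init,
                                          # so training starts from the base model)

def lora_forward(x):
    # y = x W^T + x (B A)^T : base output plus the low-rank adaptation
    return x @ W.T + x @ (B @ A).T

x = rng.standard_normal((4, d_in))
y = lora_forward(x)
print(y.shape)  # (4, 512)

full = W.size           # parameters updated by full finetuning
lora = A.size + B.size  # parameters updated by LoRA
print(f"trainable: {lora} vs {full} ({lora / full:.1%})")  # 8192 vs 262144 (3.1%)
```

With the rank kept small relative to the layer width, the trainable parameter count drops by roughly two orders of magnitude, which is what makes a single Colab Pro+ GPU sufficient for the finetuning run described above.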