
Japanese-LLM-LoRA

A Japanese instruction-finetuned LLaMA model, forked from japanese-alpaca-lora and maintained and improved by dnimo.

References

This project is based on LLaMA, Stanford Alpaca, Alpaca LoRA, and cabrita.

Data

Finetuning

We followed the Alpaca LoRA and cabrita recipes. The finetuning step ran on Google Colab Pro+ and took 6.5 hours; a sketch of the setup is shown below.
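
For concreteness, here is a minimal sketch of what an Alpaca-LoRA-style finetuning run looks like with Hugging Face transformers and peft. The base checkpoint name, dataset file, prompt template, and hyperparameters below are illustrative assumptions, not the exact settings used in this repository.

```python
# Minimal LoRA finetuning sketch in the style of Alpaca LoRA / cabrita.
# Assumptions (not confirmed by this repo): Hugging Face transformers + peft,
# a LLaMA-7B checkpoint, and an Alpaca-format JSON instruction dataset.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    DataCollatorForLanguageModeling,
    LlamaForCausalLM,
    LlamaTokenizer,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "decapoda-research/llama-7b-hf"  # illustrative checkpoint name

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)

# Standard Alpaca-LoRA adapter settings: low-rank updates on the
# attention query/value projections, base weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hypothetical Japanese instruction dataset in Alpaca's JSON format
# (fields: "instruction", "output").
data = load_dataset("json", data_files="japanese_alpaca_data.json")

def to_prompt(example):
    # Alpaca-style prompt; the actual template (and its Japanese
    # translation) may differ in this repo.
    text = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )
    return tokenizer(text, truncation=True, max_length=512)

train_data = data["train"].map(to_prompt, remove_columns=data["train"].column_names)

trainer = Trainer(
    model=model,
    train_dataset=train_data,
    args=TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=3e-4,
        fp16=True,
        output_dir="lora-japanese-llama",
    ),
    # mlm=False gives causal-LM labels (inputs shifted by one).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-japanese-llama")  # saves only the small adapter weights
```

Because LoRA trains only small low-rank adapter matrices while the 7B base weights stay frozen, a run like this fits on a single Colab GPU, which is what makes the 6.5-hour Colab Pro+ finetune above feasible.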

Example outputs

Good Examples

Bad Examples
