Welcome to LLaMA-BitNet

Welcome to the LLaMA-BitNet repository, your gateway to training your very own BitNet model, as described in the groundbreaking paper The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits. Built upon the LLaMA 2 architecture, this project lets you train a model with approximately 78 million parameters on a corpus of around 1.5 billion tokens.


Note: You need access to the LLaMA model if you wish to run the code without modifications. To get access to the LLaMA family of models, go to https://llama.meta.com/llama-downloads/ and submit the request using the same credentials you use on Hugging Face. You will then receive an email letting you either download the weights directly to your device or use LLaMA through the API.
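
Once access is granted, you can authenticate your environment through huggingface_hub (already listed in requirements.txt). The snippet below is a minimal sketch; the token value is a placeholder for your own access token:

from huggingface_hub import login

# Log in with your personal Hugging Face access token so that gated
# LLaMA checkpoints and tokenizers can be downloaded.
login(token="hf_your_token_here")  # or run `huggingface-cli login` in a terminal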


Easy Installation

Getting started with LLaMA-BitNet is a breeze! Run the following command to install all the necessary modules:

pip install -r requirements.txt

Intuitive File Structure

Our repository boasts a clear and intuitive file structure designed for effortless navigation and customization:

LLaMA-BitNet                    (root folder)
├── inference.py                (Run inference with the trained BitNet model)
├── LICENSE                     (MIT License)
├── README.md
├── requirements.txt            (List of required modules for installation)
├── train.py                    (Run the training process)
└── utils.py                    (Contains utility functions)
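
As a rough usage sketch, assuming the scripts are run with their in-file defaults (check each file for configurable constants):

python train.py          (train the BitNet model)
python inference.py      (generate text with the trained checkpoint)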

Empowering Training Data

Harness the power of a 15% subset of the OpenWebText2 dataset, meticulously prepared for training and tokenized with a context length of 256 for quick testing. The code also supports manual tokenization (see the sketch below), so you can just as easily train on a dataset of your own choice.
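
If you want to tokenize a dataset yourself, a minimal sketch along these lines works with the listed dependencies. The dataset and tokenizer ids below are illustrative placeholders, not necessarily the ones used in train.py:

from datasets import load_dataset
from transformers import AutoTokenizer

# Illustrative ids -- substitute the dataset and tokenizer you actually use.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
dataset = load_dataset("Skylion007/openwebtext", split="train[:15%]")

def tokenize(batch):
    # Match the prepared subset's context length of 256 tokens.
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])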

Streamlined Dependencies

We've curated a set of essential dependencies listed in the requirements.txt file, ensuring a seamless installation process:

transformers
datasets
torch
wandb
huggingface_hub

Unleash the Full Potential of BitNet

Our BitNet architecture is engineered for excellence, drawing inspiration from the design laid out in the training-details manuscript, The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ.pdf. By integrating BitLinear into HuggingFace's LlamaForCausalLM, we empower you to unlock the true power of BitNet.
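
For a sense of what BitLinear involves, here is a minimal sketch following the pseudocode in the training-tips PDF: ternary (1.58-bit) absmean weight quantization, 8-bit absmax activation quantization, and a straight-through estimator so gradients reach the latent full-precision weights. It is illustrative, not a verbatim copy of utils.py:

import torch
import torch.nn as nn
import torch.nn.functional as F

def activation_quant(x):
    # Per-token absmax quantization of activations to 8 bits.
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp(min=1e-5)
    return (x * scale).round().clamp(-128, 127) / scale

def weight_quant(w):
    # Absmean quantization of weights to ternary values {-1, 0, +1}.
    scale = 1.0 / w.abs().mean().clamp(min=1e-5)
    return (w * scale).round().clamp(-1, 1) / scale

class BitLinear(nn.Linear):
    # Drop-in replacement for nn.Linear inside a Transformer block.
    def forward(self, x):
        # RMS-normalize activations before quantization, per the paper's tips.
        x_norm = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + 1e-6)
        # Straight-through estimator: quantized values in the forward pass,
        # identity gradients to the full-precision parameters in the backward pass.
        x_q = x_norm + (activation_quant(x_norm) - x_norm).detach()
        w_q = self.weight + (weight_quant(self.weight) - self.weight).detach()
        return F.linear(x_q, w_q, self.bias)

In this project, layers like this are integrated into HuggingFace's LlamaForCausalLM, so the rest of the LLaMA 2 architecture stays unchanged.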

Explore, train, and revolutionize with LLaMA-BitNet!
