
An investigation of using Mamba in the FinQ&A domain, comparing it against traditional transformers.

# Financial Q&A Using Mamba/Pythia

This report explores the application of Mamba, a novel state space model (SSM), to financial question answering (Q&A), challenging the dominance of transformer-based architectures such as FinBERT, FinGPT, and BloombergGPT. Overall, the results show that fine-tuning Mamba on typical financial datasets did not outperform traditional transformer architectures. Despite these challenges, the initial findings point to areas for improvement and further research.

## Training

Training is carried out via training/train.py (derived from the Jupyter notebook train.ipynb). Before running it, set up the training environment with training_env.sh, as sketched below.
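
A minimal run might look like the following sketch. The scripts themselves live in this repo, but the command-line options accepted by train.py are not documented here, so only the bare invocation is shown.

```bash
# Set up the training environment (see training_env.sh for the exact steps)
bash training_env.sh

# Launch fine-tuning; train.py mirrors train.ipynb. Any additional flags
# it accepts are repo-specific and not assumed here.
python training/train.py
```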

## Evaluation

Benchmarking is done in FinGPT-benchmarks/benchmarks as well as via EleutherAI/lm-evaluation-harness, and scripts are provided to run both Mamba and Pythia. Much of this code was taken from the open-source FinGPT project and modified to suit our needs. The main file is benchmarks2.py, which extends the benchmarking suite with the Language Model Evaluation Harness; a standalone harness invocation is sketched below.
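
For reference, a direct harness run might look like the following. This is a sketch assuming lm-evaluation-harness v0.4+ is installed; the checkpoints, task, and batch size are illustrative, and the exact invocation used by benchmarks2.py may differ.

```bash
# Evaluate a Pythia checkpoint with the Language Model Evaluation Harness
# (model, task, and batch size below are illustrative, not our settings)
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks lambada_openai \
    --batch_size 8

# Mamba checkpoints can be evaluated the same way through the harness's
# mamba_ssm backend (requires the mamba-ssm package to be installed)
lm_eval --model mamba_ssm \
    --model_args pretrained=state-spaces/mamba-1.4b \
    --tasks lambada_openai \
    --batch_size 8
```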
