---
license: apache-2.0
model-index:
- name: apricot-wildflower-20
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 59.64
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 81.76
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.38
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 41.76
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.9
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 33.97
      name: accuracy
---

# apricot-wildflower-20

This model is Mistral-7B finetuned for 1k steps with a combined LM loss and distillation loss on OpenWebText2 (filtered to documents with a Reddit score >= 20), using training logits from Mixtral as the teacher. I'm not going to pretend it was a big project: I did it in a dream, woke up, and replicated the code without any actual reason, so I don't know how well it fares in benchmarks.

(update: not very good)

| model | avg | arc | hellaswag | mmlu | truthfulqa | winogrande | gsm8k |
| --- | --- | --- | --- | --- | --- | --- | --- |
| apricot-wildflower-20 | 59.74 | 59.64 | 81.76 | 63.38 | 41.76 | 77.9 | 33.97 |
| mistralai/Mistral-7B-v0.1 | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 |
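
For context, here's a minimal sketch of the combined objective described above, assuming standard KL-divergence distillation against the teacher's logits. The mixing weight `alpha` and temperature `T` are assumptions for illustration; the actual values used in training aren't documented here.

```python
import torch
import torch.nn.functional as F

def combined_loss(student_logits, teacher_logits, labels, alpha=0.5, T=1.0):
    # Hypothetical sketch, not the exact training code.
    # Assumes logits and labels are already shifted/aligned for next-token
    # prediction, and that student and teacher share a vocabulary
    # (Mistral and Mixtral use the same tokenizer).
    # Standard next-token cross-entropy (LM loss)
    lm_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    # KL divergence between temperature-scaled student and teacher distributions
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * lm_loss + (1 - alpha) * kd_loss
```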

## use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "crumb/apricot-wildflower-20"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 8-bit loading requires the bitsandbytes package
model = AutoModelForCausalLM.from_pretrained(
    model_id, low_cpu_mem_usage=True, device_map="auto", load_in_8bit=True
)

text = "Hello my name is"
# move inputs to the model's device to avoid a device mismatch with device_map="auto"
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Hello my name is Katie and I am a 20 year old student from the UK. I am currently studying for a degree in English Literature and Creative Writing at the University of Leeds. I am a huge fan of the Harry Potter series and have been since I was 10 years old. I have read the books countless times and have seen the films many times too. I am a huge fan of the Harry Potter fandom and have been a member of the Harry Potter forums for a few years now. I am also a member of the Harry Potter fan club and have been for a few years now. I
```
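
If you can't use 8-bit loading (it needs `bitsandbytes`), loading in half precision should also work; this variant is a sketch on my part, not from the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "crumb/apricot-wildflower-20"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# fp16 weights instead of 8-bit quantization, still dispatched
# across available devices via accelerate
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
```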

Detailed results can be found here

| Metric | Value |
| --- | --- |
| Avg. | 59.74 |
| AI2 Reasoning Challenge (25-Shot) | 59.64 |
| HellaSwag (10-Shot) | 81.76 |
| MMLU (5-Shot) | 63.38 |
| TruthfulQA (0-shot) | 41.76 |
| Winogrande (5-shot) | 77.90 |
| GSM8k (5-shot) | 33.97 |