---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- Solar Moe
- Solar
- Lumosia
model-index:
- name: Lumosia-MoE-4x10.7
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 68.34
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.13
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.38
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 63.81
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.95
      name: accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 51.02
      name: accuracy
---


# Lumosia-MoE-4x10.7

"Lumosia" was selected as its a MoE of Multiple SOLAR Merges so it really "Lights the way".... its 3am.

This is a very experimental model. It's a MoE of well-performing Solar models (based on personal experience, not the open leaderboard). The model's goal was to be a good all-rounder across chat, logic, and RP.

Why? Dunno, wanted to see what would happen.

Context is 4k but remains coherent up to 16k.

Quants by @TheBloke (thank you):

https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF

https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GPTQ
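
If you want to run the GGUF quants locally, a minimal sketch with llama-cpp-python might look like this (the filename and context size are illustrative, not guaranteed contents of the repo; use whichever quant file you actually downloaded, and format prompts with the template shown further down):

```python
# Sketch only: load a downloaded GGUF quant with llama-cpp-python.
# "lumosia-moe-4x10.7.Q4_K_M.gguf" is an illustrative filename; pick the
# quant you downloaded from TheBloke's GGUF repo.
from llama_cpp import Llama

llm = Llama(model_path="lumosia-moe-4x10.7.Q4_K_M.gguf", n_ctx=4096)

prompt = "### System:\n\n### USER:Explain a MoE in around 100 words\n\n### Assistant:"
out = llm(prompt, max_tokens=256, temperature=1.0)
print(out["choices"][0]["text"])
```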

Update: (Done) Lumosia v1.5 has been uploaded.

Update 2:

A Lumosia personality Tavern card has been added.

Come join the Discord: ConvexAI

Template:

```
### System:

### USER:{prompt}

### Assistant:
```

Settings:

  • Temp: 1.0
  • min-p: 0.02-0.1
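
As a rough illustration, the template and settings above can be plugged into a plain transformers generation call like this (a minimal sketch; the system text is a placeholder, and `min_p` needs a recent transformers release, otherwise drop it):

```python
# Minimal sketch: prompt built with the card's template, sampled with the
# recommended Temp / min-p settings. The system text below is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Steelskull/Lumosia-MoE-4x10.7"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "### System:\nYou are Lumosia, a helpful assistant.\n\n"
    "### USER:Hello lumi, what is the meaning of life in around 100 words.\n\n"
    "### Assistant:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # Temp: 1.0 from the settings above
    min_p=0.05,       # within the suggested 0.02-0.1 range
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```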

Lumosia-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models (see the Configuration section below):

  • DopeorNope/SOLARC-M-10.7B
  • maywell/PiVoT-10.7B-Mistral-v0.2-RP
  • kyujinpy/Sakura-SOLAR-Instruct
  • jeonsworld/CarbonVillain-en-10.7B-v1

Evals:

  • Avg: 69.61
  • ARC: 68.34
  • HellaSwag: 87.13
  • MMLU: 64.38
  • T-QA: 63.81
  • Winogrande: 82.95
  • GSM8K: 51.02

Examples:

Example 1:

User:
Hello lumi, what is the meaning of life in around 100 words.

Lumosia:
The meaning of life can vary greatly depending on individual beliefs, values, and experiences. One common perspective suggests that the primary purpose of life is to seek happiness, fulfillment, and personal growth through relationships, accomplishments, and spiritual awareness. Living with integrity, compassion, and gratitude while making a positive impact on others and the world around us can also contribute to finding meaning in life. Ultimately, the meaning of life is subjective and unique to each individual, who must define it for themselves through exploration, reflection, and personal choice.

Example 2:

User:
Explain a MoE (Mixture of Experts) in around 100 words.

Lumosia:
A MoE, short for Mixture of Experts, is a machine learning technique utilized in ensemble learning. It involves combining multiple specialist models (experts) to optimize predictive performance. Each expert focuses on a subset of the input features or data domains, making them specialized in particular areas. Through a weighting mechanism, like Gaussian Mixture Model, the predictions of experts are blended to produce a final result. MoE models demonstrate effectiveness in handling complex or ambiguous inputs where a single model might struggle. They are commonly used in natural language processing, computer vision, and speech synthesis.
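
The "weighting mechanism" that answer mentions can be sketched in a few lines. This toy NumPy example (purely illustrative, not this model's actual routing code) just shows a gate scoring four experts, softmaxing the scores into weights, and blending the experts' outputs:

```python
# Toy illustration of MoE-style blending: a gate scores each expert for an
# input, the scores are softmaxed into weights, and expert outputs are mixed.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=8)                                  # one input vector
experts = [rng.normal(size=(8, 8)) for _ in range(4)]   # four toy "experts"
gate = rng.normal(size=(4, 8))                          # gate: one score per expert

weights = softmax(gate @ x)                             # routing weights over experts
output = sum(w * (E @ x) for w, E in zip(weights, experts))
print(weights, output)
```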

## 🧩 Configuration

```yaml
base_model: DopeorNope/SOLARC-M-10.7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: DopeorNope/SOLARC-M-10.7B
    positive_prompts: [""]
  - source_model: maywell/PiVoT-10.7B-Mistral-v0.2-RP
    positive_prompts: [""]
  - source_model: kyujinpy/Sakura-SOLAR-Instruct
    positive_prompts: [""]
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v1
    positive_prompts: [""]
```
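
For anyone wanting to reproduce a merge like this, the config above is the kind of file mergekit's MoE script consumes. A hedged sketch of invoking it (assuming mergekit is installed and exposes the mergekit-moe entry point; the file and directory names here are illustrative):

```python
# Rough sketch: run mergekit's MoE merge on the config shown above.
# "lumosia-moe.yaml" and "Lumosia-MoE-4x10.7" are illustrative paths.
import subprocess

config_path = "lumosia-moe.yaml"     # the YAML config above, saved to disk
output_dir = "Lumosia-MoE-4x10.7"    # where the merged model will be written

subprocess.run(["mergekit-moe", config_path, output_dir], check=True)
```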

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Steelskull/Lumosia-MoE-4x10.7"

# Load the tokenizer and build a text-generation pipeline with 4-bit weights
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the request with the tokenizer's chat template, then generate
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 69.61 |
| AI2 Reasoning Challenge (25-Shot) | 68.34 |
| HellaSwag (10-Shot) | 87.13 |
| MMLU (5-Shot) | 64.38 |
| TruthfulQA (0-shot) | 63.81 |
| Winogrande (5-shot) | 82.95 |
| GSM8k (5-shot) | 51.02 |