---
license: llama2
---

This is an interleaved merge of Xwin-longLORA-70b-rope8-32k-fp16 and Euryale-1.3-longLORA-70b-rope8-32k-fp16, using the same merge formula as alpindale's goliath-120b.

There is no additional fine-tuning. The resulting model does not appear to be broken; you can test for yourself whether it truly behaves as the original models plus 32K context capability (use linear RoPE scaling with a factor of 8).
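For reference, a minimal loading sketch with Hugging Face `transformers` that applies the linear RoPE scaling factor of 8 mentioned above. The repository id is a placeholder (substitute the actual repo), and if the model's `config.json` already sets `rope_scaling`, the explicit argument below is redundant and only makes the requirement visible:

```python
# Minimal sketch (not from the model card): load the merged model with
# linear RoPE scaling so the full 32K context is usable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/goliath-longlora-merge"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    # Linear RoPE scaling, factor 8: 4096 * 8 = 32768-token context.
    rope_scaling={"type": "linear", "factor": 8.0},
)

prompt = "Summarize the following document:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```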

ChuckMcSneed ran a benchmark here, indicating 30% degradation at 8x the context length.

A 6-bit EXL2 quantization is available here, and more EXL2 quants are available here, thanks to aikitoria.

See this discussion for how the original 70B merges were created with longLORA.