Assertion failure on quantization of Meta-Llama-3-70B-Instruct from f16 to various quantization types. #7215

Closed
tigran123 opened this issue May 11, 2024 · 9 comments

@tigran123

First I downloaded the meta-llama/Meta-Llama-3-70B-Instruct model from HF. Then I converted it to f16 using the convert.py script from llama.cpp, like this:

python3.12 ~/Software/AI/llama.cpp/convert.py Meta-Llama-3-70B-Instruct/ --outfile Meta-Llama-3-70B-Instruct.f16.gguf --outtype f16 --vocab-type bpe

This worked fine and produced a 108GB file. Unfortunately, I could not load it on my server, which has only 128GB of RAM and an RTX 2080 Ti with 11GB of VRAM, so there was no way to load it with or without the -ngl option. So I converted the original HF files to Q8_0 instead (again using convert.py), and that could not be loaded either. Then I decided to quantize the f16 .gguf file using the quantize utility from llama.cpp, and this is where the problems started. I naturally started with the highest-quality type, Q6_K:

$ ./quantize /data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf /data/work/Meta-Llama-3-70B-Instruct.Q6_K.gguf Q6_K 12
main: build = 2840 (25c6e82e)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing '/data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf' to '/data/work/Meta-Llama-3-70B-Instruct.Q6_K.gguf' as Q6_K using 12 threads
llama_model_loader: loaded meta data with 21 key-value pairs and 595 tensors from /data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = .
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   5:                          llama.block_count u32              = 80
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 1
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,128256]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - type  f32:  132 tensors
llama_model_loader: - type  f16:  463 tensors
GGML_ASSERT: llama.cpp:14705: (qs.n_attention_wv == 0 || qs.n_attention_wv == (int)model.hparams.n_layer) && "n_attention_wv is unexpected"
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f314e2ea42f in __GI___wait4 (pid=1819, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30	../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
#0  0x00007f314e2ea42f in __GI___wait4 (pid=1819, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30	in ../sysdeps/unix/sysv/linux/wait4.c
#1  0x000055761fcd440b in ggml_print_backtrace ()
#2  0x000055761fd73f4a in llama_model_quantize_internal(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, llama_model_quantize_params const*) ()
#3  0x000055761fd74914 in llama_model_quantize ()
#4  0x000055761fcd1911 in main ()
[Inferior 1 (process 1807) detached]
Aborted

Then I tried Q5_K_M (omitting the number of threads, which made no difference):

$ ./quantize /data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf /data/work/Meta-Llama-3-70B-Instruct.Q5_K_M.gguf Q5_K_M
main: build = 2840 (25c6e82e)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing '/data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf' to '/data/work/Meta-Llama-3-70B-Instruct.Q5_K_M.gguf' as Q5_K_M
llama_model_loader: loaded meta data with 21 key-value pairs and 595 tensors from /data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = .
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   5:                          llama.block_count u32              = 80
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 1
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,128256]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - type  f32:  132 tensors
llama_model_loader: - type  f16:  463 tensors
GGML_ASSERT: llama.cpp:14705: (qs.n_attention_wv == 0 || qs.n_attention_wv == (int)model.hparams.n_layer) && "n_attention_wv is unexpected"
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f17e16ea42f in __GI___wait4 (pid=1916, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30	../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
#0  0x00007f17e16ea42f in __GI___wait4 (pid=1916, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30	in ../sysdeps/unix/sysv/linux/wait4.c
#1  0x0000559cc1b2740b in ggml_print_backtrace ()
#2  0x0000559cc1bc6f4a in llama_model_quantize_internal(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, llama_model_quantize_params const*) ()
#3  0x0000559cc1bc7914 in llama_model_quantize ()
#4  0x0000559cc1b24911 in main ()
[Inferior 1 (process 1904) detached]
Aborted

Then I tried a few more types, all of which failed in the same way:

$ ./quantize /data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf /data/work/Meta-Llama-3-70B-Instruct.Q4_K_M.gguf Q4_K_M
main: build = 2840 (25c6e82e)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing '/data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf' to '/data/work/Meta-Llama-3-70B-Instruct.Q4_K_M.gguf' as Q4_K_M
llama_model_loader: loaded meta data with 21 key-value pairs and 595 tensors from /data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = .
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   5:                          llama.block_count u32              = 80
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 1
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,128256]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - type  f32:  132 tensors
llama_model_loader: - type  f16:  463 tensors
GGML_ASSERT: llama.cpp:14705: (qs.n_attention_wv == 0 || qs.n_attention_wv == (int)model.hparams.n_layer) && "n_attention_wv is unexpected"
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f7d046ea42f in __GI___wait4 (pid=1959, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30	../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
#0  0x00007f7d046ea42f in __GI___wait4 (pid=1959, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30	in ../sysdeps/unix/sysv/linux/wait4.c
#1  0x00005626045b840b in ggml_print_backtrace ()
#2  0x0000562604657f4a in llama_model_quantize_internal(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, llama_model_quantize_params const*) ()
#3  0x0000562604658914 in llama_model_quantize ()
#4  0x00005626045b5911 in main ()
[Inferior 1 (process 1947) detached]
Aborted

The version of llama.cpp is very recent -- cloned yesterday evening.
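
For context, the assertion that fires here is in llama_model_quantize_internal: the quantizer counts the attention V-projection weights (tensors named blk.N.attn_v.weight) and expects either none or exactly one per layer. A rough Python sketch of the idea (illustrative only, not the actual C++; the helper name is made up):

# Rough sketch of the failing check; illustrative, not the actual llama.cpp code.
# The quantizer counts "attn_v.weight" tensors and expects one per transformer block.
def check_attention_wv(tensor_names, n_layer):
    n_attention_wv = sum(1 for name in tensor_names if name.endswith("attn_v.weight"))
    # GGML_ASSERT: n_attention_wv == 0 || n_attention_wv == n_layer
    assert n_attention_wv in (0, n_layer), "n_attention_wv is unexpected"

So the assertion is a guard against a model file whose per-layer tensors are incomplete or inconsistent with llama.block_count (80 here), rather than a problem with the chosen quantization type.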

@tigran123
Author

tigran123 commented May 11, 2024

Actually, let's also look into why the original model (i.e. the one created by convert.py) did not load. Maybe the answer lies there:


$ ./server --host 192.168.1.4 -m /data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf -t 12
{"tid":"140151253651456","timestamp":1715426510,"level":"INFO","function":"main","line":2931,"msg":"build info","build":2840,"commit":"25c6e82e"}
{"tid":"140151253651456","timestamp":1715426510,"level":"INFO","function":"main","line":2936,"msg":"system info","n_threads":12,"n_threads_batch":-1,"total_threads":12,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "}
llama_model_loader: loaded meta data with 21 key-value pairs and 595 tensors from /data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = .
llama_model_loader: - kv   2:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv   3:                       llama.context_length u32              = 8192
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 8192
llama_model_loader: - kv   5:                          llama.block_count u32              = 80
llama_model_loader: - kv   6:                  llama.feed_forward_length u32              = 28672
llama_model_loader: - kv   7:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 64
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  10:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  11:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 1
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,128256]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                      tokenizer.ggml.merges arr[str,280147]  = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  20:                    tokenizer.chat_template str              = {% set loop_messages = messages %}{% ...
llama_model_loader: - type  f32:  132 tensors
llama_model_loader: - type  f16:  463 tensors
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab:                                             
llm_load_vocab: ************************************        
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!        
llm_load_vocab: CONSIDER REGENERATING THE MODEL             
llm_load_vocab: ************************************        
llm_load_vocab:                                             
llm_load_vocab: special tokens definition check successful ( 256/128256 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 128256
llm_load_print_meta: n_merges         = 280147
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 8192
llm_load_print_meta: n_head           = 64
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_layer          = 80
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 8
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 28672
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 70B
llm_load_print_meta: model ftype      = F16
llm_load_print_meta: model params     = 57.52 B
llm_load_print_meta: model size       = 107.15 GiB (16.00 BPW) 
llm_load_print_meta: general.name     = .
llm_load_print_meta: BOS token        = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 128001 '<|end_of_text|>'
llm_load_print_meta: LF token         = 128 'Ä'
llm_load_print_meta: EOT token        = 128009 '<|eot_id|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes
llm_load_tensors: ggml ctx size =    0.32 MiB
llama_model_load: error loading model: check_tensor_dims: tensor 'output_norm.weight' not found
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '/data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf'
{"tid":"140151253651456","timestamp":1715426511,"level":"ERR","function":"load_model","line":687,"msg":"unable to load model","model":"/data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf"}

I expected to see out-of-memory errors, but instead we get "tensor 'output_norm.weight' not found". So maybe the convert.py script did not produce correct output?
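
One way to check what actually ended up in the converted file is to list its tensors with the gguf Python package that ships with llama.cpp (gguf-py). A quick sketch, assuming gguf is installed (e.g. pip install gguf):

# List the tensors in the converted GGUF and look for the one the loader wants.
# Assumes the gguf package from llama.cpp's gguf-py (pip install gguf).
from gguf import GGUFReader

reader = GGUFReader("/data/Llama/Meta-Llama-3-70B-Instruct.f16.gguf")
names = [t.name for t in reader.tensors]

print(len(names), "tensors")
print("output_norm.weight present:", "output_norm.weight" in names)
# An 80-layer model should have blocks blk.0 ... blk.79; missing blocks point to
# an incomplete conversion rather than an out-of-memory condition.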

@ggerganov
Owner

Use the convert-hf-to-gguf.py script instead:

python3.12 ~/Software/AI/llama.cpp/convert-hf-to-gguf.py Meta-Llama-3-70B-Instruct/ --outfile Meta-Llama-3-70B-Instruct.f16.gguf --outtype f16

@tigran123
Author

Thank you, I tried that just now and it failed with the following error:

Traceback (most recent call last):
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 2511, in <module>
    main()
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 2505, in main
    model_instance.write()
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 273, in write
    self.write_tensors()
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 1287, in write_tensors
    super().write_tensors()
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 220, in write_tensors
    for name, data_torch in self.get_tensors():
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 142, in get_tensors
    raise ValueError(f"Mismatch between weight map and model parts for tensor names: {sym_diff}")
ValueError: Mismatch between weight map and model parts for tensor names: {'model.layers.75.input_layernorm.weight', 'model.layers.66.self_attn.o_proj.weight', 'model.layers.78.self_attn.k_proj.weight', 'model.layers.79.self_attn.k_proj.weight', 'model.layers.73.input_layernorm.weight', 'model.layers.66.input_layernorm.weight', 'model.layers.76.input_layernorm.weight', 'model.layers.73.self_attn.o_proj.weight', 'model.layers.79.mlp.up_proj.weight', 'model.layers.73.self_attn.q_proj.weight', 'model.layers.74.self_attn.k_proj.weight', 'model.layers.69.mlp.up_proj.weight', 'model.layers.72.self_attn.o_proj.weight', 'model.layers.68.mlp.up_proj.weight', 'model.layers.67.input_layernorm.weight', 'model.layers.69.self_attn.o_proj.weight', 'model.layers.67.mlp.down_proj.weight', 'model.layers.75.self_attn.k_proj.weight', 'model.layers.74.self_attn.v_proj.weight', 'model.layers.71.self_attn.o_proj.weight', 'model.layers.70.mlp.up_proj.weight', 'model.layers.72.mlp.down_proj.weight', 'model.layers.66.self_attn.v_proj.weight', 'model.layers.71.input_layernorm.weight', 'model.layers.73.mlp.gate_proj.weight', 'model.layers.79.self_attn.o_proj.weight', 'model.layers.78.post_attention_layernorm.weight', 'model.layers.76.self_attn.v_proj.weight', 'model.layers.76.self_attn.q_proj.weight', 'model.layers.74.mlp.gate_proj.weight', 'model.layers.73.mlp.up_proj.weight', 'model.layers.77.self_attn.q_proj.weight', 'model.layers.79.mlp.down_proj.weight', 'model.layers.77.mlp.down_proj.weight', 'model.layers.70.post_attention_layernorm.weight', 'model.layers.78.mlp.down_proj.weight', 'model.layers.66.mlp.down_proj.weight', 'model.layers.71.self_attn.v_proj.weight', 'model.layers.66.self_attn.q_proj.weight', 'model.layers.72.mlp.up_proj.weight', 'model.layers.75.self_attn.o_proj.weight', 'model.layers.70.mlp.gate_proj.weight', 'model.layers.74.self_attn.q_proj.weight', 'model.layers.79.post_attention_layernorm.weight', 'model.norm.weight', 'model.layers.67.self_attn.q_proj.weight', 'model.layers.76.mlp.gate_proj.weight', 'model.layers.73.post_attention_layernorm.weight', 'model.layers.68.mlp.down_proj.weight', 'model.layers.75.mlp.gate_proj.weight', 'model.layers.73.self_attn.k_proj.weight', 'model.layers.72.self_attn.q_proj.weight', 'model.layers.79.mlp.gate_proj.weight', 'model.layers.72.self_attn.k_proj.weight', 'model.layers.72.self_attn.v_proj.weight', 'model.layers.67.post_attention_layernorm.weight', 'model.layers.68.self_attn.o_proj.weight', 'model.layers.70.self_attn.q_proj.weight', 'model.layers.79.input_layernorm.weight', 'model.layers.67.mlp.gate_proj.weight', 'model.layers.76.self_attn.o_proj.weight', 'model.layers.77.mlp.gate_proj.weight', 'model.layers.76.mlp.up_proj.weight', 'model.layers.69.self_attn.k_proj.weight', 'model.layers.70.input_layernorm.weight', 'model.layers.70.mlp.down_proj.weight', 'model.layers.72.post_attention_layernorm.weight', 'model.layers.71.mlp.gate_proj.weight', 'model.layers.71.self_attn.k_proj.weight', 'model.layers.75.self_attn.q_proj.weight', 'model.layers.67.mlp.up_proj.weight', 'model.layers.70.self_attn.v_proj.weight', 'model.layers.67.self_attn.k_proj.weight', 'model.layers.77.self_attn.o_proj.weight', 'model.layers.66.self_attn.k_proj.weight', 'model.layers.68.self_attn.v_proj.weight', 'model.layers.78.self_attn.q_proj.weight', 'model.layers.79.self_attn.q_proj.weight', 'model.layers.76.post_attention_layernorm.weight', 'model.layers.77.self_attn.k_proj.weight', 'model.layers.71.mlp.up_proj.weight', 'model.layers.71.post_attention_layernorm.weight', 
'model.layers.69.input_layernorm.weight', 'model.layers.68.self_attn.q_proj.weight', 'model.layers.74.self_attn.o_proj.weight', 'model.layers.78.mlp.up_proj.weight', 'model.layers.69.self_attn.q_proj.weight', 'model.layers.78.self_attn.o_proj.weight', 'model.layers.69.post_attention_layernorm.weight', 'model.layers.78.input_layernorm.weight', 'model.layers.73.mlp.down_proj.weight', 'model.layers.66.mlp.up_proj.weight', 'model.layers.72.input_layernorm.weight', 'model.layers.76.self_attn.k_proj.weight', 'model.layers.69.self_attn.v_proj.weight', 'model.layers.66.mlp.gate_proj.weight', 'model.layers.77.input_layernorm.weight', 'model.layers.75.mlp.down_proj.weight', 'model.layers.68.input_layernorm.weight', 'model.layers.76.mlp.down_proj.weight', 'model.layers.71.mlp.down_proj.weight', 'model.layers.70.self_attn.k_proj.weight', 'model.layers.77.post_attention_layernorm.weight', 'model.layers.75.self_attn.v_proj.weight', 'model.layers.77.mlp.up_proj.weight', 'model.layers.68.post_attention_layernorm.weight', 'model.layers.70.self_attn.o_proj.weight', 'model.layers.67.self_attn.o_proj.weight', 'model.layers.74.mlp.down_proj.weight', 'model.layers.74.mlp.up_proj.weight', 'model.layers.78.self_attn.v_proj.weight', 'model.layers.67.self_attn.v_proj.weight', 'model.layers.75.post_attention_layernorm.weight', 'model.layers.79.self_attn.v_proj.weight', 'model.layers.73.self_attn.v_proj.weight', 'model.layers.69.mlp.gate_proj.weight', 'model.layers.66.post_attention_layernorm.weight', 'model.layers.75.mlp.up_proj.weight', 'model.layers.77.self_attn.v_proj.weight', 'model.layers.68.self_attn.k_proj.weight', 'model.layers.72.mlp.gate_proj.weight', 'model.layers.71.self_attn.q_proj.weight', 'model.layers.69.mlp.down_proj.weight', 'model.layers.68.mlp.gate_proj.weight', 'model.layers.74.input_layernorm.weight', 'model.layers.74.post_attention_layernorm.weight', 'model.layers.78.mlp.gate_proj.weight'}
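
The check that raises this ValueError compares the tensor names listed in model.safetensors.index.json against the names actually found in the *.safetensors parts on disk. A sketch of the idea (not the script's exact code; it assumes the safetensors and PyTorch packages are installed):

# Sketch of the consistency check behind the ValueError: compare the tensor
# names promised by the index against those actually present in the parts.
import json
from pathlib import Path
from safetensors import safe_open  # framework="pt" assumes PyTorch is installed

model_dir = Path("Meta-Llama-3-70B-Instruct")
index = json.loads((model_dir / "model.safetensors.index.json").read_text())
expected = set(index["weight_map"].keys())

found = set()
for part in sorted(model_dir.glob("model-*-of-*.safetensors")):
    with safe_open(str(part), framework="pt") as f:
        found.update(f.keys())

sym_diff = expected ^ found
if sym_diff:
    print(len(sym_diff), "tensor names missing or unexpected, e.g.", sorted(sym_diff)[:3])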

@tigran123
Author

Note that there is no original subdirectory in the model's directory. I thought it was superfluous and deleted it; it contained the *.pth files. But all the safetensors files and everything else are in the model directory, i.e. it looks like this:

$ l Meta-Llama-3-70B-Instruct
total 114412648
-rw-rw-r-- 1 tigran tigran        654 May 11 08:56 config.json
-rw-rw-r-- 1 tigran tigran        187 May 11 08:59 generation_config.json
-rw-rw-r-- 1 tigran tigran       7801 May 11 08:59 LICENSE
-rw-rw-r-- 1 tigran tigran 4584408808 May 11 09:01 model-00001-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664167376 May 11 08:53 model-00002-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4999711704 May 11 08:59 model-00003-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4966157032 May 11 08:52 model-00004-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664134408 May 11 08:52 model-00005-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664167408 May 11 08:50 model-00006-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664167408 May 11 08:56 model-00007-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4999711728 May 11 08:58 model-00008-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4966157056 May 11 09:04 model-00009-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664134408 May 11 08:59 model-00010-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664167408 May 11 08:54 model-00011-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664167408 May 11 08:50 model-00012-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4999711728 May 11 09:02 model-00013-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4966157056 May 11 08:48 model-00014-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664134408 May 11 09:00 model-00015-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664167408 May 11 08:54 model-00016-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664167408 May 11 08:51 model-00017-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4999711728 May 11 08:48 model-00018-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4966157056 May 11 08:56 model-00019-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664134408 May 11 08:55 model-00020-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664167408 May 11 09:01 model-00021-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4664167408 May 11 08:57 model-00022-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4999711728 May 11 08:49 model-00023-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 4966157056 May 11 09:03 model-00024-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran 2101346432 May 11 09:02 model-00030-of-00030.safetensors
-rw-rw-r-- 1 tigran tigran      59615 May 11 08:54 model.safetensors.index.json
-rw-rw-r-- 1 tigran tigran      37811 May 11 08:54 README.md
-rw-rw-r-- 1 tigran tigran         73 May 11 09:03 special_tokens_map.json
-rw-rw-r-- 1 tigran tigran      50982 May 11 08:54 tokenizer_config.json
-rw-rw-r-- 1 tigran tigran    9085698 May 11 08:54 tokenizer.json
-rw-rw-r-- 1 tigran tigran       4696 May 11 09:03 USE_POLICY.md

@tigran123
Author

log.txt

This is the complete log.txt with the stdout + stderr of the above invocation of convert-hf-to-gguf.py.

@Galunid
Collaborator

Galunid commented May 11, 2024

@compilade Mind taking a look at the log.txt attached above?

INFO:hf-to-gguf:gguf: loading model part 'model-00030-of-00030.safetensors'
INFO:hf-to-gguf:output.weight,               torch.bfloat16 --> float16, shape = {8192, 128256}
Traceback (most recent call last):
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 2511, in <module>
    main()
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 2505, in main
    model_instance.write()
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 273, in write
    self.write_tensors()
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 1287, in write_tensors
    super().write_tensors()
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 220, in write_tensors
    for name, data_torch in self.get_tensors():
  File "/home/tigran/Software/AI/llama.cpp/convert-hf-to-gguf.py", line 142, in get_tensors
    raise ValueError(f"Mismatch between weight map and model parts for tensor names: {sym_diff}")
ValueError: Mismatch between weight map and model parts for tensor names: {'model.layers.75.input_layernorm.weight', 'model.layers.66.self_attn.o_proj.weight', 'model.layers.78.self_attn.k_proj.weight', 'model.layers.79.self_attn.k_proj.weight', 'model.layers.73.input_layernorm.weight', 'model.layers.66.input_layernorm.weight', 'model.layers.76.input_layernorm.weight', 'model.layers.73.self_attn.o_proj.weight', 'model.layers.79.mlp.up_proj.weight', 'model.layers.73.self_attn.q_proj.weight', 'model.layers.74.self_attn.k_proj.weight', 'model.layers.69.mlp.up_proj.weight', 'model.layers.72.self_attn.o_proj.weight', 'model.layers.68.mlp.up_proj.weight', 'model.layers.67.input_layernorm.weight', 'model.layers.69.self_attn.o_proj.weight', 'model.layers.67.mlp.down_proj.weight', 'model.layers.75.self_attn.k_proj.weight', 'model.layers.74.self_attn.v_proj.weight', 'model.layers.71.self_attn.o_proj.weight', 'model.layers.70.mlp.up_proj.weight', 'model.layers.72.mlp.down_proj.weight', 'model.layers.66.self_attn.v_proj.weight', 'model.layers.71.input_layernorm.weight', 'model.layers.73.mlp.gate_proj.weight', 'model.layers.79.self_attn.o_proj.weight', 'model.layers.78.post_attention_layernorm.weight', 'model.layers.76.self_attn.v_proj.weight', 'model.layers.76.self_attn.q_proj.weight', 'model.layers.74.mlp.gate_proj.weight', 'model.layers.73.mlp.up_proj.weight', 'model.layers.77.self_attn.q_proj.weight', 'model.layers.79.mlp.down_proj.weight', 'model.layers.77.mlp.down_proj.weight', 'model.layers.70.post_attention_layernorm.weight', 'model.layers.78.mlp.down_proj.weight', 'model.layers.66.mlp.down_proj.weight', 'model.layers.71.self_attn.v_proj.weight', 'model.layers.66.self_attn.q_proj.weight', 'model.layers.72.mlp.up_proj.weight', 'model.layers.75.self_attn.o_proj.weight', 'model.layers.70.mlp.gate_proj.weight', 'model.layers.74.self_attn.q_proj.weight', 'model.layers.79.post_attention_layernorm.weight', 'model.norm.weight', 'model.layers.67.self_attn.q_proj.weight', 'model.layers.76.mlp.gate_proj.weight', 'model.layers.73.post_attention_layernorm.weight', 'model.layers.68.mlp.down_proj.weight', 'model.layers.75.mlp.gate_proj.weight', 'model.layers.73.self_attn.k_proj.weight', 'model.layers.72.self_attn.q_proj.weight', 'model.layers.79.mlp.gate_proj.weight', 'model.layers.72.self_attn.k_proj.weight', 'model.layers.72.self_attn.v_proj.weight', 'model.layers.67.post_attention_layernorm.weight', 'model.layers.68.self_attn.o_proj.weight', 'model.layers.70.self_attn.q_proj.weight', 'model.layers.79.input_layernorm.weight', 'model.layers.67.mlp.gate_proj.weight', 'model.layers.76.self_attn.o_proj.weight', 'model.layers.77.mlp.gate_proj.weight', 'model.layers.76.mlp.up_proj.weight', 'model.layers.69.self_attn.k_proj.weight', 'model.layers.70.input_layernorm.weight', 'model.layers.70.mlp.down_proj.weight', 'model.layers.72.post_attention_layernorm.weight', 'model.layers.71.mlp.gate_proj.weight', 'model.layers.71.self_attn.k_proj.weight', 'model.layers.75.self_attn.q_proj.weight', 'model.layers.67.mlp.up_proj.weight', 'model.layers.70.self_attn.v_proj.weight', 'model.layers.67.self_attn.k_proj.weight', 'model.layers.77.self_attn.o_proj.weight', 'model.layers.66.self_attn.k_proj.weight', 'model.layers.68.self_attn.v_proj.weight', 'model.layers.78.self_attn.q_proj.weight', 'model.layers.79.self_attn.q_proj.weight', 'model.layers.76.post_attention_layernorm.weight', 'model.layers.77.self_attn.k_proj.weight', 'model.layers.71.mlp.up_proj.weight', 'model.layers.71.post_attention_layernorm.weight', 
'model.layers.69.input_layernorm.weight', 'model.layers.68.self_attn.q_proj.weight', 'model.layers.74.self_attn.o_proj.weight', 'model.layers.78.mlp.up_proj.weight', 'model.layers.69.self_attn.q_proj.weight', 'model.layers.78.self_attn.o_proj.weight', 'model.layers.69.post_attention_layernorm.weight', 'model.layers.78.input_layernorm.weight', 'model.layers.73.mlp.down_proj.weight', 'model.layers.66.mlp.up_proj.weight', 'model.layers.72.input_layernorm.weight', 'model.layers.76.self_attn.k_proj.weight', 'model.layers.69.self_attn.v_proj.weight', 'model.layers.66.mlp.gate_proj.weight', 'model.layers.77.input_layernorm.weight', 'model.layers.75.mlp.down_proj.weight', 'model.layers.68.input_layernorm.weight', 'model.layers.76.mlp.down_proj.weight', 'model.layers.71.mlp.down_proj.weight', 'model.layers.70.self_attn.k_proj.weight', 'model.layers.77.post_attention_layernorm.weight', 'model.layers.75.self_attn.v_proj.weight', 'model.layers.77.mlp.up_proj.weight', 'model.layers.68.post_attention_layernorm.weight', 'model.layers.70.self_attn.o_proj.weight', 'model.layers.67.self_attn.o_proj.weight', 'model.layers.74.mlp.down_proj.weight', 'model.layers.74.mlp.up_proj.weight', 'model.layers.78.self_attn.v_proj.weight', 'model.layers.67.self_attn.v_proj.weight', 'model.layers.75.post_attention_layernorm.weight', 'model.layers.79.self_attn.v_proj.weight', 'model.layers.73.self_attn.v_proj.weight', 'model.layers.69.mlp.gate_proj.weight', 'model.layers.66.post_attention_layernorm.weight', 'model.layers.75.mlp.up_proj.weight', 'model.layers.77.self_attn.v_proj.weight', 'model.layers.68.self_attn.k_proj.weight', 'model.layers.72.mlp.gate_proj.weight', 'model.layers.71.self_attn.q_proj.weight', 'model.layers.69.mlp.down_proj.weight', 'model.layers.68.mlp.gate_proj.weight', 'model.layers.74.input_layernorm.weight', 'model.layers.74.post_attention_layernorm.weight', 'model.layers.78.mlp.gate_proj.weight'}

@compilade
Collaborator

@tigran123 It looks like the model parts from model-00025-of-00030.safetensors to model-00029-of-00030.safetensors are missing from your model directory. This is what is causing the missing tensors.
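
A quick way to spot this kind of gap is to compare the shard filenames referenced by the index against the files that are actually on disk. A minimal sketch:

# List the shard files referenced by the index that are not present on disk.
import json
from pathlib import Path

model_dir = Path("Meta-Llama-3-70B-Instruct")
index = json.loads((model_dir / "model.safetensors.index.json").read_text())
referenced = set(index["weight_map"].values())

missing = sorted(p for p in referenced if not (model_dir / p).exists())
print("missing parts:", missing if missing else "none")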

@tigran123
Author

tigran123 commented May 11, 2024

Oh dear, I am so sorry -- I should have noticed that! I am so embarrassed; I must really be getting old to miss such a trivial thing... I will download the missing files and report back if there are any problems with conversion and/or quantizing.

@tigran123
Author

Just to confirm that @compilade was absolutely correct: after downloading the required files, everything worked. The f16 model was generated; it failed to load because it required 138GB and I only have 128GB of RAM, so I quantised it to Q8_0, which loaded just fine. This issue can be closed.
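
Those sizes line up roughly with a back-of-the-envelope estimate for a ~70B-parameter model (approximate bits-per-weight figures, ignoring GGUF metadata overhead):

# Back-of-the-envelope file sizes for ~70.6e9 parameters (Llama 3 70B).
# Bits-per-weight figures are approximate; metadata overhead is ignored.
PARAMS = 70.6e9
for name, bpw in [("f16", 16.0), ("Q8_0", 8.5), ("Q6_K", 6.6), ("Q4_K_M", 4.8)]:
    print(f"{name:6s} ~{PARAMS * bpw / 8 / 1e9:4.0f} GB")

So f16 at roughly 140GB cannot fit in 128GB of RAM, while Q8_0 at roughly 75GB can.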
