
Gibberish response from server and main exits on M1 macstudio ultra with gpu (cpu ok) #7159

Closed
jrozentur opened this issue May 9, 2024 · 4 comments


@jrozentur

Please include information about your system, the steps to reproduce the bug, and the version of llama.cpp that you are using. If possible, please provide a minimal code example that reproduces the bug.

I had llama.cpp working on this system; I'm not sure what changed, possibly an OS update.
I am using two models: codellama-7b.Q4_0.gguf and, derived from it, DuckDB-NSQL-7B-v0.1-q8_0.gguf.
I first started seeing gibberish/binary responses to standard queries on the llama-cpp-python server, such as 'What is the capital of France?'

  • Installed fresh llama.cpp from GitHub (4426e29), built with make
  • Downloaded fresh CodeLlama-7b-hf
  • Converted and quantized using the built tools: python3 convert.py model/CodeLlama-7b-hf/ --outtype f16; ./quantize model/ggml-model-f16.gguf ggml-model-f16-q4.0.gguf Q4_0
  • ./main -m ggml-model-f16-q4.0.gguf
    *** this exits immediately
  • ./server -m ggml-model-f16-q4.0.gguf
    *** launches and displays the home page, but queries return garbage characters
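For reference, the garbled server responses can be reproduced with a plain completion request against the built-in server. The sketch below assumes llama.cpp's `/completion` endpoint on the default port 8080; adjust the base URL to match your setup:

```python
import json
import urllib.request

def build_completion_request(prompt, n_predict=64, base_url="http://127.0.0.1:8080"):
    """Build an HTTP POST request for llama.cpp's /completion endpoint."""
    payload = {"prompt": prompt, "n_predict": n_predict}
    return urllib.request.Request(
        base_url + "/completion",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_completion_request("What is the capital of France?")
# resp = urllib.request.urlopen(req)           # requires a running ./server
# print(json.loads(resp.read())["content"])    # garbled characters when the bug hits
```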

log from ./main:

(llama) julianrozentur@Julians-Mac-Studio llama.cpp % ./main -m ggml-model-f16-q4.0.gguf
Log start
main: build = 2824 (4426e29)
main: built with Apple clang version 15.0.0 (clang-1500.3.9.4) for arm64-apple-darwin23.4.0
main: seed = 1715228873
llama_model_loader: loaded meta data with 23 key-value pairs and 291 tensors from .... (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = model_base
llama_model_loader: - kv 2: llama.vocab_size u32 = 32016
llama_model_loader: - kv 3: llama.context_length u32 = 16384
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.block_count u32 = 32
llama_model_loader: - kv 6: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 7: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 8: llama.attention.head_count u32 = 32
llama_model_loader: - kv 9: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 10: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 11: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 12: general.file_type u32 = 2
llama_model_loader: - kv 13: tokenizer.ggml.model str = llama
llama_model_loader: - kv 14: tokenizer.ggml.tokens arr[str,32016] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 15: tokenizer.ggml.scores arr[f32,32016] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 16: tokenizer.ggml.token_type arr[i32,32016] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: mismatch in special tokens definition ( 264/32016 vs 259/32016 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32016
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 16384
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 16384
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name = model_base
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.30 MiB
ggml_backend_metal_log_allocated_size: allocated buffer, size = 3577.62 MiB, ( 3577.69 / 49152.00)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: Metal buffer size = 3577.61 MiB
llm_load_tensors: CPU buffer size = 70.35 MiB
..................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Ultra
ggml_metal_init: picking default device: Apple M1 Ultra
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/julianrozentur/llamacpp_server/llamacpp_src/llama.cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M1 Ultra
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 51539.61 MB
llama_kv_cache_init: Metal KV buffer size = 256.00 MiB
llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.12 MiB
llama_new_context_with_model: Metal compute buffer size = 70.53 MiB
llama_new_context_with_model: CPU compute buffer size = 9.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
ggml_metal_graph_compute: command buffer 0 failed with status 0

system_info: n_threads = 16 / 20 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 2048, n_predict = -1, n_keep = 1

ggml_metal_graph_compute: command buffer 0 failed with status 0
...
ggml_metal_graph_compute: command buffer 0 failed with status 0
▅ggml_metal_graph_compute: command buffer 0 failed with status 0
ggml_metal_graph_compute: command buffer 0 failed with status 0
ggml_metal_graph_compute: command buffer 0 failed with status 0
▅ggml_metal_graph_compute: command buffer 0 failed with status 0
ggml_metal_graph_compute: command buffer 0 failed with status 0
▅ggml_metal_graph_compute: command buffer 0 failed with status 0
ggml_metal_graph_compute: command buffer 0 failed with status 0
▅ggml_metal_graph_compute: command buffer 0 failed with status 0
ggml_metal_graph_compute: command buffer 0 failed with status 0
ggml_metal_graph_compute: command buffer 0 failed with status 0
[end of text]

llama_print_timings: load time = 52.61 ms
llama_print_timings: sample time = 1.78 ms / 149 runs ( 0.01 ms per token, 83707.87 tokens per second)
llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second)
llama_print_timings: eval time = 49.78 ms / 149 runs ( 0.33 ms per token, 2993.41 tokens per second)
llama_print_timings: total time = 65.96 ms / 150 tokens
ggml_metal_free: deallocating
Log end
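Since the title notes that CPU works, a quick way to confirm the failure is Metal-specific is to rerun with GPU offload disabled via `-ngl 0` (a diagnostic sketch, not part of the original report):

```shell
# Disable Metal offload entirely; if output is coherent here,
# the problem is isolated to the GPU path.
./main -m ggml-model-f16-q4.0.gguf -ngl 0 -p "What is the capital of France?"
```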

@ggerganov
Owner

Can you check if this change is the reason: 26458af

@jrozentur
Author

The change 26458af made no difference, but the problem resolved itself after a reboot... It now works both with downloaded models and with those I converted manually. The same binaries also worked on a MacBook Air M1.
What could it be, then: Apple drivers, hardware, or a subtle memory bug? Could the model files be slightly corrupt, so that you need to hit the right spot?
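One way to rule out the corrupt-model-file hypothesis is to checksum the GGUF and compare against a known-good hash (e.g. from the original download, or from the working MacBook Air). This is a generic sketch; llama.cpp itself does not ship such a tool:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a large GGUF file through SHA-256 without loading it all into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# print(sha256_of("ggml-model-f16-q4.0.gguf"))  # compare against a known-good hash
```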

@arnfaldur

It's hard to say. You should close the issue since it seems to have been a fluke and it's resolved.

@jrozentur
Author

Will keep an eye on it and see whether other stacks are runnable when it happens.
Closing for now, since it can no longer be reproduced.
