
Any support for the MUL_MAT operation on GPUs lacking the required features? (macOS Intel / AMD GPU) #796

Open
posojeg opened this issue Apr 14, 2024 · 2 comments


posojeg commented Apr 14, 2024

ggml_metal_graph_compute_block_invoke: error: unsupported op 'MUL_MAT'
GGML_ASSERT: /Users/edwin/sd.cpp/ggml/src/ggml-metal.m:834: !"unsupported op"

Why does this issue never seem to get resolved? Would it be hard to fall back to the CPU for the MUL_MAT operation when the GPU lacks the required features?

OS: macos 14 (intel)
GPU: amd rx 560

leejet/stable-diffusion.cpp#229


posojeg commented Apr 14, 2024

edwin@Edwins-iMac-Pro sdcpp % ./sd -m xxmix.safetensors -p "a girl <lora:lcm_lora15:1>" --steps 4 --lora-model-dir . -v --cfg-scale 1 --sampling-method lcm        
Option: 
    n_threads:         2
    mode:              txt2img
    model_path:        xxmix.safetensors
    wtype:             unspecified
    vae_path:          
    taesd_path:        
    esrgan_path:       
    controlnet_path:   
    embeddings_path:   
    stacked_id_embeddings_path:   
    input_id_images_path:   
    style ratio:       20.00
    normzalize input image :  false
    output_path:       output.png
    init_img:          
    control_image:     
    clip on cpu:       false
    controlnet cpu:    false
    vae decoder on cpu:false
    strength(control): 0.90
    prompt:            a girl <lora:lcm_lora15:1>
    negative_prompt:   
    min_cfg:           1.00
    cfg_scale:         1.00
    clip_skip:         -1
    width:             512
    height:            512
    sample_method:     lcm
    schedule:          default
    sample_steps:      4
    strength(img2img): 0.75
    rng:               cuda
    seed:              42
    batch_count:       1
    vae_tiling:        false
    upscale_repeats:   1
System Info: 
    BLAS = 1
    SSE3 = 1
    AVX = 1
    AVX2 = 1
    AVX512 = 0
    AVX512_VBMI = 0
    AVX512_VNNI = 0
    FMA = 1
    NEON = 0
    ARM_FMA = 0
    F16C = 1
    FP16_VA = 0
    WASM_SIMD = 0
    VSX = 0
[DEBUG] stable-diffusion.cpp:155  - Using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: AMD Radeon RX 560
ggml_metal_init: picking default device: AMD Radeon RX 560
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/edwin/sdcpp/ggml-metal.metal'
ggml_metal_init: GPU name:   AMD Radeon RX 560
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: simdgroup reduction support   = false
ggml_metal_init: simdgroup matrix mul. support = false
ggml_metal_init: hasUnifiedMemory              = false
ggml_metal_init: recommendedMaxWorkingSetSize  =  4294.97 MB
ggml_metal_init: skipping kernel_soft_max                  (not supported)
ggml_metal_init: skipping kernel_soft_max_4                (not supported)
ggml_metal_init: skipping kernel_rms_norm                  (not supported)
ggml_metal_init: skipping kernel_group_norm                (not supported)
ggml_metal_init: skipping kernel_mul_mv_f32_f32            (not supported)
ggml_metal_init: skipping kernel_mul_mv_f16_f16            (not supported)
ggml_metal_init: skipping kernel_mul_mv_f16_f32            (not supported)
ggml_metal_init: skipping kernel_mul_mv_f16_f32_1row       (not supported)
ggml_metal_init: skipping kernel_mul_mv_f16_f32_l4         (not supported)
ggml_metal_init: skipping kernel_mul_mv_q4_0_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mv_q4_1_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mv_q5_0_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mv_q5_1_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mv_q8_0_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mv_q2_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mv_q3_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mv_q4_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mv_q5_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mv_q6_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mv_iq2_xxs_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_iq2_xs_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mv_iq3_xxs_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_iq3_s_f32          (not supported)
ggml_metal_init: skipping kernel_mul_mv_iq2_s_f32          (not supported)
ggml_metal_init: skipping kernel_mul_mv_iq1_s_f32          (not supported)
ggml_metal_init: skipping kernel_mul_mv_iq4_nl_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mv_iq4_xs_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_f32_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_f16_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_q4_0_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_q4_1_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_q5_0_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_q5_1_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_q8_0_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_q2_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_q3_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_q4_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_q5_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_q6_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_iq2_xxs_f32     (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_iq2_xs_f32      (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_iq3_xxs_f32     (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_iq3_s_f32       (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_iq2_s_f32       (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_iq1_s_f32       (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_iq4_nl_f32      (not supported)
ggml_metal_init: skipping kernel_mul_mv_id_iq4_xs_f32      (not supported)
ggml_metal_init: skipping kernel_mul_mm_f32_f32            (not supported)
ggml_metal_init: skipping kernel_mul_mm_f16_f32            (not supported)
ggml_metal_init: skipping kernel_mul_mm_q4_0_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q4_1_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q5_0_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q5_1_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q8_0_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q2_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q3_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q4_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q5_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_q6_K_f32           (not supported)
ggml_metal_init: skipping kernel_mul_mm_iq2_xxs_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_iq2_xs_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mm_iq3_xxs_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_iq3_s_f32          (not supported)
ggml_metal_init: skipping kernel_mul_mm_iq2_s_f32          (not supported)
ggml_metal_init: skipping kernel_mul_mm_iq1_s_f32          (not supported)
ggml_metal_init: skipping kernel_mul_mm_iq4_nl_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mm_iq4_xs_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_f32_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_f16_f32         (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q4_0_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q4_1_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q5_0_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q5_1_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q8_0_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q2_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q3_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q4_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q5_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_q6_K_f32        (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_iq2_xxs_f32     (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_iq2_xs_f32      (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_iq3_xxs_f32     (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_iq3_s_f32       (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_iq2_s_f32       (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_iq1_s_f32       (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_iq4_nl_f32      (not supported)
ggml_metal_init: skipping kernel_mul_mm_id_iq4_xs_f32      (not supported)
[INFO ] stable-diffusion.cpp:171  - loading model from 'xxmix.safetensors'
[INFO ] model.cpp:735  - load xxmix.safetensors using safetensors format
[DEBUG] model.cpp:801  - init from 'xxmix.safetensors'
[INFO ] stable-diffusion.cpp:194  - Stable Diffusion 1.x 
[INFO ] stable-diffusion.cpp:200  - Stable Diffusion weight type: f16
[DEBUG] stable-diffusion.cpp:201  - ggml tensor size = 432 bytes
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   235.07 MiB, (  243.41 /  4096.00)
[DEBUG] ggml_extend.hpp:890  - clip params backend buffer size =  235.06 MB(VRAM) (196 tensors)
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =  1640.25 MiB, ( 1883.66 /  4096.00)
[DEBUG] ggml_extend.hpp:890  - unet params backend buffer size =  1640.25 MB(VRAM) (686 tensors)
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =    94.47 MiB, ( 1978.13 /  4096.00)
[DEBUG] ggml_extend.hpp:890  - vae params backend buffer size =  94.47 MB(VRAM) (140 tensors)
[DEBUG] stable-diffusion.cpp:302  - loading vocab
[DEBUG] clip.hpp:164  - vocab size: 49408
[DEBUG] clip.hpp:175  -  trigger word img already in vocab
[DEBUG] stable-diffusion.cpp:322  - loading weights
[DEBUG] model.cpp:1373 - loading tensors from xxmix.safetensors
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_mlp_fc1.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_mlp_fc1.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_mlp_fc1.lora_up.weight | f16 | 2 [64, 3072, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_mlp_fc2.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_mlp_fc2.lora_down.weight | f16 | 2 [3072, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_mlp_fc2.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_k_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_k_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_k_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_out_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_out_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_out_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_q_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_q_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_q_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_v_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_v_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_0_self_attn_v_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_mlp_fc1.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_mlp_fc1.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_mlp_fc1.lora_up.weight | f16 | 2 [64, 3072, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_mlp_fc2.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_mlp_fc2.lora_down.weight | f16 | 2 [3072, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_mlp_fc2.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_k_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_k_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_k_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_out_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_out_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_out_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_q_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_q_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_q_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_v_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_v_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_10_self_attn_v_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_mlp_fc1.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_mlp_fc1.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_mlp_fc1.lora_up.weight | f16 | 2 [64, 3072, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_mlp_fc2.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_mlp_fc2.lora_down.weight | f16 | 2 [3072, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_mlp_fc2.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_k_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_k_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_k_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_out_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_out_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_out_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_q_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_q_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_q_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_v_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_v_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_11_self_attn_v_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_mlp_fc1.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_mlp_fc1.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_mlp_fc1.lora_up.weight | f16 | 2 [64, 3072, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_mlp_fc2.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_mlp_fc2.lora_down.weight | f16 | 2 [3072, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_mlp_fc2.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_k_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_k_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_k_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_out_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_out_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_out_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_q_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_q_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_q_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_v_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_v_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_1_self_attn_v_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_mlp_fc1.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_mlp_fc1.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_mlp_fc1.lora_up.weight | f16 | 2 [64, 3072, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_mlp_fc2.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_mlp_fc2.lora_down.weight | f16 | 2 [3072, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_mlp_fc2.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_k_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_k_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_k_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_out_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_out_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_out_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_q_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_q_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_q_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_v_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_v_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_2_self_attn_v_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_mlp_fc1.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_mlp_fc1.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_mlp_fc1.lora_up.weight | f16 | 2 [64, 3072, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_mlp_fc2.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_mlp_fc2.lora_down.weight | f16 | 2 [3072, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_mlp_fc2.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_k_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_k_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_k_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_out_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_out_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_out_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_q_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_q_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_q_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_v_proj.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_v_proj.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_3_self_attn_v_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_4_mlp_fc1.alpha | f16 | 0 [1, 1, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_4_mlp_fc1.lora_down.weight | f16 | 2 [768, 64, 1, 1, 1]' in model file
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_4_mlp_fc1.lora_up.weight | f16 | 2 [64, 3072, 1, 1, 1]' in model file
[... the same "unknown tensor" INFO line repeats for the remaining LoRA text-encoder tensors of layers 4–9 (mlp_fc1/mlp_fc2 and self_attn q/k/v/out projections; alpha, lora_down.weight, lora_up.weight each) ...]
[INFO ] model.cpp:1519 - unknown tensor 'lora.cond_stage_model_transformer_text_model_encoder_layers_9_self_attn_v_proj.lora_up.weight | f16 | 2 [64, 768, 1, 1, 1]' in model file
[INFO ] stable-diffusion.cpp:421  - total params memory size = 1969.78MB (VRAM 1969.78MB, RAM 0.00MB): clip 235.06MB(VRAM), unet 1640.25MB(VRAM), vae 94.47MB(VRAM), controlnet 0.00MB(VRAM), pmid 0.00MB(VRAM)
[INFO ] stable-diffusion.cpp:425  - loading model from 'xxmix.safetensors' completed, taking 2.31s
[INFO ] stable-diffusion.cpp:442  - running in eps-prediction mode
[DEBUG] stable-diffusion.cpp:470  - finished loaded file
[DEBUG] stable-diffusion.cpp:1557 - txt2img 512x512
[DEBUG] stable-diffusion.cpp:1599 - lora lcm_lora15:1.00
[DEBUG] stable-diffusion.cpp:1603 - prompt after extract and remove lora: "a girl "
[INFO ] stable-diffusion.cpp:553  - Attempting to apply 1 LoRAs
[INFO ] model.cpp:735  - load ./lcm_lora15.safetensors using safetensors format
[DEBUG] model.cpp:801  - init from './lcm_lora15.safetensors'
[INFO ] lora.hpp:39   - loading LoRA from './lcm_lora15.safetensors'
[DEBUG] model.cpp:1373 - loading tensors from ./lcm_lora15.safetensors
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   128.29 MiB, ( 2106.42 /  4096.00)
[DEBUG] ggml_extend.hpp:890  - lora params backend buffer size =  128.28 MB(VRAM) (10240 tensors)
[DEBUG] model.cpp:1373 - loading tensors from ./lcm_lora15.safetensors
[DEBUG] lora.hpp:75   - finished loaded lora
[DEBUG] lora.hpp:183  - (834 / 834) LoRA tensors applied successfully
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size =   227.81 MiB, ( 2334.23 /  4096.00)
[DEBUG] ggml_extend.hpp:841  - lora compute buffer size: 227.81 MB(VRAM)
[DEBUG] lora.hpp:183  - (834 / 834) LoRA tensors applied successfully
ggml_metal_graph_compute_block_invoke: error: unsupported op 'MUL_MAT'
GGML_ASSERT: /Users/edwin/sd.cpp/ggml/src/ggml-metal.m:834: !"unsupported op"
zsh: abort      ./sd -m xxmix.safetensors -p "a girl <lora:lcm_lora15:1>" --steps 4  . -v  1 

@FSSRepo
Collaborator

FSSRepo commented Apr 15, 2024

It seems your GPU doesn't support any of the Metal matrix-multiplication kernels; I'm not sure whether that's due to missing hardware capabilities or a Metal compatibility issue.
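Until the Metal backend can fall back to the CPU per-op, one practical workaround on GPUs without matmul kernel support is to rebuild without Metal entirely, so every op (including MUL_MAT) runs on the CPU backend. A rough sketch of that build, assuming the standard CMake setup and that `SD_METAL` is the Metal toggle in the checked-out revision (flag name and output binary path may differ between versions):

```shell
# Reconfigure stable-diffusion.cpp without the Metal backend so all ops,
# including MUL_MAT, run on the CPU backend instead of hitting the
# "unsupported op" assert in ggml-metal.m. Slower, but it should complete.
cd sd.cpp
cmake -B build -DSD_METAL=OFF           # or simply omit -DSD_METAL=ON
cmake --build build --config Release -j
./build/bin/sd -m xxmix.safetensors -p "a girl" --steps 20
```

This trades GPU acceleration for correctness; on an unsupported GPU the Metal build aborts anyway, so CPU-only is currently the only way to get an image out.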
