ERROR: Overriding output data pointer allocated by memory plan is not allowed. #3528

Open
sunqijie0350 opened this issue May 7, 2024 · 7 comments

@sunqijie0350

:~/DataDisk/qijie.sun/push_files$ adb shell "cd ${DEVICE_DIR}
&& export LD_LIBRARY_PATH=${DEVICE_DIR}
&& export ADSP_LIBRARY_PATH=${DEVICE_DIR}
&& ./qnn_executor_runner --model_path ./dummy_llama2_qnn.pte"
I 00:00:00.001473 executorch:qnn_executor_runner.cpp:131] Model file ./dummy_llama2_qnn.pte is loaded.
I 00:00:00.001617 executorch:qnn_executor_runner.cpp:140] Using method forward
I 00:00:00.001694 executorch:qnn_executor_runner.cpp:188] Setting up planned buffer 0, size 14016.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[WARNING] [Qnn ExecuTorch]: Initializing HtpProvider

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
I 00:00:00.213863 executorch:qnn_executor_runner.cpp:214] Method loaded.
E 00:00:00.214101 executorch:method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
E 00:00:00.214145 executorch:qnn_executor_runner.cpp:263] ignoring error from set_output_data_ptr(): 0x2
I 00:00:00.214171 executorch:qnn_executor_runner.cpp:266] Inputs prepared.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

I 00:00:00.217469 executorch:qnn_executor_runner.cpp:415] Model executed successfully.
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
[INFO] [Qnn ExecuTorch]: Destroy Qnn context
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[INFO] [Qnn ExecuTorch]: Destroy Qnn device
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
[INFO] [Qnn ExecuTorch]: Destroy Qnn context
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[INFO] [Qnn ExecuTorch]: Destroy Qnn device
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
[INFO] [Qnn ExecuTorch]: Destroy Qnn context
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[INFO] [Qnn ExecuTorch]: Destroy Qnn device
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
[INFO] [Qnn ExecuTorch]: Destroy Qnn context
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[INFO] [Qnn ExecuTorch]: Destroy Qnn device
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
[INFO] [Qnn ExecuTorch]: Destroy Qnn context
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[INFO] [Qnn ExecuTorch]: Destroy Qnn device
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
[INFO] [Qnn ExecuTorch]: Destroy Qnn context
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[INFO] [Qnn ExecuTorch]: Destroy Qnn device
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[INFO] [Qnn ExecuTorch]: Destroy Qnn backend
[WARNING] [Qnn ExecuTorch]: qnnOpPackageManager: hexagon unload op package function pointer is nullptr!

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

A normal run of the model should print the inference result and the timing, but that is clearly missing here. Could your experts take a look at what is wrong?

@JacobSzwejbka
Contributor

E 00:00:00.214101 executorch:method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
E 00:00:00.214145 executorch:qnn_executor_runner.cpp:263] ignoring error from set_output_data_ptr(): 0x2

This is noisy logging. 0x2 just means the output location was memory planned, which isn't really an error. This logging will be fixed in a future release.

@cccclai I'm guessing the actual problem is with QNN, but I'm not sure.
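
For reference, a minimal sketch of how a runner can tolerate that return value, assuming the standard ExecuTorch Method API that qnn_executor_runner builds on (namespaces and the exact error value differ between releases, so treat the names below as illustrative rather than the actual runner code):

// Hedged sketch, not the actual qnn_executor_runner code. If an output tensor
// is already backed by the memory-planned arena, set_output_data_ptr() is
// expected to fail; the planned buffer is still used and the run can proceed.
#include <executorch/runtime/executor/method.h>
#include <executorch/runtime/platform/log.h>

using torch::executor::Error;   // namespace assumed; newer releases use executorch::runtime
using torch::executor::Method;

void try_set_output(Method& method, void* buffer, size_t size, size_t index) {
  Error err = method.set_output_data_ptr(buffer, size, index);
  if (err != Error::Ok) {
    // The 0x2 seen in the log just means "output is memory planned";
    // it is safe to ignore and keep using the planned buffer.
    ET_LOG(Info, "Output %zu keeps its memory-planned buffer (err 0x%x).",
        index, static_cast<unsigned>(err));
  }
}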

@JacobSzwejbka JacobSzwejbka added bug Something isn't working partner: qualcomm For backend delegation, kernels, demo, etc. from the 3rd-party partner, Qualcomm triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels May 7, 2024
@JacobSzwejbka
Contributor

I 00:00:00.217469 executorch:qnn_executor_runner.cpp:415] Model executed successfully.

@sunqijie0350 I see this. Are you sure it's not running correctly?

@sunqijie0350
Author

sunqijie0350 commented May 8, 2024

I 00:00:00.217469 executorch:qnn_executor_runner.cpp:415] Model executed successfully.

@sunqijie0350 I see this. Are you sure it's not running correctly?

@JacobSzwejbka

Yes, I'm fairly confident it ran incorrectly: a successful run prints the number of generated tokens and the runtime, but neither appears here. What's strange is that retrying gives two different results. The log above appears only occasionally on retries; the log below is the more common outcome:

~/DataDisk/qijie.sun/push_files$ adb shell "cd ${DEVICE_DIR}
&& export LD_LIBRARY_PATH=${DEVICE_DIR}
&& export ADSP_LIBRARY_PATH=${DEVICE_DIR}
&& ./qnn_executor_runner --model_path ./dummy_llama2_qnn.pte"
I 00:00:00.001272 executorch:qnn_executor_runner.cpp:131] Model file ./dummy_llama2_qnn.pte is loaded.
I 00:00:00.001532 executorch:qnn_executor_runner.cpp:140] Using method forward
I 00:00:00.001605 executorch:qnn_executor_runner.cpp:188] Setting up planned buffer 0, size 14016.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[WARNING] [Qnn ExecuTorch]: Initializing HtpProvider

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: Function not called, PrepareLib isn't loaded!

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
I 00:00:00.185790 executorch:qnn_executor_runner.cpp:214] Method loaded.
E 00:00:00.186076 executorch:method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
E 00:00:00.186134 executorch:qnn_executor_runner.cpp:263] ignoring error from set_output_data_ptr(): 0x2
I 00:00:00.186171 executorch:qnn_executor_runner.cpp:266] Inputs prepared.
[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[WARNING] [Qnn ExecuTorch]: sg_stubPtr is not null, skip loadRemoteSymbols

[ERROR] [Qnn ExecuTorch]: SSR Detected - You must free and recreate affected QNN API handles associated with deviceId 0 coreId 0 pdId 0

[ERROR] [Qnn ExecuTorch]: Transport.teardownLocked: qnn_transport_teardown failed 0x00000027

[ERROR] [Qnn ExecuTorch]: Transport.teardownLocked: qnn_close error 0x00000027, userCnt 0

[ERROR] [Qnn ExecuTorch]: Transport.teardownLocked failed, error 0x00000007

[WARNING] [Qnn ExecuTorch]: All internal QNN API handles invalidated

[ERROR] [Qnn ExecuTorch]: Graph failed in execution with err 1007

[ERROR] [Qnn ExecuTorch]: qnn_graph_execute failed. Error 1007
E 00:00:00.495474 executorch:QnnExecuTorchBackend.cpp:221] Fail to execute graph
E 00:00:00.495553 executorch:method.cpp:1072] CALL_DELEGATE execute failed at instruction 3: 0x1
F 00:00:00.495633 executorch:qnn_executor_runner.cpp:414] In function main(), assert failed (status == Error::Ok): Execution of method forward failed with status 0x1
Aborted

@sunqijie0350
Author

The log of a successful run is as follows:
I 00:00:01.835706 executorch:qnn_executor_runner.cpp:298] 100 inference took 1096.626000 ms, avg 10.966260 ms
[INFO][Qnn ExecuTorch] Destroy Qnn backend parameters
[INFO][Qnn ExecuTorch] Destroy Qnn context
[INFO][Qnn ExecuTorch] Destroy Qnn device
[INFO][Qnn ExecuTorch] Destroy Qnn backend

I'm wondering whether the error reported above affects this result output.
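
One way to check that directly, independent of the runner's summary printing, is to read the outputs after execute() returns and print a few values. A minimal sketch, assuming the standard ExecuTorch Method API (get_outputs, EValue, Tensor) and a float32 output; names are illustrative and may differ between releases:

// Hedged sketch: inspect the first output tensor after method.execute()
// to verify the delegate actually produced data, regardless of the earlier
// 0x2 warning from set_output_data_ptr().
#include <executorch/runtime/executor/method.h>
#include <executorch/runtime/platform/log.h>
#include <vector>

using torch::executor::Error;   // namespace assumed; newer releases use executorch::runtime
using torch::executor::EValue;
using torch::executor::Method;

void dump_first_output(Method& method) {
  std::vector<EValue> outputs(method.outputs_size());
  Error err = method.get_outputs(outputs.data(), outputs.size());
  if (err != Error::Ok || outputs.empty() || !outputs[0].isTensor()) {
    ET_LOG(Error, "Could not read outputs (err 0x%x).", static_cast<unsigned>(err));
    return;
  }
  auto tensor = outputs[0].toTensor();
  // Assumes a float32 output, which is typical for dummy llama logits.
  const float* data = tensor.const_data_ptr<float>();
  for (int i = 0; i < 4 && i < tensor.numel(); ++i) {
    ET_LOG(Info, "out[%d] = %f", i, static_cast<double>(data[i]));
  }
}

If the values look sane even when the token/timing summary is missing, the 0x2 warning is unrelated and the failure is elsewhere in the delegate execution.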

@JacobSzwejbka
Contributor

[ERROR] [Qnn ExecuTorch]: qnn_graph_execute failed. Error 1007
E 00:00:00.495474 executorch:QnnExecuTorchBackend.cpp:221] Fail to execute graph
E 00:00:00.495553 executorch:method.cpp:1072] CALL_DELEGATE execute failed at instruction 3: 0x1
F 00:00:00.495633 executorch:qnn_executor_runner.cpp:414] In function main(), assert failed (status == Error::Ok): Execution of method forward failed with status 0x1

@cccclai can you take a look?

@chiwwang
Collaborator

This looks like a failure in fastrpc.

[ERROR] [Qnn ExecuTorch]: SSR Detected - You must free and recreate affected QNN API handles associated with deviceId 0 coreId 0 pdId 0
[ERROR] [Qnn ExecuTorch]: Transport.teardownLocked: qnn_transport_teardown failed 0x00000027
[ERROR] [Qnn ExecuTorch]: Transport.teardownLocked: qnn_close error 0x00000027, userCnt 0
[ERROR] [Qnn ExecuTorch]: Transport.teardownLocked failed, error 0x00000007

Does it always happen? It's usually caused by models larger than what the Hexagon DSP can handle... are you using llama2-7b or larger?
LLAMA2-7B requires sophisticated partitioning to fit onto the DSP, and we're actively working on it.

@hans00

hans00 commented May 29, 2024

Same error on Windows on ARM.

I'm running the dummy llama model.
It's only 727 KB, so it should not need partitioning 🤔

[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
[INFO] [Qnn ExecuTorch]: soc_model in soc_info: SC8380XP
[INFO] [Qnn ExecuTorch]: backend_type: kHtpBackend
[INFO] [Qnn ExecuTorch]: graph_name: executorch
[INFO] [Qnn ExecuTorch]: library_path:
[INFO] [Qnn ExecuTorch]: tensor_dump_output_path:
[INFO] [Qnn ExecuTorch]: log_level: kLogLevelWarn
[INFO] [Qnn ExecuTorch]: profile_level: kProfileOff
[INFO] [Qnn ExecuTorch]: the size of qnn context binary: 145784
[INFO] [Qnn ExecuTorch]: Is on-device graph construction: 0
[INFO] [Qnn ExecuTorch]: Enable shared buffer: 0
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 2
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[WARNING] [Qnn ExecuTorch]: QnnDsp <W> Function not called, PrepareLib isn't loaded!
 
[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
I 34:02:26.313899 executorch:qnn_executor_runner.cpp:215] Method loaded.
E 34:02:26.313949 executorch:qnn_executor_runner.cpp:264] ignoring error from set_output_data_ptr(): 0x2
I 34:02:26.313982 executorch:qnn_executor_runner.cpp:267] Inputs prepared.
I 34:02:26.314094 executorch:qnn_executor_runner.cpp:273] Number of inputs: 1
I 34:02:26.314241 executorch:qnn_executor_runner.cpp:341] Perform 0 inference for warming up
I 34:02:26.314265 executorch:qnn_executor_runner.cpp:347] Start inference (0)
[WARNING] [Qnn ExecuTorch]: Shared buffer is not supported on this platform.
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> DspTransport call failed, error 0x00000007
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> Transport.teardownLocked: qnn_close priority handle failed 0x00000200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> Transport.teardownLocked: qnn_transport_teardown failed 0x8000040d
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> Transport.teardownLocked: qnn_close error 0x00000200, userCnt 0
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> Transport.teardownLocked failed, error 0x00000007
 
[WARNING] [Qnn ExecuTorch]: QnnDsp <W> All internal QNN API handles invalidated
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 31 with length: 3704 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 25 with length: 4024 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 13 with length: 3704 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 1 with length: 160 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 22 with length: 2912 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 16 with length: 2097152 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 20 with length: 4024 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 2 with length: 3976 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 8 with length: 2097152 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 14 with length: 2912 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 6 with length: 1872 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 12 with length: 4024 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 39 with length: 2416 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 27 with length: 2912 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 21 with length: 3704 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 30 with length: 4024 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 24 with length: 2097152 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 38 with length: 6240 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 32 with length: 2912 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 26 with length: 3704 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 41 with length: 2097152 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 29 with length: 2097152 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 34 with length: 2097152 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> fastrpc memory failed to unmap for fd: 37 with length: 3752 failed with error: 0x200
 
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> Graph  failed in execution with err 1007
 
[ERROR] [Qnn ExecuTorch]: qnn_graph_execute failed. Error 1007
E 34:02:26.664728 executorch:QnnExecuTorchBackend.cpp:230] Fail to execute graph
E 34:02:26.664821 executorch:method.cpp:1082] CALL_DELEGATE execute failed at instruction 3: 0x1
I 34:02:26.664845 executorch:qnn_executor_runner.cpp:365] 1 inference took 350.555000 ms, avg 350.555000 ms
F 34:02:26.664897 executorch:qnn_executor_runner.cpp:370] In function main(), assert failed (status == Error::Ok): Execution of method forward failed with status 0x1
