[Bug] InternalError: Check failed: (res == VK_SUCCESS) is false: Vulkan Error, code=-4: VK_ERROR_DEVICE_LOST #2328
Labels: bug (Confirmed bugs)

Comments
Do you mind trying out the Python API (https://llm.mlc.ai/docs/deploy/python_engine.html) and providing a reproducible script that can trigger this error?
Thank you, this script causes the error every time it's run:

from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"
engine = MLCEngine(model)
# Run chat completion in the OpenAI API style.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": """What a profound and timeless question!
The meaning of life is a topic that has puzzled philosophers, theologians, and scientists for centuries. While there may not be a definitive answer, I can offer some perspectives and insights that might be helpful.
One approach is to consider the concept of purpose. What gives your life significance? What are your values, passions, and goals? For many people, finding meaning and purpose in life involves pursuing their values and interests, building meaningful relationships, and making a positive impact on the world.
Another perspective is to look at the human experience as a whole. We are social creatures, and our lives are intertwined with those of others. We have a natural desire for connection, community, and belonging. We also have a need for self-expression, creativity, and personal growth. These aspects of human nature can be seen as fundamental to our existence and provide a sense of meaning.
Some people find meaning in their lives through spirituality or religion. They may believe that their existence has a higher purpose, and that their experiences and challenges are part of a larger plan.
Others may find meaning through their work, hobbies, or activities that bring them joy and fulfillment. They may believe that their existence has a purpose because they are contributing to the greater good, making a positive impact, or leaving a lasting legacy.
Ultimately, the meaning of life is a highly personal and subjective concept. It can be influenced by our experiences, values, and perspectives. While there may not be a single, definitive answer, exploring these questions and reflecting on our own experiences can help us discover our own sense of purpose and meaning.
What are your thoughts on the meaning of life? What gives your life significance?
"""}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()

This is the full log:
Thank you. Do you also mind commenting on which GPU you have and the VRAM size?
I don't have a discrete GPU; I'm using a Celeron N5105 with Intel UHD Graphics (24 EU, mobile), which has no dedicated VRAM of its own.
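Given the shared-memory iGPU, an 8B model is likely right at the memory limit, which would be consistent with VK_ERROR_DEVICE_LOST. A lower-memory variant of the repro script may be worth trying; this is only a sketch, assuming the mode argument behaves as described in the MLCEngine docs and that the smaller model id below exists on the mlc-ai Hugging Face organization (both are assumptions, not taken from this thread):

from mlc_llm import MLCEngine

# Assumed smaller model id; substitute any small q4f16_1 model you have locally.
model = "HF://mlc-ai/Qwen2-0.5B-Instruct-q4f16_1-MLC"

# mode="interactive" targets a single concurrent sequence, so the engine
# allocates a smaller KV cache and puts less pressure on GPU memory.
engine = MLCEngine(model, mode="interactive")

for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print()

engine.terminate()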
🐛 Bug
When the prompt is too long, MLC LLM briefly freezes the whole computer and then crashes.
To Reproduce
Steps to reproduce the behavior (the report lists no explicit steps; the following is inferred from the traceback and the Additional context below):
1. Run mlc_llm chat with the Llama-3-8B model on the Vulkan device.
2. Enter a long prompt (e.g. a long run of 'aaaaaaa').
3. The computer briefly freezes, then the chat crashes with the following log:
Traceback (most recent call last):
File "/home/username/miniconda3/envs/envformlc/bin/mlc_llm", line 8, in
sys.exit(main())
^^^^^^
File "/home/username/miniconda3/envs/envformlc/lib/python3.12/site-packages/mlc_llm/main.py", line 37, in main
cli.main(sys.argv[2:])
File "/home/username/miniconda3/envs/envformlc/lib/python3.12/site-packages/mlc_llm/cli/chat.py", line 42, in main
chat(
File "/home/username/miniconda3/envs/envformlc/lib/python3.12/site-packages/mlc_llm/interface/chat.py", line 160, in chat
cm.generate(
File "/home/username/miniconda3/envs/envformlc/lib/python3.12/site-packages/mlc_llm/chat_module.py", line 863, in generate
self._prefill(prompt, generation_config=generation_config)
File "/home/username/miniconda3/envs/envformlc/lib/python3.12/site-packages/mlc_llm/chat_module.py", line 1086, in _prefill
self._prefill_func(
File "tvm/_ffi/_cython/./packed_func.pxi", line 332, in tvm._ffi._cy3.core.PackedFuncBase.call
File "tvm/_ffi/_cython/./packed_func.pxi", line 277, in tvm._ffi._cy3.core.FuncCall
File "tvm/_ffi/_cython/./base.pxi", line 182, in tvm._ffi._cy3.core.CHECK_CALL
File "/home/username/miniconda3/envs/envformlc/lib/python3.12/site-packages/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
raise py_err
File "/workspace/mlc-llm/cpp/llm_chat.cc", line 1697, in mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtrtvm::runtime::Object const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#5}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
File "/workspace/mlc-llm/cpp/llm_chat.cc", line 1010, in mlc::llm::LLMChat::PrefillStep(std::__cxx11::basic_string<char, std::char_traits, std::allocator >, bool, bool, mlc::llm::PlaceInPrompt, tvm::runtime::String)
File "/workspace/mlc-llm/cpp/llm_chat.cc", line 1241, in mlc::llm::LLMChat::SampleTokenFromLogits(tvm::runtime::NDArray, picojson::object_with_ordered_keys)
File "/workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/ndarray.h", line 405, in mlc::llm::LLMChat::UpdateLogitsOrProbOnCPUSync(tvm::runtime::NDArray)
tvm.error.InternalError: Traceback (most recent call last):
8: mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#5}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const
at /workspace/mlc-llm/cpp/llm_chat.cc:1697
7: mlc::llm::LLMChat::PrefillStep(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, bool, mlc::llm::PlaceInPrompt, tvm::runtime::String)
at /workspace/mlc-llm/cpp/llm_chat.cc:1010
6: mlc::llm::LLMChat::SampleTokenFromLogits(tvm::runtime::NDArray, picojson::object_with_ordered_keys)
at /workspace/mlc-llm/cpp/llm_chat.cc:1241
5: mlc::llm::LLMChat::UpdateLogitsOrProbOnCPUSync(tvm::runtime::NDArray)
at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/ndarray.h:405
4: tvm::runtime::NDArray::CopyFromTo(DLTensor const*, DLTensor*, void*)
3: tvm::runtime::DeviceAPI::CopyDataFromTo(DLTensor*, DLTensor*, void*)
2: tvm::runtime::vulkan::VulkanDeviceAPI::CopyDataFromTo(void const*, unsigned long, void*, unsigned long, unsigned long, DLDevice, DLDevice, DLDataType, void*)
1: tvm::runtime::vulkan::VulkanStream::Synchronize()
0: _ZN3tvm7runtime6deta
File "/workspace/tvm/src/runtime/vulkan/vulkan_stream.cc", line 155
InternalError: Check failed: (res == VK_SUCCESS) is false: Vulkan Error, code=-4: VK_ERROR_DEVICE_LOST
terminate called after throwing an instance of 'tvm::runtime::InternalError'
what(): [22:27:47] /workspace/tvm/src/runtime/vulkan/vulkan_device.cc:402: InternalError: Check failed: (__e == VK_SUCCESS) is false: Vulkan Error, code=-4: VK_ERROR_DEVICE_LOST
Stack trace:
0: _ZN3tvm7runtime6deta
1: tvm::runtime::vulkan::VulkanDevice::QueueSubmit(VkSubmitInfo, VkFence_T*) const
2: tvm::runtime::vulkan::VulkanStream::Synchronize()
3: tvm::runtime::vulkan::VulkanDeviceAPI::StreamSync(DLDevice, void*)
4: ZN3tvm7runtime6vulkan15VulkanDeviceAPI13FreeDataS
5: tvm::runtime::NDArray::Internal::DefaultDeleter(tvm::runtime::Object*)
6: tvm::runtime::relax_vm::PagedAttentionKVCacheObj::~PagedAttentionKVCacheObj()
7: ZN3tvm7runtime18SimpleObjAllocator7H
8: tvm::runtime::ObjectPtr<tvm::runtime::Object>::reset()
at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/object.h:455
9: tvm::runtime::ObjectPtr<tvm::runtime::Object>::~ObjectPtr()
at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/object.h:404
10: tvm::runtime::ObjectRef::~ObjectRef()
at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/object.h:519
11: mlc::llm::LLMChat::~LLMChat()
at /workspace/mlc-llm/cpp/llm_chat.cc:371
12: mlc::llm::LLMChatModule::~LLMChatModule()
at /workspace/mlc-llm/cpp/llm_chat.cc:1638
13: tvm::runtime::SimpleObjAllocator::Handler<mlc::llm::LLMChatModule>::Deleter(tvm::runtime::Object*)
at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/memory.h:138
14: tvm::runtime::Object::DecRef()
at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/object.h:846
15: tvm::runtime::Object::DecRef()
at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/object.h:842
16: tvm::runtime::ObjectPtr<tvm::runtime::Object>::reset()
at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/object.h:455
17: tvm::runtime::ObjectPtr<tvm::runtime::Object>::~ObjectPtr()
at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/object.h:404
18: mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#12}::~TVMRetValue()
at /workspace/mlc-llm/cpp/llm_chat.cc:1757
19: tvm::runtime::SimpleObjAllocator::Handler<tvm::runtime::PackedFuncSubObj<mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#12}> >::Deleter(tvm::runtime::Object*)
at /workspace/mlc-llm/3rdparty/tvm/include/tvm/runtime/memory.h:138
20: TVMObjectFree
21: __pyx_tp_dealloc_3tvm_4_ffi_4_cy3_4core_PackedFuncBase(_object*)
22: 0x00000000005cab98
23: 0xffffffffffffffff
Aborted (core dumped)
Expected behavior
I expect the software to run normally and the model to answer.
Environment
How you installed MLC-LLM (conda, source): conda
How you installed TVM-Unity (pip, source): I don't remember installing this. Maybe I already had it.
TVM Unity Hash Tag (python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))", applicable if you compile models):
Additional context
The 'aaaaaaa' prompt consistently causes the application to crash, but this behavior is also observed with shorter, more complex prompts. I love MLC-LLM, it's so much faster than koboldcpp on my mini PC. Please let me know if there is anything I can do.
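For triaging similar setups, a quick check that the TVM runtime can see the Vulkan device at all may also be useful; a minimal diagnostic sketch using standard TVM Device attributes (output will vary by driver):

import tvm

# Check whether the Vulkan device is visible to the TVM runtime.
dev = tvm.vulkan(0)
print("Vulkan device exists:", dev.exist)

# If it is, print a couple of basic properties to gauge the device.
if dev.exist:
    print("Device name:", dev.device_name)
    print("Max shared memory per block:", dev.max_shared_memory_per_block)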