IndexError: list index out of range on Ubuntu 20.04 when running python v2/chat.py #147

Open
SeekPoint opened this issue Jun 30, 2023 · 2 comments

@SeekPoint

(gh_ChatRWKV_py11) amd00@MZ32-00:~/llm_dev/ChatRWKV$ python v2/chat.py

ChatRWKV v2 https://github.com/BlinkDL/ChatRWKV

Chinese - cuda fp16 - /home/amd00/llm_dev/ChatRWKV/v2/prompt/default/Chinese-2.py
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
Using /home/amd00/.cache/torch_extensions/py311_cpu as PyTorch extensions root...
Creating extension directory /home/amd00/.cache/torch_extensions/py311_cpu/wkv_cuda...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/amd00/.cache/torch_extensions/py311_cpu/wkv_cuda/build.ninja...
Traceback (most recent call last):
  File "/home/amd00/llm_dev/ChatRWKV/v2/chat.py", line 107, in <module>
    from rwkv.model import RWKV
  File "/home/amd00/llm_dev/ChatRWKV/v2/../rwkv_pip_package/src/rwkv/model.py", line 29, in <module>
    load(
  File "/home/amd00/anaconda3/envs/gh_ChatRWKV_py11/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1284, in load
    return _jit_compile(
           ^^^^^^^^^^^^^
  File "/home/amd00/anaconda3/envs/gh_ChatRWKV_py11/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1509, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/home/amd00/anaconda3/envs/gh_ChatRWKV_py11/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1611, in _write_ninja_file_and_build_library
    _write_ninja_file_to_build_library(
  File "/home/amd00/anaconda3/envs/gh_ChatRWKV_py11/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 2007, in _write_ninja_file_to_build_library
    cuda_flags = common_cflags + COMMON_NVCC_FLAGS + _get_cuda_arch_flags()
                                                     ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/amd00/anaconda3/envs/gh_ChatRWKV_py11/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1773, in _get_cuda_arch_flags
    arch_list[-1] += '+PTX'
    ~~~~~~~~~^^^^
IndexError: list index out of range
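
The failing frame is torch.utils.cpp_extension._get_cuda_arch_flags(). Because the build log reports "No CUDA runtime is found" (and the extensions root is py311_cpu), PyTorch detects zero CUDA devices, arch_list stays empty, and arch_list[-1] raises the IndexError. A minimal check of which torch build the environment actually has, assuming a standard PyTorch install:

```python
import torch

# A CPU-only wheel reports no CUDA support; _get_cuda_arch_flags() then has
# no device capabilities to enumerate and indexes an empty list, exactly as
# in the traceback above.
print(torch.__version__)          # e.g. '2.0.1+cpu' vs. '2.0.1+cu117'
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # False when no usable runtime is found
print(torch.cuda.device_count())  # 0 means an empty arch_list
```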

@BlinkDL (Owner) commented Jul 7, 2023

try nvidia-smi

@SeekPoint (Author) commented:

(gh_ChatRWKV) amd00@MZ32-00:~/llm_dev/ChatRWKV$ nvidia-smi
Sat Jul 8 17:10:32 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.43.04 Driver Version: 515.43.04 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:81:00.0 Off | N/A |
| 16% 41C P8 2W / 250W | 10MiB / 22528MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... On | 00000000:C1:00.0 Off | N/A |
| 16% 41C P8 16W / 250W | 10MiB / 22528MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1417 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 1945 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 1417 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 1945 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
(gh_ChatRWKV) amd00@MZ32-00:~/llm_dev/ChatRWKV$
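
nvidia-smi shows the driver side is healthy (driver 515.43.04, CUDA 11.7, two GPUs with free memory), and note the prompt here is a different conda env (gh_ChatRWKV) than the one in the traceback (gh_ChatRWKV_py11). If the py311 env still has a CPU-only torch wheel, installing a CUDA-enabled build is the real fix; as a stopgap, _get_cuda_arch_flags() reads TORCH_CUDA_ARCH_LIST before probing devices, so pinning the arch list avoids the empty-list IndexError. A sketch, assuming (hypothetically) compute capability 8.6 cards; adjust to the actual GPUs:

```python
import os

# Assumed workaround: set the arch list before rwkv.model is imported, so the
# JIT extension build receives explicit arch flags instead of probing devices.
# "8.6+PTX" is only an example value and must match the real hardware.
os.environ["TORCH_CUDA_ARCH_LIST"] = "8.6+PTX"

from rwkv.model import RWKV  # the wkv_cuda build no longer indexes an empty arch_list
```

This only sidesteps the arch detection; compiling and running the CUDA kernel still requires a torch build that ships with CUDA support.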
