
Cannot install on Windows #16

Open
SoftologyPro opened this issue Feb 22, 2024 · 29 comments

Comments

@SoftologyPro

I created a new, clean Python environment.
Then the following commands all work:

git clone https://github.com/myshell-ai/AIlice.git
cd AIlice
pip install -e .

But when I try the next step, pip install -e .[speech], it fails with these errors...
fail.txt

Any ideas? Thanks.

@SoftologyPro
Author

After some Googling, I found that setting an environment variable
set SETUPTOOLS_USE_DISTUTILS=stdlib
got past the error.

Hopefully that helps other Windows users.

@SoftologyPro
Author

SoftologyPro commented Feb 22, 2024

So, I got through all the installs, i.e.

git clone https://github.com/myshell-ai/AIlice.git
cd AIlice
pip install -e .
set SETUPTOOLS_USE_DISTUTILS=stdlib
pip install -e .[speech]
pip install -e .[finetuning]

When I run
python ailicemain.py --modelID=hf:Open-Orca/Mistral-7B-OpenOrca --prompt="main" --quantization=8bit --contextWindowRatio=0.6
I now get

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.
Encountered an exception, AIlice is exiting: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`

I ran those two pip installs, i.e.

pip install accelerate
pip install -i https://pypi.org/simple/ bitsandbytes

but I still get the same error?
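For what it's worth, the deprecation warning in that log also hints at the newer loading API. A minimal sketch of what an 8-bit load looks like with a `BitsAndBytesConfig` (assuming a recent transformers version; this is not AIlice's actual loading code, and the model name is just the one from the command above):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Newer transformers API: wrap the 8-bit flag in a config object instead
# of passing the deprecated load_in_8bit=True directly to from_pretrained.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

def load_8bit(model_name: str = "Open-Orca/Mistral-7B-OpenOrca"):
    # Requires a CUDA GPU plus the accelerate and bitsandbytes packages.
    return AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=quant_config,
        device_map="auto",
    )
```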

@stevenlu137
Collaborator

I tried to reproduce the errors mentioned above, but failed. Can you tell me the details of your running environment? For example, the operating system version; Python appears to be 3.10?

If this problem cannot be solved in the end, a better choice is actually to use LM Studio to run the model directly; that way you can run larger models and you don't need to wait a long time each time you start AIlice. Also, the voice dialogue function is not very good at the moment; unless you are interested in developing it, it is best to wait until that module is refactored before trying it.

@SoftologyPro
Author

SoftologyPro commented Feb 22, 2024

Windows 11 and Python 3.10.8.
I created a fresh env to test under, i.e. python -m venv voc_ailice, and then ran

git clone https://github.com/myshell-ai/AIlice.git
cd AIlice
pip install -e .
set SETUPTOOLS_USE_DISTUTILS=stdlib
pip install -e .[speech]
pip install -e .[finetuning]

No problem if it is not 100% ready yet. I can wait. One of my users requested I add support for it in Visions of Chaos.

@stevenlu137
Collaborator

I forgot to remind you earlier that AIlice does not yet support Windows PowerShell scripts, so you will encounter problems when using AIlice's programming function. I am trying to reproduce your problem; if I solve it successfully, I will consider adding PowerShell support.
I took a look at Visions of Chaos, very cool software!

@stevenlu137
Collaborator

I installed AIlice in a conda environment under Windows and tried programming and executing tasks. It turns out that WSL can execute these bash scripts very well, so it seems we do not really need to support PowerShell.

But I still haven't reproduced these installation errors. Let me clean up AIlice's dependencies first.

@SoftologyPro
Author

SoftologyPro commented Feb 23, 2024

But I still haven't reproduced these installation errors. Let me clean up AIlice's dependencies first.

Maybe a requirements.txt with all dependencies, like other repos use, would help? Then the install steps would be the git clone followed by pip install -r requirements.txt

If you do go that way, please ensure version numbers for each package, i.e.
https://softologyblog.wordpress.com/2023/10/10/a-plea-to-all-python-developers/

No rush, take your time.

@stevenlu137
Collaborator

OK, I will provide a requirements.txt after the voice dialogue function is finished and unused dependencies are removed. Please wait a few days.

@stevenlu137
Collaborator

I just submitted a requirements.txt to the main branch, tested under a Python 3.10.13 environment.
Please note that in this environment, the various functions of AIlice (such as voice and fine-tuning) work normally, but several dependencies in the environment are in conflict. I hope this meets your requirements.
We will reduce the dependency conflicts by trimming dependencies in the future.

@SoftologyPro
Author

OK, I cloned the latest repo, created a new venv with python -m venv voc_ailice, then installed the requirements.
I did uninstall the CPU torch and install torch==2.2.1+cu118 at the end to support GPU.
I also had to copy ailicemain.py up a directory for the script to start (as it references imports under ailice).
Then when running
python ailicemain.py --modelID=hf:Open-Orca/Mistral-7B-OpenOrca --prompt="main" --quantization=8bit --contextWindowRatio=0.6
it does go off and download the models, but then gives these errors.

(voc_ailice) D:\Tests\AIlice>python ailicemain.py --modelID=hf:Open-Orca/Mistral-7B-OpenOrca --prompt="main" --quantization=8bit --contextWindowRatio=0.6
config.json is located at C:\Users\Jason\AppData\Local\Steven Lu\ailice/config.json
In order to simplify installation and usage, we have set local execution as the default behavior, which means AI has complete control over the local environment. To prevent irreversible losses due to potential AI errors, you may consider one of the following two methods: the first one, run AIlice in a virtual machine; the second one, install Docker, use the provided Dockerfile to build an image and container, and modify the relevant configurations in config.json. For detailed instructions, please refer to the documentation.
killing proc with PID 2044
killing proc with PID 2652
killing proc with PID 3756
killing proc with PID 12332
killing proc with PID 35252
killing proc with PID 41644
killing proc with PID 73964
storage  started.
browser  started.
arxiv  started.
google  started.
duckduckgo  started.
scripter  started.
files  started.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.
False

===================================BUG REPORT===================================
D:\Tests\AIlice\voc_ailice\lib\site-packages\bitsandbytes\cuda_setup\main.py:167: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes


  warn(msg)
================================================================================
The following directories listed in your path were found to be non-existent: {WindowsPath('AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAcCZeirGGrEqmdFE/iVIU9gQAAAACAAAAAAAQZgAAAAEAACAAAABkOjlPM3Ad7QK5YEqtYWm9QgyusA6afSSnl+OD09xTSwAAAAAOgAAAAAIAACAAAACAjUxwJk6mANJL++YQnuDM4jNW51pDNHK3iaLWCvgMyWAAAACaphf6WH4LAJZeNzzEfImYhLBtsIJgwNeRySNGXC/3MFQsnV2RoshvUIUueG7p3sKMfrlGcIKtK0yWP9UK7YZm6Wq3yYrCHmqANABmFNU5jD5/U/zAXgT+Si94/5DfMcRAAAAAn/6Pi0Gs3tcjkKqccb9FvB4ti6Yl+oqKKcdN76zHUZXJjYRLXb4OidtJWeUhiKT+fz6BiJ8IZ7DOrnjg0Hc4yg==')}
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.9.
CUDA SETUP: To manually override the PyTorch CUDA version please see:https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary D:\Tests\AIlice\voc_ailice\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
Encountered an exception, AIlice is exiting: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):

        CUDA Setup failed despite GPU being available. Please run the following command to get more information:

        python -m bitsandbytes

        Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
        to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
        and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
  File "D:\Tests\AIlice\ailicemain.py", line 100, in main
    mainLoop(**kwargs)
  File "D:\Tests\AIlice\ailicemain.py", line 65, in mainLoop
    llmPool.Init([modelID])
  File "D:\Tests\AIlice\ailice\core\llm\ALLMPool.py", line 19, in Init
    self.pool[id] = MODEL_WRAPPER_MAP[config.models[modelType]["modelWrapper"]](modelType=modelType, modelName=modelName)
  File "D:\Tests\AIlice\ailice\core\llm\AModelLLAMA.py", line 21, in __init__
    self.LoadModel(modelName)
  File "D:\Tests\AIlice\ailice\core\llm\AModelLLAMA.py", line 35, in LoadModel
    self.LoadModel_Default(modelName=modelName)
  File "D:\Tests\AIlice\ailice\core\llm\AModelLLAMA.py", line 41, in LoadModel_Default
    self.model = transformers.AutoModelForCausalLM.from_pretrained(modelName,
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\transformers\models\auto\auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\transformers\modeling_utils.py", line 3389, in from_pretrained
    hf_quantizer.preprocess_model(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\transformers\quantizers\base.py", line 166, in preprocess_model
    return self._process_model_before_weight_loading(model, **kwargs)
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\transformers\quantizers\quantizer_bnb_8bit.py", line 219, in _process_model_before_weight_loading
    from ..integrations import get_keys_to_not_convert, replace_with_bnb_linear
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\transformers\utils\import_utils.py", line 1380, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\transformers\utils\import_utils.py", line 1392, in _get_module
    raise RuntimeError(
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000244598B5090>
Traceback (most recent call last):
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000244598B5A20>
Traceback (most recent call last):
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000244598B6320>
Traceback (most recent call last):
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000244598B68C0>
Traceback (most recent call last):
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000244598B6E60>
Traceback (most recent call last):
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000244598B7400>
Traceback (most recent call last):
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'

(voc_ailice) D:\Tests\AIlice>

@stevenlu137
Collaborator

  1. Since the directory structure of AIlice is designed for a future release as a package, pip install -e . is still suggested after installing dependencies from requirements.txt. This avoids a lot of inconvenience.
  2. This is mainly error output from bitsandbytes, indicating that the CUDA runtime is missing. So first confirm your GPU model and whether CUDA is installed, and use python -m bitsandbytes to view detailed information about the running environment.

@SoftologyPro
Author

SoftologyPro commented Feb 27, 2024

OK, got a bit further.
I ran pip install -e .
For bitsandbytes I uninstalled the existing version and then used this version, which worked in other environments I have:
pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts git+https://github.com/Keith-Hon/bitsandbytes-windows.git@fdb1ba729fa1936f2ef69e5ff1084d09189fdc2a
Now when I run
python ailicemain.py --modelID=hf:Open-Orca/Mistral-7B-OpenOrca --prompt="main" --quantization=8bit --contextWindowRatio=0.6
it gets to the green User: prompt, but when I try just "hello" I get a new error.
Full output...

(voc_ailice) D:\Tests\AIlice>python ailicemain.py --modelID=hf:Open-Orca/Mistral-7B-OpenOrca --prompt="main" --quantization=8bit --contextWindowRatio=0.6
config.json is located at C:\Users\Jason\AppData\Local\Steven Lu\ailice/config.json
In order to simplify installation and usage, we have set local execution as the default behavior, which means AI has complete control over the local environment. To prevent irreversible losses due to potential AI errors, you may consider one of the following two methods: the first one, run AIlice in a virtual machine; the second one, install Docker, use the provided Dockerfile to build an image and container, and modify the relevant configurations in config.json. For detailed instructions, please refer to the documentation.
killing proc with PID 1308
killing proc with PID 16472
killing proc with PID 26220
killing proc with PID 28116
killing proc with PID 41856
killing proc with PID 45100
killing proc with PID 46820
killing proc with PID 54184
killing proc with PID 55836
storage  started.
browser  started.
arxiv  started.
google  started.
duckduckgo  started.
scripter  started.
files  started.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
binary_path: D:\Tests\AIlice\voc_ailice\lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll
CUDA SETUP: Loading binary D:\Tests\AIlice\voc_ailice\lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll...
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:06<00:00,  3.21s/it]
generation_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 120/120 [00:00<?, ?B/s]
D:\Tests\AIlice\voc_ailice\lib\site-packages\huggingface_hub\file_download.py:149: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\Jason\.cache\huggingface\hub\models--Open-Orca--Mistral-7B-OpenOrca. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
  warnings.warn(message)
USER: hello
Keyword arguments {'add_special_tokens': False} not recognized.
ASSISTANT_AIlice:  D:\Tests\AIlice\voc_ailice\lib\site-packages\transformers\models\mistral\modeling_mistral.py:688: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(

CUDA is definitely installed as I use it for many other Python AI/ML scripts.
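Since bitsandbytes keeps complaining about CUDA despite the GPU working elsewhere, a quick way to confirm the installed torch wheel in this particular venv is actually a CUDA build (a generic torch sanity check, not AIlice-specific):

```python
import torch

# A "+cpu" (or missing "+cuXXX") suffix in the version string means a
# CPU-only wheel was installed, which is one common reason bitsandbytes
# cannot find the CUDA runtime even on a machine with a working GPU.
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```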

@stevenlu137
Collaborator

"Keyword arguments {'add_special_tokens': False} not recognized."
This error is common in recent versions of transformers. The reason is not clear yet, but it does not affect the operation of AIlice.

"ASSISTANT_AIlice: D:\Tests\AIlice\voc_ailice\lib\site-packages\transformers\models\mistral\modeling_mistral.py:688: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
attn_output = torch.nn.functional.scaled_dot_product_attention("
This one seems incomplete. Does AIlice crash after it?

@SoftologyPro
Author

OK, my mistake. I was running some other Python code in the background, so it was slower than expected. I do now get a response. I get error messages before the responses, but it seems to work.

(voc_ailice) D:\Tests\AIlice>python ailicemain.py --modelID=hf:Open-Orca/Mistral-7B-OpenOrca --prompt="main" --quantization=8bit --contextWindowRatio=0.6
config.json is located at C:\Users\Jason\AppData\Local\Steven Lu\ailice/config.json
In order to simplify installation and usage, we have set local execution as the default behavior, which means AI has complete control over the local environment. To prevent irreversible losses due to potential AI errors, you may consider one of the following two methods: the first one, run AIlice in a virtual machine; the second one, install Docker, use the provided Dockerfile to build an image and container, and modify the relevant configurations in config.json. For detailed instructions, please refer to the documentation.
storage  started.
browser  started.
arxiv  started.
google  started.
duckduckgo  started.
scripter  started.
files  started.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
binary_path: D:\Tests\AIlice\voc_ailice\lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll
CUDA SETUP: Loading binary D:\Tests\AIlice\voc_ailice\lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll...
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:04<00:00,  2.06s/it]
USER: hello
Keyword arguments {'add_special_tokens': False} not recognized.
ASSISTANT_AIlice:  D:\Tests\AIlice\voc_ailice\lib\site-packages\transformers\models\mistral\modeling_mistral.py:688: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
Hello! I'm an assistant designed to help you with your needs. Please provide me with a task or question, and I will do my best to assist you.
USER: tell me a joke and explain why it is funny
Keyword arguments {'add_special_tokens': False} not recognized.
Keyword arguments {'add_special_tokens': False} not recognized.
ASSISTANT_AIlice:  Sure! Here's a joke: "Why don't scientists trust atoms? Because they make up everything!"

This joke is funny because it plays on the pun of the word "make up," which can mean both creating something and dressing up. In this context, it implies that atoms are responsible for creating everything in the universe, but also that they are deceitful or dishonest, as they "make up" stories or lies. The humor comes from the unexpected twist in the meaning of the word "make up" and the absurdity of attributing human-like traits to atoms.
USER:

Getting closer.
I see that gradio is installed with requirements. Is there a Web UI for this? Does it allow text-to-speech and speech-to-text?

@stevenlu137
Collaborator

Yes, you can communicate with AIlice directly through voice conversation:
ailice_web --modelID=hf:Open-Orca/Mistral-7B-OpenOrca --prompt="main" --quantization=8bit --contextWindowRatio=0.6 --speechOn --ttsDevice=cuda --sttDevice=cuda
or
ailice_main --modelID=hf:Open-Orca/Mistral-7B-OpenOrca --prompt="main" --quantization=8bit --contextWindowRatio=0.6 --speechOn --ttsDevice=cuda --sttDevice=cuda

Note that you may need to wait a while when using it for the first time; AIlice needs to download model weights in the background.
Remove the --ttsDevice=cuda --sttDevice=cuda options if you do not have enough video memory.

@SoftologyPro
Author

OK, thanks. Maybe add some text after "files started." when downloading the models...
"Please wait. AIlice is downloading required models. This can take a while depending on your Internet speed. Check the network activity under Task Manager to confirm download traffic."

After the models download I can open the UI. I allow it access to my microphone.
I click record, say "hello this is a test" and click stop. Then I get this error.

To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\routes.py", line 561, in predict
    output = await route_utils.call_process_api(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\blocks.py", line 1627, in process_api
    result = await self.call_function(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\blocks.py", line 1173, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\utils.py", line 690, in wrapper
    response = f(*args, **kwargs)
  File "D:\Tests\AIlice\ailiceweb.py", line 114, in stt
    text = speech.Speech2Text(audio[1], audio[0])
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 102, in methodTemplate
    return self.RemoteCall(methodName,args,kwargs)
  File "D:\Tests\AIlice\ailice\common\lightRPC.py", line 129, in RemoteCall
    raise ret['exception']
ValueError: Input signal length=2 is too small to resample from 44100->16000
Traceback (most recent call last):
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\blocks.py", line 1627, in process_api
    result = await self.call_function(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\blocks.py", line 1185, in call_function
    prediction = await utils.async_iteration(iterator)
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\utils.py", line 514, in async_iteration
    return await iterator.__anext__()
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\utils.py", line 507, in __anext__
    return await anyio.to_thread.run_sync(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\utils.py", line 490, in run_sync_iterator_async
    return next(iterator)
  File "D:\Tests\AIlice\voc_ailice\lib\site-packages\gradio\utils.py", line 673, in gen_wrapper
    response = next(iterator)
  File "D:\Tests\AIlice\ailiceweb.py", line 87, in bot
    if str != type(history[-1][0]):
IndexError: list index out of range

When I play back the recorded audio it is sped up 2x and repeated twice. If I type a text input, the AI does respond and speaks the response.
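The ValueError in the first traceback says the audio buffer reaching Speech2Text was only 2 samples long, which cannot be resampled from 44100 Hz to 16 kHz. A defensive sketch of how the STT path could drop broken recordings before resampling (hypothetical helper, not AIlice's actual code; assumes numpy and scipy):

```python
import numpy as np
from scipy.signal import resample_poly

def safe_resample(audio: np.ndarray, sr_in: int = 44100, sr_out: int = 16000,
                  min_samples: int = 160) -> np.ndarray:
    """Resample audio, returning an empty array when the clip is too short
    to be meaningful speech (e.g. a broken 2-sample gradio recording)."""
    if audio.size < min_samples:
        return np.zeros(0, dtype=np.float32)
    # Reduce the rate ratio to the smallest integer up/down factors.
    g = int(np.gcd(sr_in, sr_out))
    return resample_poly(audio.astype(np.float32), sr_out // g, sr_in // g)
```

Returning an empty array lets the caller skip transcription instead of raising, so a bad recording from the browser would not produce a traceback in the web UI.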

@stevenlu137
Collaborator

Yeah, I'm thinking of adding a progress prompt as well; it's just that for some technical reasons this simple little task has become a bit difficult.
The problem you're currently having looks like gradio isn't working properly, because replaying the recorded audio is handled entirely inside gradio. It seems you can only try switching to a different gradio version.

If you are eager to try the voice conversation function, you can also use the voice conversation function of ailice_main, which provides a mechanism similar to natural dialogue between people, but is not as controllable as ailice_web.

@SoftologyPro
Author

If you are eager to try the voice conversation function, you can also use the voice conversation function of ailice_main, which provides a mechanism similar to natural dialogue between people, but is not as controllable as ailice_web.

I can wait for a fix for the UI voice conversation. That is what people will want to use rather than the command line.

@stevenlu137
Collaborator

You may want to pay attention to this issue:
gradio-app/gradio#7284

@SoftologyPro
Author

You may want to pay attention to this issue: gradio-app/gradio#7284

OK, thanks for that link. It may be an issue with Firefox by the looks of it. I was going to test, but...

I tried a clean install today, starting from a fresh environment, then git clone, then the pip installs. Now no matter how I do the installs (the pip install -e method or requirements.txt) I get this new error when starting:

D:\Tests\AIlice\AIlice\.venv\lib\site-packages\requests\__init__.py:109: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (None)/charset_normalizer (3.3.2) doesn't match a supported version!
  warnings.warn(
config.json is located at C:\Users\Jason\AppData\Local\Steven Lu\ailice/config.json
In order to simplify installation and usage, we have set local execution as the default behavior, which means AI has complete control over the local environment. To prevent irreversible losses due to potential AI errors, you may consider one of the following two methods: the first one, run AIlice in a virtual machine; the second one, install Docker, use the provided Dockerfile to build an image and container, and modify the relevant configurations in config.json. For detailed instructions, please refer to the documentation.
killing proc with PID 20952
killing proc with PID 20976
killing proc with PID 20992
killing proc with PID 21036
killing proc with PID 21044
killing proc with PID 21068
killing proc with PID 21120
killing proc with PID 21180
killing proc with PID 21244
killing proc with PID 21308
killing proc with PID 21340
storage  started.
browser  started.
arxiv  started.
google  started.
duckduckgo  started.
scripter  started.
speech  started.
files  started.
Encountered an exception, AIlice is exiting: Resource temporarily unavailable
  File "D:\Tests\AIlice\AIlice\ailice\ailiceweb.py", line 172, in main
    mainLoop(**kwargs)
  File "D:\Tests\AIlice\AIlice\ailice\ailiceweb.py", line 45, in mainLoop
    clientPool.Init()
  File "D:\Tests\AIlice\AIlice\ailice\common\ARemoteAccessors.py", line 25, in Init
    self.pool[speech] = makeClient(speech)
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 143, in makeClient
    ret=ReceiveMsg(socket)
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 27, in ReceiveMsg
    return pickle.loads(conn.recv())
  File "zmq\\backend\\cython\\socket.pyx", line 805, in zmq.backend.cython.socket.Socket.recv
  File "zmq\\backend\\cython\\socket.pyx", line 841, in zmq.backend.cython.socket.Socket.recv
  File "zmq\\backend\\cython\\socket.pyx", line 199, in zmq.backend.cython.socket._recv_copy
  File "zmq\\backend\\cython\\socket.pyx", line 194, in zmq.backend.cython.socket._recv_copy
  File "zmq\\backend\\cython\\checkrc.pxd", line 22, in zmq.backend.cython.checkrc._check_rc
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000232FC362050>
Traceback (most recent call last):
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000232FC362830>
Traceback (most recent call last):
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000232FC363130>
Traceback (most recent call last):
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000232FC3636D0>
Traceback (most recent call last):
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000232FC363C70>
Traceback (most recent call last):
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'
Exception ignored in: <function makeClient.<locals>.GenesisRPCClientTemplate.__del__ at 0x00000232FC3C4280>
Traceback (most recent call last):
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 134, in __del__
  File "D:\Tests\AIlice\AIlice\ailice\common\lightRPC.py", line 118, in Send
AttributeError: 'NoneType' object has no attribute 'REQ'

That is at the point where the models would normally be downloaded. Is there a temporary outage with the model server/hosts?

@stevenlu137
Copy link
Collaborator

AIlice's peripheral modules (including speech) are RPC services running in independent processes. Some modules are required for AIlice to start; if one of them fails to start, AIlice fails as well.
In this case the speech module failed to start. You can start the module manually with the following command and inspect its error output:

python3 -m ailice.modules.ASpeech --addr=ipc:///tmp/ASpeech.ipc

In the future, I will add error handling and user-friendly error reporting to make failures like this clearer to users.
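
Until then, a small helper can make such startup failures visible by capturing whatever a module prints before it dies. This is a generic sketch, not part of AIlice; `capture_startup` is a hypothetical name, and the commented call mirrors the command above:

```python
import subprocess
import sys

def capture_startup(cmd, timeout=10):
    """Run a command and return whatever it prints before exiting or timing out."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    try:
        out, _ = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        # Still running after the timeout: assume it started successfully.
        proc.kill()
        out, _ = proc.communicate()
    return out

# e.g. print(capture_startup([sys.executable, "-m", "ailice.modules.ASpeech",
#                             "--addr=ipc:///tmp/ASpeech.ipc"]))
```

Any import error or traceback the module emits on startup then lands in the returned string instead of being swallowed by the parent process.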

@stevenlu137
Copy link
Collaborator

In addition, because the current timeout setting is too short, this error may occasionally occur. Just try again.

@stevenlu137
Copy link
Collaborator

I just submitted a change to the master branch to avoid "Resource temporarily unavailable" errors caused by slow startup of peripheral modules. You can update the code to see if the problem is solved.
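
For anyone who still hits the race, the generic pattern is to retry the initial RPC handshake with exponential backoff instead of failing on the first timeout. A minimal sketch; `retry_call` is a hypothetical helper, not AIlice's actual code:

```python
import time

def retry_call(fn, attempts=5, base_delay=0.5):
    """Call fn(), retrying with exponential backoff until it succeeds
    or attempts are exhausted (in which case the last error propagates)."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```

Wrapping each client connection in something like `retry_call` would tolerate peripheral modules that need a few extra seconds to come up.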

@SoftologyPro
Copy link
Author

SoftologyPro commented Feb 27, 2024

OK, testing again today.

This works
python ailicemain.py --modelID=hf:Open-Orca/Mistral-7B-OpenOrca --prompt="main" --quantization=8bit --contextWindowRatio=0.6

This still fails with "Resource temporarily unavailable"
python ailiceweb.py --modelID=hf:Open-Orca/Mistral-7B-OpenOrca --prompt="main" --quantization=8bit --contextWindowRatio=0.6 --speechOn --ttsDevice=cuda --sttDevice=cuda

If I try python -m ailice.modules.ASpeech --addr=ipc:///tmp/ASpeech.ipc I get new errors...

(.venv) D:\Tests\AIlice\AIlice\ailice>python -m ailice.modules.ASpeech --addr=ipc:///tmp/ASpeech.ipc
Traceback (most recent call last):
  File "D:\Python\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "D:\Python\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\Tests\AIlice\AIlice\ailice\modules\ASpeech.py", line 8, in <module>
    from ailice.modules.speech.ATTS_LJS import T2S_LJS
  File "D:\Tests\AIlice\AIlice\ailice\modules\speech\ATTS_LJS.py", line 1, in <module>
    from espnet2.bin.tts_inference import Text2Speech
  File "D:\Tests\AIlice\AIlice\.venv\lib\site-packages\espnet2\bin\tts_inference.py", line 17, in <module>
    from typeguard import check_argument_types
ImportError: cannot import name 'check_argument_types' from 'typeguard' (D:\Tests\AIlice\AIlice\.venv\lib\site-packages\typeguard\__init__.py)

This is the batch file I now use to install a clean setup from scratch for testing. It uses the original pip install -e methods and updates bitsandbytes to a working version and torch for GPU support.

@echo off
echo *** Deleting AIlice directory if it exists
if exist AIlice\. rd /S /Q AIlice
echo *** Deleting config.json
del /q "%USERPROFILE%\AppData\Local\Steven Lu\ailice\config.json"
echo *** Cloning AIlice repository
git clone https://github.com/myshell-ai/AIlice.git
cd AIlice
echo *** setting up virtual environment
if exist .venv\. rd /S /Q .venv
python -m venv .venv
echo *** activating virtual environment
call .venv\scripts\activate.bat
echo *** updating pip
python.exe -m pip install --upgrade pip
echo *** pip install -e .
pip install -e .
echo *** pip install -e .[speech]
set SETUPTOOLS_USE_DISTUTILS=stdlib
pip install -e .[speech]
echo *** pip install -e .[finetuning]
pip install -e .[finetuning]
echo *** pip install -e .
pip install -e .
echo *** installing GPU torch
pip uninstall -y torch
pip uninstall -y torch
pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts torch==2.1.1+cu118 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip uninstall -y charset-normalizer
pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts charset-normalizer==3.3.2
echo *** fixing bitsandbytes
pip uninstall -y bitsandbytes
pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts https://softology.pro/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl
cd ..
echo *** finished AIlice install

@stevenlu137
Copy link
Collaborator

Ah, this is a known issue. Some dependencies (likely espnet) occasionally publish versions that do not correctly constrain their own dependency, typeguard, causing problems.
Previously AIlice pinned typeguard==2.13.3 as a remedy, but since typeguard is not a direct dependency of AIlice, that was not a good option, so I removed the pin in a recent commit. Perhaps it should be added to requirements.txt instead.
You can try installing a compatible typeguard yourself.
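
A quick way to see whether an installed typeguard still offers the 2.x API that espnet imports (check_argument_types() was removed in typeguard 3.0) is a simple version gate. A sketch; the helper name is made up:

```python
def has_old_typeguard_api(ver: str) -> bool:
    """True if this typeguard version still ships check_argument_types()
    (the function espnet imports); it was dropped in typeguard 3.0."""
    return int(ver.split(".")[0]) < 3

# has_old_typeguard_api("2.13.3") -> True
# has_old_typeguard_api("4.1.5")  -> False
```

If the check fails, `pip install typeguard==2.13.3` restores the old API, which matches the fix that worked below.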

@SoftologyPro
Copy link
Author

Rolling back typeguard got it working.
I have to do that all the time when packages outside my control get an upgrade (without a pinned version number).
The Gradio record problem is still there, but that will probably need a Gradio fix.
Here is a batch file to install clean on Windows. Create a new directory and run this batch file inside it.

@echo off
echo *** Deleting AIlice directory if it exists
if exist AIlice\. rd /S /Q AIlice
echo *** Deleting config.json
del /q "%USERPROFILE%\AppData\Local\Steven Lu\ailice\config.json"
echo *** Cloning AIlice repository
git clone https://github.com/myshell-ai/AIlice.git
cd AIlice
echo *** setting up virtual environment
if exist .venv\. rd /S /Q .venv
python -m venv .venv
echo *** activating virtual environment
call .venv\scripts\activate.bat
echo *** updating pip
python.exe -m pip install --upgrade pip
echo *** pip install -e .
pip install -e .
echo *** pip install -e .[speech]
set SETUPTOOLS_USE_DISTUTILS=stdlib
pip install -e .[speech]
echo *** pip install -e .[finetuning]
pip install -e .[finetuning]
echo *** pip install -e .
pip install -e .
echo *** installing GPU torch
pip uninstall -y torch
pip uninstall -y torch
pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts torch==2.1.1+cu118 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
echo *** patching other Python packages
pip uninstall -y charset-normalizer
pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts charset-normalizer==3.3.2
pip uninstall -y typing_extensions
pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts typing_extensions==4.8.0
pip uninstall -y typeguard
pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts typeguard==2.13.3
echo *** fixing bitsandbytes
pip uninstall -y bitsandbytes
pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts https://softology.pro/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl
cd ..
echo *** finished AIlice install

@SoftologyPro
Copy link
Author

Launching the Gradio UI in Firefox hits a bug where the recorded sound plays back at double speed and doubled up.
Launching the Gradio UI in Edge starts, but when you click "Enable microphone access", Edge hangs.
Launching the Gradio UI in Chrome works: the audio records correctly. It gave errors the first time, but the second time it works: record, ask a question, stop recording, and the UI shows the question while the AI answers and speaks.

@stevenlu137
Copy link
Collaborator

Thank you for your detailed information!
I usually don't work under Windows, so some things here still confuse me. For example, does pip install the CPU version of torch by default?

@SoftologyPro
Copy link
Author

Thank you for your detailed information! I usually don't work under Windows, so some things here still confuse me. For example, does pip install the CPU version of torch by default?

If you just install "torch" on Windows you get the CPU version. You need to specify the GPU version with "torch==2.1.1+cu118" and the matching cu118 index URL. Then the other fixes are needed so that Gradio and the scripts start. Most of my environments these days end with a series of pip uninstall / pip install steps to pin versions of packages that have since been updated and broken older code.
