Symlink Support #86

Open
ChiNoel-osu opened this issue Nov 22, 2023 · 0 comments
ChiNoel-osu commented Nov 22, 2023

I was using faster-whisper, and the downloaded model uses relative symlinks (to avoid duplication, I suppose), but the webui (or ctranslate2) doesn't like them:

Traceback (most recent call last):
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\subsai\webui.py", line 545, in <module>
    run()
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\subsai\webui.py", line 538, in run
    webui()
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\subsai\webui.py", line 318, in webui
    subs = _transcribe(file_path, stt_model_name, model_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 194, in wrapper
    return cached_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 223, in __call__
    return self._get_or_create_cached_value(args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 248, in _get_or_create_cached_value
    return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 302, in _handle_cache_miss
    computed_value = self._info.func(*func_args, **func_kwargs)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\subsai\webui.py", line 189, in _transcribe
    model = subs_ai.create_model(model_name, model_config=model_config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\subsai\main.py", line 96, in create_model
    return AVAILABLE_MODELS[model_name]['class'](model_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\subsai\models\faster_whisper_model.py", line 240, in __init__
    self.model = WhisperModel(model_size_or_path=self._model_size_or_path,
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\LLM\Subs-AI\venv\Lib\site-packages\faster_whisper\transcribe.py", line 120, in __init__
    self.model = ctranslate2.models.Whisper(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Unable to open file 'model.bin' in model 'C:\Users\Victor\.cache\huggingface\hub\models--guillaumekln--faster-whisper-large-v2\snapshots\f541c54c566e32dc1fbce16f98df699208837e8b'

models--guillaumekln--faster-whisper-large-v2\snapshots\f541c54c566e32dc1fbce16f98df699208837e8b is a folder that contains the model files; those files are symlinks to the actual files in the models--guillaumekln--faster-whisper-large-v2\blobs folder.
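
For reference, this is roughly how the symlinks can be inspected (a quick sketch; the cache path and snapshot hash are just the ones from my traceback and will differ on other machines, and Path.readlink() needs Python 3.9+):

```python
from pathlib import Path

# Example path taken from the traceback above -- adjust for your machine.
snapshot = Path(r"C:\Users\Victor\.cache\huggingface\hub"
                r"\models--guillaumekln--faster-whisper-large-v2"
                r"\snapshots\f541c54c566e32dc1fbce16f98df699208837e8b")

for entry in snapshot.iterdir():
    if entry.is_symlink():
        # Each symlink (model.bin, config.json, ...) points into the blobs folder
        print(f"{entry.name} -> {entry.readlink()}")
    else:
        print(f"{entry.name} (regular file)")
```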

If I copy the actual files over from the blobs folder and rename them to match the symlink names, this error goes away and the rest works flawlessly.
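
In case it helps anyone hitting the same thing, that copy-and-rename workaround can be scripted roughly like this (a rough sketch, not a proper fix; again, the path is the one from my setup):

```python
import shutil
from pathlib import Path

# Example path taken from the traceback above -- adjust for your machine.
snapshot = Path(r"C:\Users\Victor\.cache\huggingface\hub"
                r"\models--guillaumekln--faster-whisper-large-v2"
                r"\snapshots\f541c54c566e32dc1fbce16f98df699208837e8b")

for link in snapshot.iterdir():
    if link.is_symlink():
        target = link.resolve()       # actual blob in the blobs folder
        link.unlink()                 # remove the symlink itself
        shutil.copy2(target, link)    # put a real copy in its place
        print(f"replaced symlink {link.name} with a copy of {target.name}")
```

Proper symlink support in the model-loading path would avoid having to duplicate the files like this.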
