
Unable to execute inference using Colab #128

Open
liabar01 opened this issue Apr 8, 2024 · 1 comment
Labels
help wanted Extra attention is needed

Comments


liabar01 commented Apr 8, 2024

When following the Colab notebook, two errors arise. One has been mentioned previously; the second relates to an import that does not appear to be used:
"AttributeError: torch._inductor.config.fx_graph_cache does not exist"

Removing the import still does not allow me to execute the notebook. I have also tried a local GPU (both poetry- and pip-based installs) and CPU.
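One hedged alternative to deleting the line: guard the assignment with hasattr, so the setting applies only on torch builds that have the attribute. This is a sketch using a stand-in namespace object (real code would operate on torch._inductor.config itself; on older builds the attribute simply does not exist, which is what raises the AttributeError above):

```python
from types import SimpleNamespace

# Stand-in for torch._inductor.config; substitute the real module in practice.
old_build = SimpleNamespace()                      # no fx_graph_cache attribute
new_build = SimpleNamespace(fx_graph_cache=False)  # attribute present

def enable_fx_graph_cache(config) -> bool:
    """Set fx_graph_cache only if this torch build exposes it (sketch)."""
    if hasattr(config, "fx_graph_cache"):
        config.fx_graph_cache = True
        return True
    return False

print(enable_fx_graph_cache(old_build))  # -> False (no crash on old torch)
print(enable_fx_graph_cache(new_build))  # -> True
```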



Using device=cuda
Loading model ...
using dtype=float16
Time to load model: 10.76 seconds
Compiling...Can take up to 2 mins.
---------------------------------------------------------------------------
BackendCompilerFailed                     Traceback (most recent call last)
[<ipython-input-5-1ac00c833092>](https://localhost:8080/#) in <cell line: 4>()
      2 from fam.llm.fast_inference import TTS
      3 
----> 4 tts = TTS()

50 frames
[/usr/lib/python3.10/concurrent/futures/_base.py](https://localhost:8080/#) in __get_result(self)
    401         if self._exception:
    402             try:
--> 403                 raise self._exception
    404             finally:
    405                 # Break a reference cycle with the exception in self._exception

BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Internal Triton PTX codegen error: 
ptxas /tmp/compile-ptx-src-b01c43, line 70; error   : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 70; error   : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 72; error   : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 72; error   : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 74; error   : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 74; error   : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 76; error   : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 76; error   : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas fatal   : Ptx assembly aborted due to errors


Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True
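The ptxas errors above point to a compute-capability mismatch rather than a bug in the notebook: the '.bf16' PTX features require sm_80 (Ampere) or newer, while Colab's free-tier T4 is sm_75 (Turing). A minimal sketch of the check, with the capability pair supplied as plain integers (on a real machine you would query it via torch.cuda.get_device_capability(); the helper names here are hypothetical):

```python
def supports_bf16(major: int, minor: int) -> bool:
    """True if a GPU of this compute capability can run bf16 PTX (needs sm_80+)."""
    return (major, minor) >= (8, 0)

def pick_dtype(major: int, minor: int) -> str:
    """Choose a compile dtype that ptxas can actually assemble (sketch)."""
    return "bfloat16" if supports_bf16(major, minor) else "float16"

print(pick_dtype(7, 5))  # T4 (sm_75)   -> float16
print(pick_dtype(8, 0))  # A100 (sm_80) -> bfloat16
```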
I also followed the README and tried both the pip- and poetry-based installs.

Any help appreciated. I have attached my 'patch', which has the workaround from the previous issue applied and the import removed:
MLECO-4788-MetaVoice-model-for-speech-generation-par.txt

Thanks,
Liam

@bprathap104
Same issue here; can anyone suggest a solution for this? I've added q.half() from #108.

@lucapericlp lucapericlp added the help wanted Extra attention is needed label May 14, 2024