RuntimeError: "LayerNormKernelImpl" #110
Hello! Very sorry you're experiencing issues. My experience is that this particular error message is usually a red herring, and this appears to be the case here - the message at the top stating […]. Thanks, and again sorry for the trouble!
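For context on why it reads like a red herring: PyTorch raises this whenever a LayerNorm runs in float16 on a CPU build that lacks the Half kernel (which applies to the torch 2.1.0+cu118 build shown in the log below), so the message only says that some module was still in float16 on the CPU path, not which one. A minimal sketch reproducing and avoiding it, assuming nothing beyond a stock PyTorch install:

```python
import torch

ln = torch.nn.LayerNorm(8)
x = torch.randn(2, 8)

# On CPU builds without a Half LayerNorm kernel this raises:
# RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
try:
    ln.half()(x.half())
except RuntimeError as err:
    print(err)

# float32 always works on CPU; bfloat16 also works where supported.
print(ln.float()(x.float()).shape)  # torch.Size([2, 8])
```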
Same here: WIN11, Intel Core i7 with HD520 integrated GPU.
Same error on a full AMD build, 7950X3D & 6900XT.
Had the same problem on v2.5; now I've upgraded to the latest v3 but still keep getting this error:
Received error builtins.RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
In settings, precision is set to use only "Full".
So why is it even trying to use "Half"?
Tried it with different models, both SD1.5 & SDXL, and got the same error :(
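For what it's worth, "Full" should mean every submodule (UNet, VAE, text encoder) is loaded in float32, so the error suggests one submodule was left in float16 despite the setting. Here is a minimal sketch of what full-precision CPU loading looks like, assuming a diffusers-style pipeline; this is illustrative only, not enfugue's actual loading code:

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative: load every submodule in float32 so nothing on the
# CPU path is left in float16 ('Half').
pipe = StableDiffusionPipeline.from_single_file(
    "deliberate_v3.safetensors",  # checkpoint name as in the log below
    torch_dtype=torch.float32,    # float16 here reproduces the error on CPU
).to("cpu")
```

If any one component still reports torch.float16 after loading (check e.g. pipe.text_encoder.dtype), that component is the one that trips the kernel error on CPU.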
Here is the debug log:
stderr (may not be an error): DEBUG:root:Initializing MLIR with module: _site_initialize_0
DEBUG:root:Registering dialects from initializer
DEBUG:jax._src.path:etils.epath was not found. Using pathlib for file I/O.
DEBUG:enfugue:Changing to scheduler EulerDiscreteScheduler (eds)
DEBUG:enfugue:Instruction 2 beginning task “Preparing Inference Pipeline”
DEBUG:enfugue:Inferencing on cpu, must use dtype bfloat16
DEBUG:enfugue:Initializing pipeline from checkpoint at D:\AIs\Stability Matrix\Data\Models\StableDiffusion\deliberate_v3.safetensors. Arguments are {'cache_dir': 'D:\AIs\Stability Matrix\Data\.cache\enfugue\cache', 'engine_size': '512', 'tiling_stride': '0', 'requires_safety_checker': 'False', 'torch_dtype': 'dtype', 'force_full_precision_vae': 'False', 'controlnets': {}, 'ip_adapter': 'IPAdapter', 'task_callback': 'function', 'offload_models': 'True', 'load_safety_checker': 'False'}
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: ''. If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
torch_jit_internal.py:857: UserWarning: Unable to retrieve source for @torch.jit._overload function: .
warnings.warn(
torch_jit_internal.py:857: UserWarning: Unable to retrieve source for @torch.jit._overload function: .
warnings.warn(
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Pipeline still initializing. Please wait.
WARNING:tensorflow:AutoGraph is not available in this environment: functions lack code information. This is typical of some environments like the interactive Python shell. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#access-to-source-code for more information.
DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Instruction 2 beginning task “Loading checkpoint file deliberate_v3.safetensors”
INFO:enfugue:No configuration file found for checkpoint deliberate_v3, using Stable Diffusion 1.5
INFO:enfugue:Checkpoint has 4 input channels
DEBUG:enfugue:Instruction 2 beginning task “Loading UNet”
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Loading 686 keys into UNet state dict (non-strict)
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Offloading enabled; sending UNet to CPU
DEBUG:enfugue:Instruction 2 beginning task “Loading Default VAE”
DEBUG:enfugue:Loading 248 keys into Autoencoder state dict (strict). Autoencoder scale is 0.18215
DEBUG:enfugue:Offloading enabled; sending VAE to CPU
DEBUG:enfugue:Instruction 2 beginning task “Loading preview VAE madebyollin/taesd”
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:http.client:send: b'HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.0; python/3.10.11; torch/2.1.0+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nauthorization: Bearer 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\r\nX-Amzn-Trace-Id: 0776431b-6e6e-4fb1-bb49-27e380bf6b2d\r\n\r\n'
DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'
DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8
DEBUG:http.client:header: Content-Length: 551
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Date: Fri, 17 Nov 2023 17:23:32 GMT
DEBUG:http.client:header: X-Powered-By: huggingface-moon
DEBUG:http.client:header: X-Request-Id: Root=1-6557a194-25b783783848cf9673df5f53;0776431b-6e6e-4fb1-bb49-27e380bf6b2d
DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co
DEBUG:http.client:header: Vary: Origin
DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
DEBUG:http.client:header: X-Robots-Tag: none
DEBUG:http.client:header: X-Repo-Commit: 73b7d1c8836d16997316fb94c4b36a9095442c1d
DEBUG:http.client:header: Accept-Ranges: bytes
DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox
DEBUG:http.client:header: ETag: "62f01c3eb447730d88eb61aca75331a564301e5a"
DEBUG:http.client:header: X-Cache: Miss from cloudfront
DEBUG:http.client:header: Via: 1.1 876bec0443fc8f764d98d36e203f84e0.cloudfront.net (CloudFront)
DEBUG:http.client:header: X-Amz-Cf-Pop: JFK52-P3
DEBUG:http.client:header: X-Amz-Cf-Id: jdg7EdWh3AxR7ezET2ljKF2p6YpxuZ9hdkWvEbZXRxC-3RNJPiejkw==
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 0
DEBUG:http.client:send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.0; python/3.10.11; torch/2.1.0+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nauthorization: Bearer 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\r\nX-Amzn-Trace-Id: bdb63051-af58-4de0-9fdf-a44cf00f1074\r\n\r\n'
DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'
DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8
DEBUG:http.client:header: Content-Length: 4519
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Date: Fri, 17 Nov 2023 17:23:32 GMT
DEBUG:http.client:header: X-Powered-By: huggingface-moon
DEBUG:http.client:header: X-Request-Id: Root=1-6557a194-6376dd3244dbc09173a97734;bdb63051-af58-4de0-9fdf-a44cf00f1074
DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co
DEBUG:http.client:header: Vary: Origin
DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
DEBUG:http.client:header: X-Repo-Commit: 32bd64288804d66eefd0ccbe215aa642df71cc41
DEBUG:http.client:header: Accept-Ranges: bytes
DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox
DEBUG:http.client:header: ETag: "2c19f6666e0e163c7954df66cb901353fcad088e"
DEBUG:http.client:header: X-Cache: Miss from cloudfront
DEBUG:http.client:header: Via: 1.1 876bec0443fc8f764d98d36e203f84e0.cloudfront.net (CloudFront)
DEBUG:http.client:header: X-Amz-Cf-Pop: JFK52-P3
DEBUG:http.client:header: X-Amz-Cf-Id: 5erFe9aUVOvy9x2QD3069ngaMI6mZlnHUl0F73OPuXEEjljTcNO5rQ==
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Offloading enabled; sending text encoder to CPU
DEBUG:enfugue:Instruction 2 beginning task “Loading tokenizer openai/clip-vit-large-patch14”
DEBUG:http.client:send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.0; python/3.10.11; torch/2.1.0+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nauthorization: Bearer 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\r\nX-Amzn-Trace-Id: 55f50cd2-704c-4a18-b6b8-ac9c9abd67f6\r\n\r\n'
DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'
DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8
DEBUG:http.client:header: Content-Length: 961143
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Date: Fri, 17 Nov 2023 17:23:40 GMT
DEBUG:http.client:header: X-Powered-By: huggingface-moon
DEBUG:http.client:header: X-Request-Id: Root=1-6557a19c-1518ba5a0e642834504b84de;55f50cd2-704c-4a18-b6b8-ac9c9abd67f6
DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co
DEBUG:http.client:header: Vary: Origin
DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
DEBUG:http.client:header: X-Repo-Commit: 32bd64288804d66eefd0ccbe215aa642df71cc41
DEBUG:http.client:header: Accept-Ranges: bytes
DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox
DEBUG:http.client:header: ETag: "4297ea6a8d2bae1fea8f48b45e257814dcb11f69"
DEBUG:http.client:header: X-Cache: Miss from cloudfront
DEBUG:http.client:header: Via: 1.1 876bec0443fc8f764d98d36e203f84e0.cloudfront.net (CloudFront)
DEBUG:http.client:header: X-Amz-Cf-Pop: JFK52-P3
DEBUG:http.client:header: X-Amz-Cf-Id: MOmuoWkpr2pn5cQzOqL-PLzgFoldPlfBdxmv-r2okM1vLz4xK9CHTg==
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0
diffusers\pipelines\pipeline_utils.py:749: FutureWarning: `torch_dtype` is deprecated and will be removed in version 0.25.0.
deprecate("torch_dtype", "0.25.0", "")
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Setting scheduler to EulerDiscreteScheduler
DEBUG:enfugue:Instruction 2 beginning task “Executing Inference”
DEBUG:enfugue:Calling pipeline with arguments {'latent_callback': 'function', 'width': '512', 'height': '512', 'strength': '0.99', 'tile': '(False, False)', 'freeu_factors': '[1.2, 1.4, 0.9, 0.2]', 'num_inference_steps': '13', 'num_results_per_prompt': '1', 'tiling_stride': '0', 'guidance_scale': '7', 'prompts': '[Prompt]', 'progress_callback': 'function', 'latent_callback_steps': '0', 'latent_callback_type': 'pil'}
DEBUG:enfugue:Calculated overall steps to be 14 - 0 image prompt embedding probe(s) + [1 chunk(s) * (0 encoding step(s) + (1 decoding step(s) * 1 frame(s)) + (1 temporal chunk(s) * 13 inference step(s))]
ERROR:enfugue:Received error builtins.RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
DEBUG:enfugue:Traceback (most recent call last):
File "enfugue\diffusion\process.py", line 185, in run
File "enfugue\diffusion\process.py", line 250, in handle
File "enfugue\diffusion\process.py", line 284, in execute_diffusion_plan
File "enfugue\diffusion\invocation.py", line 1219, in execute
File "enfugue\diffusion\invocation.py", line 1375, in execute_inference
File "enfugue\diffusion\manager.py", line 4472, in call
File "torch\utils_contextlib.py", line 115, in decorate_context
File "enfugue\diffusion\pipeline.py", line 3455, in call
File "enfugue\diffusion\pipeline.py", line 1036, in encode_prompt
File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
File "torch\nn\modules\module.py", line 1527, in _call_impl
File "transformers\models\clip\modeling_clip.py", line 840, in forward
File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
File "torch\nn\modules\module.py", line 1527, in _call_impl
File "transformers\models\clip\modeling_clip.py", line 745, in forward
File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
File "torch\nn\modules\module.py", line 1527, in _call_impl
File "transformers\models\clip\modeling_clip.py", line 656, in forward
File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
File "torch\nn\modules\module.py", line 1527, in _call_impl
File "transformers\models\clip\modeling_clip.py", line 385, in forward
File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
File "torch\nn\modules\module.py", line 1527, in _call_impl
File "torch\nn\modules\normalization.py", line 196, in forward
File "torch\nn\functional.py", line 2543, in layer_norm
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
INFO:enfugue:Reached maximum idle time after 15.2 seconds, exiting engine process
Can you fix this?
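Reading the traceback: the failure is inside encode_prompt, in the CLIP text encoder's LayerNorm, so the text encoder is the submodule that stayed in float16 on the CPU. A hedged workaround sketch, reusing the pipe object from the loading sketch above (the attribute names are diffusers conventions, not enfugue's internals):

```python
import torch

# Illustrative workaround: if the pipeline is on CPU and its text
# encoder is still float16, cast that one submodule up to float32.
if pipe.device.type == "cpu" and pipe.text_encoder.dtype == torch.float16:
    pipe.text_encoder = pipe.text_encoder.to(torch.float32)
```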