[BUG] RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm) #197
Labels: bug
File "/home/pete/PycharmProjects/Time-Series-Classification-master/model/mmm4tsc.py", line 224, in forward
fused = self.visual_expert(concat)
File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/zeta/nn/modules/visual_expert.py", line 106, in __call__
normalized = self.norm(x)
File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/modules/normalization.py", line 196, in forward
return F.layer_norm(
File "/home/pete/anaconda3/envs/mamba/lib/python3.8/site-packages/torch/nn/functional.py", line 2543, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper_CUDA__native_layer_norm)
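For context, this error means the input tensor reaching the LayerNorm inside `visual_expert` lives on `cuda:0` while the LayerNorm's `weight` parameter is still on the CPU, i.e. the submodule's parameters were never moved to the GPU. A minimal sketch of the usual fix, using a plain `nn.LayerNorm` as a stand-in for the norm inside `VisualExpert` (the variable names here are illustrative, not from the actual codebase):

```python
import torch
import torch.nn as nn

# Pick the same device the rest of the model runs on.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

norm = nn.LayerNorm(8)   # stand-in for the norm inside VisualExpert
x = torch.randn(2, 8)    # input created on the CPU

# Move BOTH the module's parameters and the input to one device;
# calling .to(device) on only one of them reproduces the error.
norm = norm.to(device)
x = x.to(device)

out = norm(x)            # no device-mismatch RuntimeError
```

In practice this usually means calling `.to(device)` (or `.cuda()`) on the whole model after construction, so submodules like `visual_expert` get moved along with everything else, rather than moving individual tensors inside `forward`.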