- Hi @jeyblu,
- Hi @jcwchen, yes, we are assigning numpy bfloat16 values. We want to be able to use onnx/onnx/mapping.py and onnx/onnx/helper.py to generate bfloat16 models, e.g. with bfloat16 weights. If numpy doesn't currently support bfloat16, how do you create ONNX bfloat16 models? Thanks
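A minimal sketch of one workaround, in case it helps: compute the bfloat16 bit patterns from float32 yourself and hand the raw bytes to helper.make_tensor. The initializer name W and the truncation-based rounding (rather than round-to-nearest-even) are assumptions on my part, not anything ONNX prescribes.

```python
import numpy as np
from onnx import TensorProto, helper

def float32_to_bfloat16_bits(arr):
    # bfloat16 keeps float32's sign and 8-bit exponent; dropping the
    # low 16 mantissa bits of the float32 pattern yields bfloat16
    return (np.asarray(arr, dtype=np.float32).view(np.uint32) >> 16).astype(np.uint16)

weights = np.array([1.5, -2.25, 3.0], dtype=np.float32)
bf16_weights = helper.make_tensor(
    name="W",                       # hypothetical initializer name
    data_type=TensorProto.BFLOAT16,
    dims=weights.shape,
    vals=float32_to_bfloat16_bits(weights).tobytes(),
    raw=True,                       # store the 16-bit patterns as raw bytes
)
```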
- How do you convert fp32 ONNX models to bfloat16 ONNX models?
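As far as I know there is no one-call converter in the helper, so this has to be done by rewriting the model. A rough sketch that converts only the float32 initializers (the file names are hypothetical, and graph input/output/value_info types plus operator support for bfloat16 would still need to be handled separately):

```python
import numpy as np
import onnx
from onnx import TensorProto, numpy_helper

def float32_to_bfloat16_bits(arr):
    # truncate the low 16 bits of each float32 bit pattern
    return (arr.astype(np.float32).view(np.uint32) >> 16).astype(np.uint16)

model = onnx.load("model_fp32.onnx")
for init in model.graph.initializer:
    if init.data_type == TensorProto.FLOAT:
        arr = numpy_helper.to_array(init)
        init.ClearField("float_data")   # drop any fp32 payload stored field-wise
        init.raw_data = float32_to_bfloat16_bits(arr).tobytes()
        init.data_type = TensorProto.BFLOAT16
onnx.save(model, "model_bf16.onnx")
```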
- Is bfloat16 stored as uint16, as suggested by line 16 of mapping.py? For float16, lines 354-355 of helper.py use astype(np.float16).view(dtype=np.uint16), but for bfloat16, lines 356-357 of helper.py use astype(np.float32). Does that astype(np.float32) refer to the source float32 tensor from which the data is converted, or to the target bfloat16 tensor?
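My reading (an assumption on my part, not confirmed by the code comments): astype(np.float32) refers to the source data, since numpy has no bfloat16 dtype to convert into; the stored uint16 values are then just the high half of each float32 bit pattern. A quick illustration of the two paths:

```python
import numpy as np

x = np.array([1.5], dtype=np.float32)

# float16: a real numerical conversion, then the 16 result bits are
# reinterpreted as uint16 for storage
fp16_bits = x.astype(np.float16).view(np.uint16)          # 0x3E00

# bfloat16: no numpy dtype exists, so keep the float32 source bits
# and take only the high 16 of them
bf16_bits = (x.view(np.uint32) >> 16).astype(np.uint16)   # 0x3FC0
```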
- If numpy eventually supports bfloat16, this workaround should be changed, right? It may be helpful to add some documentation to remind whoever needs to revert this workaround later.
- Is it possible for onnx to support the bfloat16 datatype? Thanks.
Line 16 of onnx/onnx/mapping.py and lines 331-333 of onnx/onnx/helper.py show that bfloat16 is recast to float16:
int(TensorProto.BFLOAT16): np.dtype('float16'), # native numpy does not support bfloat16
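For completeness, going the other way (reading bfloat16 values back into numpy) can be sketched by widening the stored 16-bit patterns into float32. This assumes the tensor's raw data has already been loaded into a uint16 array:

```python
import numpy as np

def bfloat16_bits_to_float32(bits):
    # place the 16 stored bits in the high half of a 32-bit word,
    # then reinterpret that word as float32
    return (bits.astype(np.uint32) << 16).view(np.float32)

bits = np.array([0x3FC0], dtype=np.uint16)   # bfloat16 pattern for 1.5
print(bfloat16_bits_to_float32(bits))        # [1.5]
```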