
Missing type support in parser for various types (float16, bfloat16, ...) #6152

@TinaAMD

Description

Bug Report

Is the issue related to model conversion?

No.

Describe the bug

The parser supports only a subset of the TensorProto data types:

Status OnnxParser::Parse(TensorProto& tensorProto, const TypeProto& tensorTypeProto) {

The missing types are:

TensorProto_DataType_FLOAT16
TensorProto_DataType_BFLOAT16
TensorProto_DataType_FLOAT8E4M3FN
TensorProto_DataType_FLOAT8E4M3FNUZ
TensorProto_DataType_FLOAT8E5M2
TensorProto_DataType_FLOAT8E5M2FNUZ
TensorProto_DataType_COMPLEX64
TensorProto_DataType_COMPLEX128
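
For reference, a minimal sketch of the numeric values these enum members have in TensorProto.DataType (as defined in onnx.proto); the parser error quoted below reports this raw number, e.g. 10 for FLOAT16:

```python
# Numeric values of the unhandled TensorProto.DataType enum members,
# as defined in onnx.proto. When the parser hits one of these it
# reports the raw number (e.g. "...10" for FLOAT16 in the error below).
MISSING_TYPES = {
    10: "FLOAT16",
    14: "COMPLEX64",
    15: "COMPLEX128",
    16: "BFLOAT16",
    17: "FLOAT8E4M3FN",
    18: "FLOAT8E4M3FNUZ",
    19: "FLOAT8E5M2",
    20: "FLOAT8E5M2FNUZ",
}

for value, name in sorted(MISSING_TYPES.items()):
    print(f"{value}: TensorProto_DataType_{name}")
```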

System information

System independent.

Reproduction instructions

A sample model and instructions for float16 are given below; note, however, that it would be great to support all of the listed types, not just float16. The error is the same for the other types.

Sample model:
gemmfloat16.onnxtxt

Sample instructions:

import onnx
from pathlib import Path
onnx.parser.parse_model(Path("gemmfloat16.onnxtxt.txt").read_text())

Result:
onnx.parser.ParseError: b'[ParseError at position (...)]\nError context: <float16[4, 4] weight = {...}, float16[4] bias = {...}>\nUnhandled type: %d10'

(Note: files with the .onnxtxt extension cannot be uploaded, so the sample is attached as .onnxtxt.txt.)

Expected behavior

ONNX model is parsed successfully.
