
ONNX Inference Speed Extremely Slow Compared to .pt Model #4808


Description

@shrijan00

Hi,
I tried to run inference on a 1024×1536 image using both the ONNX model and the .pt model.
As you can see in the attached screenshot, there is a huge time difference between the two cases.
[screenshot: inference timing for the ONNX vs. .pt model]

Any reason for this?
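
Since the issue doesn't show how the timings were collected, here is a minimal sketch of the kind of side-by-side measurement being compared. It assumes a plain PyTorch model exported with torch.onnx.export and run through onnxruntime on CPU; the tiny model, file name, and input/output names are illustrative placeholders, not the reporter's actual setup.

```python
import time

import torch
import onnxruntime as ort

# Small stand-in model (the real issue presumably uses a much larger network).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
).eval()

# Same 1024x1536 resolution mentioned in the issue.
x = torch.randn(1, 3, 1024, 1536)

# Export to ONNX; opset and tensor names are illustrative choices.
torch.onnx.export(model, x, "model.onnx",
                  input_names=["images"], output_names=["output"])

# Time the .pt (PyTorch) path, with one warm-up run excluded from timing.
with torch.no_grad():
    model(x)
    t0 = time.perf_counter()
    model(x)
    torch_ms = (time.perf_counter() - t0) * 1000

# Time the ONNX Runtime path; the providers list controls CPU vs. GPU execution
# and is a common source of large speed differences between the two paths.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
feed = {sess.get_inputs()[0].name: x.numpy()}
sess.run(None, feed)  # warm-up
t0 = time.perf_counter()
sess.run(None, feed)
ort_ms = (time.perf_counter() - t0) * 1000

print(f"torch: {torch_ms:.1f} ms, onnxruntime: {ort_ms:.1f} ms")
```

If the .pt timing ran on GPU while the ONNX session silently fell back to CPUExecutionProvider, that alone could explain a gap of this size; comparing the providers actually used is usually the first thing to check.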
