This repository was archived by the owner on Jul 1, 2025. It is now read-only.
There are two ways to execute neural nets through the Glow compiler:
- Use Glow as a standalone compiler and load Caffe2/ONNX models; see ImageClassifier for an example.
- Embed Glow into PyTorch/Caffe2 via the ONNXIFI interface.
The purpose of this issue is to track completed work on ONNXIFI support and, more importantly, to outline future plans.
Current state
- At this point we've made a lot of progress and can execute CV models; see Resnet50 support.
- More sophisticated models that involve a variety of operators can be executed as well; see the list of related closed issues here.
- Support for concurrent execution was added, allowing incoming PyTorch/Caffe2 concurrency to be throttled to the concurrency level supported by a specific Glow backend.
Future work
- Stability and error handling are among the most important aspects that need to be in place.
- Execution of quantized int8 and fp16 models through the ONNXIFI interface.
- Improved debugging experience, with per-operator logging/statistics.