System information
Latest
What is the problem that this feature solves?
- Before ONNX 1.13.0, these models were stored on AWS. The URLs broke and the owner could not be reached, so we have transferred the data to download.onnxruntime.ai, updated the URLs in code, and are going to release patch 1.13.1 to fix it. Today, users can use onnx.backend to download these real-world models and run the backend tests (checker and shape_inference) with pytest under the onnx repo (see the sketch after this list). It is a burden that onnx downloads these large models every time in a fresh environment.
- These models are quite old; they were added 4 years ago (using opset 9), probably because there was no ONNX Model Zoo at the time. After the ONNX Model Zoo was created, we transferred the same models to https://github.com/onnx/models, so people interested in real-world models can now find them there. It is redundant for onnx to host identical models in two different places.
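For context, here is a minimal sketch of how a backend wires up these test cases today through onnx's onnx.backend.test harness. `MyBackend` is a hypothetical placeholder for a real backend implementation; the actual setup in downstream consumers differs.

```python
import onnx.backend.base
import onnx.backend.test

class MyBackend(onnx.backend.base.Backend):
    """Hypothetical placeholder; a real backend overrides run_model/run_node."""

# Collecting the suite generates the model test cases; running them under
# pytest downloads the real models into a fresh environment's cache.
backend_test = onnx.backend.test.BackendTest(MyBackend, __name__)
globals().update(backend_test.test_cases)
```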
Alternatives considered
- Keep them on download.onnxruntime.ai. However, that hosting is not very reliable and randomly breaks pipelines from time to time, which slows down the development process.
- Download these models from https://github.com/onnx/models through onnx.hub in the backend code. However, the Git LFS download quota is limited, so downloads will break again whenever the quota runs out for the month.
- Use light local ONNX files instead of heavy remote files (#4861). I think this idea is good to have after we eventually decide to deprecate real model tests from the onnx repo. We could still let ONNX test the same network architectures at a light size, which is more feasible for running tests; a minimal sketch of such a model follows this list.
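To make the #4861 idea concrete, here is a minimal sketch of what a light local test model could look like, built with onnx.helper instead of downloading a pretrained network. The graph here is purely illustrative and not taken from that issue.

```python
import onnx
from onnx import TensorProto, helper

# A tiny stand-in graph: one Relu node and a few floats instead of
# megabytes of pretrained weights.
node = helper.make_node("Relu", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node],
    "light_local_test_graph",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 3])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 9)])

# The same checks as the real-model tests, but without any download.
onnx.checker.check_model(model)
onnx.shape_inference.infer_shapes(model, check_type=True)
```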
Describe the feature
- Deprecate real model tests from the onnx repo starting with the next release (probably ONNX 1.14.0). Going forward, users won't be able to download these real models through onnx.backend code.
- Instead, users can use onnx.hub to download identical models from https://github.com/onnx/models (only resnet50.onnx is slightly different, because the one in the Zoo was batchified). Taking AlexNet as an example for backend testing:
```python
import itertools

import onnx

# Download AlexNet (opset 9) from the ONNX Model Zoo via onnx.hub.
model = onnx.hub.load("alexnet", opset=9)

# Run the same checks the real-model backend tests perform today.
onnx.checker.check_model(model)
model = onnx.shape_inference.infer_shapes(model, check_type=True, strict_mode=True)

# Verify that every node output has a fully inferred type and static shape.
value_infos = {
    vi.name: vi
    for vi in itertools.chain(model.graph.value_info, model.graph.output)
}
for node in model.graph.node:
    for i, output in enumerate(node.output):
        # Dropout's optional mask output may be absent from value_info.
        if node.op_type == "Dropout" and i != 0:
            continue
        assert output in value_infos
        tt = value_infos[output].type.tensor_type
        assert tt.elem_type != onnx.TensorProto.UNDEFINED
        for dim in tt.shape.dim:
            assert dim.WhichOneof("value") == "dim_value"
```
- Meanwhile, although users will no longer be able to access these models from onnx.backend code, they can still download these 9 models directly from the download.onnxruntime.ai domain (a fetch-and-extract sketch follows the list):
- https://download.onnxruntime.ai/onnx/models/bvlc_alexnet.tar.gz
- https://download.onnxruntime.ai/onnx/models/densenet121.tar.gz
- https://download.onnxruntime.ai/onnx/models/inception_v1.tar.gz
- https://download.onnxruntime.ai/onnx/models/inception_v2.tar.gz
- https://download.onnxruntime.ai/onnx/models/resnet50.tar.gz
- https://download.onnxruntime.ai/onnx/models/shufflenet.tar.gz
- https://download.onnxruntime.ai/onnx/models/squeezenet.tar.gz
- https://download.onnxruntime.ai/onnx/models/vgg19.tar.gz
- https://download.onnxruntime.ai/onnx/models/zfnet512.tar.gz
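For anyone who still needs one of these archives after the deprecation, a minimal fetch-and-extract sketch using only the standard library; the archive layout noted in the comment is my assumption based on the current backend test data.

```python
import tarfile
import urllib.request

url = "https://download.onnxruntime.ai/onnx/models/bvlc_alexnet.tar.gz"
urllib.request.urlretrieve(url, "bvlc_alexnet.tar.gz")

with tarfile.open("bvlc_alexnet.tar.gz") as tar:
    # Assumed layout: model.onnx plus test_data_set_* directories.
    tar.extractall("bvlc_alexnet")
```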
- From a testing perspective, the onnx repo still covers a similar test in https://github.com/onnx/onnx/blob/main/.github/workflows/weekly_mac_ci.yml. It runs onnx checker, shape_inference, and version_converter weekly against all models from the ONNX Model Zoo (including these real models from the onnx repo). By contrast, today every fresh pipeline downloads these 9 models each time. A rough approximation of the weekly checks appears below.
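A rough approximation of what that weekly job does per model; the authoritative steps live in weekly_mac_ci.yml, and attribute names such as `info.model` follow onnx.hub's ModelInfo as I understand it.

```python
import onnx
import onnx.defs
import onnx.hub
import onnx.version_converter

for info in onnx.hub.list_models():
    model = onnx.hub.load(info.model)
    onnx.checker.check_model(model)
    onnx.shape_inference.infer_shapes(model, check_type=True)
    # Exercise the version converter by targeting the current default opset.
    onnx.version_converter.convert_version(model, onnx.defs.onnx_opset_version())
```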
Will this influence the current api (Y/N)?
Y. Users cannot get real models from onnx.backend anymore.
Feature Area
test
Are you willing to contribute it (Y/N)
Yes
Notes
Any feedback is welcome.