Description
Search before asking
- I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
Hi, I'm trying to run `detect.py` with my ONNX model, and I find that the image size is changed from (416, 416) to (448, 448) by the `check_img_size` function, because `model.stride = 64`. However, my model is a P5 model. Digging in, I find that the default `stride` is set to 64 in `DetectMultiBackend`, as follows:

```python
stride, names = 64, [f'class{i}' for i in range(1000)]  # assign defaults
```
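For context, `check_img_size` just rounds the requested size up to a multiple of the stride; a minimal sketch of that rounding (mirroring `make_divisible` from `utils/general.py`):

```python
import math

def make_divisible(x, divisor):
    # Round x up to the nearest multiple of divisor, as utils/general.py does
    return math.ceil(x / divisor) * divisor

print(make_divisible(416, 64))  # 448 -> the unwanted resize with the default stride
print(make_divisible(416, 32))  # 416 -> unchanged with the correct P5 stride
```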
If the input is a .pt model, the `stride` is then overwritten from the checkpoint:

```python
stride = max(int(model.stride.max()), 32)  # model stride
```
However, if the input is a .onnx model, the `stride` is never updated. So, after exporting the ONNX model with `python export.py --device 0 --img 416`, I get this error during inference:

```
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: images for the following indices
 index: 2 Got: 448 Expected: 416
 index: 3 Got: 448 Expected: 416
 Please fix either the inputs or the model.

Process finished with exit code 1
```
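For what it's worth, the static input shape baked into the export can be confirmed directly with ONNX Runtime; a minimal check (the model path here is just a placeholder for my exported file):

```python
import onnxruntime

# Placeholder path; substitute the exported model file
session = onnxruntime.InferenceSession('best.onnx', providers=['CPUExecutionProvider'])
inp = session.get_inputs()[0]
print(inp.name, inp.shape)  # e.g. 'images' [1, 3, 416, 416] for a static --img 416 export
```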
Of course, if I export the .onnx model with `--dynamic`, `detect.py` runs without error, but the image size is still changed to (448, 448), the inference speed is lower, and the output anchors are misplaced.
If I modify the default `stride` to 32, then all the outputs are correct.
So I'm confused about the default setting of `stride`. Is this a bug? Or is there a bug in my code?
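For reference, this is the local workaround I'm using in `models/common.py` (`DetectMultiBackend.__init__`); it only changes the fallback value and assumes the model is P5 (max stride 32):

```python
# models/common.py, DetectMultiBackend.__init__ (yolov5-6.0)
# before:
#   stride, names = 64, [f'class{i}' for i in range(1000)]  # assign defaults
# after (local workaround, correct for P5 models whose max stride is 32):
stride, names = 32, [f'class{i}' for i in range(1000)]  # assign defaults
```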
Environment:
Win11 + Anaconda + CUDA 11.0 + Python 3.8 + PyTorch 1.7.1 + yolov5-6.0
Additional
No response