Fix shape inference for Squeeze-11 #5735
Conversation
Signed-off-by: daquexian <daquexian566@gmail.com>
onnx/defs/tensor/old.cc
Outdated
-    if (std::any_of(starts.begin(), starts.end(), is_negative) ||
-        std::any_of(ends.begin(), ends.end(), is_negative) ||
-        std::any_of(axes.begin(), axes.end(), is_negative)) {
+    if (std::any_of(axes.begin(), axes.end(), is_negative)) {
+      // Negative axes were not explicitly discussed in the spec before opset-10.
> Negative axes were not explicitly discussed in the spec before opset-10

That is true, but only for negative "axes", not for negative starts or ends.
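For illustration, a minimal hedged sketch of what normalizing negative axes against the input rank looks like (the helper name is hypothetical, not code from this PR); negative starts/ends are legal slice bounds and are handled separately:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: map negative axes into [0, rank).
std::vector<int64_t> normalize_axes(std::vector<int64_t> axes, int64_t rank) {
  for (auto& axis : axes) {
    if (axis < 0) {
      axis += rank; // e.g. axis -1 on a rank-3 tensor becomes 2
    }
  }
  return axes;
}
```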
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff            @@
##             main    #5735     +/-   ##
=========================================
  Coverage   56.68%   56.68%
=========================================
  Files         506      506
  Lines       30227    30233      +6
  Branches     4565     4565
=========================================
+ Hits        17133    17139      +6
  Misses      12268    12268
  Partials      826      826

View full report in Codecov by Sentry.
Thank you for the improvement! Could you add context for reviewers by improving the pull request title and description?
onnx/defs/tensor/old.cc
Outdated
if (!ctx.getInputType(0)->tensor_type().has_shape()) {
  return;
}

ctx.getOutputType(0)->mutable_tensor_type()->mutable_shape();
const auto& input_shape = ctx.getInputType(0)->tensor_type().shape();
const auto input_ndim = input_shape.dim_size();
Nit: it may be useful to factor this logic out into a function shared by the two versions of Squeeze.
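A rough sketch of what such a shared helper could look like (the name and exact wiring are assumptions; this is not the code that was merged):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

#include "onnx/defs/shape_inference.h"

// Hypothetical helper shared by the Squeeze-1 and Squeeze-11 shape inference
// functions: copy every input dimension whose index is not listed in `axes`
// (assumed already normalized to be non-negative) to the output shape.
void squeezeShapeInference(ONNX_NAMESPACE::InferenceContext& ctx, const std::vector<int64_t>& axes) {
  if (!ctx.getInputType(0)->tensor_type().has_shape()) {
    return;
  }
  const auto& input_shape = ctx.getInputType(0)->tensor_type().shape();
  auto* output_shape = ctx.getOutputType(0)->mutable_tensor_type()->mutable_shape();
  for (int i = 0; i < input_shape.dim_size(); ++i) {
    if (std::find(axes.begin(), axes.end(), static_cast<int64_t>(i)) == axes.end()) {
      *output_shape->add_dim() = input_shape.dim(i);
    }
  }
}
```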
onnx/defs/tensor/old.cc
Outdated
}
if (starts[j] >= 0 && ends[j] >= 0) {
  auto newval = std::min(dim_value, ends[j]) - starts[j];
  if (newval >= 0) {
Maybe better to use std::max(newval, 0)? That is more general.
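Concretely, the suggestion amounts to something like this (a sketch against the diff above, assuming newval is a signed 64-bit value):

```cpp
// Clamp the computed extent at zero instead of guarding with an if:
// an empty slice (ends[j] <= starts[j]) then yields a zero-sized dim.
auto newval = std::max<int64_t>(std::min(dim_value, ends[j]) - starts[j], 0);
newdim->set_dim_value(newval);
```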
onnx/defs/tensor/old.cc
Outdated
if (newval >= 0) { | ||
newdim->set_dim_value(newval); | ||
const auto& dim = ctx.getInputType(0)->tensor_type().shape().dim((int)i); | ||
if (dim.has_dim_value()) { |
We could improve this further (in the case where the dim's value is not known): the dim-value is needed only if one of starts/ends is positive and the other is negative. Otherwise, subtraction is fine.
So, maybe something like below:

same_sign = ((starts[j] < 0) && (ends[j] < 0)) || ((starts[j] >= 0) && (ends[j] >= 0))
if (dim.has_dim_value()) {
  auto dim_value = dim.dim_value();
  if (starts[j] < 0) {
    starts[j] += dim_value;
  }
  if (ends[j] < 0) {
    ends[j] += dim_value;
  }
}
if (same_sign || dim.has_dim_value()) {
  auto newval = std::max(0, std::min(dim_value, ends[j]) - starts[j]);
  newdim->set_dim_value(newval);
}
But auto newval = std::max(0, std::min(dim_value, ends[j]) - starts[j]); still needs dim_value.
Good point. So, this may be more complicated than I thought. I wanted to handle simple cases like slicing X[-2:-1] or X[1:4], which can almost be done without knowing the original dimension. But I guess we need the original dimension for special cases, like when the index value is out of range.
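For reference, a hedged sketch of the per-axis computation the thread converges on (the helper name is hypothetical and this is not the code that was merged): the extent can only be computed once the dimension value is known, because out-of-range starts/ends must be clamped against it.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical helper (not code from this PR): extent of a slice along one
// axis for the pre-opset-10 Slice, which has no steps attribute.
int64_t sliced_extent(int64_t start, int64_t end, int64_t dim_value, bool has_dim_value) {
  if (!has_dim_value) {
    return -1; // caller leaves the output dimension unknown
  }
  if (start < 0) start += dim_value;
  if (end < 0) end += dim_value;
  // Clamp out-of-range indices into [0, dim_value].
  start = std::min(std::max(start, int64_t{0}), dim_value);
  end = std::min(std::max(end, int64_t{0}), dim_value);
  return std::max(end - start, int64_t{0});
}
```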
@@ -1148,6 +1148,21 @@ def test_squeeze(self) -> None:
        graph, [make_tensor_value_info("y", TensorProto.FLOAT, (3, 2))]
    )

    def test_squeeze_no_axes_opset11(self) -> None:
A test case for the Slice op changes would also be useful.
Could you take a look at the comments? Thanks!
Hi @daquexian: any thoughts about the comments? Thanks!
onnx/defs/tensor/old.cc
Outdated
      axes.push_back(i);
    }
  }
}
You can probably set the output shape and return here since all the verifications below can't fail in this case.
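A rough sketch of that suggestion, reusing the ctx / input_shape / input_ndim names from the diff above (assumptions only; not the merged code):

```cpp
// When no axes are given, the output shape is determined by the input shape
// alone, so it can be set here and the function can return early; the axis
// validation below cannot fail for axes derived from the shape itself.
for (int i = 0; i < input_ndim; ++i) {
  if (!input_shape.dim(i).has_dim_value()) {
    return; // an unknown dim may or may not be 1, so the output rank is unknown
  }
}
auto* output_shape = ctx.getOutputType(0)->mutable_tensor_type()->mutable_shape();
for (int i = 0; i < input_ndim; ++i) {
  const auto& dim = input_shape.dim(i);
  if (dim.dim_value() != 1) {
    *output_shape->add_dim() = dim;
  }
}
return;
```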
I see this has a 1.16 milestone but hasn't had an update since December. Should this be considered a blocker to cutting a 1.16 release branch?
Signed-off-by: liqunfu <liqun.fu@microsoft.com>
Redone in #5967 ... closing this.
### Description
Replicate of #5735 by @daquexian
### Motivation and Context
Fix shape inference for Squeeze-11 and Squeeze-1
Signed-off-by: liqunfu <liqun.fu@microsoft.com>
No description provided.