Conversation

daquexian
Member

No description provided.

Signed-off-by: daquexian <daquexian566@gmail.com>
@daquexian daquexian requested review from a team as code owners November 2, 2023 14:48
if (std::any_of(starts.begin(), starts.end(), is_negative) ||
std::any_of(ends.begin(), ends.end(), is_negative) ||
std::any_of(axes.begin(), axes.end(), is_negative)) {
if (std::any_of(axes.begin(), axes.end(), is_negative)) {
// Negative axes were not explicitly discussed in the spec before opset-10.
daquexian (Member Author)
> Negative axes were not explicitly discussed in the spec before opset-10

That is true, but only for negative `axes`, not for negative `starts` or `ends`.
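The distinction daquexian draws can be sketched in plain Python (a hypothetical helper, not the PR's actual C++): negative `starts`/`ends` are ordinary count-from-the-end indices even before opset-10, so only negative `axes` should block shape inference.

```python
# Sketch of the narrowed check from this diff. Before opset-10, negative
# axes were unspecified behaviour, while negative starts/ends were valid
# "count from the end" indices, so only axes need the negativity guard.

def has_unspecified_negative_indices(starts, ends, axes):
    # Old behaviour: any negative value in starts, ends, or axes bailed out.
    # New behaviour: only negative axes prevent shape inference.
    return any(a < 0 for a in axes)
```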

codecov bot commented Nov 2, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (66b7fb6) 56.68% compared to head (b812a27) 56.68%.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #5735   +/-   ##
=======================================
  Coverage   56.68%   56.68%           
=======================================
  Files         506      506           
  Lines       30227    30233    +6     
  Branches     4565     4565           
=======================================
+ Hits        17133    17139    +6     
  Misses      12268    12268           
  Partials      826      826           


@justinchuby
Member

Thank you for the improvement! Could you help add context for reviewers by improving the pull request title and description?

if (!ctx.getInputType(0)->tensor_type().has_shape()) {
return;
}

ctx.getOutputType(0)->mutable_tensor_type()->mutable_shape();
const auto& input_shape = ctx.getInputType(0)->tensor_type().shape();
const auto input_ndim = input_shape.dim_size();
Contributor

nit: it may be useful to factor this logic out into a helper shared by the two versions of Squeeze.
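The shared logic the reviewer has in mind could look roughly like the following Python sketch (hypothetical, for illustration only — the PR itself is C++): one helper computing the squeezed shape, usable by both Squeeze-1 (axes as attribute) and the later opsets.

```python
# Hypothetical shared helper: given an input shape and optional axes,
# compute the Squeeze output shape. Both versions of the op could call it.

def squeeze_shape(input_shape, axes=None):
    ndim = len(input_shape)
    if axes is None:
        # No axes given: drop every dimension that is statically 1.
        axes = [i for i, d in enumerate(input_shape) if d == 1]
    else:
        # Normalize negative axes (allowed from opset-11 onward).
        axes = [a + ndim if a < 0 else a for a in axes]
    return [d for i, d in enumerate(input_shape) if i not in axes]
```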

}
if (starts[j] >= 0 && ends[j] >= 0) {
auto newval = std::min(dim_value, ends[j]) - starts[j];
if (newval >= 0) {
Contributor

Maybe better to use `std::max(newval, 0)`? That is more general.

if (newval >= 0) {
newdim->set_dim_value(newval);
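The reviewer's clamping suggestion can be sketched in Python (an illustrative helper, not the PR's C++): instead of guarding on `newval >= 0`, clamp to zero, which also handles empty slices.

```python
# Sketch of the suggested clamp: max(0, ...) covers empty slices such as
# starts=[4], ends=[2] instead of skipping the dimension entirely.

def sliced_dim(dim_value, start, end):
    # Assumes start and end are already non-negative (normalized).
    return max(0, min(dim_value, end) - start)
```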
const auto& dim = ctx.getInputType(0)->tensor_type().shape().dim((int)i);
if (dim.has_dim_value()) {
Contributor

We could improve this further (in the case where the dim's value is not known): the dim value is needed only if one of starts/ends is non-negative and the other is negative. Otherwise, the subtraction alone is fine.

@gramalingam (Contributor) commented Nov 2, 2023

So, maybe something like below:

   bool same_sign = ((starts[j] < 0) && (ends[j] < 0)) || ((starts[j] >= 0) && (ends[j] >= 0));
   if (dim.has_dim_value()) {
      auto dim_value = dim.dim_value();
      if (starts[j] < 0) {
        starts[j] += dim_value;
      }
      if (ends[j] < 0) {
        ends[j] += dim_value;
      }
   }
   if (same_sign || dim.has_dim_value()) {
      auto newval = std::max(0, std::min(dim_value, ends[j]) - starts[j]);
      newdim->set_dim_value(newval);
   }

Collaborator

So, maybe something like below:

   bool same_sign = ((starts[j] < 0) && (ends[j] < 0)) || ((starts[j] >= 0) && (ends[j] >= 0));
   if (dim.has_dim_value()) {
      auto dim_value = dim.dim_value();
      if (starts[j] < 0) {
        starts[j] += dim_value;
      }
      if (ends[j] < 0) {
        ends[j] += dim_value;
      }
   }
   if (same_sign || dim.has_dim_value()) {
      auto newval = std::max(0, std::min(dim_value, ends[j]) - starts[j]);
      newdim->set_dim_value(newval);
   }

but `auto newval = std::max(0, std::min(dim_value, ends[j]) - starts[j]);` still needs `dim_value`.

Contributor

Good point. So, this may be more complicated than I thought. I wanted to handle simple cases like slicing X[-2: -1] or X[1:4] ... which can almost be done without knowing the original dimension. But I guess we need the original dimension for special cases like when the index-value is out of range.
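Putting the thread's ideas together, the inference could be sketched in Python as follows (a hypothetical helper, not the PR's C++): with a known dimension, normalize negative starts/ends and clamp; without it, even same-sign cases like `X[-2:-1]` cannot be safely inferred, because out-of-range indices still need the concrete dimension to clamp against.

```python
# Sketch of the slice-dim inference discussed above. Returns None when the
# dimension is unknown, since clamping out-of-range indices needs it.

def infer_slice_dim(dim_value, start, end):
    if dim_value is None:
        return None  # cannot clamp without the concrete dimension
    if start < 0:
        start += dim_value
    if end < 0:
        end += dim_value
    # Clamp into [0, dim_value] so out-of-range indices give valid lengths.
    start = min(max(start, 0), dim_value)
    end = min(max(end, 0), dim_value)
    return max(0, end - start)
```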

@@ -1148,6 +1148,21 @@ def test_squeeze(self) -> None:
graph, [make_tensor_value_info("y", TensorProto.FLOAT, (3, 2))]
)

def test_squeeze_no_axes_opset11(self) -> None:
Contributor

A test case for the Slice op changes would also be useful.

@gramalingam gramalingam added the module: shape inference Issues related to shape inference label Nov 14, 2023
@justinchuby justinchuby changed the title some shape inference bugfixes Fix shape inference for Squeeze-11 Nov 16, 2023
@justinchuby (Member) left a comment

Could you take a look at the comments? Thanks!

@gramalingam
Contributor

Hi @daquexian : any thoughts about the comments? Thanks!

axes.push_back(i);
}
}
}
Contributor

You can probably set the output shape and return here, since none of the verifications below can fail in this case.

@justinchuby justinchuby added this to the 1.16 milestone Dec 22, 2023
@cjvolzka
Collaborator

I see this has a 1.16 Milestone but hasn't had an update since December. Should this be considered a blocker to cutting a 1.16 release branch?

liqunfu added a commit that referenced this pull request Feb 27, 2024
Signed-off-by: liqunfu <liqun.fu@microsoft.com>
@gramalingam
Contributor

Redone in #5967 ... closing this.

github-merge-queue bot pushed a commit that referenced this pull request Feb 27, 2024
### Description
replicate of #5735 by @daquexian 

### Motivation and Context
Fix shape inference for Squeeze-11 and Squeeze-1

Signed-off-by: liqunfu <liqun.fu@microsoft.com>
isdanni pushed a commit to isdanni/onnx that referenced this pull request Mar 18, 2024
### Description
replicate of onnx#5735 by @daquexian

### Motivation and Context
Fix shape inference for Squeeze-11 and Squeeze-1

Signed-off-by: liqunfu <liqun.fu@microsoft.com>
Signed-off-by: isdanni <leedanni@gmail.com>
linshokaku pushed a commit to linshokaku/onnx that referenced this pull request Oct 2, 2024
### Description
replicate of onnx#5735 by @daquexian

### Motivation and Context
Fix shape inference for Squeeze-11 and Squeeze-1

Signed-off-by: liqunfu <liqun.fu@microsoft.com>
Signed-off-by: Linsho Kaku <linsho@preferred.jp>
Labels: module: shape inference (Issues related to shape inference)
Projects: Status: Done
7 participants