[export] Convert all export tests to .module() #118425
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/118425
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure: as of commit fe03b0f with merge base 65f8276, the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D53075379
torch/export/_unlift.py (Outdated)

```python
args = fx_pytree.tree_flatten_spec(
    args, self._in_spec, exact_structural_match=True
)  # type: ignore[assignment]
```
@suo I had to change this back to tree_flatten_spec because it's able to handle cases where we pass in kwargs in a different order than the one they were exported with (e.g. {"kwarg1": 1, "kwarg2": 2} vs. {"kwarg2": 2, "kwarg1": 1}). Do you have a better suggestion?
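A minimal sketch of the ordering behavior being described, assuming the current `torch.utils._pytree` / `torch.fx._pytree` APIs; the spec and values here are illustrative, not taken from the PR:

```python
import torch.utils._pytree as pytree
import torch.fx._pytree as fx_pytree

# Spec recorded at export time: kwargs in one order.
_, in_spec = pytree.tree_flatten(((), {"kwarg1": 1, "kwarg2": 2}))

# At call time the same kwargs arrive in a different order.
call_args = ((), {"kwarg2": 2, "kwarg1": 1})

# tree_flatten_spec walks the stored spec, so dict values come out in the
# spec's key order regardless of the caller's insertion order.
flat = fx_pytree.tree_flatten_spec(
    call_args, in_spec, exact_structural_match=True
)
print(flat)  # [1, 2]
```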
Either way, not supporting custom pytrees is a bigger downside than not being relaxed about dict ordering, so I'm still not sold on tree_flatten_spec here.
Probably still a no on tree_flatten_spec; can we rewrite the tests instead?
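For contrast, a hedged sketch of the stricter route implied here: flatten with the ordinary pytree flattener and compare specs. This path supports custom pytree nodes registered with `torch.utils._pytree`, but it treats reordered kwargs as a structural mismatch, which is why the tests would need rewriting; the values are illustrative:

```python
import torch.utils._pytree as pytree

# Spec recorded at export time.
_, in_spec = pytree.tree_flatten(((), {"kwarg1": 1, "kwarg2": 2}))

# Plain tree_flatten follows the caller's dict insertion order...
flat, call_spec = pytree.tree_flatten(((), {"kwarg2": 2, "kwarg1": 1}))

# ...so the spec comparison fails for reordered kwargs, since dict
# TreeSpecs record their key order (under this sketch's assumptions).
print(call_spec == in_spec)  # False
print(flat)                  # [2, 1]
```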
Force-pushed 4fb41b8 to e4e37bb.
Force-pushed e4e37bb to fddace2.
Force-pushed fddace2 to 08403cf.
Summary: Pull Request resolved: pytorch#118425. Test Plan: CI. Reviewed By: suo. Differential Revision: D53075379
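For readers landing here from the title: a minimal sketch of what "converting a test to .module()" means, assuming the public torch.export API; the module and inputs are made up for illustration:

```python
import torch

class M(torch.nn.Module):
    def forward(self, x, y=None):
        return x + y

args = (torch.randn(2),)
kwargs = {"y": torch.randn(2)}
ep = torch.export.export(M(), args, kwargs)

# Old test style: call the ExportedProgram directly.
#   out = ep(*args, **kwargs)
# Converted style: unlift back to an nn.Module via .module() and call that.
out = ep.module()(*args, **kwargs)
```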
Force-pushed 08403cf to 1dc3a3a.
For the pytree thing: #118505
Force-pushed 1dc3a3a to 2153da8.
Force-pushed 2153da8 to fe03b0f.
@pytorchbot merge -f 'Landed internally' (Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Test Plan: CI
Differential Revision: D53075379
Pull Request resolved: pytorch#118425
Approved by: https://github.com/suo