
Conversation

@awgu awgu (Collaborator) commented Apr 23, 2024


pytorch-bot bot commented Apr 23, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/124767

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 57ee17e with merge base c82fcb7:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added ci-td-distributed oncall: distributed Add this issue/PR to distributed oncall triage queue release notes: distributed (fsdp) release notes category labels Apr 23, 2024
awgu pushed a commit that referenced this pull request Apr 23, 2024
ghstack-source-id: 0d9b686
Pull Request resolved: #124767
@awgu awgu marked this pull request as ready for review April 23, 2024 21:16
# Always initialize the mesh's tensor on CPU, regardless of what the
# external device type has been set to be (e.g. meta)
with torch.device("cpu"):
    mesh = torch.arange(math.prod(mesh_shape), dtype=torch.int).view(mesh_shape)
Contributor
Oh I think with this, we can remove the .detach().to("cpu") here:
https://github.com/pytorch/pytorch/blob/main/torch/distributed/device_mesh.py#L215

Collaborator Author
I was worried that someone out there might init a `DeviceMesh` object directly instead of going through `init_device_mesh`, so leaving that there might be safer.
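(For context, a minimal sketch of what "init a `DeviceMesh` directly" means; the tensor and 2x4 shape are only illustrative, and this assumes a distributed run with a matching number of ranks — it is not code from this PR:)

```python
import torch
from torch.distributed.device_mesh import DeviceMesh

# A user-made mesh tensor that never went through init_device_mesh(), so it
# could in principle live on a non-CPU (e.g. meta) device.
mesh_tensor = torch.arange(8).view(2, 4)

# Constructing the mesh directly; the defensive CPU/dtype normalization in
# DeviceMesh.__init__ is what still covers this path.
mesh = DeviceMesh("cuda", mesh_tensor)
```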

Contributor
I just remembered we have a check right above it, but yeah, I think we can keep it to make sure the dtype is consistent.
https://github.com/pytorch/pytorch/blob/main/torch/distributed/device_mesh.py#L212-L213

@awgu awgu requested a review from wz337 April 23, 2024 22:31
@wz337 wz337 (Contributor) left a comment

LGTM

@awgu awgu (Collaborator Author) commented Apr 24, 2024

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Apr 24, 2024
@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot (Collaborator)

Merge failed

Reason: HTTP Error 500: Internal Server Error

Details for Dev Infra team. Raised by workflow job.

@awgu awgu (Collaborator Author) commented Apr 24, 2024

@pytorchbot merge

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Apr 24, 2024
Pull Request resolved: #124768
Approved by: https://github.com/wz337
ghstack dependencies: #124651, #124741, #124767
pytorchmergebot pushed a commit that referenced this pull request Apr 24, 2024
This PR adds a `DeviceMesh.from_group()` static method to convert an existing process group to a device mesh.

Motivation: We need `DeviceMesh.from_group()` to allow FSDP2 to interoperate with distributed libraries that do not use `DeviceMesh` for all parallelisms.

Pull Request resolved: #124787
Approved by: https://github.com/wanchaol
ghstack dependencies: #124651, #124741, #124767, #124768, #124780
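(A minimal usage sketch of `DeviceMesh.from_group()`, not taken from the PR itself; it assumes a distributed run launched with torchrun and uses a gloo/CPU setup purely for illustration:)

```python
import torch.distributed as dist
from torch.distributed.device_mesh import DeviceMesh

dist.init_process_group("gloo")
# e.g. a process group created/handed over by another distributed library
pg = dist.new_group()
# Wrap the existing group as a 1D DeviceMesh instead of building a new one
mesh = DeviceMesh.from_group(pg, "cpu")
```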
alat-rights pushed a commit to alat-rights/pytorch that referenced this pull request Apr 26, 2024
Pull Request resolved: pytorch#124787
Approved by: https://github.com/wanchaol
ghstack dependencies: pytorch#124651, pytorch#124741, pytorch#124767, pytorch#124768, pytorch#124780
pytorchmergebot pushed a commit that referenced this pull request Apr 29, 2024
This PR renames the `FSDP` class to `FSDPModule`. This is a BC-breaking change. The rationale is that `FSDPModule` is more descriptive: since `fully_shard` is a module-level API (applied to a `module` arg), the `FSDP` class will always correspond to a module.

Also, users commonly import `FullyShardedDataParallel` as `FSDP`, so the rename can help avoid name conflicts in some cases.

Pull Request resolved: #124955
Approved by: https://github.com/wanchaol, https://github.com/wconstab
ghstack dependencies: #124651, #124741, #124767, #124768, #124780, #124787
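(A hedged sketch of what the rename means in practice, not from the PR; it assumes a distributed run and the 2024-era private path `torch.distributed._composable.fsdp`:)

```python
import torch.nn as nn
from torch.distributed._composable.fsdp import fully_shard, FSDPModule

model = nn.Linear(8, 8)
fully_shard(model)                    # module-level API applied to a module
assert isinstance(model, FSDPModule)  # the class formerly named FSDP
```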
petrex pushed a commit to petrex/pytorch that referenced this pull request May 3, 2024
This PR makes sure to construct the `DeviceMesh`'s `mesh` tensor on the CPU device in `init_device_mesh()`. This means we can call `init_device_mesh()` under a meta-device context and still construct the correct `mesh` tensor.
Pull Request resolved: pytorch#124767
Approved by: https://github.com/wz337
ghstack dependencies: pytorch#124651, pytorch#124741
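(A hedged sketch of the behavior this PR enables, not from the PR itself; it assumes a distributed run whose world size matches the 2x4 mesh shape used here:)

```python
import torch
from torch.distributed.device_mesh import init_device_mesh

# Even under a meta-device context, the mesh tensor is built on CPU.
with torch.device("meta"):
    mesh = init_device_mesh("cuda", (2, 4), mesh_dim_names=("dp", "tp"))
assert mesh.mesh.device.type == "cpu"
```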
pytorch-bot bot pushed a commit that referenced this pull request May 3, 2024
Pull Request resolved: #124768
Approved by: https://github.com/wz337
ghstack dependencies: #124651, #124741, #124767
pytorch-bot bot pushed a commit that referenced this pull request May 3, 2024
Pull Request resolved: #124787
Approved by: https://github.com/wanchaol
ghstack dependencies: #124651, #124741, #124767, #124768, #124780
pytorch-bot bot pushed a commit that referenced this pull request May 3, 2024
Pull Request resolved: #124955
Approved by: https://github.com/wanchaol, https://github.com/wconstab
ghstack dependencies: #124651, #124741, #124767, #124768, #124780, #124787
@github-actions github-actions bot deleted the gh/awgu/569/head branch June 3, 2024 01:54
Labels
ci-td-distributed ciflow/trunk Trigger trunk jobs on your pull request Merged oncall: distributed Add this issue/PR to distributed oncall triage queue release notes: DeviceMesh
3 participants