[xdoctest][task 184-185] reformat example code with google style in distributed/auto_parallel/static/*
#56666
Conversation
- dist.DeviceMesh
- _merge_tensor
- split
- _get_sliced_index
None of these can be found ...
>>> import paddle
>>> import paddle.distributed as dist

>>> paddle.enable_static()

>>> mesh = dist.DeviceMesh([[2, 4, 5], [0, 1, 3]])
>>> assert mesh.shape == [2, 3]
>>> assert mesh.device_ids == [2, 4, 5, 0, 1, 3]
For the distributed-related code, please add # doctest: +REQUIRES(env:DISTRIBUTED) first ~
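A minimal sketch of where the directive would sit in a Google-style example block (the surrounding docstring layout here is illustrative, not the actual cluster_v2.py content):

    Examples:
        .. code-block:: python

            >>> # doctest: +REQUIRES(env:DISTRIBUTED)
            >>> import paddle
            >>> import paddle.distributed as dist
            >>> paddle.enable_static()

With REQUIRES, xdoctest only runs the snippet when the DISTRIBUTED environment flag is set, instead of failing in environments that cannot execute distributed code.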
This xxx_v2 feels like something temporary ...
@megemini could you review this again?
For python/paddle/distributed/auto_parallel/static/cluster_v2.py, adding the doctest is enough; leave the other files alone for now, that one is probably a temporary file as well ...
The other methods that previously could not be called can be invoked statically through Converter; please refer to my code below and adjust accordingly ~
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> import numpy as np
>>> partition_tensor_list = [(np.array([[[1.11, 1.12]]]), [[0,1],[0,1],[0,2]])]
>>> tensor = np.array([[[1.13, 1.14]]])
>>> partition_index = [[0,1],[0,1],[2,4]]

>>> _merge_tensor(partition_tensor_list, tensor, partition_index)
>>> print(partition_tensor_list)
[(np.array([[[1.11, 1.12, 1.13, 1.14]]]), [[0,1],[0,1],[0,4]])]
Please change _merge_tensor to merge:
>>> import numpy as np
>>> import paddle
>>> from paddle.distributed.auto_parallel.static.converter import Converter
>>> partition_tensor_list = [(np.array([[[1.11, 1.12]]]), [[0,1],[0,1],[0,2]])]
>>> tensor = np.array([[[1.13, 1.14]]])
>>> partition_index = [[0,1],[0,1],[2,4]]
>>> complete_shape = [3, 2]
>>> Converter.merge(partition_tensor_list, tensor, partition_index, complete_shape)
>>> print(partition_tensor_list)
[(array([[[1.11, 1.12, 1.13, 1.14]]]), [[0, 1], [0, 1], [0, 4]])]
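As a side note, calling merge through the class works here because it is exposed as a static method on Converter, so the doctest does not need to construct a Converter instance first.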
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> import numpy as np
>>> complete_tensor = np.array([[[1.11, 1.12, 1.13, 1.14, 1.15, 1.16]]])
>>> rank = 2
>>> complete_shape = [1, 1, 6]
>>> dims_mapping = [-1, -1, 0]
>>> process_shape = [3]
>>> process_group = [0, 1, 2]

>>> sliced_tensor_list = split(complete_tensor, [[], [], [2, 4]], 3)
>>> print(sliced_tensor_list)
[array([[[1.11, 1.12]]]), array([[[1.13, 1.14]]]), array([[[1.15, 1.16]]])]
Call split statically through Converter:
>>> import numpy as np
>>> from paddle.distributed.auto_parallel.static.converter import Converter
>>> complete_tensor = np.array([[[1.11, 1.12, 1.13, 1.14, 1.15, 1.16]]])
>>> rank = 2
>>> complete_shape = [1, 1, 6]
>>> dims_mapping = [-1, -1, 0]
>>> process_shape = [3]
>>> process_group = [0, 1, 2]
>>> sliced_tensor_list = Converter.split(complete_tensor, [[], [], [2, 4]], 3)
>>> print(sliced_tensor_list)
[array([[[1.11, 1.12]]]), array([[[1.13, 1.14]]]), array([[[1.15, 1.16]]])]
>>> import numpy as np
>>> from paddle.distributed.auto_parallel.static.utils import _get_sliced_index
>>> complete_tensor = np.array([[[1.11, 1.12, 1.13, 1.14, 1.15, 1.16]]])
>>> rank = 2
>>> complete_shape = [1, 1, 6]
>>> dims_mapping = [-1, -1, 0]
>>> process_shape = [3]
>>> process_group = [0, 1, 2]

>>> slice_tensor = _slice_tensor(complete_tensor, [[], [], [2, 4]], 3)
>>> print(slice_tensor)
[array([[[1.11, 1.12]]]), array([[[1.13, 1.14]]]), array([[[1.15, 1.16]]])]

>>> index = _get_sliced_index(rank, complete_shape, dims_mapping,
...                           process_shape, process_group)
>>> print(index)
2
The _slice_tensor here is not really useful; call _get_sliced_index statically as well:
>>> import numpy as np
>>> from paddle.distributed.auto_parallel.static.converter import Converter
>>> complete_tensor = np.array([[[1.11, 1.12, 1.13, 1.14, 1.15, 1.16]]])
>>> rank = 2
>>> complete_shape = [1, 1, 6]
>>> dims_mapping = [-1, -1, 0]
>>> process_shape = [3]
>>> process_group = [0, 1, 2]
>>> index = Converter._get_sliced_index(rank, complete_shape, dims_mapping,
... process_shape, process_group)
>>> print(index)
2
Ah ~~~ why did cluster_v2.py get deleted ~~~ 🤣🤣🤣 What I meant was that the earlier change was fine as it was (having the doctest is enough) ~ The methods in it cannot be called, which is probably an issue with the file itself, so let's not worry about that for now ~ My fault, I should be clearer next time ~~~ Please restore it; the previous change was fine ~
Sorry for the trouble, 顺师傅; my fault, I didn't read it carefully at the time.
# doctest: +SKIP('code is wrong')
Please still change this to REQUIRES ~
Everything else is OK, thanks for the hard work! :)
@@ -58,14 +58,15 @@ class DeviceMesh(core.DeviceMesh):
     Examples:
         .. code-block:: python

             import paddle
             import paddle.distributed as dist
             >>> # doctest: +SKIP('code is wrong')
# doctest: +SKIP('code is wrong')
-> # doctest: +REQUIRES(env:DISTRIBUTED)
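For reference, a sketch of the DeviceMesh example after that swap (assuming the rest of the snippet stays as in the diff above):

>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> import paddle
>>> import paddle.distributed as dist

>>> paddle.enable_static()

>>> mesh = dist.DeviceMesh([[2, 4, 5], [0, 1, 3]])
>>> assert mesh.shape == [2, 3]
>>> assert mesh.device_ids == [2, 4, 5, 0, 1, 3]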
LGTM ~ Thanks for the hard work!
…distributed/auto_parallel/static/*` (PaddlePaddle#56666)
* [Doctest]fix No.184,185, test=docs_preview
* add env skip
* fix @staticmethod
* fix
* add xdoctest for v2
* fix
PR types
Others
PR changes
Others
Description
Modify the example code in the following files so that it passes the xdoctest check:
- python/paddle/distributed/auto_parallel/static/cluster_v2.py
- python/paddle/distributed/auto_parallel/static/converter.py
This PR has quite a few issues: dist.DeviceMesh in cluster_v2, and the 2nd, 3rd, and 5th examples in converter all reference functions that could not be found.
Preview:
Related links
@sunzhongkai588 @SigureMo @megemini