Conversation

ooooo-create
Contributor

PR types

Others

PR changes

Others

Description

Update the example code in the following files so that it passes the xdoctest check (a sketch of running the check locally is given after the list):

  • python/paddle/distributed/auto_parallel/static/cluster_v2.py
  • python/paddle/distributed/auto_parallel/static/converter.py
    This PR has quite a few issues: dist.DeviceMesh in cluster_v2, and the 2nd, 3rd, and 5th examples in converter all use functions for which no counterpart could be found
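
A minimal sketch of how such a check can be run locally through xdoctest's Python API (an assumption on my side, requiring paddle and xdoctest to be installed; the CI may invoke xdoctest through a different entry point):

import xdoctest
# Collect and run every doctest-style example found in the converter module.
xdoctest.doctest_module(
    "paddle.distributed.auto_parallel.static.converter",
    command="all",
)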

Preview:

Related links

@sunzhongkai588 @SigureMo @megemini

@paddle-bot paddle-bot bot added the contributor (External developers) label Aug 25, 2023
Contributor

@megemini megemini left a comment


  • dist.DeviceMesh
  • _merge_tensor
  • split
  • _get_sliced_index

None of these can be found ... ...

Comment on lines +61 to +68
>>> import paddle
>>> import paddle.distributed as dist

>>> paddle.enable_static()

>>> mesh = dist.DeviceMesh([[2, 4, 5], [0, 1, 3]])
>>> assert mesh.shape == [2, 3]
>>> assert mesh.device_ids == [2, 4, 5, 0, 1, 3]
Contributor


For the distributed-related code, please add # doctest: +REQUIRES(env:DISTRIBUTED) for now ~

This xxx_v2 feels like a temporary file ... ...
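
A sketch of how the snippet above might then start (the +REQUIRES(env:DISTRIBUTED) directive tells xdoctest to run the example only when the DISTRIBUTED environment variable is set, and to skip it otherwise):

>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> import paddle
>>> import paddle.distributed as dist

>>> paddle.enable_static()

>>> mesh = dist.DeviceMesh([[2, 4, 5], [0, 1, 3]])
>>> assert mesh.shape == [2, 3]
>>> assert mesh.device_ids == [2, 4, 5, 0, 1, 3]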

@luotao1 luotao1 added the HappyOpenSource Pro (advanced Happy Open Source program with more challenging tasks) label Aug 28, 2023
@luotao1
Contributor

luotao1 commented Aug 31, 2023

@megemini could you review this again?

Contributor

@megemini megemini left a comment


For python/paddle/distributed/auto_parallel/static/cluster_v2.py, just adding the doctest directive is enough; leave the rest of it alone for now, it is probably a temporary file as well ... ...

For the other methods that could not be called before, you can call them statically through Converter. Please refer to my code below and update accordingly ~

Comment on lines 357 to 365
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> import numpy as np
>>> partition_tensor_list = [(np.array([[[1.11, 1.12]]]), [[0,1],[0,1],[0,2]])]
>>> tensor = np.array([[[1.13, 1.14]]])
>>> partition_index = [[0,1],[0,1],[2,4]]

>>> _merge_tensor(partition_tensor_list, tensor, partition_index)
>>> print(partition_tensor_list)
[(np.array([[[1.11, 1.12, 1.13, 1.14]]]), [[0,1],[0,1],[0,4]])]
Contributor


Please change _merge_tensor to merge:

>>> import numpy as np
>>> import paddle
>>> from paddle.distributed.auto_parallel.static.converter import Converter
>>> partition_tensor_list = [(np.array([[[1.11, 1.12]]]), [[0,1],[0,1],[0,2]])]
>>> tensor = np.array([[[1.13, 1.14]]])
>>> partition_index = [[0,1],[0,1],[2,4]]
>>> complete_shape = [3, 2]
>>> Converter.merge(partition_tensor_list, tensor, partition_index, complete_shape)
>>> print(partition_tensor_list)
[(array([[[1.11, 1.12, 1.13, 1.14]]]), [[0, 1], [0, 1], [0, 4]])]

Comment on lines 423 to 434
>>> # doctest: +REQUIRES(env:DISTRIBUTED)
>>> import numpy as np
>>> complete_tensor = np.array([[[1.11, 1.12, 1.13, 1.14, 1.15, 1.16]]])
>>> rank = 2
>>> complete_shape = [1, 1, 6]
>>> dims_mapping = [-1, -1, 0]
>>> process_shape = [3]
>>> process_group = [0, 1, 2]

>>> sliced_tensor_list = split(complete_tensor, [[], [], [2, 4]], 3)
>>> print(sliced_tensor_list)
[array([[[1.11, 1.12]]]), array([[[1.13, 1.14]]]), array([[[1.15, 1.16]]])]
Contributor


Call split statically through Converter:

>>> import numpy as np
>>> from paddle.distributed.auto_parallel.static.converter import Converter
>>> complete_tensor = np.array([[[1.11, 1.12, 1.13, 1.14, 1.15, 1.16]]])
>>> rank = 2
>>> complete_shape = [1, 1, 6]
>>> dims_mapping = [-1, -1, 0]
>>> process_shape = [3]
>>> process_group = [0, 1, 2]
>>> sliced_tensor_list = Converter.split(complete_tensor, [[], [], [2, 4]], 3)
>>> print(sliced_tensor_list)
[array([[[1.11, 1.12]]]), array([[[1.13, 1.14]]]), array([[[1.15, 1.16]]])]

Comment on lines 515 to 531
>>> import numpy as np
>>> from paddle.distributed.auto_parallel.static.utils import _get_sliced_index
>>> complete_tensor = np.array([[[1.11, 1.12, 1.13, 1.14, 1.15, 1.16]]])
>>> rank = 2
>>> complete_shape = [1, 1, 6]
>>> dims_mapping = [-1, -1, 0]
>>> process_shape = [3]
>>> process_group = [0, 1, 2]

>>> slice_tensor = _slice_tensor(complete_tensor, [[], [], [2, 4]], 3)
>>> print(slice_tensor)
[array([[[1.11, 1.12]]]), array([[[1.13, 1.14]]]), array([[[1.15, 1.16]]])]

>>> index = _get_sliced_index(rank, complete_shape, dims_mapping,
... process_shape, process_group)
>>> print(index)
2
Contributor


The _slice_tensor here is not really useful; call _get_sliced_index statically as well:

>>> import numpy as np
>>> from paddle.distributed.auto_parallel.static.converter import Converter
>>> complete_tensor = np.array([[[1.11, 1.12, 1.13, 1.14, 1.15, 1.16]]])
>>> rank = 2
>>> complete_shape = [1, 1, 6]
>>> dims_mapping = [-1, -1, 0]
>>> process_shape = [3]
>>> process_group = [0, 1, 2]
>>> index = Converter._get_sliced_index(rank, complete_shape, dims_mapping,
...                                     process_shape, process_group)
>>> print(index)
2

@megemini
Contributor

Ah ~~~ why did cluster_v2.py get deleted ~~~ 🤣🤣🤣

What I meant was that the earlier change was fine as it was (having the doctest is enough) ~ The methods inside can't be called, which is probably a problem with the file itself, so let's not worry about that for now ~

That's on me, I should explain things more clearly next time ~~~

Please restore it; the previous changes were good ~

@ooooo-create
Contributor Author

> Ah ~~~ why did cluster_v2.py get deleted ~~~ 🤣🤣🤣
>
> What I meant was that the earlier change was fine as it was (having the doctest is enough) ~ The methods inside can't be called, which is probably a problem with the file itself, so let's not worry about that for now ~
>
> That's on me, I should explain things more clearly next time ~~~
>
> Please restore it; the previous changes were good ~

Sorry for troubling you, 顺师傅; my mistake, I didn't read it carefully at the time.

Contributor

@megemini megemini left a comment


Please still change # doctest: +SKIP('code is wrong') to REQUIRES ~

Everything else is OK, thanks for your work! :)

@@ -58,14 +58,15 @@ class DeviceMesh(core.DeviceMesh):
Examples:
.. code-block:: python

import paddle
import paddle.distributed as dist
>>> # doctest: +SKIP('code is wrong')
Contributor


# doctest: +SKIP('code is wrong') -> # doctest: +REQUIRES(env:DISTRIBUTED)

Contributor

@megemini megemini left a comment


LGTM ~ Thanks for your work!

@luotao1 luotao1 merged commit 1a15a35 into PaddlePaddle:develop Sep 5, 2023
BeingGod pushed a commit to BeingGod/Paddle that referenced this pull request Sep 9, 2023
…distributed/auto_parallel/static/*` (PaddlePaddle#56666)

* [Doctest]fix No.184,185, test=docs_preview

* add env skip

* fix @staticmethod

* fix

* add xdoctest for v2

* fix
@ooooo-create ooooo-create deleted the ooooo/xdoctest184 branch September 23, 2023 05:29