[Hackathon 6th No.17] Add sparse.mask_as API to Paddle #901
Conversation
## 1. Background

[NO.17 Add sparse.mask_as API to Paddle](https://github.com/PaddlePaddle/community/blob/master/hackathon/hackathon_6th/%E3%80%90Hackathon%206th%E3%80%91%E5%BC%80%E6%BA%90%E8%B4%A1%E7%8C%AE%E4%B8%AA%E4%BA%BA%E6%8C%91%E6%88%98%E8%B5%9B%E6%A1%86%E6%9E%B6%E5%BC%80%E5%8F%91%E4%BB%BB%E5%8A%A1%E5%90%88%E9%9B%86.md#no17-%E4%B8%BA-paddle-%E6%96%B0%E5%A2%9E-sparsemask_as-api)
Please expand on this section instead of directly copying the task description.
```cpp
}

template <typename T, typename Context>
void MaskAsCsrKernel(const Context& dev_ctx,
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This will probably perform poorly. How does torch implement it? Can it mask directly?
> This will probably perform poorly. How does torch implement it? Can it mask directly?
By "poor performance", do you mean first converting csr to coo and then computing with the coo operators?
For csr, torch uses `sparse_mask_sparse_compressed` in aten/src/ATen/native/sparse/SparseCsrTensorMath.cpp: it extracts the mask's indices and then converts via `dense_to_sparse_with_mask` ~
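To make the gather idea concrete, here is a minimal NumPy sketch of keeping a dense tensor's values at a CSR mask's positions. This is my own illustration (function name and layout are assumptions, not torch's or Paddle's actual code):

```python
import numpy as np

def csr_mask_as(dense, crow, col):
    """Keep the dense matrix's values at the CSR mask's positions.

    crow: CSR row-pointer array (length n_rows + 1), like crow_indices().
    col:  CSR column-index array (length nnz), like col_indices().
    Returns the (crow, col, values) triple of the masked result.
    """
    # Expand the row pointers into one row id per nonzero.
    rows = np.repeat(np.arange(len(crow) - 1), np.diff(crow))
    # Gather the dense values at the mask's (row, col) coordinates.
    values = dense[rows, col]
    return crow, col, values

dense = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
crow = np.array([0, 1, 3])   # row 0 holds 1 nonzero, row 1 holds 2
col = np.array([2, 0, 1])    # their column indices
_, _, vals = csr_mask_as(dense, crow, col)
print(vals)  # dense values at (0,2), (1,0), (1,1)
```

The point of the sketch is that a csr mask only needs a gather at the mask's coordinates; no csr-to-coo round trip is required.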
That said, many places in torch still use coo's `sparse_mask` and then convert back to csr, for example in aten/src/ATen/native/TensorConversions.cpp:
```cpp
Tensor to_dense_backward(const Tensor& grad, const Tensor& input_, c10::optional<bool> masked_grad_) {
  const auto input_layout = input_.layout();
  const bool masked_grad = masked_grad_.value_or(true);
  switch (input_layout) {
    ...
    case kSparseCsc:
      ...
      return grad.sparse_mask(input_.to_sparse(input_.sparse_dim())).to_sparse(input_layout);
    ...
```
Looking at Paddle's current sparse operators, `ElementWise##name##CsrCPUKernel` in paddle/phi/kernels/sparse/cpu/elementwise_kernel.cc and `ReshapeCsrKernel` in paddle/phi/kernels/sparse/cpu/reshape_kernel.cc both convert csr to coo first and then compute. Should a dedicated csr mask kernel be written here?
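For comparison, the coo-based path amounts to gathering dense values at the mask's `indices()` coordinates. A rough sketch of that semantics (my own illustrative code, not Paddle's kernel):

```python
import numpy as np

def coo_mask_as(dense, mask_indices):
    """Keep the dense tensor's values at the coordinates given by a
    COO mask's indices.

    mask_indices: (ndim, nnz) integer array, like a COO indices() tensor.
    Returns the (indices, values) pair of the masked result.
    """
    rows, cols = mask_indices            # assume a 2-D tensor here
    values = dense[rows, cols]           # gather at the mask's coordinates
    return mask_indices, values

dense = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
# The mask has nonzeros at (0, 1) and (1, 2).
mask_indices = np.array([[0, 1],
                         [1, 2]])
idx, vals = coo_mask_as(dense, mask_indices)
print(vals)  # dense values at (0,1) and (1,2)
```

Both layouts reduce to the same gather; the difference is only how the coordinates are stored, which is why the csr-to-coo conversion feels avoidable.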
Update 20240518

The cpu kernel can mask directly; the gpu kernel needs to first … PyTorch's implementation is of limited reference value here, because many of Paddle's current sparse operators, when handling … Also, PaddlePaddle/Paddle#64320 has been updated with the concrete implementation code ~ @zhwesky2010 please review ~
@megemini Please fix the grammar issues in the description.
??? Where??? I couldn't find them 😂 Are they in this PR?
## 2. Functional Goals

Implement `paddle.sparse.mask_as` as a standalone function call.
Typo: "函数调" should be "函数调用" (function call).
PyTorch's [torch.Tensor.sparse_mask](https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_mask.html#torch-tensor-sparse-mask) implements the same capability.
This sentence can be joined with the paragraph above; as written, the grammar is awkward.
Update 20240523

@zhwesky2010 please review ~
PR types
Others
PR changes
Docs
Description
NO.17 Add sparse.mask_as API to Paddle

RFC for the corresponding API.
Additionally, PR PaddlePaddle/Paddle#64320 implements the corresponding API and can be used for review ~ the docstring and unit tests are not yet implemented ~
Please review ~