Labels: type/question (an issue that's a question)
❓ The question
Hi, I'm from the PyTorch team, and I recently became aware that we need some customization in layer norm, because it will segfault without a bias:
Line 203 in cf12108:

    class AMDLayerNorm(LayerNormBase):
import torch

# This repro requires a ROCm (HIP) build of PyTorch.
assert torch.version.hip is not None

input = torch.randn(10, 10, 10).cuda()
# LayerNorm with the bias disabled: this is the configuration that segfaults.
ln = torch.nn.LayerNorm([10, 10], bias=False).cuda()
ln(input).sum().backward()
print(ln.weight.grad)
assert ln.bias is None
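For reference, one way a custom layer norm can tolerate a disabled bias is to register the bias as `None` and rely on `F.layer_norm`, which accepts `None` for both affine parameters. The sketch below is a hypothetical minimal wrapper (`SafeLayerNorm` is an invented name, not the actual `AMDLayerNorm` implementation), shown here only to illustrate the `bias=False` code path:

```python
import torch
import torch.nn.functional as F

class SafeLayerNorm(torch.nn.Module):
    """Hypothetical sketch of a LayerNorm that tolerates bias=False."""

    def __init__(self, normalized_shape, eps=1e-5, bias=True):
        super().__init__()
        if isinstance(normalized_shape, int):
            normalized_shape = (normalized_shape,)
        self.normalized_shape = tuple(normalized_shape)
        self.eps = eps
        self.weight = torch.nn.Parameter(torch.ones(self.normalized_shape))
        if bias:
            self.bias = torch.nn.Parameter(torch.zeros(self.normalized_shape))
        else:
            # Register the bias as None so downstream code can check for it,
            # mirroring torch.nn.LayerNorm(bias=False).
            self.register_parameter("bias", None)

    def forward(self, x):
        # F.layer_norm accepts None for weight and bias.
        return F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)

x = torch.randn(10, 10, 10)
ln = SafeLayerNorm([10, 10], bias=False)
ln(x).sum().backward()
assert ln.bias is None
assert ln.weight.grad is not None
```

This runs on CPU as well, so it exercises the same `bias is None` path without needing a ROCm build.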