Labels
good first issue · module: error checking (Bugs related to incorrect/lacking error checking) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
🚀 The feature, motivation and pitch
Hey 👋 from the Hugging Face Open-Source team,
We're seeing the following issue come up again and again across libraries:
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
or:
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
E.g.: https://github.com/runwayml/stable-diffusion/issues/23
The problem here is that a PyTorch model has been converted to fp16 and the user tries to run it on CPU. For example:
from torch import nn
import torch
linear = nn.Linear(2, 2, dtype=torch.float16)
tensor = torch.ones((2,), dtype=torch.float16)
linear(tensor)
yields:
"addmm_impl_cpu_" not implemented for 'Half'
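For context, the workaround users typically end up applying is to cast the module and its inputs back to float32 before running on CPU; a minimal sketch of that (not part of the original report):

```python
import torch
from torch import nn

linear = nn.Linear(2, 2, dtype=torch.float16)
tensor = torch.ones((2,), dtype=torch.float16)

# Cast both the module's parameters and the input back to float32,
# since many CPU kernels are not implemented for Half.
linear = linear.float()
out = linear(tensor.float())
```

This avoids the error, but nothing in the original message tells the user that this is the fix, which is the point of this issue.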
Could we maybe catch such errors in the forward call of https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module and raise a simpler error message that just says "Float16 cannot be run on CPU"?
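To make the proposal concrete, here is a hypothetical sketch of what such a check could look like from user land: wrap a module's forward, catch the low-level RuntimeError, and re-raise with a clearer message. The helper name `friendly_half_errors` is illustrative only, not an existing PyTorch API; the actual fix would presumably live inside Module's call path.

```python
import functools

import torch
from torch import nn


def friendly_half_errors(module: nn.Module) -> nn.Module:
    """Hypothetical helper: translate 'not implemented for Half' errors
    raised during forward into a clearer, actionable message."""
    orig_forward = module.forward

    @functools.wraps(orig_forward)
    def forward(*args, **kwargs):
        try:
            return orig_forward(*args, **kwargs)
        except RuntimeError as e:
            if "not implemented for 'Half'" in str(e):
                raise RuntimeError(
                    "Float16 (half precision) is not supported for this "
                    "operation on CPU; cast the model and inputs to "
                    "float32, or run on a GPU."
                ) from e
            raise

    # Instance attribute shadows the class method, so module(x) picks it up.
    module.forward = forward
    return module
```

With this wrapper, running the fp16 `nn.Linear` example on CPU would surface the "Float16 ... not supported ... on CPU" message instead of the opaque kernel name.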
cc @malfet