Description
Many tensor factory methods defined in torch.java get mapped to functions in the at
namespace (from the ATen tensor library underlying PyTorch) instead of the torch
namespace.
See rand
for instance, but this is the case for most factory functions, and possibly for other functions as well.
```java
@Namespace("at") public static native @ByVal Tensor rand(@ByVal @Cast({"int64_t*", "c10::ArrayRef<int64_t>", "std::vector<int64_t>&"}) @StdVector long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options);
```
Usually, though, we want to call the factory functions in the torch
namespace, because only those give us things like variables and autodiff; e.g. requires_grad
has no effect on factory methods from ATen.
This is also stated in the docs:
https://pytorch.org/cppdocs/#c-frontend
Unless you have a particular reason to constrain yourself exclusively to ATen or the Autograd API, the C++ frontend is the recommended entry point to the PyTorch C++ ecosystem. While it is still in beta as we collect user feedback (from you!), it provides both more functionality and better stability guarantees than the ATen and Autograd APIs.
Replace at:: with torch:: for factory function calls. You should never use factory functions from the at:: namespace, as they will create tensors. The corresponding torch:: functions will create variables, and you should only ever deal with variables in your code.
One thing we might have to consider is backward compatibility, i.e. using different names for colliding functions in the torch namespace.