
PyTorch tensor factory methods should use torch namespace instead of at #1197

@sbrunk

Description

Many tensor factory methods defined in torch.java are mapped to functions in the at namespace (from ATen, the tensor library underlying PyTorch) instead of the torch namespace.

See rand, for instance; the same applies to most factory functions and possibly to other functions as well.

@Namespace("at") public static native @ByVal Tensor rand(@ByVal @Cast({"int64_t*", "c10::ArrayRef<int64_t>", "std::vector<int64_t>&"}) @StdVector long[] size, @ByVal(nullValue = "at::TensorOptions{}") TensorOptions options);
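A fix along these lines would map the same binding to the torch namespace instead. This is only a sketch: whether JavaCPP resolves torch::rand with exactly this signature, and the torch::TensorOptions default, are assumptions to be verified against the presets.

```java
// Hypothetical corrected binding (sketch): same signature, torch:: namespace,
// so the generated native call resolves to torch::rand rather than at::rand.
@Namespace("torch") public static native @ByVal Tensor rand(
    @ByVal @Cast({"int64_t*", "c10::ArrayRef<int64_t>", "std::vector<int64_t>&"}) @StdVector long[] size,
    @ByVal(nullValue = "torch::TensorOptions{}") TensorOptions options);
```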

Usually, though, we want to call the factory functions in the torch namespace, because only they give us variables and autodiff. For example, requires_grad has no effect on tensors created by factory methods from ATen.

This is also stated in the docs:

https://pytorch.org/cppdocs/#c-frontend

Unless you have a particular reason to constrain yourself exclusively to ATen or the Autograd API, the C++ frontend is the recommended entry point to the PyTorch C++ ecosystem. While it is still in beta as we collect user feedback (from you!), it provides both more functionality and better stability guarantees than the ATen and Autograd APIs.

https://pytorch.org/cppdocs/notes/faq.html#i-created-a-tensor-using-a-function-from-at-and-get-errors

Replace at:: with torch:: for factory function calls. You should never use factory functions from the at:: namespace, as they will create tensors. The corresponding torch:: functions will create variables, and you should only ever deal with variables in your code.
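The distinction the docs describe can be sketched in the C++ frontend as follows. This is a minimal illustration, assuming a standard libtorch setup; it is the behavior the Java bindings would need to mirror by targeting the torch namespace:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // torch:: factory: creates a variable that participates in autograd.
  torch::Tensor a = torch::rand({2, 3}, torch::requires_grad());
  std::cout << a.requires_grad() << std::endl;  // true

  // at:: factory: creates a plain ATen tensor without autograd tracking,
  // even if requires_grad is requested via TensorOptions.
  at::Tensor b = at::rand({2, 3});
  std::cout << b.requires_grad() << std::endl;

  // Autograd works on the torch:: tensor: backward() populates a.grad().
  auto loss = a.sum();
  loss.backward();
  std::cout << a.grad().sizes() << std::endl;  // [2, 3]
}
```

Calling backward() on a graph built only from at:: tensors is what produces the errors the FAQ entry above refers to.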

One thing we might have to consider is backward compatibility, e.g. using different names for colliding functions in the torch namespace.
