Labels: high priority, module: docs, module: multiprocessing, triaged
Description
🐛 Bug
The following program never terminates.
```python
import torch
import torch.multiprocessing as mp

def foo():
    x = torch.ones((2, 50, 10))
    return torch.einsum('ijl,ikl->ijk', x, x)

if __name__ == '__main__':
    foo()
    p = mp.Process(target=foo)
    p.start()
    p.join()
```
The behavior persists if one replaces the `einsum` inside `foo` with an equivalent operation, e.g. `bmm(y, y.transpose(1, 2))` or `(y.unsqueeze(2) * y.unsqueeze(1)).sum(3)`.
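For reference, a quick sanity check (my own, not from the report) that these three expressions really do compute the same result, so the hang is not specific to `einsum` itself:

```python
import torch

# All three forms compute, for each batch i, the Gram matrix y_i @ y_i^T.
y = torch.ones((2, 50, 10))
a = torch.einsum('ijl,ikl->ijk', y, y)
b = torch.bmm(y, y.transpose(1, 2))
c = (y.unsqueeze(2) * y.unsqueeze(1)).sum(3)
print(torch.allclose(a, b) and torch.allclose(a, c))  # True
```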
It doesn't reproduce, however, if one doesn't call `foo` in the main block before starting the process.
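A workaround that is often suggested for fork-related hangs of this kind is to use the `spawn` start method, so the child does not inherit thread state from the already-initialized parent via `fork`. A sketch (my addition, not verified against this exact setup):

```python
import torch
import torch.multiprocessing as mp

def foo():
    x = torch.ones((2, 50, 10))
    return torch.einsum('ijl,ikl->ijk', x, x)

if __name__ == '__main__':
    # Start children with a fresh interpreter instead of fork().
    mp.set_start_method('spawn')
    foo()
    p = mp.Process(target=foo)
    p.start()
    p.join()
```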
Environment
PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Arch Linux
GCC version: (GCC) 8.2.1 20181127
CMake version: version 3.13.4
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Additional context
Perhaps related to #2245.