
Improved WGAN memory leak and training issue #4

@vishwakftw

Description


There seems to be a memory leak in the improved WGAN code. It is caused by the interaction between the torch.autograd.grad function and the BatchNorm2d layers in the Discriminator and Generator of the DCGAN used for the improved WGAN. This is also referenced in this issue: PyTorch Double Backward on BatchNorm2d.
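For context, the gradient penalty in the improved WGAN (WGAN-GP) is computed with torch.autograd.grad(..., create_graph=True), which forces a double backward through every layer of the critic, including BatchNorm2d. Below is a minimal sketch of that computation; the function and variable names are illustrative, not taken from this repository.

```python
import torch
import torch.autograd as autograd

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Sample random interpolation points between real and fake batches.
    batch_size = real.size(0)
    alpha = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interpolates = (alpha * real + (1 - alpha) * fake).detach().requires_grad_(True)

    scores = critic(interpolates)

    # create_graph=True builds a graph of the gradient itself so the
    # penalty term can be backpropagated. This is the double backward
    # that goes through BatchNorm2d and where the leak shows up.
    grads = autograd.grad(
        outputs=scores,
        inputs=interpolates,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]

    grads = grads.view(batch_size, -1)
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()
```

Note that dropping create_graph=True would avoid the double backward, but then the penalty term carries no gradient of its own, so that is not a usable workaround.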

The upstream patch is ready, but we might have to wait until the next PyTorch release.

There also seems to be a problem with training without the BatchNorm2d layers. This needs to be tracked and fixed.
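One possible interim workaround, sketched below, is to replace BatchNorm2d in the critic with a normalization that does not use cross-batch statistics, such as InstanceNorm2d (the WGAN-GP paper suggests layer normalization in the critic). The block structure and channel arguments here are assumptions, not this repository's code.

```python
import torch.nn as nn

def critic_block(in_ch, out_ch):
    # DCGAN-style downsampling block with InstanceNorm2d standing in for
    # BatchNorm2d, sidestepping the BatchNorm2d double-backward issue.
    # Kernel size, stride, and slope are the usual DCGAN choices and are
    # illustrative only.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1, bias=False),
        nn.InstanceNorm2d(out_ch, affine=True),
        nn.LeakyReLU(0.2, inplace=True),
    )
```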
