
[release 2.8-2.9] Delete support for Maxwell, Pascal, and Volta architectures for CUDA 12.8 and 12.9 builds #157517

@atalman

Description


🐛 Describe the bug

Please see:
https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#deprecated-architectures

Maxwell, Pascal, and Volta architectures are now feature-complete with no further enhancements planned. While CUDA Toolkit 12.x series will continue to support building applications for these architectures, offline compilation and library support will be removed in the next major CUDA Toolkit version release. Users should plan migration to newer architectures, as future toolkits will be unable to target Maxwell, Pascal, and Volta GPUs.

For CUDA 12.8 and 12.9, these architectures are deprecated but still supported. Support will be removed with the CUDA 13.0 release.

Hence we suggest announcing the deprecation of support for these architectures in PyTorch Release 2.8 and removing them in Release 2.9.

For Release 2.8 Option 1 (Currently in trunk):
CUDA 12.6: 5.0;6.0;7.0;7.5;8.0;8.6;9.0
CUDA 12.8: 7.5;8.0;8.6;9.0;10.0;12.0 -> Version Released on pypi
CUDA 12.9: 7.5;8.0;8.6;9.0;10.0;12.0+PTX

For Release 2.8 Option 2:
CUDA 12.6: 5.0;6.0;7.0;7.5;8.0;8.6;9.0
CUDA 12.8: 5.0;6.0;7.0;7.5;8.0;8.6;9.0 -> Version Released on pypi
CUDA 12.9: 7.5;8.0;8.6;9.0;10.0;12.0+PTX

Update: Option 2 is not possible due to the large binary size. See comment: #157517 (comment)
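For illustration, here is a minimal sketch (a hypothetical helper, not part of PyTorch) of how an arch-list string like the ones above determines which GPUs a wheel can run on. It uses a simplified compatibility model: an exact SASS match, binary compatibility within the same major compute capability, or JIT compilation from PTX embedded for an equal-or-older architecture.

```python
def parse_arch_list(arch_list):
    """Parse a TORCH_CUDA_ARCH_LIST-style string, e.g. '7.5;8.0;12.0+PTX',
    into a list of ((major, minor), has_ptx) tuples."""
    archs = []
    for entry in arch_list.split(";"):
        has_ptx = entry.endswith("+PTX")
        version = entry[:-4] if has_ptx else entry
        major, minor = (int(part) for part in version.split("."))
        archs.append(((major, minor), has_ptx))
    return archs


def is_supported(arch_list, capability):
    """Return True if a GPU with the given (major, minor) compute
    capability can run a build targeting this arch list, under a
    simplified model: exact SASS match, binary compatibility within
    the same major version, or JIT from an older-or-equal PTX arch."""
    for arch, has_ptx in parse_arch_list(arch_list):
        if arch == capability:
            return True  # exact SASS match
        if arch[0] == capability[0] and arch[1] <= capability[1]:
            return True  # binary-compatible within the same major version
        if has_ptx and arch <= capability:
            return True  # forward-compatible via embedded PTX + JIT
    return False


# Example: a Pascal GTX 1080 (compute capability 6.1) is covered by the
# CUDA 12.6 list but not by the CUDA 12.8 list from Option 1 above.
print(is_supported("5.0;6.0;7.0;7.5;8.0;8.6;9.0", (6, 1)))      # covered
print(is_supported("7.5;8.0;8.6;9.0;10.0;12.0", (6, 1)))        # dropped
```

Note that real CUDA compatibility rules have more nuance (e.g. PTX JIT also depends on the installed driver version), so treat this purely as a model of why the Option 1 lists exclude Maxwell, Pascal, and Volta.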

cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @ptrblck @eqy @jerryzh168 @malfet @tinglvv @kwen2501 @nWEIdia

Versions

2.9.0

Metadata


Labels

high priority · module: cuda (Related to torch.cuda, and CUDA support in general) · oncall: releng (In support of CI and Release Engineering) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Status

In Progress
