Releases: Dao-AILab/flash-attention
v2.8.3
v2.8.2
Bump to v2.8.2
v2.8.1
Bump to v2.8.1
v2.8.0.post2
[CI] Build with NVCC_THREADS=2 to avoid OOM during compilation (see the build sketch after this list)
v2.8.0.post1
[CI] Compile with ubuntu-22.04 instead of ubuntu-20.04
v2.8.0
Bump to v2.8.0
v2.7.4.post1
Drop support for PyTorch 2.1
v2.7.4
Bump to v2.7.4
v2.7.3
Change version to 2.7.3 (#1437)
v2.7.2.post1
[CI] Use MAX_JOBS=1 with nvcc 12.3; OLD_GENERATOR_PATH is no longer needed
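The NVCC_THREADS and MAX_JOBS variables mentioned in the v2.8.0.post2 and v2.7.2.post1 CI fixes above can also help when compiling flash-attn from source on memory-constrained machines. Below is a minimal sketch of applying them, assuming a from-source pip build; the exact pip invocation (including `--no-build-isolation`) follows the project's published install instructions, and the specific values shown are the ones from the release notes, not recommendations.

```python
import os
import subprocess
import sys

# Sketch: apply the CI's memory-saving build knobs when compiling
# flash-attn from source. NVCC_THREADS caps nvcc's parallel compilation
# threads and MAX_JOBS caps concurrent compilation jobs; both trade
# build time for lower peak memory use.
env = dict(
    os.environ,
    NVCC_THREADS="2",  # value from the v2.8.0.post2 CI fix
    MAX_JOBS="1",      # value from the v2.7.2.post1 CI fix
)

subprocess.run(
    [sys.executable, "-m", "pip", "install", "flash-attn", "--no-build-isolation"],
    env=env,
    check=True,
)
```

The same effect can be had by exporting the two variables in the shell before running pip; the Python wrapper above just keeps the settings scoped to a single build.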