Is your feature request related to a problem? Please describe.
Fine-tuning on an RTX 3060 12GB with 20-second audio length. It only fits into VRAM with a batch size of 1. Using gradient accumulation of 16 to counteract this, it now fits at around 10.9 GB of VRAM usage.
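For context, here is a minimal sketch of what gradient accumulation does in a plain PyTorch loop (the model, data, and variable names below are illustrative, not the actual finetune.py code): with a batch size of 1 and 16 accumulation steps, the optimizer sees an effective batch of 16 while only one sample's activations sit in VRAM at a time.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins; the real finetune.py drives the XTTS trainer instead.
model = nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()
data = [(torch.randn(1, 8), torch.randn(1, 1)) for _ in range(32)]  # micro-batches of size 1

accumulation_steps = 16  # effective batch size = 1 * 16

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(data):
    loss = criterion(model(inputs), targets) / accumulation_steps  # scale loss per micro-batch
    loss.backward()                      # gradients accumulate across micro-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                 # one optimizer update per 16 micro-batches
        optimizer.zero_grad()
```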
Describe the solution you'd like
Lower the minimum batch size value in finetune.py so a batch size of 1 can be selected, as sketched below.
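A hedged sketch of what that change could look like, assuming the batch size is exposed as a Gradio slider in finetune.py (the parameter values and label text here are assumptions, not the repo's actual code):

```python
# Hypothetical example: the actual component names in finetune.py may differ.
import gradio as gr

batch_size = gr.Slider(
    minimum=1,      # lowered from a higher minimum so low-VRAM cards can train
    maximum=512,
    step=1,
    value=4,
    label="Batch size",
    info="If batch size is low (e.g. 1), raise gradient accumulation to keep the effective batch size reasonable.",
)
```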
Additional context
I just don't want to re-edit the values after every install/update, and this would help more people with less VRAM fine-tune. Also, add an explanation in the 'info - batch size' text telling users to turn up gradient accumulation when the batch size is low, for optimal training.