With older versions of finetune (pre 28th December 2023), it wasn't compacting models (because the "correct" code for doing this had changed). I've now gotten hold of the updated/correct code and it has been integrated into finetune.py, so this will not be an issue moving forwards.
If you are stuck with a large (5GB) model and want to compact it, you will need the updated version of AllTalk: https://github.com/erew123/alltalk_tts#-updating
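For anyone curious what the compaction step actually does, the rough idea is that the 5GB file carries training-only state (things like the optimizer) alongside the model weights, and stripping that state leaves just the inference weights. The sketch below illustrates that general technique only; the key names and paths are assumptions, not the exact code inside finetune.py, and the built-in button described further down is the supported route.

```python
import torch

# Sketch only: the key names ("optimizer", "scaler", "lr_scheduler") are
# assumptions about what a typical training checkpoint carries. Use the
# "Compact a legacy finetuned model" button in finetune.py for the real thing.
checkpoint = torch.load("finetune/best_model.pth", map_location="cpu")

# Drop training-only state if present, keeping just the model weights.
for key in ("optimizer", "scaler", "lr_scheduler"):
    checkpoint.pop(key, None)

torch.save(checkpoint, "finetune/model.pth")
```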
This process has now been built into Finetune. You would:
- Copy the 5GB `model.pth` file into the `/finetune/` folder and rename it `best_model.pth`.
- Start up `finetune.py` and go to the final tab.
- There is a button at the bottom called `Compact a legacy finetuned model`. Click the button and wait for the on-screen prompt to say it's completed.
- In the `/finetune/` folder you should now have both your `best_model.pth` and `model.pth`. The `model.pth` is your new compressed file. Copy it back to your loader location, confirm that it works (a quick check is sketched after this list), then you can delete your `best_model.pth` file.
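If you want a quick sanity check before deleting the original, something along these lines compares the two file sizes and confirms the compacted checkpoint still loads (the paths are assumptions, adjust them to wherever your files actually live):

```python
import os
import torch

# Hypothetical paths - point these at your own /finetune/ folder.
legacy_path = "finetune/best_model.pth"
compact_path = "finetune/model.pth"

print(f"Legacy size:    {os.path.getsize(legacy_path) / 1e9:.2f} GB")
print(f"Compacted size: {os.path.getsize(compact_path) / 1e9:.2f} GB")

# Loading onto the CPU is enough to confirm the file isn't corrupted.
checkpoint = torch.load(compact_path, map_location="cpu")
print("Top-level keys:", list(checkpoint.keys()))
```

The real test is still loading the compacted `model.pth` in AllTalk itself and generating some audio, but this catches an obviously broken or truncated file early.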