🔄 Compacting an existing finetuned model. #28

@erew123

Description

With older versions of finetune (pre 28th December 2023), it wasn't compacting models (because the "correct" code for doing this had changed). I've now gotten hold of the updated/correct code and integrated it into finetune.py, so this will not be an issue moving forwards.

For people stuck with large (5GB) models who want to compact them, you will need the updated version of AllTalk: https://github.com/erew123/alltalk_tts#-updating

This process has now been built into Finetune (a rough sketch of what the compaction does under the hood follows the steps below). You would:

  1. Copy the 5GB model.pth file into the /finetune/ folder and rename it best_model.pth

  2. Start up finetune.py and go to the final tab.

  3. There is a button at the bottom called Compact a legacy finetuned model. Click the button and wait for the on-screen prompt to say it's completed.

  4. In the /finetune/ folder you should now have both your best_model.pth and model.pth.
    The model.pth is your new compacted file. Copy it back to your loader location, confirm that it works, then you can delete your best_model.pth file.
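For anyone curious what the Compact a legacy finetuned model button is actually doing, here is a minimal sketch in Python/PyTorch. The checkpoint key names ("optimizer", "scaler", "lr_scheduler", "model", the "dvae" substring) are assumptions based on typical Coqui/XTTS training checkpoints, not a copy of the code in finetune.py:

```python
import torch

# A minimal sketch of "compacting" a legacy finetuned checkpoint.
# Load the full training checkpoint onto the CPU.
checkpoint = torch.load("finetune/best_model.pth", map_location="cpu")

# Drop training-only state that inference never needs; the optimizer
# state is where most of the 5GB goes. Key names are assumptions.
for key in ("optimizer", "scaler", "lr_scheduler"):
    checkpoint.pop(key, None)

# Weights that are only used during training (e.g. anything referencing
# a DVAE) can also be stripped from the model state dict (assumption).
if "model" in checkpoint:
    for name in list(checkpoint["model"].keys()):
        if "dvae" in name:
            del checkpoint["model"][name]

# Save the slimmed-down, inference-only checkpoint.
torch.save(checkpoint, "finetune/model.pth")
```

Most of the size saving comes from dropping the optimizer state, which for Adam-style optimizers is roughly twice the size of the model weights themselves.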
