
Alltalk Integration & 1x question re voice consistency #96

@erew123

Description


Hi

Congratulations on releasing the v1 update. I've integrated it into the AllTalk v2 BETA: https://github.com/erew123/alltalk_tts/tree/alltalkbeta

If you are ever interested in making any updates to the AllTalk setup, I'd happily welcome them. The Parler engine lives here: https://github.com/erew123/alltalk_tts/tree/alltalkbeta/system/tts_engines/parler

Through AllTalk you get the full AllTalk API suite, an OpenAI v1 TTS API compatible endpoint, and integration with Text-gen-webui, Kobold and SillyTavern, as well as things like HomeAssistant via the OpenAI-compatible API.
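For reference, a request against the OpenAI-compatible endpoint looks roughly like the sketch below. The host/port (127.0.0.1:7851), the voice name and the exact handling of the "model" field are assumptions for illustration; check your own AllTalk settings for the real values.

```python
import requests

# Sketch of an OpenAI v1 TTS style request against AllTalk.
# Host, port and voice name are assumptions -- adjust to your install.
ALLTALK_URL = "http://127.0.0.1:7851/v1/audio/speech"

payload = {
    "model": "parler",            # placeholder; AllTalk serves whichever engine is loaded
    "input": "Hello from Parler via AllTalk.",
    "voice": "female_01",         # hypothetical entry from parler_voices.json
    "response_format": "wav",
}

resp = requests.post(ALLTALK_URL, json=payload, timeout=120)
resp.raise_for_status()

with open("speech.wav", "wb") as f:
    f.write(resp.content)  # raw audio bytes, same shape as the OpenAI TTS API response
```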

My one question is: how consistent should the "Using a specific speaker" feature be? And does the consistency vary by model?

If it matters to my question, here is how I have set up the voices: https://github.com/erew123/alltalk_tts/blob/alltalkbeta/system/tts_engines/parler/parler_voices.json, with the "Native" entries being the inbuilt voices.
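To make the question concrete, here is roughly how a single generation with a named speaker looks on my side (standard Parler-TTS mini v1 usage; the checkpoint and description text are just for illustration):

```python
import torch
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-v1"
).to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")

prompt = "Hey, how are you doing today?"
# Naming a speaker ("Jon") in the description is what I mean by "Using a specific speaker";
# the question is how close repeated generations with this same description should sound.
description = (
    "Jon's voice is monotone yet slightly fast in delivery, with a very close "
    "recording that almost has no background noise."
)

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio = generation.cpu().numpy().squeeze()
sf.write("parler_out.wav", audio, model.config.sampling_rate)
```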


Thanks
