feat(tts): switch default model from vctk_vits to vits (LJSpeech) #82
Switch the default Coqui TTS model from `vctk_vits` (multi-speaker, 109 speakers, ~9GB memory) to `vits` (LJSpeech, single speaker, ~1-2GB). The VCTK model loaded all 109 speaker embeddings into GPU memory even when only one speaker was used.

Changes:
- Default model: `vctk_vits` -> `vits` (LJSpeech)
- Default voice: `p339` -> `default` (single speaker, no selection needed)
- Model lists: LJSpeech now listed first as recommended
- Documentation updated with memory usage notes
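The new defaults might be sketched as follows. The constant names and exact model-ID strings are assumptions, loosely following Coqui TTS's `tts_models/<lang>/<dataset>/<arch>` naming convention rather than the repo's actual code:

```python
# Hypothetical sketch of the new defaults; constant names and model IDs
# are assumptions based on Coqui TTS's model-naming convention.
DEFAULT_MODEL = "tts_models/en/ljspeech/vits"  # was "tts_models/en/vctk/vits"
DEFAULT_VOICE = "default"                      # was "p339"

# LJSpeech listed first as the recommended option.
AVAILABLE_MODELS = [
    "tts_models/en/ljspeech/vits",  # single speaker, ~1-2GB
    "tts_models/en/vctk/vits",      # 109 speakers, ~9GB on MPS
]
```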



Summary
- Default model: `vctk_vits` (~9GB memory) to `vits` (LJSpeech, ~1-2GB)
- Default voice: `p339` to `default` (single speaker model)

Changes
- `shared/src/schemas/settings.ts`: Default voice `p339` → `default`
- `scripts/coqui-server.py`: Default model → LJSpeech, reordered model list
- `backend/src/services/coqui.ts`: Same default model change and reordering
- `frontend/src/components/settings/TTSSettings.tsx`: Updated fallback model
- `README.md`: Updated default model/voice references

Why
The `vctk_vits` model loads all 109 speaker embedding vectors into GPU memory (~9GB on MPS) even when only one speaker is used. The `vits` (LJSpeech) model uses ~1-2GB, a 7GB reduction with no quality loss for single-speaker use.
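The trade-off above can be sketched as a selection rule: the multi-speaker VCTK model only pays for its memory cost when more than one voice is actually needed. The function name and model-ID strings are illustrative assumptions, not code from this PR:

```python
def pick_model(speakers_needed: int) -> str:
    """Pick the cheaper model that covers the requested number of voices.

    Illustrative sketch: VCTK carries 109 speaker embeddings (~9GB of GPU
    memory on MPS), so it is only worth loading for multi-voice use;
    single-speaker use falls back to LJSpeech (~1-2GB).
    """
    if speakers_needed > 1:
        return "tts_models/en/vctk/vits"
    return "tts_models/en/ljspeech/vits"
```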