Tacotron2-PyTorch

Yet another PyTorch implementation of Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions (Tacotron 2). The project is based on several existing open-source implementations. I made some modifications to improve the speed and performance of both training and inference.

TODO

Requirements

  • Python >= 3.5.2
  • torch >= 1.0.0
  • numpy
  • scipy
  • pillow
  • inflect
  • librosa
  • Unidecode
  • matplotlib
  • tensorboardX

Preprocessing

Currently only LJ Speech is supported. You can modify hparams.py for different sampling rates. prep decides whether to preprocess all utterances before training or to preprocess on the fly during training. pth specifies the path where preprocessed data are stored.
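As a sketch, the preprocessing-related fields in hparams.py might look like the following. Only prep and pth are named above; the remaining names and default values are illustrative assumptions, not the repository's actual settings.

```python
# Illustrative sketch of an hparams.py for this setup.
# `prep` and `pth` are the fields described above; the other names
# and defaults are assumptions, not the repository's actual values.
class HParams:
    # --- preprocessing ---
    prep = True              # True: preprocess every utterance before training
                             # False: preprocess batches on the fly
    pth = 'preprocessed'     # directory for storing preprocessed data

    # --- audio (illustrative defaults matching LJ Speech) ---
    sample_rate = 22050      # LJ Speech's native sampling rate
    n_mel_channels = 80      # mel bands per spectrogram frame

hps = HParams()
```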

Training

  1. For training Tacotron2, run the following command.
python3 train.py --data_dir=<dir/to/dataset> --ckpt_dir=<dir/to/models>
  2. For training using a pretrained model, run the following command.
python3 train.py --data_dir=<dir/to/dataset> --ckpt_dir=<dir/to/models> --ckpt_pth=<pth/to/pretrained/model>
  3. For using TensorBoard (optional), run the following command.
python3 train.py --data_dir=<dir/to/dataset> --ckpt_dir=<dir/to/models> --log_dir=<dir/to/logs>

You can find alignment images and synthesized audio clips during training. The recording frequency and the text to synthesize can be set in hparams.py.

Inference

  • For synthesizing wav files, run the following command.
python3 inference.py --ckpt_pth=<pth/to/model> --img_pth=<pth/to/save/alignment> --wav_pth=<pth/to/save/wavs> --text=<text/to/synthesize>

Pretrained Model

You can download pretrained models from here (git commit: 9e7c26d). The hyperparameters used for training are also in the directory.

Vocoder

A vocoder is not implemented yet. For now it just reconstructs the linear spectrogram from the Mel spectrogram directly and uses Griffin-Lim to synthesize the waveform. A pipeline for WG-WaveNet is in progress. You can also refer to WaveNet, FFTNet, or WaveGlow.
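The Griffin-Lim step can be sketched as below: a minimal NumPy/SciPy implementation that iteratively re-estimates the phase of a magnitude spectrogram. This is independent of the repository's actual code, and the frame parameters (n_fft=1024, hop=256) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_fft=1024, hop=256, n_iter=32, seed=0):
    """Recover a waveform from a magnitude spectrogram by iteratively
    re-estimating the phase (Griffin & Lim, 1984). Sketch only; the
    frame parameters are assumptions, not the repo's settings."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))  # random initial phase
    for _ in range(n_iter):
        # Inverse STFT with the current phase estimate...
        _, wav = istft(mag * phase, nperseg=n_fft, noverlap=n_fft - hop)
        # ...then keep only the phase of the re-analyzed signal.
        _, _, spec = stft(wav, nperseg=n_fft, noverlap=n_fft - hop)
        phase = np.exp(1j * np.angle(spec))
    _, wav = istft(mag * phase, nperseg=n_fft, noverlap=n_fft - hop)
    return wav
```

In the repo's pipeline, `mag` would be the linear spectrogram reconstructed from the Mel spectrogram; the number of iterations trades quality for speed.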

Results

You can find some samples in results. These results were generated using either the pseudo-inverse or WaveNet.
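The pseudo-inverse here refers to approximating the linear spectrogram from the Mel spectrogram via the Moore-Penrose pseudo-inverse of the Mel filterbank. A minimal sketch follows; the crude triangular filterbank below is a stand-in for illustration, not the filterbank the repo actually uses.

```python
import numpy as np

def mel_filterbank(n_mels=80, n_fft=1024, sr=22050):
    """Crude triangular Mel filterbank (illustrative stand-in,
    not identical to librosa's slaney-normalized version)."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:  # rising edge of the triangle
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:  # falling edge
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)
    return fb

mel_basis = mel_filterbank()           # (80, 513): linear -> mel projection
inv_basis = np.linalg.pinv(mel_basis)  # (513, 80): mel -> linear, least squares
# Given a mel spectrogram `mel`, the linear approximation is
# np.maximum(0.0, inv_basis @ mel), clipped to stay non-negative.
```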

The alignment of the attention is quite good now (after about 100k training steps); the following figure shows one sample.

This figure shows the Mel spectrogram from the decoder without the postnet, the Mel spectrogram with the postnet, and the alignment of the attention.

References

This project is highly based on several existing open-source Tacotron 2 implementations.
