Here I've trained an LSTM network as a language model on the lyrics of a specific singer (in this case Taylor Swift) and then tried to generate new lyrics.
Some sample lyrics generated by the model:
> I fake a smile so he won't see you again wish i can't just fine around here thinking cause
The model starts to lose context after only a few words. The architecture is kept simple for learning purposes; a better architecture could be tried, or more lyrics could be added to the corpus, for better generation.
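For reference, here is a minimal sketch of this kind of word-level LSTM language model in Keras. It is an illustration, not the exact notebook code: `corpus`, the layer sizes, and the epoch count are placeholder values.

```python
# Minimal word-level LSTM language model in Keras (illustrative sketch).
# `corpus` is a placeholder for the list of lyric lines used for training.
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

corpus = ["i fake a smile so he won't see"]  # placeholder lyric lines

tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
vocab_size = len(tokenizer.word_index) + 1  # +1 for the padding index

# Turn each line into n-gram prefixes, so the model learns to predict
# the next word from every partial line.
sequences = []
for line in corpus:
    tokens = tokenizer.texts_to_sequences([line])[0]
    for i in range(2, len(tokens) + 1):
        sequences.append(tokens[:i])

max_len = max(len(s) for s in sequences)
sequences = pad_sequences(sequences, maxlen=max_len, padding='pre')
X, y = sequences[:, :-1], sequences[:, -1]  # last word of each prefix is the label

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 100),
    tf.keras.layers.LSTM(150),
    tf.keras.layers.Dense(vocab_size, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
model.fit(X, y, epochs=100, verbose=0)
```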
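Generation then repeats next-word prediction from a seed phrase. The greedy argmax loop below is one simple option; sampling from the softmax instead would give more varied lyrics.

```python
# Generate lyrics word by word from a seed phrase (greedy decoding).
seed = "i fake a smile"
for _ in range(20):
    tokens = tokenizer.texts_to_sequences([seed])[0]
    padded = pad_sequences([tokens], maxlen=max_len - 1, padding='pre')
    next_id = int(np.argmax(model.predict(padded, verbose=0)[0]))
    seed += ' ' + tokenizer.index_word.get(next_id, '')
print(seed)
```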
Since LSTMs, Transformers have emerged as the state-of-the-art models for many tasks: they make efficient use of attention and can be trained in parallel, unlike RNNs, which must process (and backpropagate through) sequences step by step.
- Inspired by this Tutorial Series by Laurence Moroney.
- Video lectures on NLP with Deep Learning by Christopher Manning.
- The model is trained using Google Colab.