Train an LSTM (Long Short-Term Memory) model to generate text.
Yeezy Taught Me Text Generation: a neural network trained as a next-character prediction model. The model predicts the next character in a text given some preceding string of characters. Doing this repeatedly builds up a text, character by character.
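The repeated-prediction loop can be sketched as follows. This is a minimal illustration, not the demo's actual code: `predictNextChar` is a hypothetical stand-in for the trained LSTM, which in the real demo outputs next-character probabilities from the most recent characters.

```javascript
// Sketch of character-by-character generation.
// `model.predictNextChar` is a hypothetical stand-in for the trained LSTM.
function generate(model, seed, numChars) {
  let text = seed;
  for (let i = 0; i < numChars; i++) {
    // The model sees only the most recent characters and predicts the next one.
    const nextChar = model.predictNextChar(text.slice(-40));
    text += nextChar;
  }
  return text;
}

// Toy stand-in model: always "predicts" the character after the last one seen.
const toyModel = {
  predictNextChar: (context) =>
    String.fromCharCode(context.charCodeAt(context.length - 1) + 1),
};

console.log(generate(toyModel, "abc", 3)); // "abcdef"
```

The key point the loop illustrates: each predicted character is appended to the text and fed back in as context for the next prediction.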
As Kanye West said, "Lack of visual empathy, equates the meaning of L-O-V-E."
YEEZY'S SOURCE DATA
YEEZY'S MODEL LOADING
Model saved in IndexedDB: Load text data first.
YEEZY'S MODEL TRAINING
Training an effective model can take a while. Try increasing the number of epochs to improve the results.
YEEZY'S TEXT GENERATION PARAMETERS
To generate text, the model needs some number of preceding characters from which to continue; we call these characters the seed text. You can type one in, or a random substring of the input text will be extracted to serve as the seed text. Note that the seed text must be at least 40 characters long.
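Extracting a random seed can be sketched as below. This is an illustrative helper, not the demo's own source; the 40-character length comes from the minimum stated above, and the function name is an assumption.

```javascript
// Sketch: pick a random 40-character substring of the input text to use
// as the seed. SEED_LENGTH matches the demo's stated minimum seed length.
const SEED_LENGTH = 40;

function randomSeed(inputText) {
  if (inputText.length < SEED_LENGTH) {
    throw new Error("Input text must be at least " + SEED_LENGTH + " characters long");
  }
  // Any start index that leaves room for a full-length seed is valid.
  const start = Math.floor(Math.random() * (inputText.length - SEED_LENGTH + 1));
  return inputText.slice(start, start + SEED_LENGTH);
}

const sample = "abcdefghij".repeat(10); // 100 characters of input text
console.log(randomSeed(sample).length); // 40
```

Because the seed is drawn from the training text itself, it is guaranteed to contain only characters the model has seen, which a hand-typed seed may not.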
YEEZY'S MODEL OUTPUT