Yeezy Taught Me Text Generation

Train an LSTM (Long Short-Term Memory) model to generate text.

DESCRIPTION

Yeezy Taught Me Text Generation is a neural-network-based next-character prediction model. The model predicts the next character in a text given some preceding string of characters; doing this repeatedly builds up a text, character by character.
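For the curious, here is a minimal sketch of how such a model can be assembled. The IndexedDB model-saving and the layer-size field below point at TensorFlow.js, so the sketch assumes that library; the function name createModel and its parameters are illustrative, not necessarily the demo's exact code.

    import * as tf from '@tensorflow/tfjs';

    // Build a character-level next-character model: it reads sampleLen
    // one-hot-encoded characters and outputs a probability for each of
    // the charSetSize possible next characters.
    function createModel(
        sampleLen: number, charSetSize: number,
        lstmLayerSizes: number[]): tf.LayersModel {
      const model = tf.sequential();
      lstmLayerSizes.forEach((units, i) => {
        model.add(tf.layers.lstm({
          units,
          // Stacked LSTM layers need full sequences from the layer below.
          returnSequences: i < lstmLayerSizes.length - 1,
          inputShape: i === 0 ? [sampleLen, charSetSize] : undefined,
        }));
      });
      // Softmax over the character set yields next-character probabilities.
      model.add(tf.layers.dense({units: charSetSize, activation: 'softmax'}));
      return model;
    }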

As Kanye West said, "Lack of visual empathy, equates the meaning of L-O-V-E."

Yeezy taught me well

YEEZY'S STATUS

Please select a text data source or enter your custom text in the text box below and click "Load source data".
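Once the source text is loaded, the demo needs the set of distinct characters it contains; each character's position in that set is its class id for one-hot encoding. A minimal sketch (the helper name getCharSet is illustrative):

    // Collect the distinct characters of the source text. A character's
    // index in this array serves as its class id for one-hot encoding.
    function getCharSet(text: string): string[] {
      return Array.from(new Set(text.split('')));
    }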

YEEZY'S SOURCE DATA

YEEZY'S MODEL LOADING

Model saved in IndexedDB: Load text data first.

LSTM layer size(s) (e.g., 128 or 100,50):
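The comma-separated sizes map one-to-one onto stacked LSTM layers, and TensorFlow.js can persist a trained model to the browser's IndexedDB with a single call. A sketch, reusing createModel from above; the storage key yeezy-lstm is hypothetical, not necessarily the demo's actual key:

    import * as tf from '@tensorflow/tfjs';

    // Hypothetical storage key; the demo's actual key may differ.
    const MODEL_URL = 'indexeddb://yeezy-lstm';

    // '128' -> [128]; '100,50' -> [100, 50]
    function parseLayerSizes(text: string): number[] {
      return text.split(',').map(s => Number(s.trim()));
    }

    // Reuse a model saved in IndexedDB if one exists; otherwise build a
    // fresh one (createModel is the sketch above).
    async function loadOrCreateModel(
        sampleLen: number, charSetSize: number,
        sizesText: string): Promise<tf.LayersModel> {
      try {
        return await tf.loadLayersModel(MODEL_URL);
      } catch {
        return createModel(sampleLen, charSetSize, parseLayerSizes(sizesText));
      }
    }

    // After training, persist with: await model.save(MODEL_URL);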

YEEZY'S MODEL TRAINING

It can take a while to train an effective model. Try increasing the number of epochs to improve the results. A sketch of how these parameters drive training follows the list below.

Number of Epochs:
Examples per epoch:
Batch size:
Validation split:
Learning rate:
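A hedged sketch of the training loop these fields might feed: each epoch draws a fresh set of one-hot-encoded (window, next character) pairs from the source text and fits the model on them. RMSProp and categorical cross-entropy are assumptions, as are the helper names.

    import * as tf from '@tensorflow/tfjs';

    // One-hot encode numExamples random windows of the source text:
    // xs is [numExamples, sampleLen, charSetSize]; ys is
    // [numExamples, charSetSize] (the character following each window).
    function nextDataEpoch(
        text: string, charSet: string[], sampleLen: number,
        numExamples: number): [tf.Tensor, tf.Tensor] {
      const xs = tf.buffer([numExamples, sampleLen, charSet.length]);
      const ys = tf.buffer([numExamples, charSet.length]);
      for (let n = 0; n < numExamples; ++n) {
        const start =
            Math.floor(Math.random() * (text.length - sampleLen - 1));
        for (let i = 0; i < sampleLen; ++i) {
          xs.set(1, n, i, charSet.indexOf(text[start + i]));
        }
        ys.set(1, n, charSet.indexOf(text[start + sampleLen]));
      }
      return [xs.toTensor(), ys.toTensor()];
    }

    async function trainModel(
        model: tf.LayersModel, text: string, charSet: string[],
        sampleLen: number, epochs: number, examplesPerEpoch: number,
        batchSize: number, validationSplit: number, learningRate: number) {
      model.compile({
        optimizer: tf.train.rmsprop(learningRate),
        loss: 'categoricalCrossentropy',
      });
      for (let i = 0; i < epochs; ++i) {
        const [xs, ys] =
            nextDataEpoch(text, charSet, sampleLen, examplesPerEpoch);
        await model.fit(xs, ys, {batchSize, validationSplit});
        tf.dispose([xs, ys]);  // free the epoch's tensors
      }
    }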

YEEZY'S TEXT GENERATION PARAMETERS

To generate text, the model needs some preceding characters from which to continue; we call these characters the seed text. You can type one in, or a random substring of the input text will be extracted to serve as the seed text. Note that the seed text must be at least 40 characters long. A sketch of the sampling loop follows the parameters below.

Length of generated text:
Generation temperature:
Seed text:
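A sketch of the sampling loop these parameters control: lower temperatures make the output more conservative and repetitive, higher temperatures make it more surprising at the cost of more spelling and grammar errors. The helper names are illustrative; charSet and model come from the earlier sketches.

    import * as tf from '@tensorflow/tfjs';

    // Draw one character index from the model's softmax output.
    // Dividing log-probabilities by the temperature sharpens (T < 1) or
    // flattens (T > 1) the distribution before sampling.
    function sample(probs: tf.Tensor, temperature: number): number {
      return tf.tidy(() => {
        const logits = tf.div(tf.log(probs), Math.max(temperature, 1e-6));
        return tf.multinomial(logits as tf.Tensor1D, 1).dataSync()[0];
      });
    }

    // Generate `length` characters from a seed, sliding a fixed-size
    // window (40 characters in this demo) forward one character at a time.
    function generateText(
        model: tf.LayersModel, charSet: string[], seed: number[],
        length: number, temperature: number): string {
      const sampleLen = seed.length;
      const indices = seed.slice();
      let generated = '';
      while (generated.length < length) {
        const input = tf.buffer([1, sampleLen, charSet.length]);
        indices.forEach((charIdx, i) => input.set(1, 0, i, charIdx));
        const inputTensor = input.toTensor();
        const probs = tf.squeeze(model.predict(inputTensor) as tf.Tensor);
        const next = sample(probs, temperature);
        generated += charSet[next];
        indices.shift();     // drop the oldest character...
        indices.push(next);  // ...and append the one just generated
        tf.dispose([inputTensor, probs]);
      }
      return generated;
    }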

YEEZY'S MODEL OUTPUT

Yeezy taught me well. Generated text:

SOURCE CODE

The source code for this project is available on GitHub here.