Ms.Robot Fashion Modelling 👩🏻🔬
＊ ✿ ❀ Training a variational autoencoder on the Fashion MNIST dataset ❀ ✿ ＊
Table of Contents 💜
 Motivation
 Autoencoders
 Label descriptions
 Download the fashion data
 Run the training script
 Loss error function
 TensorBoard monitoring model training
 Conclusion / Model Discussion
 References
Motivation 💜
 Train a variational autoencoder (VAE) using TensorFlow.js on Node with the following technical requirements:
 Tensorflow==1.12.0
 Keras==2.2.4
 TensorflowJS==0.6.7

The model will be trained on the Fashion MNIST dataset
Image of Gender/Brilliance Bias: Google’s suggestions when a user types “How to get my daughter into… “. Reference: Cameron Russell’s video “Looks aren’t everything. Believe me, I’m a model.” Link to video here.
Autoencoders 💜
Image. How autoencoders work using the MNIST data set with the number “2”
 “Autoencoding” = a data compression algorithm with compression and decompression functions
 The user defines the parameters of these functions in a variational autoencoder
 Self-supervised learning, where targets are generated from the input data
 Implemented with neural networks; useful for problems in unsupervised learning (no labels)
Variational Autoencoders (VAE) 💜
 Variational autoencoders are autoencoders, but with more constraints
 A generative model that learns the parameters of a probability distribution modeling the data
 The encoder, decoder, and VAE are three models that share weights. After training the VAE model, the encoder can be used to generate latent vectors

Refer to the Keras tutorial for a variational autoencoder (MNIST digits), except we will be using Fashion data instead :)
Image. Marie Kondo sparking joy with the wonders of variational autoencoders 👩🏻🔬
Variational Autoencoder (VAE) Example 💜
Example of an encoder network mapping inputs to latent vectors:
 Input samples x are mapped to two parameters in latent space: z_mean and z_log_sigma
 Randomly sample points z from latent normal distribution to generate data
 z = z_mean + exp(z_log_sigma) * epsilon, where epsilon is a random normal tensor
 Decoder network maps latent space points back to the original input data
from keras.layers import Input, Dense

x = Input(batch_shape=(batch_size, original_dim))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_sigma = Dense(latent_dim)(h)
Sample code for the VAE encoder network
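The random sampling step described in the bullets above (the “reparameterization trick”) can be sketched in plain NumPy. This is an illustrative sketch, not the tutorial’s actual code; the function name sample_z is made up here:

```python
import numpy as np

def sample_z(z_mean, z_log_sigma):
    """Reparameterization trick: z = z_mean + exp(z_log_sigma) * epsilon."""
    # epsilon is a random normal tensor, as in the formula above.
    epsilon = np.random.normal(size=np.shape(z_mean))
    return z_mean + np.exp(z_log_sigma) * epsilon

# Sampling a single latent point with latent_dim = 2.
z = sample_z(np.zeros(2), np.zeros(2))
```

In the Keras model this step is typically wrapped in a Lambda layer so it becomes part of the computation graph.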
Label descriptions 💜
Ms.Robot has the following fashion pieces in her wardrobe:
 T-shirt/top
 Trouser
 Pullover
 Dress
 Coat
 Sandal
 Shirt
 Sneaker
 Bag
 Ankle boot
Image. The 0 to 9 label descriptions for the Fashion MNIST dataset
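The 0 to 9 label indices map onto the wardrobe list above; a minimal Python dictionary spelling out the standard Fashion MNIST mapping:

```python
# Fashion MNIST label indices (0-9) and the wardrobe pieces they denote.
FASHION_LABELS = {
    0: "T-shirt/top",
    1: "Trouser",
    2: "Pullover",
    3: "Dress",
    4: "Coat",
    5: "Sandal",
    6: "Shirt",
    7: "Sneaker",
    8: "Bag",
    9: "Ankle boot",
}

print(FASHION_LABELS[9])  # -> Ankle boot
```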
Prepare the node environment 💜
yarn
# Or
npm install
Download the fashion data 💜
 Download Ms.Robot’s fashion dataset, the 60,000-image training set train-images-idx3-ubyte.gz, from here
 Uncompress the file (about 26 MB)
 Move the uncompressed file train-images-idx3-ubyte into the dataset folder in the example folder of this repo
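The file uses the IDX format shared with MNIST. As a sanity check after uncompressing, here is a sketch of parsing the file header; the helper name is hypothetical and not part of this repo:

```python
import struct

def read_idx3_header(path):
    # The idx3-ubyte format starts with four big-endian uint32 values:
    # a magic number (2051 for image files), image count, rows, and columns.
    # The raw uint8 pixel data follows the 16-byte header.
    with open(path, "rb") as f:
        magic, count, rows, cols = struct.unpack(">IIII", f.read(16))
    if magic != 2051:
        raise ValueError("not an idx3-ubyte image file")
    return count, rows, cols

# e.g. read_idx3_header("dataset/train-images-idx3-ubyte")
# should report 60000 images of 28x28 pixels.
```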
Run the training script 💜
 All the data cannot be fed to the model at once due to computer memory limitations, so it is split into “batches”
 When all batches have been fed exactly once, an “epoch” is complete. As the training script runs, a preview image is shown after every epoch
 At the end of each epoch, the preview image should look more and more like an item of clothing for Ms.Robot
yarn train
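The batch/epoch bookkeeping above works out as follows; the batch size of 256 is an assumed value for illustration (the training script defines its own):

```python
import math

num_images = 60000   # Fashion MNIST training set size
batch_size = 256     # assumed value; the script defines its own

# One epoch = every batch fed to the model exactly once.
batches_per_epoch = math.ceil(num_images / batch_size)
print(batches_per_epoch)  # -> 235
```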
Loss error function 💜
 A loss function accounts for error during training, since Ms.Robot is picky about her fashion pieces
 Two loss function options: the default binary cross entropy (BCE) or mean squared error (MSE)

The loss from a good training run will be in the approximate 40-50 range, whereas an average training run will be close to zero
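A VAE loss combines the reconstruction term (BCE by default here, MSE as the alternative) with a KL-divergence term that constrains the latent distribution. A NumPy sketch, written to be consistent with the earlier sampling formula where sigma = exp(z_log_sigma); this is an illustration, not the repo’s implementation:

```python
import numpy as np

def vae_loss(x, x_decoded, z_mean, z_log_sigma):
    eps = 1e-7  # avoids log(0)
    # Reconstruction error: binary cross entropy summed over pixels.
    bce = -np.sum(x * np.log(x_decoded + eps)
                  + (1 - x) * np.log(1 - x_decoded + eps))
    # KL divergence between N(z_mean, exp(z_log_sigma)^2) and N(0, 1).
    kl = -0.5 * np.sum(1 + 2 * z_log_sigma - z_mean**2 - np.exp(2 * z_log_sigma))
    return bce + kl
```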
Image. Loss curve with the binary cross entropy error function
TensorBoard monitoring model training 💜
Use the logDir flag of the yarn train command to log the batch-by-batch loss values to a log directory:
yarn train --logDir /tmp/vae_logs
Start TensorBoard in a separate terminal; it prints an http:// URL to the console, where Ms.Robot can then monitor the training process in the browser:
pip install tensorboard
tensorboard --logdir /tmp/vae_logs
Image. TensorBoard’s monitoring interface.
Conclusion / Model Discussion 💜
Results show that a variational autoencoder (VAE), a generative model that learns the parameters of a probability distribution over the data, can achieve good results on the challenging dataset of 60,000 fashion training images. Because the VAE is a generative model, it can be used to generate new fashion pieces for Ms.Robot: scan the latent plane, sample latent points at regular intervals, and generate the corresponding fashion piece for each point. To serve the model and the training web page, run:
yarn watch
Image. Completed training results on Fashion MNIST: a 30x30 grid of small images for Ms.Robot, a visualization of the latent manifold “generated” by the Ms.Robot generative model.
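The latent-plane scan used for that grid can be sketched as follows; the grid bounds (-2 to 2) are an assumption for illustration, while the 30x30 size matches the image:

```python
import numpy as np

n = 30  # 30x30 grid, as in the visualization
grid = np.linspace(-2.0, 2.0, n)  # assumed bounds on the latent plane

# Sample latent points at regular intervals; each 2-D point would be fed
# through the trained decoder to generate one fashion piece.
latent_points = np.array([[x, y] for y in grid for x in grid])
print(latent_points.shape)  # -> (900, 2)
```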
References 💜
 “How to get my daughter into modeling?” https://duckduckgo.com/?q=how+to+get+my+daughter+into+modeling&t=hx&ia=web
 TensorFlow’s tutorial with tf.keras, a high-level API, to train Fashion MNIST https://www.tensorflow.org/tutorials/keras/basic_classification
 Gender bias paper: “Gender stereotypes about intellectual ability emerge early and influence children’s interests” https://fermatslibrary.com/s/gender-stereotypes-about-intellectual-ability-emerge-early-and-influence-childrens-interests
 Zalando Research Fashion MNIST data http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/
 Repo name inspiration from Mr.Robot https://mrrobot.fandom.com/wiki/Elliot_Alderson
 Google Scholar publications on Fashion MNIST datasets https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=fashionmnist&btnG=&oq=fas
 Building Autoencoders in Keras, using DL for Python https://blog.keras.io/building-autoencoders-in-keras.html
 L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples. Computer Vision and Image Understanding, 2007.
 Kaggle Data Science competitions with the fashion dataset https://www.kaggle.com/zalandoresearch/fashionmnist
 Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. Han Xiao, Kashif Rasul, Roland Vollgraf. arXiv:1708.07747
 Kingma, Diederik P., and Max Welling. “Auto-Encoding Variational Bayes.” https://arxiv.org/abs/1312.6114