# Variational Autoencoders

This post is going to talk about an incredibly interesting unsupervised learning method in machine learning called variational autoencoders. Its main claim to fame is in building generative models of complex distributions like handwritten digits, faces, and image segments, among others. The really cool thing about this topic is that it has firm roots in probability but uses a function approximator (i.e. neural networks) to approximate an otherwise intractable problem. As usual, I'll try to start with some background and motivation, include a healthy dose of math, and along the way try to convey some of the intuition of why it works. I've also annotated a basic example so you can see how the math relates to an actual implementation. I based much of this post on Carl Doersch's tutorial, which has a great explanation of this whole topic, so make sure you check that out too.