Building A Table Tennis Ranking Model

I wrote a post about building a table tennis ranking model over at Rubikloud:

It uses the Bradley-Terry probability model to predict the outcome of pairwise comparisons (e.g. games or matches). I describe a simple algorithm for fitting the model (via an MM, or minorization-maximization, algorithm), as well as a simple Bayesian prior to handle ill-defined cases such as undefeated players. I even have some code on GitHub so you can build your own ranking system using Google Sheets.
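The post has the full derivation, but here's a minimal sketch of the idea in Python. This is my own illustrative code, not the Google Sheets version from GitHub; the function name and the `prior_games` parameter are made up for this example:

```python
import numpy as np

def fit_bradley_terry(wins, num_iters=100, prior_games=1.0):
    """Fit Bradley-Terry skill scores with an MM algorithm.

    wins[i, j] = number of times player i beat player j.
    prior_games: pseudo-games (half won, half lost) against a
    virtual player of fixed skill 1.0 -- a simple Bayesian-style
    prior that keeps undefeated/winless players well-defined.
    """
    n = wins.shape[0]
    games = wins + wins.T          # total games between each pair
    p = np.ones(n)                 # initial skill scores
    for _ in range(num_iters):
        for i in range(n):
            # Wins of player i, plus half of the virtual games.
            w_i = wins[i].sum() + prior_games / 2.0
            denom = prior_games / (p[i] + 1.0)   # virtual opponent
            for j in range(n):
                if j != i and games[i, j] > 0:
                    denom += games[i, j] / (p[i] + p[j])
            p[i] = w_i / denom
        p /= p.sum()               # skills are only defined up to scale
    return p

# Toy example: three players.
wins = np.array([[0., 3., 2.],
                 [1., 0., 2.],
                 [0., 1., 0.]])
skills = fit_bradley_terry(wins)
# P(player 0 beats player 2) under the fitted model:
print(skills[0] / (skills[0] + skills[2]))
```

The nice property of the MM update is that each iteration is a simple closed-form formula, which is why the whole thing fits comfortably in a spreadsheet.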

Here's a blurb:

Many of our Rubikrew are big fans of table tennis; in fact, we’ve held an annual table tennis tournament for all the employees for three years running (and I’m the reigning champion). It’s an incredibly fun event where everyone in the company gets involved, from the tournament participants to the spectators who provide lively play-by-play commentary.

Unfortunately, not everyone gets to participate, either because of travel and scheduling issues, or because they miss the actual tournament period entirely, as is the case for our interns and co-op students. Another downside is that the event is a single-elimination tournament, so while it produces a clear winner, the relative ranking of the rest of the participants is not clear.

Being a data scientist, I identified this as a thorny issue for our Rubikrew table tennis players. So, I did what any data scientist would do: I built a model.

Enjoy!

A Variational Autoencoder on the SVHN dataset

In this post, I'm going to share some notes on implementing a variational autoencoder (VAE) on the Street View House Numbers (SVHN) dataset. My last post on variational autoencoders showed a simple example on the MNIST dataset, but because it was so simple I thought I might have missed some of the subtler points of VAEs -- boy, was I right! The fact that I'm not really a computer vision guy or a deep learning guy didn't help either. Through this exercise, I picked up some of the basics of the "craft" of computer vision and deep learning; there are a lot of subtle points that are easy to gloss over if you're just reading someone else's tutorial. I'll share some of the details of the math (that I initially got wrong), along with some implementation notes and the notebook I used to train the VAE. Please check out my previous post on variational autoencoders for some background.
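For a concrete anchor, here is the VAE loss (the negative ELBO) as a generic PyTorch sketch -- this is not the code from the post's notebook, just an illustration. One of the subtle points alluded to above is the choice of reconstruction likelihood: SVHN has continuous RGB pixels, so a Gaussian likelihood (MSE) is a common choice, versus the Bernoulli likelihood typically used on binarized MNIST:

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    """Negative ELBO for a VAE with a N(0, I) prior on z.

    recon_x: decoder output; x: input batch;
    mu, logvar: parameters of q(z|x) = N(mu, diag(exp(logvar))).
    """
    # Reconstruction term: Gaussian likelihood (up to constants)
    # for continuous RGB pixels.
    recon = F.mse_loss(recon_x, x, reduction='sum')
    # KL(q(z|x) || N(0, I)) has a closed form for diagonal Gaussians.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```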

Read more…

Variational Autoencoders

This post is going to talk about an incredibly interesting unsupervised learning method in machine learning called variational autoencoders. Its main claim to fame is in building generative models of complex distributions like handwritten digits, faces, and image segments, among others. The really cool thing about this topic is that it has firm roots in probability but uses a function approximator (i.e. neural networks) to approximate an otherwise intractable problem. As usual, I'll try to start with some background and motivation, include a healthy dose of math, and along the way try to convey some of the intuition of why it works. I've also annotated a basic example so you can see how the math relates to an actual implementation. I based much of this post on Carl Doersch's tutorial, which has a great explanation of this whole topic, so make sure you check that out too.
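To give a taste of how the probability maps onto a neural network, here's the reparameterization trick at the core of every VAE implementation. This is a generic PyTorch sketch, not the annotated example from the post:

```python
import torch

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, diag(exp(logvar))) differentiably.

    Writing z = mu + sigma * eps with eps ~ N(0, I) moves the
    randomness into eps, so gradients can flow back through mu
    and logvar to the encoder -- the "reparameterization trick".
    """
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std
```

This one function is what lets us train the whole model with ordinary backpropagation despite having a sampling step in the middle.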

Read more…

Variational Bayes and The Mean-Field Approximation

This post is going to cover Variational Bayesian methods and, in particular, the most common one, the mean-field approximation. This is a topic that I've been trying to understand for a while now but didn't quite have all the background I needed. After picking up the main ideas from variational calculus and getting more fluent in manipulating probability statements, like in my EM post, this variational Bayes stuff seems a lot easier.

Variational Bayesian methods are a set of techniques to approximate posterior distributions in Bayesian inference. If this sounds a bit terse, keep reading! I hope to provide some intuition so that the big ideas are easy to understand (which they are), but of course we can't do that well unless we have a healthy dose of mathematics. For some of the background concepts, I'll try to refer you to good sources (including my own), since missing background is, I find, the main blocker to understanding this subject (admittedly, the math can sometimes be a bit cryptic too). Enjoy!
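If you want the one equation to hold onto while reading, it's the standard mean-field result, stated here without derivation:

```latex
% Mean-field approximation: factorize the approximate posterior,
%   q(Z) = \prod_i q_i(Z_i),
% and do coordinate ascent on the ELBO. The optimal j-th factor,
% holding all the others fixed, is
\log q_j^{*}(Z_j) = \mathbb{E}_{i \neq j}\left[\log p(X, Z)\right] + \text{const}
% i.e. the log joint distribution averaged over the other factors.
% Iterating this update over j converges to a local optimum of
% the ELBO.
```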

Read more…

The Calculus of Variations

This post is going to describe a specialized type of calculus called variational calculus. Analogous to the usual methods of calculus that we learn in university, this one deals with functions of functions and how to minimize or maximize them. It's used extensively in physics problems such as finding the minimum-energy path a particle takes under certain conditions. As you can imagine, it's also used in machine learning and statistics, where you want to find a density that optimizes an objective [1]. The explanation I'm going to use (at least for the first part) is heavily based upon Svetitsky's Notes on Functionals, which is so far the most intuitive explanation I've read. I'll try to follow Svetitsky's notes to give some intuition on how we arrive at variational calculus from regular calculus, with a bunch of examples along the way. Eventually we'll get to an application that relates back to probability. I think with the right intuition and explanation, it's actually not too difficult. Enjoy!
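As a preview, the workhorse of the subject is the Euler-Lagrange equation. The classic warm-up (my summary, not a quote from the notes) shows it recovering the straight line as the shortest path between two points:

```latex
% A functional J[y] = \int_a^b F(x, y, y')\,dx is stationary when
% the Euler-Lagrange equation holds:
\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = 0
% Classic example: arc length, F = \sqrt{1 + (y')^2}. Here
% \partial F / \partial y = 0, so y'/\sqrt{1 + (y')^2} must be
% constant, hence y' is constant: the shortest curve between two
% points is a straight line.
```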

Read more…

I'm Brian Keng, a former academic, current data scientist and engineer. This is the place where I write about all things technical.

Twitter: @bjlkeng

