My last post was about some common mistakes people make when betting or gambling, even with a basic understanding of probability. This post looks at the other side: optimal betting strategies, using some fascinating results from famous mathematicians of the 1950s and 60s. I'll spend a bit of time introducing some concepts (at least new to me), setting up the problem, and digging into some of the math. We'll look at it through the lens of our simplest probability problem: the coin flip. A note: I will not be covering the part that shows you how to make a fortune -- that's an exercise best left to the reader.
Games and gambling have been part of human cultures around the world for millennia. Nowadays, the connection between games of chance and mathematics (in particular probability) is so obvious that it is taught to school children. However, the mathematics of games and gambling only started to develop formally in the 17th century, with the work of mathematicians such as Fermat and Pascal. It is no wonder, then, that many incorrect beliefs around gambling have formed that seem "intuitive" from a layman's perspective but fail to pass muster when subjected to mathematical rigor.
In this post, I want to discuss how surprisingly easy it is to be fooled into the wrong line of thinking even when approaching a problem with mathematics. We'll take a look from a theoretical (mathematical) point of view, covering topics such as the Gambler's Fallacy and the law of small numbers, as well as run some simulations in code to gain insight into the problem. This post was inspired by a paper I recently came across by Miller and Sanjurjo that explains the surprising result of how easily we can be fooled.
Modern probability theory is typically derived from the Kolmogorov axioms, using measure theory with concepts like events and sample space. In one way, it's intuitive to understand how this works as Laplace wrote:
The probability of an event is the ratio of the number of cases favorable to it, to the number of all cases possible, when [the cases are] equally possible. ... Probability is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.
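Laplace's definition is literally a counting exercise, which a few lines of Python make concrete. This is just an illustrative sketch (the function name and the die example are mine, not Laplace's):

```python
from fractions import Fraction

def laplace_probability(favorable, possible):
    """Laplace's definition: the ratio of favorable cases to all
    equally possible cases, kept as an exact fraction."""
    return Fraction(len(favorable), len(possible))

# Example: the probability of rolling an even number on a fair six-sided die.
die = [1, 2, 3, 4, 5, 6]
evens = [face for face in die if face % 2 == 0]

print(laplace_probability(evens, die))  # 1/2
```

Note the "equally possible" caveat doing all the work here: the counting only makes sense because each die face is assumed equally likely.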
However, the intuition behind this view of probability breaks down when we want to do more complex reasoning. After learning probability through the lens of coins, dice and urns full of red and white balls, I still didn't feel I had a strong grasp of how to apply it to other situations -- especially ones where it was difficult or too abstract to apply the idea of "a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible". And then I read Probability Theory: The Logic of Science by E. T. Jaynes.
Jaynes takes a drastically different approach to probability, not with events and sample spaces, but rather as an extension of Boolean logic. Taking this view made a great deal of sense to me, since I had spent a lot of time studying and reasoning in Boolean logic. The following post is my attempt to explain Jaynes's view of probability theory, where he derives it from "common sense" extensions to Boolean logic. (Spoiler alert: he ends up with pretty much the same mathematical system as Kolmogorov's probability theory.) I'll stay away from any heavy derivations and stick with the intuition, which is exactly where I think this view of probability theory is most useful.
Even though it was just a scant few years ago, my research in grad school seems like it was from another lifetime. Nowadays I deal with data, and most of my code revolves around manipulating it and extracting interesting insights from it. However, in my previous life I spent quite a bit of time dealing with satisfiability problems. So before I start writing about data and related topics, I thought I'd kick it old school and write about some topics from my formative years.
I have something to admit: I've never done any serious programming in a functional language. Yes, yes, I've done some small school assignments in Scheme (a dialect of Lisp) -- I even helped a friend at another university with his Scheme assignment -- but nothing real. Does this make me a worse programmer? Probably not; I imagine most developers haven't done anything real with functional programming. Although that's probably not what you'd expect from reading Hacker News, where you don't know programming if you haven't tried Clojure or Haskell.
My position is much more pragmatic: I'm interested in tools and techniques that help me solve problems faster, cleaner and with less headache. Ideas and concepts from functional programming languages are supposed to help with at least some of that -- and they do; they're just not a panacea for all your programming woes. Like most things, a solid grasp of the fundamentals goes a long way. So the topic of this post is something pretty fundamental: the subroutine. In particular, two important kinds of subroutines: procedures and pure functions.
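The distinction can be sketched in a few lines of Python (the names here are illustrative, not from any particular codebase): a procedure is called for its side effects, while a pure function's output depends only on its inputs.

```python
# A procedure: its point is the side effect. Calling it twice with the
# same input leaves the program in a different state each time.
log = []

def record_total(values):
    total = sum(values)
    log.append(total)  # side effect: mutates module-level state
    return total

# A pure function: no side effects, output determined entirely by input.
# Safe to call any number of times, memoize, or run in parallel.
def total(values):
    return sum(values)

record_total([1, 2, 3])
record_total([1, 2, 3])
print(log)               # the procedure left a trace: [6, 6]
print(total([1, 2, 3]))  # the pure function just returns 6
```

That trace left behind in `log` is exactly what makes procedures harder to reason about: you have to know the call history, not just the arguments, to predict the program's state.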