machine learning

An introduction to Recurrent Neural Networks for scientific applications with a case study on air quality modelling

Deep learning has attracted considerable attention for its near-human performance on a variety of complex problems, such as image recognition, game playing, and more recently conversational AI through large language models. Each of these applications requires vast volumes of data and computational resources beyond the reach of all but the richest companies. This resource-hungry nature, coupled with the hype that accompanies any deep-learning application, makes it challenging to gain a realistic assessment of deep learning's real-world potential for less demanding use-cases, such as scientific time-series modelling.

Dirichlet Process Mixture Models Part III: Chinese Restaurant Process vs Stick-breaking

In the previous entry of what has evidently become a series on modelling binary mixtures with Dirichlet Processes (part 1 discussed using pymc3 and part 2 detailed writing custom Gibbs samplers), I ended by stating that I'd like to look into writing a Gibbs sampler using the stick-breaking formulation of the Dirichlet Process, in contrast to the Chinese Restaurant Process (CRP) version I'd just implemented. Coding this up was rather straightforward and took less time than I expected, but I found the differences and similarities between these two ways of expressing the same mathematical model interesting enough for a post of its own.
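To make the contrast concrete, here is a minimal numpy sketch of the two constructions (illustrative code, not the post's own): CRP seating probabilities for a single new point, and truncated stick-breaking weights.

```python
import numpy as np

def crp_assignment_probs(counts, alpha):
    """CRP: a new point joins existing cluster k with probability
    n_k / (n + alpha), or starts a new cluster with probability
    alpha / (n + alpha)."""
    counts = np.asarray(counts, dtype=float)
    return np.append(counts, alpha) / (counts.sum() + alpha)

def stick_breaking_weights(alpha, K, seed=0):
    """Truncated stick-breaking: w_k = beta_k * prod_{j<k}(1 - beta_j),
    with beta_k ~ Beta(1, alpha); larger alpha spreads mass over more
    components."""
    betas = np.random.default_rng(seed).beta(1.0, alpha, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

print(crp_assignment_probs([5, 3], alpha=1.0))        # [0.556 0.333 0.111]
print(stick_breaking_weights(alpha=1.0, K=10).sum())  # just under 1
```

The CRP integrates out the weights and works point-by-point, while stick-breaking samples the (truncated) weights explicitly, which is one reason the two formulations lend themselves to rather different samplers.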

Writing Gibbs Samplers for clustering binary data using Dirichlet Processes

Back at the start of the year (which really doesn't seem like that long a time ago) I was looking at using Dirichlet Processes to cluster binary data using PyMC3. I was unable to get the PyMC3 mixture model API working with the general-purpose Gibbs Sampler, but after some tweaking of a custom likelihood function I got something reasonable-looking working using Variational Inference (VI). While this was still useful for exploratory analysis, I'd prefer to use MCMC sampling so that I have more confidence in the groupings (since VI only approximates the posterior), in case I want to use these groups to generate further research questions.
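For a flavour of what such a sampler involves, here is a minimal sketch of the collapsed Gibbs reassignment step for one binary observation under a CRP mixture of Bernoullis with Beta(a, b) priors; the names and priors here are illustrative assumptions, not the post's actual code.

```python
import numpy as np

def reassign_point(x_i, cluster_stats, alpha, a=1.0, b=1.0, seed=None):
    """Sample a cluster for the binary vector x_i. cluster_stats is a
    list of (n_k, ones_k) for each non-empty cluster, with x_i already
    removed: n_k is the cluster size and ones_k the per-dimension count
    of ones. Returning len(cluster_stats) means 'open a new cluster'."""
    rng = np.random.default_rng(seed)
    log_w = []
    for n_k, ones_k in cluster_stats:
        # Beta-Bernoulli posterior predictive per dimension
        p = (np.asarray(ones_k) + a) / (n_k + a + b)
        log_w.append(np.log(n_k)
                     + np.sum(x_i * np.log(p) + (1 - x_i) * np.log1p(-p)))
    # new cluster: prior predictive, weighted by the CRP concentration alpha
    p0 = a / (a + b)
    log_w.append(np.log(alpha)
                 + np.sum(x_i * np.log(p0) + (1 - x_i) * np.log1p(-p0)))
    log_w = np.asarray(log_w)
    probs = np.exp(log_w - log_w.max())  # normalise stably in log space
    return rng.choice(len(probs), p=probs / probs.sum())
```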

Optimising a football team rating system for match prediction accuracy

While I've been quite happy with the performance of my Predictaball football rating system, one thing that's bothered me since its inception last summer is the reliance on hard-coded parameters. Like many other football rating methods, it is an adaptation of the Elo system designed for chess by Arpad Elo in the 1950s. His aim was to devise an easily implementable system for rating competitors in a two-player zero-sum game.
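For reference, the core Elo update looks roughly like the sketch below; the k-factor and the 400-point scale are the classic defaults rather than Predictaball's tuned values.

```python
def elo_update(r_home, r_away, outcome, k=20.0):
    """Classic Elo update. outcome: 1.0 home win, 0.5 draw, 0.0 away win."""
    expected_home = 1.0 / (1.0 + 10.0 ** ((r_away - r_home) / 400.0))
    delta = k * (outcome - expected_home)
    return r_home + delta, r_away - delta

# an upset (lower-rated home side wins) moves the ratings furthest
print(elo_update(1500, 1600, outcome=1.0))  # (1512.8, 1587.2)
```

The hard-coded parameters the post refers to are exactly quantities like k and the rating scale, which is what makes optimising them for match prediction accuracy attractive.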

Fixing bug with predicting clusters in flexmix

A second post in 2 days on mixture modelling? No prizes for guessing what type of analysis I've been preoccupied with recently! Today's post provides an ugly hack to fix a bug in the R flexmix package for likelihood-based mixture modelling, and offers a cautionary tale about environments. In short, I've encountered problems when trying to predict the cluster membership of out-of-sample data using this package, and judging from a couple of posts I found online, I'm not the only one.

Modelling Bernoulli Mixture Models with Dirichlet Processes in PyMC

I've been spending a lot of time over the last week getting Theano working on Windows and playing with Dirichlet Processes for clustering binary data using PyMC3. While there is a great tutorial for mixtures of univariate distributions, there isn't much out there for multivariate mixtures, and Bernoulli mixtures in particular. This notebook shows an example of fitting such a model with PyMC3, highlighting the importance of careful parameterisation and demonstrating that variational inference can prove advantageous over standard sampling methods like NUTS for such problems.
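As a rough sketch of what such a model can look like, assuming a truncated stick-breaking representation and placeholder data (variable names and the truncation level K are illustrative; this is not the notebook's exact code):

```python
import numpy as np
import pymc3 as pm
import theano.tensor as tt

N, D, K = 100, 10, 20                        # observations, dimensions, truncation
X = np.random.binomial(1, 0.5, size=(N, D))  # placeholder binary data

with pm.Model() as model:
    alpha = pm.Gamma("alpha", 1.0, 1.0)
    beta = pm.Beta("beta", 1.0, alpha, shape=K)
    # truncated stick-breaking weights
    w = pm.Deterministic(
        "w", beta * tt.concatenate([tt.ones(1),
                                    tt.extra_ops.cumprod(1.0 - beta)[:-1]]))
    theta = pm.Beta("theta", 1.0, 1.0, shape=(K, D))  # per-cluster Bernoulli params
    # log-likelihood of each point under each component, marginalising assignments
    comp_logp = (X[:, None, :] * tt.log(theta)
                 + (1 - X[:, None, :]) * tt.log1p(-theta)).sum(axis=-1)  # (N, K)
    pm.Potential("loglik", pm.math.logsumexp(tt.log(w) + comp_logp, axis=1).sum())
    approx = pm.fit(n=20000, method="advi")  # variational inference, per the post
```

Marginalising out the discrete component assignments keeps the model fully differentiable, which both NUTS and ADVI require.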

Predictaball has its own website!

I've created a website for Predictaball with team ratings and match predictions for all 4 main European leagues, at thepredictaball.com. It has each team's current rating and plots showing the change over the course of the season, along with match outcome forecasts. Various statistics are also included, such as the biggest upset, the worst teams in history, and this season's predictive accuracy. Previously, only Premiership match predictions were made available (via Twitter), so I'm happy to have finally got this website released.

Predictaball: mid-season review

This post continues on from the mid-season review of the Elo system and looks at my Bayesian football prediction model, Predictaball, up to and including matchday 20 of the Premier League (29th December). I'll go over the overall predictive accuracy and compare my model to others, including bookies, expected goals (xG), and a compilation of football models. So far, across the top 4 European leagues, there have been 696 matches, with 379 (54%) of these outcomes correctly predicted.

Elo ratings of the Premier League: mid-season review

This is going to be the first of 2 posts looking at the mid-season performance of my football prediction and rating system, Predictaball. In this post I'm going to focus on the Elo rating system. I'll first look at how the teams in the Premiership stand, both in terms of their Elo rating and their accumulated points, as displayed in the table below, ordered by Elo. Over-performing teams, defined as being at least 3 ranks higher in points than in Elo, are coloured green, while under-performing teams (the opposite) are highlighted in red.

Predictaball

A machine learning sports prediction bot