Hacking an epic NHL goal celebration with a hue light show and real-time machine learning

See media coverage of this blog post.

In Montréal this time of year, the city literally stops and everyone starts talking, thinking and dreaming about a single thing: the Stanley Cup Playoffs. Even most of those who don’t normally care the least bit about hockey transform into die-hard fans of the Montréal Canadiens, or the Habs, as we also call them.

Below is a YouTube clip of the epic goal celebration hack in action. In a single sentence: I trained a machine learning model to detect in real-time, from the live audio feed of a game, that the Habs just scored a goal, and to trigger a light show using Philips hues in my living room.

The rest of this post explains each step that was involved in putting this together. A full architecture diagram is available if you want to follow along.

 

The hack

The original goal (no pun intended) of this hack was to program a celebratory light show using Philips hue lights and to play the Habs’ goal song whenever they scored. Everything would be triggered using a big Griffin PowerMate USB button that would need to be pushed by whoever was closest to it when the goal occurred.

That is already pretty cool, but can we take it one step further? Wouldn’t it be better if the celebratory sequence could be triggered automatically?

As far as I could find, there is no API or website available online that can give me reliable notifications within a second or two that a goal was scored. So how can we do it very quickly?

Imagine watching a hockey game blindfolded: I bet you would have no problem knowing when goals are scored, because a goal sounds a lot different than anything else in a game. There is of course the goal horn if the home team scores, but also the commentator, who usually yells a very intense and passionate “GOOOAAAALLLLL!!!!!”. By hooking into the audio feed of the game and processing it in real-time using a machine learning model trained to detect when a goal occurs, we could trigger the lights and music automatically, allowing all the spectators to dance and do celebratory chest-bumps without having to worry about pushing a button.

Some signal processing

The first step is to take a look at what a goal sound looks like. The Habs’ website has a listing of all previous games with ~4-minute video highlights of each game. I extracted the audio from a particular highlight and used librosa, a library for audio and music analysis, to do some simple signal processing. If you’ve never played with sounds before, you can head over to Wikipedia to read about what a spectrogram is. You can also simply think of it as taking the waveform of an audio file and creating a simple heat map over time and audio frequencies (Hz). Low-pitched sounds are at the lower end of the y-axis and high-pitched sounds are at the upper end, while the color represents the intensity of the sound.

We’re going to be using the mel power spectrogram (MPS), which is like a spectrogram with additional transformations applied on top of it.

You can use the code below to display the MPS of a sound file.


# Mostly taken from: http://nbviewer.ipython.org/github/bmcfee/librosa/blob/master/examples/LibROSA%20demo.ipynb
import librosa
import numpy as np
import matplotlib.pyplot as plt

# Load sound file
y, sr = librosa.load("filename.mp3")

# Let's make and display a mel-scaled power (energy-squared) spectrogram
S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128)

# Convert to log scale (dB). We'll use the peak power as reference.
# (In newer versions of librosa, logamplitude() has been removed;
# use librosa.power_to_db(S, ref=np.max) instead.)
log_S = librosa.logamplitude(S, ref_power=np.max)

# Make a new figure
plt.figure(figsize=(12,4))

# Display the spectrogram on a mel scale
# sample rate and hop length parameters are used to render the time axis
librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel')

# Put a descriptive title on the plot
plt.title('mel power spectrogram')

# draw a color bar
plt.colorbar(format='%+02.0f dB')

# Make the figure layout compact
plt.tight_layout()


This is what the MPS of a 4-minute highlight of a game looks like:

mel power spectrogram of a 4-minute highlight

Now let’s take a look at an 8-second clip from that highlight, specifically when a goal occurred.

mel power spectrogram of a goal by the Canadiens

As you can see, there are very distinctive patterns when the commentator yells (the 4 big wavy lines) and when the goal horn goes off in the arena (the many straight lines). Being able to see the patterns with the naked eye is very encouraging in terms of being able to train a model to detect them.

There are tons of different audio features we could derive from the waveform to use as input for our classifier. However, I always try to start simple to create a working baseline and improve from there. So I decided to simply vectorize the MPS, which was created from 2-second clips with frequencies up to 8 kHz, using 128 mel bands at a sampling rate of 22.05 kHz. The MPS has a shape of 128×87, which results in a feature vector of 11,136 elements when vectorized.
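
The featurization code isn’t shown in the post, but here is a minimal sketch of how those numbers line up, reusing the librosa calls from the snippet above (the featurize function name is mine):

import librosa
import numpy as np

def featurize(filename):
    # load 2 seconds of audio at librosa's default sampling rate of 22,050 Hz
    y, sr = librosa.load(filename, sr=22050, duration=2.0)

    # 128 mel bands, frequencies capped at 8 kHz
    S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128, fmax=8000)
    log_S = librosa.logamplitude(S, ref_power=np.max)

    # flatten the 128x87 spectrogram into a single feature vector
    return log_S.flatten()  # 128 * 87 = 11,136 elements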

The machine learning problem

If you’re not familiar with machine learning, think of it as building algorithms that can learn from data. The type of ML task we need for this project is binary classification, which means telling the difference between two classes of things:

  • positive class: the Canadiens scored a goal
  • negative class: the Canadiens did not score a goal

Put another way, we need to train a model that can give us the probability that the Canadiens scored a goal given the last 2 seconds of audio.

A model learns to perform a task through training, which means looking at past examples of those two classes and figuring out what statistical regularities in the data allow it to separate the classes. However, it is easy for a computer to learn things by heart. The goal of machine learning is to produce models that are able to generalize what they learn to data they have never seen, to new examples. What this means for us is that we’ll be training the model on past games, but what we obviously want is to make predictions on future games in real-time as they are aired on TV.

Building the dataset

As with any machine learning project, there is a time when you will feel like a monkey, and that is usually when you’re either building, importing or cleaning a dataset. For this project, this took the form of recording the audio from multiple 4-minute game highlights and noting the time in each clip when a goal was scored by the Habs or the opposing team.

Obviously, we’ll be using the Canadiens’ goals as positive examples for our classifier, since that is what we are trying to detect.

Now what about negative examples? If you think about it, the very worst thing that could happen to this system is for it to get false positives (falsely thinking there is a goal). Imagine we are playing against the Toronto Maple Leafs, they score a goal, and the light show starts. Not only did we just get scored on and are bummed out, but on top of that the algorithm is trolling us about it by playing our own goal song! (This is naturally a fictitious example because the Leafs are obviously not making the playoffs once again this year.) To make sure that doesn’t happen, we’ll be using all the opposing team’s goals as explicit negatives. The hope is that the model will be able to distinguish between goals for and against because the commentator is much more enthusiastic about Canadiens’ goals.

To illustrate this, compare the MPS of the Habs’ goal above with the example below of a goal against the Habs. The commentator’s scream is much shorter, and the goal horn in the opposing team’s arena is at very different frequencies than the one at the Bell Centre. The goal horn only goes off when the home team scores, so the MPS below is taken from a game not played in Montréal.

mel power spectrogram of a goal against the Canadiens

In addition to the opposing team’s goals, we’ll use 50 randomly selected segments from each highlight that are far enough from an actual goal as negatives, so that the model is exposed to what the uneventful portions of a game sound like.
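
The post doesn’t quantify “far enough”, so here is a minimal sketch of that sampling, where the function name and the 10-second minimum distance are assumptions:

import random

def sample_negative_offsets(highlight_length, goal_times, n=50,
                            clip_len=2.0, min_distance=10.0):
    # pick n random clip start times that are at least min_distance
    # seconds away from every annotated goal
    offsets = []
    while len(offsets) < n:
        t = random.uniform(0, highlight_length - clip_len)
        if all(abs(t - g) > min_distance for g in goal_times):
            offsets.append(t)
    return offsets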

False negatives (missing an actual goal) are still bad, but we prefer them over false positives. We’ll talk about how we can deal with them later on.

Note that I did not do any alignment of the sound files, meaning the commentator yelling does not start at exactly the same time in every clip. The dataset ended up consisting of 10 games, with 34 goals by the Habs and 17 goals against them. The randomly selected negative clips added another 500 examples.
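
Putting the dataset together in code could then be as simple as the following sketch, assuming a featurize function like the one above and a list of labeled clips:

import numpy as np

# clips is a list of (filename, label) pairs: label 1 for a Habs goal,
# label 0 for an opposing team goal or an uneventful random segment
X = np.array([featurize(f) for f, label in clips])
y = np.array([label for f, label in clips])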

Training and picking a classifier

As I mentioned earlier, the goal was to start simple. To that effect, the first models I tried were a simple logistic regression and an SVM with an RBF kernel over the raw vectorized MPS.

I was a bit surprised that this trivial approach yielded usable results. The logistic regression got an AUC of 0.97 and an F1 score of 0.63, while the SVM got an AUC of 0.98 and an F1 score of 0.71. Those results were obtained by holding out 20% of the training data to test on.
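
The training code isn’t included in the post; a minimal scikit-learn sketch of this baseline, assuming the feature matrix X and labels y from above, might look like this:

from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score

# hold out 20% of the data to test on
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

for clf in [LogisticRegression(), SVC(kernel='rbf', probability=True)]:
    clf.fit(X_train, y_train)
    probs = clf.predict_proba(X_test)[:, 1]
    print("%s AUC: %.2f F1: %.2f" % (clf.__class__.__name__,
                                     roc_auc_score(y_test, probs),
                                     f1_score(y_test, probs > 0.5)))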

At this point I ran a few complete game broadcasts through the system and, each time the model detected a goal, I wrote the corresponding 2-second sound file out to disk. A bunch were false positives that corresponded to commercials. The model had never seen commercials before because they are not included in game highlights. I added those false positives to the negative examples, retrained, and the problem went away.

However, the AUC/F1 scores were not an accurate estimate of the performance I could expect, because I was not necessarily planning to use a single prediction as the trigger for the light show. Since I’m scoring many times per second, I could try decision rules that look at the last n predictions before making a decision.

I ran a 10-fold cross-validation, each time holding out an entire game from the training set and actually stepping through the held-out game’s highlight as if it were the real-time audio stream of a live game. That way I could test out multi-prediction decision rules.

I tried two decision rules:

  1. average of last n predictions over the threshold t
  2. m positive votes in the last n predictions, where a YES vote requires a prediction over the threshold t

For each combination of decision rule, hyper-parameters and classifier, there were 4 metrics I was looking at:

  1. Real Canadiens goal that the model detected (true positive)
  2. Opposing team goal that the model detected (really bad false positive)
  3. No goal but the model thought there was one (false positive)
  4. Canadiens goal the model did not detect (false negative)

SVMs ended up being able to get more true positives but did a worse job on false positives. What I ended up using was a logistic regression with the second decision rule: to trigger a goal, there need to be 5 positive votes out of the last 20 predictions, and a vote is cast when the probability of a goal is over 90%. The cross-validation results for that rule were 23 Habs goals detected, 11 not detected, 2 opposing team goals falsely detected and no other false positives.
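
As a sketch, that decision rule only takes a few lines (the function name is mine):

from collections import deque

# keep the probabilities of the last 20 predictions
last_preds = deque(maxlen=20)

def goal_triggered(prob, threshold=0.9, min_votes=5):
    # decision rule 2: a YES vote is cast when the probability is over
    # the threshold; trigger when 5 of the last 20 predictions are votes
    last_preds.append(prob)
    votes = sum(1 for p in last_preds if p > threshold)
    return votes >= min_votes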

Looking at the Habs’ 2014-15 season statistics, they scored an average of 2.61 goals per game and were scored on 2.24 times per game. Given the cross-validation miss rate of 11 goals out of 34, I can loosely expect the algorithm to miss about 1 Habs goal per game (2.61 × 11/34 ≈ 0.84) and, given the 2 false detections out of 17 opposing goals, to go off for a goal by the opposing team about once every 4 games (2.24 × 2/17 ≈ 0.26 per game).

Note that the trained model only works for the specific TV station and commentator I trained on. I trained on regular season games aired on TVA Sports because they are airing the playoffs. I tried testing on a few games aired on another station and basically detected no goals at all. This means performance is likely to go down if the commentator catches a cold.

Philips hue light show

Now that we’re able to do a reasonable job of identifying goals, it was time to create a light show that rivals those crazy Christmas ones we’ve all seen. This has 2 components: playing the Habs’ goal song and flashing the lights to the music.

The goal song I play is not the one currently in use at the Bell Centre, but the one they used in the 2000s. It is called “Le Goal Song” by the Montréal band L’Oreille Cassée. To the best of my knowledge, the song is not available for sale and can only be found on YouTube.

Philips hues are smart LED multicolor lights that can be controlled using an iPhone app. The app talks to the hue bridge, which is connected to your wifi network, and the bridge talks to the lights over the ZigBee Light Link protocol. In my living room, I have the 3 starter-kit hue lights, a lightstrip under my kitchen island and a Bloom pointing at the wall behind my TV. Hues are not specifically meant for light shows; I usually use them to create an interesting atmosphere in my living room.

I realized the lights can be controlled using a REST API that runs on the bridge. Using the very effective phue library, we can interface with the hue bridge API from python. At that point, it was simply a question of programming a sequence of color and intensity calls that would roughly go along with the goal song I wanted to play.

Below is an example of using phue to make each light cycle through the colors blue, white and red 10 times.


import time
from phue import Bridge

# connect to the hue bridge (bridge_ip is the bridge's IP address
# on your network)
b = Bridge(bridge_ip)
b.connect()

# Setup colors as CIE xy coordinates
colors = {
    "bleu": [0.1393, 0.0813],
    "blanc": [0.3062, 0.3151],
    "rouge": [0.674, 0.322]
}
colorKeys = list(colors.keys())

# change each light's color 10 times
for cycle in xrange(10):
    # hue light IDs start at 1
    for light in xrange(1, 6):
        # on each cycle, each light goes to the next color, which is
        # either blue, white or red
        next_color = colors[colorKeys[(cycle + light) % 3]]
        # transitiontime is in deciseconds
        b.set_light(light, 'xy', next_color, transitiontime=2.5)
    time.sleep(1)


I deployed this as a simple REST API using bottle. This way, the celebratory light show is decoupled from the trigger; the lights can be triggered easily by calling the /goal endpoint.
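
The API code isn’t shown in the post; a minimal bottle sketch, where run_light_show stands in for the phue sequence above, could look like this:

import threading
from bottle import route, run

@route('/goal')
def goal():
    # play the goal song and run the light show without blocking the API
    threading.Thread(target=run_light_show).start()
    return "GOAL!"

run(host='0.0.0.0', port=8082)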

Hooking up to the live audio stream

My classifier was trained on audio clips offline. To make this whole thing come together, the missing piece was the real-time scoring of a live audio feed.

I’m running all of this on OS X, and to get the live audio into my python program, I needed two components: Soundflower and pyaudio. Soundflower acts as a virtual audio device and allows audio to be passed between applications, while pyaudio is a library that can be used to play and record audio in python.

The way things need to be configured is this: the system audio output is first set to the Soundflower virtual audio device. At that point, no sound will be heard, because nothing is being sent to the output device. In python, you can then configure pyaudio to capture the audio coming into the virtual audio device, process it, and resend it out to the normal output device. In my case, that is the HDMI output going to the TV.

As you can see from the code snippet below, you start listening to the stream by giving pyaudio a callback function that will be called each time the captured frames buffer is full. In the callback, I add the frames to a ring buffer that keeps 2 seconds worth of audio, because that is the size of the training examples I used to train the model. The callback gets called many times per second. Each time, I take the contents of the ring buffer and score it using the classifier. When a goal is detected by the model, this triggers a REST call to the /goal endpoint of the light show API.


import pyaudio
import librosa
import numpy as np
import requests
from time import sleep

# Simple fixed-size ring buffer backed by a numpy array.
# (The RingBuffer class was not included in the original post;
# this is a minimal stand-in.)
class RingBuffer(object):
    def __init__(self, size):
        self.data = np.zeros(size, dtype=np.float32)

    def extend(self, x):
        self.data = np.roll(self.data, -len(x))
        self.data[-len(x):] = x

    def get(self):
        return self.data

# ring buffer will keep the last 2 seconds worth of audio
ringBuffer = RingBuffer(2 * 22050)

def callback(in_data, frame_count, time_info, flag):
    audio_data = np.fromstring(in_data, dtype=np.float32)

    # we trained on audio with a sample rate of 22050 Hz,
    # so we need to downsample the 44100 Hz stream
    audio_data = librosa.resample(audio_data, 44100, 22050)
    ringBuffer.extend(audio_data)

    # the machine learning model takes the waveform as input and
    # decides if the last 2 seconds of audio contain a goal
    # (model wraps the classifier and the decision rule; not shown)
    if model.is_goal(ringBuffer.get()):
        # GOAL!! Trigger the light show
        requests.get("http://127.0.0.1:8082/goal")

    return (in_data, pyaudio.paContinue)

pa = pyaudio.PyAudio()

# findAudioDevices() finds the index of the Soundflower
# input device and the HDMI output device (not shown)
dev_indexes = findAudioDevices()

stream = pa.open(format=pyaudio.paFloat32,
                 channels=1,
                 rate=44100,
                 output=True,
                 input=True,
                 input_device_index=dev_indexes['input'],
                 output_device_index=dev_indexes['output'],
                 stream_callback=callback)

# start the stream
stream.start_stream()

while stream.is_active():
    sleep(0.25)

stream.close()
pa.terminate()


Full architecture

full architecture diagram of the goal detection system

My TV subscription allows me to stream the hockey games in HD on a computer. I hooked up a Mac Mini to my TV, and that Mac is responsible for running all the components of the system:

  1. displaying the game on the TV
  2. sending the game’s audio feed to the Soundflower virtual audio device
  3. running the python goal detector that captures the sound from Soundflower, analyzes it, calls the goal endpoint if necessary and resends the audio out to the HDMI output
  4. running the light show API that listens for calls to the goal endpoint

Since the algorithm is not perfect, I also hooked up the Griffin USB button that I mentioned at the very beginning of the post. It can be used to either start or stop the light show, in case we get a false negative or a false positive respectively. This was very easy to do because a push of the button simply calls the /goal endpoint of the API, which can decide what to do with the trigger.
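
The script behind the button can then be as simple as a single REST call; this is a hypothetical minimal version:

import requests

# same endpoint the automatic detector calls; the API decides
# whether this should start or stop the light show
requests.get("http://127.0.0.1:8082/goal")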

Production results and beyond

After two playoff games against the Ottawa Senators, the model successfully detected 75% of the goals (missing 1 per game) and got no false positives. This is in line with the expected performance, and the USB button was there to save the day when the detection did not work.

This was done in a relatively short amount of time and represents the simplest approach at each step. To make this work better, there are a number of things that could be done: aligning the audio files of the positive examples, trying different example lengths, trying more powerful classifiers like a convolutional neural net, doing simple image analysis of the video feed to try to determine on which side of the ice the play is, etc.

In the meantime, enjoy the playoffs and Go Habs Go!

Media Coverage

Mini-Documentary: ESPN Fan Stories: Robot Fan Cave – April 2019

This awesome mini-doc was produced by our friends at Hodge Films for ESPN.


62 thoughts on “Hacking an epic NHL goal celebration with a hue light show and real-time machine learning”

  1. Jim Martin

    Just a thought here, but could you add a variable to the machine learning for home vs away? You could get that data from a number of different sources.
    At home games, the goal horn always sounds, which could help increase the positive hits drastically. (home game + “goal” + goal horn = good goal) At away games, the sound of the goal horn could decrease the negative events as well. (away game + “goal” + goal horn = bad goal) Also at away games, “goal + no goal horn = good goal” could help increase the positive hits.

  2. Michael

    How easy is this to implement for someone without any programming experience? I am mainly looking to be able to press a button when my team scores and have the light show play, as well as a goal horn sound play through my speakers. It would be perfect if you created an installer that could be used on a PC and is triggered when a USB button is pressed. I would pay for something like this.

  3. Mark

    I see you did this for the playoffs last season but I have to wonder, did you do anything to deal with nationally syndicated broadcasts? In my area, after the first round all other playoff games are broadcast (and therefore called) by NBC’s national broadcast team instead of the local broadcast team. On those national broadcasts, have you worked out a way to differentiate between which team scores? The play-by-play guy (on the east coast it’s generally Doc Emerick for NBC) doesn’t differentiate goal calls for either team. How would you factor this?

  4. Stu

    You can avoid the resampling step by giving pyaudio (or pysoundcard, which succeeds pyaudio) the audio settings you want directly.

    @Michael – having something like this happen when you press a button would be very straightforward and make a good learning project, start with being able to play a sound, then move to working on getting it working when pressing a button.

  5. Dan Wigi

    How would one go about making this work for just pressing the Griffin USB?

    I have read your article a few times and I am confident I understand how to program the light show. However, I am not sure how you would go about syncing that with an audio clip since I would not be running it with live stream.

    Any help or tips to get me in the right direction would be appreciated!
    Thank you!

  6. François Maillet Post author

    The script triggered by pressing the griffin USB should start both the light show and the music at the same time. In my setup, the USB button ran the exact same script as the automatic trigger from the audio stream. That script did a REST call to the light show API and started VLC with the goal song. Hope this helps!

  7. Mark Young

    Hi, I have been looking for this solution for years! Can I buy a set up from you? Very anxious to do this as soon as possible please! I have a substantial amount of smart devices, over 50 hue lights, just started using openhab, but I’m not married to anything, just want this working like yours for Jets games.

  8. Mark Young

    Hi, I am trying something similar but need advice as I do not have any electronics knowledge. I am using Home Assistant, and have hard-wired a Wemos D1 mini to the goal light speaker wire. It worked as an automation trigger for about 30 minutes, then fried the D1. Apparently I need an opto-isolator, but these seem to be only for Arduinos. I have now tried a Schlage Z-Wave door sensor as it has a 2-wire block, but I get “interference”, I think because the signal lasts several seconds instead of acting as a momentary-type switch. I’d like to solder to the pin-outs instead of the bare wires. Any suggestions about reducing the voltage, combining the 2 circuits, OR what pin-outs I could use besides the bare audio wires?

  9. Mike H

    What you have done with this is awesome. I have a whole Hue setup and am jealous. Maybe someday. Let’s go Flyers.

    I am trying to work on a similar but far, far less complex project. Not related to hue. Full disclosure, I am OK on the computer but have zero coding experience. Wondering, if you do read this, whether you could at least point me in the right direction of how you would attack it (software, hardware, etc). And I’m on Windows, not Mac.

    I want to run a Sega Genesis emulator on a PC or Raspberry pi(NHL 94 specifically). When the home team scores in the game, its essentially the exact same sound byte that plays for about 2 to 3 seconds. I would like the PC or Pi to have something running in the background listening for that byte at all times, and when a goal is scored, it would trigger the PC or PI to transmit a stored IR signal to a NHL Fan Fever Goal Light that I already own.(https://www.amazon.com/Goal-Light-Horn-Team-Labels/dp/B009P9Y4JA). I have isolated the short sound byte in wav and mp3 format.

    Any thoughts or input would be greatly appreciated. Thx for reading

  10. antimaterie

    hi,
    thx for sharing! I have a question:
    in your code, where does the class “RingBuffer” come from?

  11. Ya Ab

    Thanks for this! A few minor changes needed to the code to display the MPS of a sound file
    1. replace `import librosa` with `import librosa.display`

    2. librosa.logamplitude() has been removed. Replace that line with:
    `log_S = librosa.power_to_db(S, ref=np.max)`

    3. (For people using PyCharm) at the bottom of the file add `plt.show()`
