Tag Archives: python

Building an indoor NFT hydroponics system with Raspberry Pi monitoring

I love plants. I’m not quite sure why, but over recent years it’s been a growing love affair; pun intended.

I’m lucky to have room on my rooftop to grow lots of veggies in the summer. I’ve been building up capacity and I’m now up to 7 irrigated containers. I even managed to grow corn that was 9+ feet tall during my first summer! I’m now engaged in a ruthless battle with our beloved squirrels over domination of the garden’s bounty.

However, each year as our Montréal winter drew closer, I watched, powerless, as my once-strong crops withered away. This got me interested in finding out what my options were when it came to growing veggies indoors, and that led me to hydroponics.

There is a sustainable development aspect to all of this which I find quite interesting. Growing most of the food we need locally seems like something we will have to do as a society pretty soon. I’ve been a big fan of Lufa Farms, who have been pioneers in that area, ever since they started. There are similar initiatives in most major cities. This technology is even being put in containers that are then shipped to the Arctic to grow fresh food in one of the harshest climates on the planet!

This post isn’t an exact step-by-step guide. There is a lot to know and learn, and great resources already exist online. Here, I’m going to take you through my journey and point you to relevant resources along the way.


Actually, Marty didn’t go Back To The Future: Graphing the train sequence of BTTF3

In a hurry? Go straight to the graphs.

The dataset and notebook detailing how this was done are available in the companion repository.

Two weeks ago was Back To The Future Day. October 21st, 2015 is the day Marty and Doc Brown travel to at the beginning of the second movie. The future is now the past. There were worldwide celebrations and jokes, from the Queensland police deploying a hoverboard unit and Universal Pictures releasing a Jaws 19 trailer, to Health Canada issuing an official recall notice for the DeLorean DMC-12 because of a flux capacitor defect that could prevent the car from traveling through time.

I love the trilogy and as many people probably did that week, I rewatched the movies. I also wondered if there was any fun BTTF data science project I could do. While watching the climactic sequence at the end of the third movie, I realized that as the steam locomotive pushes the DeLorean down the tracks, we get many data points as to the speed of the DeLorean. Marty is essentially reciting a dataset, all the way from 1885.

That made me ask the 1.21-gigawatt question: Do they really make it to 88 miles per hour before they run out of track?


Hacking an epic NHL goal celebration with a hue light show and real-time machine learning

** UPDATE from April 1st, 2019 (and it’s not an April Fools’ joke :P): I just launched a Kickstarter campaign to turn this project into a mobile app. Ever wanted an epic light show celebration in your living room when your favorite sports team scores a goal? This is your chance!

Back the campaign now

See media coverage of this blog post.

In Montréal at this time of year, the city practically stops and everyone starts talking, thinking and dreaming about a single thing: the Stanley Cup Playoffs. Even most of those who don’t normally care the least bit about hockey transform into die-hard fans of the Montréal Canadiens, or the Habs, as we also call them.

Below is a YouTube clip of the epic goal celebration hack in action. In a single sentence: I trained a machine learning model to detect in real time, from the live audio feed of a game, that the Habs have just scored, and to trigger a light show using Philips Hue lights in my living room.

The rest of this post explains each step that was involved in putting this together. A full architecture diagram is available if you want to follow along.
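As a teaser, the core loop boils down to something like the sketch below. This is a minimal illustration, not the actual implementation: the saved model, the audio featurization and the bridge IP are all placeholder stand-ins, built here on scikit-learn, pyaudio and the phue library.

    import time

    import joblib
    import numpy as np
    import pyaudio
    from phue import Bridge

    RATE = 44100
    CHUNK = RATE  # analyze one second of audio at a time

    clf = joblib.load("goal_detector.pkl")  # hypothetical pre-trained model
    bridge = Bridge("192.168.1.2")          # replace with your bridge's IP
    bridge.connect()

    def features(samples):
        # Placeholder featurization; the real features are described
        # in the rest of the post.
        spectrum = np.abs(np.fft.rfft(samples))
        return spectrum[:512].reshape(1, -1)

    def light_show():
        # Flash every light a few times, as a stand-in for the real show.
        for _ in range(5):
            bridge.set_group(0, "on", False)  # group 0 == all lights
            time.sleep(0.3)
            bridge.set_group(0, "on", True)
            time.sleep(0.3)

    stream = pyaudio.PyAudio().open(format=pyaudio.paInt16, channels=1,
                                    rate=RATE, input=True,
                                    frames_per_buffer=CHUNK)
    while True:
        samples = np.frombuffer(stream.read(CHUNK), dtype=np.int16)
        if clf.predict(features(samples))[0] == 1:  # 1 == "goal scored"
            light_show()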


Efficient log processing

I’ve recently learned a couple of neat tricks from my new co-worker @nicolaskruchten for processing large numbers of text files more efficiently. Our use case is going through tens of gigabytes of logs to extract specific lines and perform some operation on them. Here are a couple of things we’ve done to speed things up.

Keep everything gzipped

Often, the bottleneck will be IO. This is especially true on modern servers that have a lot of cores and RAM. By keeping the files we want to process gzipped, we can use zcat to directly read the compressed files and pipe the output to whichever script we need. This reduces the amount of data that needs to be read from the disk. For example:
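    # Read the compressed log and filter it in one pipe, without ever
    # writing a decompressed copy to disk (filename is an example):
    zcat exim_reject_1.gz | grep "gmail.com"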

If you’re piping into a Python script, you can easily loop over lines coming from the standard input by using the fileinput module, like this:
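    import fileinput

    # fileinput transparently iterates over standard input when no
    # filenames are passed, so this script can sit at the end of a pipe:
    #   zcat exim_reject_1.gz | python filter_lines.py
    for line in fileinput.input():
        if "gmail.com" in line:
            print(line, end="")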

Use parallel to use all available cores

GNU parallel is the coolest utility I’ve discovered recently. It allows you to execute a script that needs to act on a list of files in parallel. For example, suppose we have a list of 4 log files (exim_reject_1.gz, exim_reject_2.gz, etc.) and that we need to extract the lines that contain gmail.com. We could run a grep on each of those files sequentially, but if our machine has 4 cores, why not run all the greps at once? It can be done like this using parallel:
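    # One "zcat | grep" per file, 4 jobs at a time; {} is replaced by
    # each filename matching the glob:
    parallel -j4 "zcat {} | grep gmail.com" ::: exim_reject*.gz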

Breaking down the previous command, we tell parallel to run, using 4 cores, the command zcat {} | grep gmail.com, where {} will be substituted with each of the files matching the selector exim_reject*.gz. Each resulting command from the substitutions of {} will be run in parallel.

What’s great about it is that you can also collect all the results from the parallel executions and pipe them into another command. We could for example decide to keep the resulting lines in a new file like this:
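    # The output of all the jobs is collected on stdout, so it can be
    # redirected as usual (output filename is just an example):
    parallel -j4 "zcat {} | grep gmail.com" ::: exim_reject*.gz > gmail_rejects.txt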

Use a ramdisk

If you’ll be doing a lot of reading and writing to the disk on the same files and have lots of ram, you should consider using a ramdisk. Doing so will undoubtedly save you lots of IO time. On Linux, it is very easy to do. The following command would create an 8GB ramdisk:
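    # Mount an 8GB tmpfs ramdisk (the mount point is just an example):
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk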

In the end…

By using all the tricks above, we were able to considerably improve the overall runtime of our scripts. Well worth the time it took to refactor our initial naive pipeline.

64-bit Scientific Python on Windows

Getting a 64-bit installation of Python with scientific packages on our dear Windows isn’t as simple as running an apt-get or port command. There is an official 64-bit Python build available, but extensions like numpy, scipy or matplotlib only have official 32-bit builds. There are commercial distributions such as Enthought that offer all the packages built in 64-bit, but at around $200 per license, this was not an option for me.

I stumbled upon the Python Extension Packages for Windows page, which contains dozens of extensions compiled for Python 2.5, 2.6 and 2.7 in 32 and 64 bits. With these packages, I was able to get a working installation in no time.

Boston Music Hackday

I was thrilled to attend the Boston Music Hackday this weekend. A lot of people hacked up some pretty cool projects, many of us coding into the very early hours of Sunday morning (aka 4am), only to get back up a few hours later (aka 8am) to keep at it until the dreaded 15h45 deadline, when we all had to submit our demos. The organisers did a wonderful job and the event was a success at every level.

The hack I did was called the PartyLister. The goal was mainly to come up with a way to generate steerable playlists that would also be personalized for a group of people (i.e., taking into account each of their musical tastes and making sure everyone gets a song they like once in a while). Given the very limited amount of time available to hack this up, I had to keep things simple, so I decided to use only social tags to do all the similarity computation. I expected the quality of the playlists would suffer, but the goal was really to develop a way to include multiple listeners in the track selection process. The algorithm should then be used in conjunction with something like the playlist generation model I presented at this year’s ISMIR.

My hack: PartyLister

Imagine you’re hosting a party and using the PartyLister as DJ for the night. Each of your guests will need to supply the software with his last.fm username and we’ll be good to go.

We go out and fetch from the last.fm API the social tags associated with the artists (and their top tracks) that our listeners know about. We also use the EchoNest API to get similar artists so we can present new artists to our listeners. From a user’s top artists, we can create a tag cloud that represents the user’s general musical taste (UMT). We’re also allowing each user to specify a set of tags that represent their current musical taste using a steerable tag cloud.
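To make this concrete, fetching the tag counts that make up an artist’s cloud can look something like the sketch below. The artist.getTopTags method is a real last.fm API endpoint, but the API key and helper name are placeholders.

    import requests

    API_KEY = "YOUR_LASTFM_API_KEY"  # placeholder
    API_ROOT = "http://ws.audioscrobbler.com/2.0/"

    def artist_tag_cloud(artist):
        """Return a {tag: count} cloud for an artist from last.fm."""
        resp = requests.get(API_ROOT, params={
            "method": "artist.gettoptags",
            "artist": artist,
            "api_key": API_KEY,
            "format": "json",
        })
        return {t["name"]: int(t["count"])
                for t in resp.json()["toptags"]["tag"]}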

Suppose you have 3 guests at your party, two of whom like pop while the other likes metal. By doing a naive combination of the users’ musical tastes, we’ll probably end up playing pop music, leaving our metalhead friend bored. To solve this, I added a user weight term which is determined by looking at the last 5 songs that played and computing the average similarity between the user’s musical taste and those songs. If we’re only playing pop songs, the metalhead will have a very low similarity between his taste and what played, so we’ll increase his weight and lower the pop lovers’ weights. When we pick the next song, this weighting scheme will allow the metalhead’s taste to count more than the pop lovers’, even if there are more of them. This will make us play a more metal-like track. After a while the weights will even out and we’ll start playing pop music again.

For sparseness reasons, I operated on artists instead of tracks. A simplified version of how I weighted each candidate artist is below. Lambda is simply a knob to determine how much the users’ musical taste will count, cd() represents the cosine distance and UMT represents a combination of the user’s general musical taste and his steerable cloud.
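Sketched in code, the weighting looks roughly like this. This is an illustrative reconstruction rather than the exact formula; in particular, the seed term (similarity to what is currently playing) is an assumption of the sketch.

    import numpy as np

    def cd(a, b):
        """Cosine distance between two tag-count vectors."""
        return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def score(candidate, seed, umts, weights, lam):
        """Score a candidate artist's tag vector (illustrative sketch).

        umts maps each user to their combined taste vector (general
        taste + steerable cloud), weights holds the dynamic per-user
        weights described above, and lam decides how much the users'
        taste counts relative to similarity with the current song.
        """
        seed_term = 1.0 - cd(seed, candidate)
        taste_term = sum(w * (1.0 - cd(umts[u], candidate))
                         for u, w in weights.items()) / sum(weights.values())
        return (1.0 - lam) * seed_term + lam * taste_term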

The following plot shows a running average of the cosine distance (dissimilarity) between users’ musical taste and the last 5 songs that played. It covers a 160-song playlist with 3 listeners in the system.

[Plot: running average cosine distance between each of the three users’ tastes and the last 5 songs played, over the 160-song playlist]

As you can see, as a user’s running average increases, his weight is also increased so that we start playing more songs that fit his taste. His average then decreases as the other users’ weights go up, forcing a return to music that fits their taste a little more. The plot shows that the system seems to be doing what we want, that is, taking into account the musical taste of multiple users and playing music that each person will like once in a while. Integrated into a real playlist generation model, I believe this could produce interesting results.

I also played with a discovery setting, where users could specify if they wanted to discover new songs or stick to what they know. This was achieved by adding a bonus or penalty to each candidate’s score, based on the discovery setting (a float between 0 and 1) and the proportion of users who knew (had already listened to) the artist in question.
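Concretely, the adjustment might look something like the following sketch (the functional form and scale factor here are just illustrative):

    def discovery_adjustment(discovery, known_fraction, scale=0.2):
        """Bonus (or penalty) added to a candidate's score.

        discovery: the group's setting, 0.0 (stick to what we know)
        to 1.0 (discover new music). known_fraction: proportion of
        users who had already listened to the candidate artist.
        """
        # A discovery-hungry group rewards unfamiliar artists
        # (low known_fraction); a conservative group penalizes them.
        return scale * (2.0 * discovery - 1.0) * (1.0 - known_fraction)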

PartyLister was not a very visually or sonically attractive hack like some of the others, but I still managed to win a prize based on popular vote. Thanks to all the great sponsors, there were a lot of prizes and so lots of winners.

Below is the Université de Montréal delegation, Mike Mandel (who also won a prize for his Bowie S-S-S-Similarities) and myself, with our bounty.

[Photo: the Université de Montréal delegation with our prizes]

I really hope to attend another hackday soon as it was all a lot of fun. Time to go get some sleep now.

My time at Sun Labs and pyaura

My internship at Sun Microsystems Labs, which has been going on for about 15 months – 9 of those full time at their campus in the Boston area – is coming to an end. During the course of those months, I’ve met a lot of very smart and fun people, I’ve worked on very challenging and stimulating problems and I’ve discovered a bunch of really good New England beers.

All my work has been centered around the Aura datastore, an open-source, scalable and distributed recommendation platform. The datastore is designed to handle millions of users and items and can generate content-based recommendations based on each item’s aura (aka tag cloud).

Last summer, under the supervision of Paul Lamere, I worked a lot on our music recommendation web application, called the Music Explaura, and designed a steerable recommendation interface. (We also have a Facebook companion app to the Explaura that was created by Jeff Alexander.)

This summer, I worked with Steve Green on many different things, including what I’d like to talk about in this post, pyaura, a Python interface to the datastore.

pyaura

The idea behind pyaura is to get the best of both worlds. While the datastore is very good at what it does – storing millions of items and being able to compute similarity between all of them very quickly – the Java framework surrounding it is a bit too rigid to quickly hack random research code on top of it. While my actual goal was to experiment with ways of doing automatic cleanup and clustering of social tags, I felt I was missing the flexibility I wanted and was used to when working on projects in Python’s interactive environment.

Without going into details: since the datastore is distributed and has many different components, it uses a technology called Jini to automatically hook them all up together. Jini takes care of automatic service discovery so you don’t have to manually specify IP addresses and so on. It also allows you to publicly export functions that remote components can call. A concrete example would be the datastore head component allowing the web server component to call its getSimilarity() function on two items. The computation goes on in the datastore head and then the results get shipped across the wire to the web server so it can serve its request. However, Jini only supports Java, leaving us no direct way to connect to the datastore using Python.

After looking around for a bit, I stumbled upon a project called JPype, which essentially allows you to launch a JVM inside Python. This lets you instantiate and use Java objects in a completely transparent way from within Python. Using JPype, I built two modules which, together, allow very simple access to the datastore through Python.

  • AuraBridge: A Java implementation of the Aura datastore interface. The bridge knows about the actual datastore because it can locate it and talk to it using Jini.
  • pyaura: A set of Python helper functions (mostly automatic type conversion). pyaura instantiates an AuraBridge instance using JPype and uses it as a proxy to get data to and from the datastore.
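To give a feel for the JPype side of things, starting a JVM and instantiating a Java class from Python looks roughly like this (the jar path and class name below are placeholders, not the real AuraBridge artifacts):

    import jpype

    # Start a JVM inside this Python process, with the (hypothetical)
    # bridge jar on the classpath.
    jpype.startJVM(jpype.getDefaultJVMPath(),
                   "-Djava.class.path=aurabridge.jar")

    # Java classes can now be used transparently; the class name below
    # is a placeholder for the real AuraBridge class.
    AuraBridge = jpype.JClass("com.sun.labs.aura.bridge.AuraBridge")
    bridge = AuraBridge()

    jpype.shutdownJVM()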

Example

To demonstrate how things become easy when using pyaura, imagine you are running an Aura datastore and have collected a lot of artist and tag information from the web. You might be interested in quickly seeing the number of artists that have been tagged with each individual tag you know about. With a few lines of code along these lines (the pyaura calls shown are illustrative stand-ins for the real module), you can get a nice histogram that answers just that question:
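    import pylab
    import pyaura

    # Connect to the running datastore (these pyaura calls are
    # illustrative stand-ins for the real API).
    store = pyaura.connect()

    # For each tag, count how many artists it was applied to.
    counts = [len(store.get_tagged_artists(tag))
              for tag in store.get_all_tags()]

    # Histogram of tags by the number of artists they were applied to.
    pylab.hist(counts, bins=50, log=True)
    pylab.xlabel("Number of artists a tag was applied to")
    pylab.ylabel("Number of tags")
    pylab.show()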

The above code produces the following plot:

[Histogram: number of tags, bucketed by how many artists each tag was applied to]

This is the result we expect, as this was generated with a datastore containing 100,000 artists. As less popular artists are added to the datastore, the effects of sparsity in social data kick in. Less popular artists are indeed tagged with fewer tags than popular artists, leading to the situation where very few tags were applied to more than 5,000 artists.

This is a small example, but it shows the simplicity of using pyaura. With very few lines of code, you can do pretty much anything with the data stored in Aura. Hopefully this will make the Aura datastore more accessible and attractive to projects looking to take advantage of its scalability and raw power while retaining the flexibility to quickly hack on top of it.