Bryan Hughes for Microsoft Azure

Building an Interactive LED Art Piece for JSConf EU

JSConf EU is arguably the most important JavaScript conference of the year. Many important technologies, such as Node.js, have been announced at JSConf EU, and a wide array of developers come from all over the globe to attend. Microsoft is sponsoring the conference this year, and we're bringing something fun along: an LED-powered art piece that you can control from the booth and with your own custom serverless implementation!

This series of blog posts will dive into how we built this project.

I've been involved in LED art for a while now, inspired by my experiences at Burning Man and related events. I've been developing a wireless lighting control system for programmable LED art for a few years, and it's pretty rock-solid at this point, making it the perfect system to base our booth code on. I took on the lead role for the project since I created the base infrastructure, and I'm fortunate to work with my amazing teammates Jan Schenk, Tanya Janca, Tierney Cyren, and Suz Hinton to bring it to life. If you're curious what this system looks like, here's a tech demo:

Tech demo video

For JSConf EU, I designed an art piece loosely inspired by bamboo forests. This device will feature multiple LED "shoots" that are controlled from the cloud.

LED piece CAD rendering

This piece uses custom hardware I originally designed for Burning Man, based on the ESP-12S, a WiFi-enabled microcontroller module built around the ESP8266. These little boards control the lights themselves and connect to a Raspberry Pi over WiFi. The Raspberry Pi will connect to Azure using Azure IoT Hub, and from there we can hook into all sorts of cloud goodness!

The system is made up of three sets of code: browser UI code, server code in the form of Azure Functions, and controller code running on the Raspberry Pi. Here is the complete architecture diagram (minus the ESP8266s):

Architecture Diagram

The browser UI code consists of two clients. The first will run on tablets at the booth, in the new Chromium-based version of Edge, and will let people walk up to the booth and take basic control of the animations. This interface will be intentionally limited, though. For more advanced animations, we'll have another web UI that allows people to submit custom animations, implemented using their own Azure Function, where they have full control over the animations.
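From the booth tablets, submitting a basic animation boils down to posting a small JSON payload to one of the Azure Functions described below. Here's a minimal sketch of what that might look like; the endpoint URL, payload shape, and field names are hypothetical stand-ins, not the project's actual contract:

```typescript
// Hypothetical payload shape and endpoint; the real contract is defined
// by the project's Azure Functions.
interface BasicAnimationRequest {
  color: string;                       // e.g. "#ff6600"
  pattern: "solid" | "pulse" | "wave";
}

async function submitBasicAnimation(animation: BasicAnimationRequest): Promise<void> {
  const response = await fetch("https://<function-app>.azurewebsites.net/api/submit-basic", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(animation)
  });
  if (!response.ok) {
    throw new Error(`Animation submission failed: ${response.status}`);
  }
}
```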

The server code consists of four Azure Functions. One implements the endpoint for submitting basic animations, and another implements the endpoint for submitting custom animations. A third returns the current queue of animations, so folks can see when their animation will run. The final Azure Function runs on a timer, pulling the next animation off the queue and sending it to the Raspberry Pi for display using Azure IoT Hub. The animation queue is implemented using Azure Storage Queues. One nice side effect of using a queue for animations is that we don't need a database at all! This simplifies the code and improves the security and privacy of user data (there's nothing to leak if it doesn't exist).
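To make that flow concrete, here's a rough sketch of what the timer-triggered function might look like, assuming the `@azure/storage-queue` and `azure-iothub` SDKs. The queue name, device ID, and environment variable names are illustrative assumptions, not the project's actual code:

```typescript
import { AzureFunction, Context } from "@azure/functions";
import { QueueClient } from "@azure/storage-queue";
import { Client } from "azure-iothub";

// Timer trigger: pop the next animation off the storage queue and
// forward it to the Raspberry Pi as a cloud-to-device message.
const runNextAnimation: AzureFunction = async (context: Context): Promise<void> => {
  // Queue name and environment variable names are assumptions
  const queue = new QueueClient(process.env.STORAGE_CONNECTION_STRING!, "animations");
  const { receivedMessageItems } = await queue.receiveMessages({ numberOfMessages: 1 });
  if (receivedMessageItems.length === 0) {
    return; // Nothing queued; try again on the next tick
  }
  const [next] = receivedMessageItems;

  // "led-controller-pi" is a hypothetical device ID registered in IoT Hub
  const hub = Client.fromConnectionString(process.env.IOT_HUB_CONNECTION_STRING!);
  await hub.send("led-controller-pi", next.messageText);

  // Delete the message so it isn't replayed on the next tick
  await queue.deleteMessage(next.messageId, next.popReceipt);
};

export default runNextAnimation;
```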

The last piece is the controller code running on the Raspberry Pi. This is the smallest piece of code, and mostly just glues IoT Hub to the RVL-Node package I wrote to communicate with the ESP8266s.
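Conceptually, that glue is a cloud-to-device message listener. A minimal sketch using the `azure-iot-device` SDK might look like the following; the hand-off to RVL-Node is left as a comment, since its exact API isn't covered here:

```typescript
import { Client, Message } from "azure-iot-device";
import { Mqtt } from "azure-iot-device-mqtt";

// Connect the Pi to IoT Hub over MQTT; the environment variable name is an assumption
const client = Client.fromConnectionString(process.env.DEVICE_CONNECTION_STRING!, Mqtt);

client.on("message", async (message: Message) => {
  // Each cloud-to-device message carries one animation payload
  const animation = JSON.parse(message.getData().toString());

  // Hand `animation` off to RVL-Node here so it can broadcast
  // the new state to the ESP8266s over the local WiFi network

  await client.complete(message); // Acknowledge so IoT Hub doesn't redeliver it
});

client.open().catch((err) => console.error("Could not connect to IoT Hub:", err));
```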

All of the code for this project is written in TypeScript. The web UIs are built with React.js and Redux. The RVL-Node module is written with a combination of TypeScript for the Node.js side of things and C++ compiled to WebAssembly for the core animation logic.

All of these pieces together should create a fun, unique booth experience. I can't wait to show it to you all!
