
Rey Riel for Qlik Branch

Originally published at branch-blog.qlik.com

Ski Simulators, Qlik Core and Real-Time Analytics — a Qonnections Story


Qlik Core, React and a whole bunch of open source. Read about the fun I had developing an awesome app to go with some cool hardware.

Myself on the super fun SkyTechSport Ski Simulator

Another Qonnections has come and gone, and this year I got to be part of something really fun. Our keynote speaker for the conference was Lindsey Vonn, the US alpine ski racer with 3 Olympic medals and 7 World Cup medals. Because of this, Qlik wanted to do something really cool, and Adam Mayer, a Senior Manager here at Qlik for Technical Product Marketing, approached me to lead the development portion of this exciting project.

Myself and Lindsey Vonn at Qonnections

To get this job done Qlik teamed up with SkyTechSport, a badass company that makes killer equipment to help athletes stay on top of their game. The plan was simple: SkyTechSport would provide the super cool Ski Simulator for our attendees to ride (and the people to maintain it), do a bit of development on their end to give us access to the data points the simulator generates, and we would build some awesome data visualizations to go around it. Our implementation would include both a real-time in-game dashboard and a post-game leaderboard to track who was topping the list. All of this would support a charitable effort where Qlik would donate $1 to the Special Olympics for every gate passed in a successful run. I was to be in charge of the real-time app, and the amazing Arturo Munoz would handle the leaderboard. Some great development ahead for sure, but challenges immediately started to present themselves.

Source Code for project: https://github.com/Qlik-Branch/qonnections-ski-simulator

The first challenge that needed to be dealt with was how the simulator was passing the data. The simulator is a fast piece of equipment and the software behind it is built for the visual and physical feedback, so all the data happens in milliseconds. 30 milliseconds to be exact. So the simulator is saving the data to one file every 30 milliseconds. Over a network. And not just saving the data, overwriting the data. This brought up two concerns.

The first was that we needed to make sure the network our systems were connected to wasn't going to be bogged down by external influences. Simple enough: we just had a dedicated router with the systems hard-wired to it, and the problem was solved.

The second concern required a little more thinking and some serious testing. We wanted to make sure we got all the data. That meant catching every write of data within this 30 millisecond timeframe with no file lock issues. After a while of trying to figure out whether both writing and reading a file over a network within 30 milliseconds was even feasible, I decided to come up with a solution that would simply eliminate our restriction: move the file. If we could move the file out of the way before the simulator had a chance to overwrite it, we could work with the data in our own time. The result was actually a really simple script that would just constantly try to move this file to a different folder, naming each file with a timestamp.
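Something along these lines, assuming Node.js and made-up paths for the simulator's output and the folder we control:

```javascript
// Sketch: keep trying to move the simulator's output file out of the way
// before it gets overwritten, stamping each grab with the current time.
// The paths are made-up stand-ins for the real share and folders.
const fs = require('fs');
const path = require('path');

const WATCH_DIR = 'Z:/simulator';                         // assumed mounted share
const SOURCE_FILE = path.join(WATCH_DIR, 'ski-data.csv'); // file the simulator writes
const RAW_DIR = path.join(WATCH_DIR, 'raw');              // folder we control

setInterval(() => {
  if (!fs.existsSync(SOURCE_FILE)) return;
  const target = path.join(RAW_DIR, `ski-data-${Date.now()}.csv`);
  try {
    fs.renameSync(SOURCE_FILE, target); // move it before the next overwrite
  } catch (err) {
    // The simulator may still hold the file; just try again on the next tick.
  }
}, 10);
```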

First gate passed. Yay! The next thing to figure out was where the data was going and how it was going to get there. The answer? The awesome Qlik Core mixed with R&D's super cool command line tool corectl. With Docker Desktop installed on the system we used, I could write three files and have the entire back end set up. The first file is the docker-compose.yml file that tells Docker which engine we want and how to set it up.
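Something in this spirit, with the image tag, service name and local folder names as stand-ins for the real ones:

```yaml
# Sketch of the compose file; the service name, image tag and folder names
# are stand-ins rather than the exact values used at the show.
version: "3.3"

services:
  qix-engine:
    image: qlikcore/engine:latest     # pin a specific release in practice
    command: -S AcceptEULA=yes -S DocumentDirectory=/docs
    ports:
      - "19076:9076"                  # local 19076 -> the engine's standard 9076
    volumes:
      - ./core-docs:/docs             # where the engine stores Qlik apps
      - ./data:/data                  # data we want to load
```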

The above file tells Docker we want to use the latest (at the time of writing) qlikcore/engine image, accept the End User License Agreement, store our Qlik apps in a /docs directory (which is mounted to a local core-docs directory) and route the standard engine port 9076 to our local port 19076. We're also mounting a local data directory for when we want to load data. Once we have this file we can run docker-compose up -d and Docker will have our engine running in no time.

The second file we need is a file called corectl.yml, which is leveraged by corectl.
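In spirit it looked something like this (key names can differ slightly between corectl versions, and the app name here is made up):

```yaml
# Sketch of a corectl config; key names may differ between corectl versions,
# and the app name is an assumption.
engine: localhost:19076     # the engine exposed by docker-compose.yml
app: ski-simulator.qvf      # assumed app name
script: ./script.qvs        # the load script shown below
connections:
  data:                     # folder connection the load script uses
    connectionstring: /data
    type: folder
```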

This file tells corectl everything it needs to know to create the Qlik app we want. It points to the engine and specifies the name of the app we want, the connection to the data folder we need, and the path to the load script that will take in the necessary data.

The final file necessary is our load script, which we reference in the corectl file above.
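Something along these lines; the field names are placeholders, and ONLY is added to the second block here so the sketch doesn't double-load on a full reload:

```
// Sketch of the load script; the field names are placeholders.

// First block: a normal load, executed on a full reload (corectl build).
SkiData:
LOAD
    RunTimestamp,
    Gate,
    Speed,
    Status
FROM [lib://data/unprocessed/ski-data.csv]
(txt, utf8, embedded labels, delimiter is ',');

// Second block: the ADD prefix is what a partial reload executes, appending
// new rows to what is already in the app. ONLY keeps it from also running
// (and duplicating rows) during a full reload.
SkiData:
ADD ONLY LOAD
    RunTimestamp,
    Gate,
    Speed,
    Status
FROM [lib://data/unprocessed/ski-data.csv]
(txt, utf8, embedded labels, delimiter is ',');
```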

The key thing to note in the load script above is the ADD keyword in the second block. This allows us to leverage the partial data load feature of the engine, meaning we could load new data in quickly without losing the data already in the app, keeping our round trip from data load to front-end output quick. So with the load script and the corectl file I could run corectl build and have our Qlik app up and ready to go.

Now, with the app up and the data being saved from oblivion, I turned to the script that would actually handle the simulator's data. Using enigma.js for engine interaction, we first wanted to create a generic object holding the attendee's badge ID as well as the race ID. That way we could subscribe to the object and keep an eye on it to know when a badge was scanned.
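A minimal sketch of that piece using enigma.js; the app name, object ID and property names are assumptions:

```javascript
// Sketch: connect to the engine, open the app and create a generic object
// holding the current badge ID and race ID. Names here are assumptions.
const enigma = require('enigma.js');
const schema = require('enigma.js/schemas/12.20.0.json');
const WebSocket = require('ws');

(async () => {
  const session = enigma.create({
    schema,
    url: 'ws://localhost:19076/app/engineData',
    createSocket: url => new WebSocket(url),
  });

  const global = await session.open();
  const doc = await global.openDoc('ski-simulator.qvf');

  // Generic object holding the scanned badge and the race being run.
  const raceInfo = await doc.createObject({
    qInfo: { qId: 'currentRace', qType: 'race-info' },
    badgeId: '',
    raceId: '',
  });

  // Fires whenever the front end updates the object (e.g. on a badge scan).
  raceInfo.on('changed', async () => {
    const layout = await raceInfo.getLayout();
    console.log('Badge scanned:', layout.badgeId, 'race:', layout.raceId);
  });
})();
```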

When a badge was scanned on the front end it would update this generic object, and our script could start looking for new race files. Once the race had started, it was a simple loop that loaded in any existing data files, saved this data to the /unprocessed/ski-data.csv file referenced in the load script and told the engine to do a partial reload.
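Something along these lines; the paths are assumptions and the CSV header handling is glossed over:

```javascript
// Sketch: sweep the timestamped files the mover script produced, write them
// into the file the load script reads, then ask the engine for a partial
// reload. Paths here are assumptions.
const fs = require('fs');
const path = require('path');

const RAW_DIR = path.join(__dirname, 'raw');
const LOAD_FILE = path.join(__dirname, 'data', 'unprocessed', 'ski-data.csv');

async function processNewFiles(doc) {
  const files = fs.readdirSync(RAW_DIR).sort();
  if (files.length === 0) return;

  // Concatenate the new snapshots into the file the load script points at.
  const combined = files
    .map(f => fs.readFileSync(path.join(RAW_DIR, f), 'utf8'))
    .join('');
  fs.writeFileSync(LOAD_FILE, combined);
  files.forEach(f => fs.unlinkSync(path.join(RAW_DIR, f)));

  // qPartial = true, so only the ADD statements in the load script run.
  await doc.doReload(0, true);
}
```

In this sketch, processNewFiles would simply be called on a short interval for as long as a race was in progress.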

Finally, we can look through the current data to see if a finishing status is found and, if so, clear out the generic object and stop looking for files.
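Again just a sketch; the status value and the shape of the parsed rows are assumptions:

```javascript
// Sketch: once a row reports a finished run, reset the generic object and
// stop watching for files. The status value and row shape are assumptions.
async function checkForFinish(raceInfo, latestRows, stopWatching) {
  const finished = latestRows.some(row => row.status === 'FINISHED');
  if (!finished) return;

  // Clearing the badge and race IDs signals that the run is over.
  await raceInfo.setProperties({
    qInfo: { qId: 'currentRace', qType: 'race-info' },
    badgeId: '',
    raceId: '',
  });

  stopWatching(); // e.g. clearInterval on the polling loop
}
```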

Once we have our data loading script running and waiting, it's time to get the front end in place. The front end ended up being a React app designed by Arturo, built by me, and incorporating enigma.js, d3.js, picasso.js and Qlik GeoAnalytics. There are a bunch of parts involved, but the important bits are that we set the generic object when a badge is scanned and create some hypercubes that update when the partial reload happens.
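The hypercube part boils down to something like this, with a placeholder dimension and measure rather than the real app's fields:

```javascript
// Sketch: a session object with a hypercube that pushes fresh rows to the
// UI after every partial reload. The dimension and measure are placeholders.
async function watchSpeed(doc, onData) {
  const cube = await doc.createSessionObject({
    qInfo: { qType: 'speed-cube' },
    qHyperCubeDef: {
      qDimensions: [{ qDef: { qFieldDefs: ['Gate'] } }],
      qMeasures: [{ qDef: { qDef: 'Avg(Speed)' } }],
      qInitialDataFetch: [{ qTop: 0, qLeft: 0, qHeight: 100, qWidth: 2 }],
    },
  });

  const push = async () => {
    const layout = await cube.getLayout();
    onData(layout.qHyperCube.qDataPages[0].qMatrix); // feed React state / d3
  };

  cube.on('changed', push); // fires after each partial reload
  await push();
}
```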

With all the pieces put together it was time to do some serious testing. The upside to the way the simulator saves data is that it was incredibly easy to simulate: I just needed to write a new file every 30 milliseconds and watch all the scripts do the rest.
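A fake simulator for that is just a loop like this, with a made-up path and values:

```javascript
// Sketch of a fake simulator: overwrite one file with fresh rows every
// 30 milliseconds, just like the real thing. Path and values are made up.
const fs = require('fs');

const OUTPUT_FILE = './simulator-output/ski-data.csv';
let gate = 0;

setInterval(() => {
  gate += 1;
  const speed = (60 + Math.random() * 20).toFixed(1);
  const row = `timestamp,gate,speed,status\n${Date.now()},${gate},${speed},RUNNING\n`;
  fs.writeFileSync(OUTPUT_FILE, row); // overwrite, not append
}, 30);
```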

The one concern I had through the whole thing was speed. This was meant to be an in-game dashboard, meaning it had to update quickly, and there were a lot of moving parts. The simulator saves the data, the rename script moves the data, the data load script reads and writes the data, the engine reloads the data, recalculates the data to send down to the front end and sends it, then the front end re-renders with the new data. I wasn't expecting to be blown away, but the entire round trip took under 400 milliseconds! With metrics in place to measure how long the engine was taking, we saw 200 millisecond partial reloads happening within that time too. It's exciting to see Qlik's engine put to the test in a real-time use case and come out shining.

In the end we had a great attraction in the Expo that showed off the awesome power of Qlik and Qlik Core. We raised a significant donation for the Special Olympics and generated a ton of excitement throughout the week.

I wanted to give a big shout out to everybody I worked with, both developing and staffing the booth. Katie Abbott and Mike Marolda killed it with logistics and helping day-of, Adam Mayer was fantastic with all the organization, and Arturo Munoz was a design wizard. Thanks to all for making this such a success.

