<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rey Riel</title>
    <description>The latest articles on DEV Community by Rey Riel (@rjriel).</description>
    <link>https://dev.to/rjriel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F96313%2F54d8c100-ce2d-4374-8213-c3b3bdf271e9.jpeg</url>
      <title>DEV Community: Rey Riel</title>
      <link>https://dev.to/rjriel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rjriel"/>
    <language>en</language>
    <item>
      <title>Quick tips for securing your API</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Wed, 10 Mar 2021 17:08:49 +0000</pubDate>
      <link>https://dev.to/rjriel/quick-tips-to-securing-your-api-24d2</link>
      <guid>https://dev.to/rjriel/quick-tips-to-securing-your-api-24d2</guid>
      <description>&lt;p&gt;Here at Citadel we deal with payroll data. Important data. Private data. When dealing with private data your users need to know that you’re keeping their data safe for as long as you have it in your control. The fact that data is encrypted when it’s stored means you’re keeping data safe at rest, but what about in transmission? When you give users access to their data how do you ensure it stays protected? Here’s some quick tips that we employ here at Citadel with our APIs that we think you can use as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use SSL/TLS
&lt;/h2&gt;

&lt;p&gt;It’s a pretty straightforward concept that is now an industry standard anybody can implement. Make sure your API uses an SSL/TLS connection when sending and receiving data. What is SSL/TLS? SSL stands for Secure Sockets Layer and TLS stands for Transport Layer Security. TLS is the modern successor to SSL, but in essence both are protocols that encrypt and authenticate data transfer between two systems.&lt;/p&gt;

&lt;p&gt;By using SSL/TLS you keep the data moving between your API and the system sending/receiving it encrypted in transit. In the past adoption was slow, mostly due to implementation complexity and cost, but with services like Let’s Encrypt now offering free SSL certificates there’s really no reason not to implement SSL/TLS for your API.&lt;/p&gt;

&lt;p&gt;Here at Citadel we ensure all data across all our applications and APIs is encrypted with SSL/TLS v1.2+, and we redirect any non-encrypted traffic to the correct location or decline it. To us encryption is fundamental in all web applications, particularly ones dealing with sensitive data like payroll information.&lt;/p&gt;
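&lt;p&gt;The article keeps this at the conceptual level; as an illustration, redirecting non-encrypted traffic could look like the following sketch of a Node.js middleware (our own example, assuming an Express-style app behind a proxy that sets the x-forwarded-proto header):&lt;/p&gt;

```javascript
// Sketch only: assumes an Express-style (req, res, next) middleware and
// a reverse proxy that sets the x-forwarded-proto header.
function httpsRedirect(req, res, next) {
  // Let requests that already arrived over TLS through.
  if (req.secure || req.headers["x-forwarded-proto"] === "https") {
    return next();
  }
  // Permanently redirect any plain-HTTP request to its HTTPS equivalent.
  res.redirect(301, "https://" + req.headers.host + req.url);
}
```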

&lt;h2&gt;
  
  
  Properly Implement CORS
&lt;/h2&gt;

&lt;p&gt;CORS stands for Cross-Origin Resource Sharing and it’s an important part of security when managing an API. To put it in simple terms: when a webpage makes a request to an API, the user’s browser will first send what’s called a “preflight” request, telling the API server what headers it will be sending (including what domain the request is coming from) and what HTTP method it will use. The API server then responds with which methods it accepts and which origins it accepts them from. If the domain making the request isn’t in the list, the browser will refuse to send the actual request.&lt;/p&gt;

&lt;p&gt;So why bother with CORS? A lot of the time engineers simply set the CORS policy to accept anything (just passing an asterisk), usually out of laziness, lack of knowledge or wanting to get off the ground quickly and then forgetting about it. The problem is that this can be dangerous: it allows malicious developers to start making browser requests from their own web applications without your API’s permission, and opens the possibility for anybody anywhere to access your data. By properly implementing a CORS policy you can restrict browser API calls to only your web application. If you want to allow clients to use your API in their applications, implement a feature in which your users must list the domains their applications use and allow access only to those approved. This ensures your API data is being accessed browser-side only by the applications your API is aware of.&lt;/p&gt;
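&lt;p&gt;As a sketch of what “properly implementing” this might look like, here is an origin allow-list in plain JavaScript (the domain names are hypothetical, not from the article):&lt;/p&gt;

```javascript
// Hypothetical list of approved front-end origins.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://dashboard.example.com",
]);

// Returns the CORS headers to attach to a response, or null when the
// origin is not approved (the browser will then block the call).
function corsHeadersFor(origin) {
  if (!ALLOWED_ORIGINS.has(origin)) {
    return null;
  }
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Methods": "GET,POST",
    "Access-Control-Allow-Headers": "Authorization,Content-Type",
  };
}
```

&lt;p&gt;Echoing the approved origin back (rather than an asterisk) is what keeps the policy restricted to the applications your API knows about.&lt;/p&gt;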

&lt;h2&gt;
  
  
  Implement Authentication and Authorization
&lt;/h2&gt;

&lt;p&gt;When handling payroll data here at Citadel, it’s important to us to ensure whoever is accessing any of our API endpoints is authorized to do so. No API endpoint can be accessed without the proper identifiers and authentication keys. Not only do we know that whoever is accessing our endpoints is allowed to do so, we can also keep track of who is accessing those endpoints. In the case of a security breach, we can simply deauthorize keys and track down how the keys were compromised and by whom.&lt;/p&gt;

&lt;p&gt;So what if you have a public-facing web application that needs to make calls to your API and doesn’t have a login? This is one of the primary use cases at Citadel, and even then we’re covered. Clients who implement our Bridge into their front end application still need to make a back end call to obtain a token for the session their front end is using. This ensures we know any subsequent calls from that session are coming from a permitted application.&lt;/p&gt;

&lt;p&gt;So if you’re looking to allow users to access your APIs on the back end, make sure you have some sort of authentication in place. If your users need to use your APIs from the front end, implement a session token policy so you know the front end calls are still coming from an authorized source. Implementing authentication in your API is key to keeping the data your API provides secure and in the hands of only the systems that are authorized to use it.&lt;/p&gt;
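&lt;p&gt;A minimal sketch of key-based authentication along these lines (the key store, header format and names are our illustration, not Citadel’s implementation):&lt;/p&gt;

```javascript
// activeKeys is a Map of token to client ID; in production this would
// live in a database so keys can be deauthorized at any time.
function makeAuthMiddleware(activeKeys) {
  return function authenticate(req, res, next) {
    const header = req.headers["authorization"] || "";
    const token = header.replace("Bearer ", "");
    if (!activeKeys.has(token)) {
      return res.status(401).send("unauthorized");
    }
    // Record who made the call so access can be audited later.
    req.clientId = activeKeys.get(token);
    next();
  };
}
```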

&lt;h2&gt;
  
  
  Be mindful of what data your API returns
&lt;/h2&gt;

&lt;p&gt;I see it a lot in tech these days. Engineers will create an API endpoint that makes a call to a NoSQL database and dumps the JSON result straight into the HTTP response. While this is certainly easy and makes for small, clean code, it’s a very dangerous habit to get into. There could be API keys, encrypted or unencrypted values users shouldn’t be seeing, or database identifiers the user has no business knowing about. At Citadel we’ve implemented a layer between the database and the API ensuring that only the fields we want in our responses are sent. Not only does this make us mindful of what data is going out, it also future-proofs our security by ensuring that even if we add more fields to our database later on, they won’t accidentally leak out to the user.&lt;/p&gt;

&lt;p&gt;So the key point here is to make sure you have a data transformation layer that is mindful of exactly what fields it’s returning. It’s easy to simply throw back whatever data is in the database, but it’s also a security nightmare.&lt;/p&gt;
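&lt;p&gt;A transformation layer like the one described can be as small as an explicit field whitelist. A sketch (the field names are hypothetical):&lt;/p&gt;

```javascript
// Only fields listed here ever leave the API; new database columns
// stay private by default.
const PUBLIC_FIELDS = ["id", "employer", "job_title"];

function toPublicRecord(row) {
  const out = {};
  for (const field of PUBLIC_FIELDS) {
    if (field in row) {
      out[field] = row[field];
    }
  }
  return out;
}
```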

&lt;h2&gt;
  
  
  Keep systems and libraries up to date
&lt;/h2&gt;

&lt;p&gt;APIs are almost always built using third-party libraries and hosted on servers running operating systems and other software. All of these are opportunities for security vulnerabilities, and new vulnerabilities are discovered every day. Here at Citadel we’ve implemented procedures to keep up to date with all the firmware, software and libraries our APIs use, as new releases regularly ship fixes for security flaws and vulnerabilities. When developing an API, make sure you have your own processes in place to keep your OS, software and libraries updated with the latest versions and patches.&lt;/p&gt;

&lt;p&gt;Using open source libraries can also help here. With open source projects there’s an entire community of developers dedicated to making a project successful, which means many eyes watching for and fixing security concerns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Throttle your API
&lt;/h2&gt;

&lt;p&gt;One of the big concerns when it comes to APIs is that they make it very easy to access data quickly. It’s trivial to write a script that makes one API call after another in a fraction of a second, opening the door to all the data in an API being scraped in a very short amount of time. By throttling your API you limit the number of times a particular entity can make requests. Whether you throttle by IP address or by authentication key, it’s important to limit how many calls a system can make in quick succession to lower the possibility of your data being scraped by a script or bot.&lt;/p&gt;
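&lt;p&gt;Throttling by IP address or key can be sketched with a simple fixed-window counter (illustrative only; production systems typically back this with a shared store like Redis so limits hold across servers):&lt;/p&gt;

```javascript
// Returns an allow(key, now) function that permits at most maxCalls
// requests per key within each windowMs-long window.
function makeThrottle(maxCalls, windowMs) {
  const windows = new Map(); // key to { start, count }
  return function allow(key, now) {
    const w = windows.get(key);
    if (!w || now - w.start >= windowMs) {
      // First call, or the window expired: start a fresh window.
      windows.set(key, { start: now, count: 1 });
      return true;
    }
    w.count += 1;
    return !(w.count > maxCalls);
  };
}
```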

&lt;h2&gt;
  
  
  Monitor your API
&lt;/h2&gt;

&lt;p&gt;Implement logging with your API and track its usage. At Citadel we log as much data about the use of our API as we can and feed these logs into analytics software that allows us to closely monitor whether our APIs are being abused or there’s a potential security threat. By monitoring your API you’re not only able to be proactive in preventing security concerns, you’ll also gain valuable insight into how your users are interacting with your product and can make changes to improve their experience. This can be done manually or by using third-party services like AWS GuardDuty or Wallarm.&lt;/p&gt;
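&lt;p&gt;A sketch of the kind of structured log line that feeds nicely into analytics tooling (the field names are our assumption, not Citadel’s schema):&lt;/p&gt;

```javascript
// One JSON line per request keeps logs easy to aggregate and query.
function logRequest(entry) {
  return JSON.stringify({
    time: entry.time,
    method: entry.method,
    path: entry.path,
    status: entry.status,
    clientId: entry.clientId,
    durationMs: entry.durationMs,
  });
}
```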

&lt;p&gt;In a world where data is the most valuable commodity on the internet, users are more conscious than ever about how well companies are keeping their data secure. It’s more important now than ever to keep a constant eye on the security of your API and ensure you’re keeping your data and your users’ data safe.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Learn more about how Citadel’s APIs can make employment and income verification easy and affordable at &lt;a href="https://citadelid.com"&gt;https://citadelid.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Standardizing Data — Making data consistent with 30+ data sources</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Fri, 05 Mar 2021 21:38:29 +0000</pubDate>
      <link>https://dev.to/rjriel/standardizing-data-making-data-consistent-with-30-data-sources-1o3k</link>
      <guid>https://dev.to/rjriel/standardizing-data-making-data-consistent-with-30-data-sources-1o3k</guid>
      <description>&lt;p&gt;When you offer an employment/income verification product like we do at Citadel one of the important steps in delivering a top tier developer experience is ensuring high quality integration with multiple data sources, in our case payroll providers. There’s multiple ways we can accomplish this and in our opinion data standardization is the way to go.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is data standardization?
&lt;/h1&gt;

&lt;p&gt;So what is data standardization? By definition it’s the process of converting data to a common format to enable users to process and analyze it. In a normal setup this is a fairly straightforward process. Functions and checks are created to ensure that the data a user enters into the system conforms to certain formats and data is only permitted into the system when it matches those formats. This is important because once data is in the system in the right format it can be extracted with confidence knowing the data will all look the same.&lt;/p&gt;

&lt;p&gt;At Citadel, however, we faced new challenges, since we’re not able to impose rules on our multitude of data suppliers when the data is handed to us. We still got the job done, but it wasn’t easy.&lt;/p&gt;

&lt;h1&gt;
  
  
  What to think about when standardizing data
&lt;/h1&gt;

&lt;p&gt;When the decision was made to standardize the payroll data we receive from payroll providers, there were many processes to think of and many lessons we learned along the way. Here are the most important ones we figured out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aMR0WkFJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pg7z1ayt99359djlnbrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aMR0WkFJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pg7z1ayt99359djlnbrg.png" alt="sample  JSON data"&gt;&lt;/a&gt;&lt;/p&gt;
Just a small sample of data from the Citadel API



&lt;p&gt;&lt;strong&gt;Integrating with each provider is a largely manual process.&lt;/strong&gt; From the beginning you need a product manager and a software engineer to individually assess each payroll provider. There are different data formats (not everybody sends JSON, you see) and no two providers use the same field names. Because of this it’s very difficult to create a process that can be largely automated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s not just about mapping fields, but massaging data.&lt;/strong&gt; Not all data is free entry, and as engineers we know that enumerated data is much easier to work with. It’s consistent and predictable. Unfortunately this isn’t quite so easy when dealing with multiple data sources. Take pay frequency for example. While provider A may call it “bi-weekly”, provider B might say “every two weeks”. As a result we need to look at what each provider is passing through for the fields we enumerate and if needed create a translation between the data they provide and the values we store.&lt;/p&gt;
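&lt;p&gt;That translation step can be sketched as a simple lookup table (the provider strings and enum values here are illustrative, not Citadel’s actual mappings):&lt;/p&gt;

```javascript
// Map each provider-specific spelling onto one canonical enum value.
const PAY_FREQUENCY_MAP = {
  "bi-weekly": "biweekly",
  "every two weeks": "biweekly",
  "weekly": "weekly",
  "semi-monthly": "semimonthly",
};

function normalizePayFrequency(raw) {
  const key = String(raw).trim().toLowerCase();
  // Unknown values surface as null so they can be triaged and added to
  // the map, rather than silently passed through.
  return PAY_FREQUENCY_MAP[key] || null;
}
```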

&lt;blockquote&gt;
&lt;p&gt;Why even standardize the data?…We love the developer&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Not all providers are going to provide the data we store.&lt;/strong&gt; At Citadel we strive to give all the data needed to effectively verify employment and income through payroll providers because we believe payroll providers are the ultimate source of truth for this type of verification. We provide over 100 data points to allow accurate and efficient verification, but unfortunately not every provider provides all the data points we store. Some providers don’t give a basis of pay. Some providers don’t provide job titles. Some providers do provide job titles but some employers don’t provide that information. If the data comes from the provider we definitely capture it, but sometimes it’s just not available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All providers are going to provide more data than what we store.&lt;/strong&gt; With support for over 30 of the largest payroll providers in the US, covering over 85% of Americans who have a payroll provider, there are hundreds of different data points you couldn’t even imagine a payroll provider would provide. It’s important to distinguish between the data points that are important to store, those that are consistent between providers, and those that either aren’t necessary for a verification or are given by only a few providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Providing new fields means going back and investigating all providers.&lt;/strong&gt; Every week we get new field requests from developers and we’re more than happy to oblige. More data for the developer makes for better informed verifications. But when a new field request comes in that means we need to go back through each provider and map that data point.&lt;/p&gt;

&lt;h1&gt;
  
  
  So why even standardize data?
&lt;/h1&gt;

&lt;p&gt;With all the different things to think about above, some would ask why even standardize the data? Why not simply store the data as is and spit it out to the developer when they request it? The answer is actually simple.&lt;/p&gt;

&lt;p&gt;We love the developer. We want developers to get up and running with our APIs fast and we want them to be happy while working with Citadel. With Citadel APIs being built by developers for developers, we know that if we were to skip all the headaches above, our developer community would need to put their own time and effort into them, and we just don’t want to subject them to that.&lt;/p&gt;

&lt;p&gt;We’re happy to go back through each provider to map new fields because we know you won’t have to. We’re happy to massage the data for each enumerated field so that you developers can be confident the data we provide you is the way you expect it.&lt;/p&gt;

&lt;p&gt;Data standardization is not a straightforward process when dealing with multiple data providers, and integration takes real time and effort to create quality data developers can rely on. But we’re happy to do it to make your development with Citadel a piece of cake.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Learn more about how Citadel’s APIs can make employment and income verification easy and affordable at &lt;a href="https://citadelid.com"&gt;https://citadelid.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
Cover image provided by https://www.thebluediamondgallery.com/wooden-tile/d/data.html



</description>
      <category>fintech</category>
      <category>developer</category>
      <category>integration</category>
      <category>verification</category>
    </item>
    <item>
      <title>Ski Simulators, Qlik Core and Real-Time Analytics — a Qonnections Story</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Wed, 29 May 2019 17:56:44 +0000</pubDate>
      <link>https://dev.to/qlikbranch/ski-simulators-qlik-core-and-real-time-analytics-a-qonnections-story-2ld1</link>
      <guid>https://dev.to/qlikbranch/ski-simulators-qlik-core-and-real-time-analytics-a-qonnections-story-2ld1</guid>
      <description>&lt;h3&gt;
  
  
  Ski Simulators, Qlik Core and Real-Time Analytics — a Qonnections Story
&lt;/h3&gt;

&lt;p&gt;Qlik Core, React and a whole bunch of open source. Read about the fun I had developing an awesome app to go with some cool hardware.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_39VyV-D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALfWIDmRMp3N_FBmWW4TZSg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_39VyV-D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2ALfWIDmRMp3N_FBmWW4TZSg.jpeg" alt=""&gt;&lt;/a&gt;Myself on the super fun SkyTechSport Ski Simulator&lt;/p&gt;

&lt;p&gt;Another Qonnections has come and gone, and this year I got to be part of something really fun. Our keynote speaker for the conference was Lindsey Vonn, the US alpine ski racer with three Olympic medals and seven World Cup medals. Because of this, Qlik wanted to do something really cool, and &lt;a href="https://twitter.com/AdamMayerwrk"&gt;Adam Mayer&lt;/a&gt; — a Senior Manager here at Qlik for Technical Product Marketing — approached me to lead the development portion of this exciting project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GFW_uIfB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AiBSsTcq2enqv3VLMhU8YWQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GFW_uIfB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AiBSsTcq2enqv3VLMhU8YWQ.jpeg" alt=""&gt;&lt;/a&gt;Myself and Lindsey Vonn at Qonnections&lt;/p&gt;

&lt;p&gt;To get this job done Qlik teamed up with &lt;a href="http://www.skytechsport.com/"&gt;SkyTechSport&lt;/a&gt;, a badass company that makes killer equipment to help athletes stay on top of their game. The plan was simple: SkyTechSport would provide the super cool &lt;a href="http://www.skytechsport.com/alpine-simulator"&gt;Ski Simulator&lt;/a&gt; for our attendees to ride and the people to maintain it, do a bit of development on their end to get us access to the data points the simulator generates and we would build some awesome data visualization to go around it. Our implementation would include both a real-time in game dashboard as well as a post game leaderboard to track who was topping the list. All of this would encompass a charitable effort where Qlik would donate $1 to the Special Olympics for every gate that was passed in a successful run. I was to be in charge of the real-time app and the amazing &lt;a href="https://twitter.com/arturoQV"&gt;Arturo Munoz&lt;/a&gt; would handle the leaderboard. Some great development ahead for sure, but challenges immediately started to present themselves.&lt;/p&gt;

&lt;p&gt;Source Code for project: &lt;a href="https://github.com/Qlik-Branch/qonnections-ski-simulator"&gt;https://github.com/Qlik-Branch/qonnections-ski-simulator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first challenge that needed to be dealt with was how the simulator was passing the data. The simulator is a fast piece of equipment and the software behind it is built for the visual and physical feedback, so all the data happens in milliseconds. 30 milliseconds to be exact. So the simulator is saving the data to one file every 30 milliseconds. Over a network. And not just saving the data, overwriting the data. This brought up two concerns.&lt;/p&gt;




&lt;p&gt;First, we needed to make sure the network our systems were connected on wasn’t going to be bogged down by external influences. Simple enough: we just had a dedicated router with the systems hard-wired to it, and problem solved.&lt;/p&gt;

&lt;p&gt;The second concern required a little more thinking and some serious testing. We wanted to make sure we got all the data. That meant capturing every write within this 30 millisecond timeframe with no file lock issues. After a while of trying to figure out whether both writing and reading a file over a network within 30 milliseconds was even feasible, I decided on a solution that would simply eliminate our restriction: move the file. If we could move the file out of the way before the simulator had a chance to overwrite it, we could work with the data in our own time. The result was actually a really simple script that would just constantly try to move this file to a different folder, naming the file with a timestamp:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;First gate passed. YAY! The next thing to figure out was where the data was going and how it was going to get there. The answer? The awesome &lt;a href="https://core.qlik.com"&gt;Qlik Core&lt;/a&gt; mixed with R&amp;amp;D’s super cool command line tool &lt;a href="https://github.com/qlik-oss/corectl"&gt;corectl&lt;/a&gt;. With Docker Desktop installed on the system we used, I could write three files and have the entire back end set up. The first is the docker-compose.yml file that tells Docker which engine we want:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
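&lt;p&gt;The embedded gist is missing here; reconstructed from the description in the next paragraph, the compose file would look roughly like this (the image tag and paths are assumptions, not the original file):&lt;/p&gt;

```yaml
# Sketch of the docker-compose.yml described in the text.
version: "3.3"
services:
  qix-engine:
    image: qlikcore/engine:12.248.0   # tag illustrative
    command: -S AcceptEULA=yes -S DocumentDirectory=/docs
    ports:
      - "19076:9076"                  # local 19076 to engine's 9076
    volumes:
      - ./core-docs:/docs             # where Qlik apps are stored
      - ./data:/data                  # data we want to load
```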


&lt;p&gt;The above file tells Docker we want to use the latest (at the time of writing) qlikcore/engine image, accept the End User License Agreement, store our Qlik apps in a /docs directory (which is mounted to a local core-docs directory) and route the standard engine port 9076 to our local port 19076. We’re also mounting a local data directory for when we want to load data. Once we have this file we can run docker-compose up -d and Docker will have our engine running in no time.&lt;/p&gt;

&lt;p&gt;The second file we need is a file called corectl.yml which is leveraged by corectl:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
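&lt;p&gt;The gist is missing; a corectl.yml along the lines described would look roughly like this (values and exact field names are assumptions, check the corectl documentation for the current format):&lt;/p&gt;

```yaml
# Sketch of the corectl.yml described in the text.
engine: localhost:19076          # the engine exposed by docker-compose
app: ski-simulator.qvf           # name of the app to create
script: ./load-script.qvs        # path to the load script
connections:
  data:
    connectionstring: /data      # the mounted data folder
    type: folder
```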


&lt;p&gt;This file tells corectl everything it needs to know to create the Qlik app we want. It points to the engine, indicates the name of the app we want, a connection to the data folder we need and a path to the load script that will take in the data necessary.&lt;/p&gt;

&lt;p&gt;The final file necessary is our load script that we reference in the file above:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
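&lt;p&gt;The gist is missing; a load script using the ADD prefix might look roughly like this (field names and the format spec are hypothetical):&lt;/p&gt;

```
// Sketch of the load script. On a full reload the first block defines
// the table; the ADD prefix lets a partial reload append new rows
// without dropping what's already in the app.
SkiData:
LOAD * INLINE [
timestamp, speed, gate, status
];

ADD LOAD
    timestamp,
    speed,
    gate,
    status
FROM [lib://data/unprocessed/ski-data.csv]
(txt, utf8, embedded labels, delimiter is ',');
```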


&lt;p&gt;The key thing to note in the load script above is the ADD keyword in the second block. This allows us to leverage the engine’s partial data load feature, meaning we can load new data in quickly without losing the data already in the app, keeping our round trip from data load to front end output short. With the load script and the corectl file in place I could run corectl build and have our Qlik app up and ready to go.&lt;/p&gt;

&lt;p&gt;Now with the app up and the data being saved from oblivion, I turned to the script that would actually handle the simulator’s data. Using &lt;a href="https://github.com/qlik-oss/enigma.js"&gt;enigma.js&lt;/a&gt; for engine interaction, we first wanted to create a generic object for the attendee’s badge ID as well as the race ID. That way we could subscribe to the object and keep an eye on it to know when a badge was scanned:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
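&lt;p&gt;The gist is missing; creating such a generic object through an enigma.js doc handle might look like this sketch (property names are assumptions, and the doc handle is injected so the shape is clear):&lt;/p&gt;

```javascript
// In the real app the doc handle comes from enigma.js, roughly:
//   const session = enigma.create({ schema, url });
//   const global = await session.open();
//   const doc = await global.openDoc("ski-simulator.qvf");
// Here doc is passed in; createObject returns a promise in enigma.js.
function createBadgeObject(doc) {
  return doc.createObject({
    qInfo: { qId: "current-race", qType: "race-info" },
    badgeId: "",
    raceId: "",
  });
}
```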


&lt;p&gt;When a badge is scanned on the front end it updates this generic object and our script starts looking for new race files. Once the race has started, a simple loop loads any existing data files, saves the data to the /unprocessed/ski-data.csv file referenced in the load script and tells the engine to do a partial reload:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
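&lt;p&gt;A sketch of one iteration of that loop, with the file handling and doc handle injected so the logic stands alone (names are illustrative):&lt;/p&gt;

```javascript
// files: race files found this tick; appendToCsv: writes a file's rows
// into the CSV the load script reads; doc: the enigma.js doc handle.
function processRaceFiles(files, appendToCsv, doc) {
  if (files.length === 0) {
    return false; // nothing new this tick
  }
  for (const file of files) {
    appendToCsv(file);
  }
  // Engine API DoReload(mode, partial): partial=true triggers the ADD
  // blocks only, keeping the data already in the app.
  doc.doReload(0, true);
  return true;
}
```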


&lt;p&gt;Finally, we can look through the current data to see if a finishing status is found, and if so we clear out the generic object and stop looking for files:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
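&lt;p&gt;The finish check itself can be sketched like this (the row shape and status value are hypothetical):&lt;/p&gt;

```javascript
// Returns true once any row in the latest data reports a finish,
// at which point the caller clears the generic object and stops
// watching for race files.
function raceFinished(rows) {
  return rows.some((row) => row.status === "FINISH");
}
```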


&lt;p&gt;Once we have our data loading script running and waiting, it’s time to get the front end in place. This front end ended up being a React app designed by Arturo, built by myself and incorporates &lt;a href="https://github.com/qlik-oss/enigma.js"&gt;enigma.js&lt;/a&gt;, &lt;a href="https://d3js.org/"&gt;d3.js&lt;/a&gt;, &lt;a href="https://picassojs.com/"&gt;picasso.js&lt;/a&gt; and &lt;a href="https://www.qlik.com/us/products/qlik-geoanalytics"&gt;Qlik GeoAnalytics&lt;/a&gt;. There’s a bunch of parts involved in it, but the important bits are that we set the generic object when a badge is scanned and create some hypercubes that will update when the partial reload happens.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;With all the pieces put together it was time to do some serious testing. The upside to the way the simulator saves data is that it was incredibly easy to simulate: I just needed to write a new file every 30 milliseconds and watch all the scripts do the rest.&lt;/p&gt;

&lt;p&gt;The one concern I had through the whole thing was speed. This was meant to be an in-game dashboard, meaning it had to update quickly and there were a lot of moving parts. The simulator saves the data, the rename script moves the data, the data load script reads and writes the data, the engine reloads the data, recalculates the data to send down to the front end and sends it, then the front end re-renders with the new data. I wasn’t expecting to be blown away by the entire round trip taking under 400 milliseconds! With metrics in place to measure how long the engine was taking, we had 200 millisecond partial reloads happening within that time too. It’s exciting to see Qlik’s engine put to the test in a real-time use case and come out shining.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/FE29gTd3aVc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;In the end we had a great attraction in the Expo that showed off the awesome power of Qlik and Qlik Core. We raised a significant donation for the Special Olympics and generated a ton of excitement throughout the week.&lt;/p&gt;

&lt;p&gt;I wanted to give a big shout out to everybody I worked with both developing and staffing the booth. &lt;a href="https://www.linkedin.com/in/abbottkatherine/"&gt;Katie Abbott&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/michaelmarolda/"&gt;Mike Marolda&lt;/a&gt; killed it with logistics and helping day of, Adam Mayer was fantastic with all the organization and Arturo Munoz was a design wizard, thanks to all for making this such a success.&lt;/p&gt;




</description>
      <category>docker</category>
      <category>d3js</category>
      <category>qlikcore</category>
      <category>react</category>
    </item>
    <item>
      <title>Qlik Core for Developers: Lessons Learned in Workshop Creation</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Tue, 12 Mar 2019 15:29:57 +0000</pubDate>
      <link>https://dev.to/qlikbranch/qlik-core-for-developers-lessons-learned-in-workshop-creation-hl0</link>
      <guid>https://dev.to/qlikbranch/qlik-core-for-developers-lessons-learned-in-workshop-creation-hl0</guid>
      <description>&lt;p&gt;Another &lt;a href="https://forwardjs.com/" rel="noopener noreferrer"&gt;ForwardJS&lt;/a&gt; has come and gone in San Francisco and as usual, I had a blast while I was there. This time around, I was representing &lt;a href="https://www.qlik.com" rel="noopener noreferrer"&gt;Qlik&lt;/a&gt;with a sponsored workshop, so it was my first crack at getting &lt;a href="https://core.qlik.com" rel="noopener noreferrer"&gt;Qlik Core&lt;/a&gt; into the hands of fresh-faced developers new to the engine — and here’s some things I learned.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AJKkXJyQtdjzHJQNpT-Ab3g.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AJKkXJyQtdjzHJQNpT-Ab3g.jpeg"&gt;&lt;/a&gt;An example of data retrieval, presented at ForwardJS San Francisco, January 2019&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Don’t make assumptions about what tech your developers may know&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://core.qlik.com/" rel="noopener noreferrer"&gt;Qlik Core&lt;/a&gt; is a containerized version of our engine, so we have a Docker image on &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt; for it.&lt;/p&gt;

&lt;p&gt;I made the silly mistake of assuming that developers would know about &lt;a href="https://bit.ly/fjs-docker" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; and the whole concept of containerization, when half the developers at the workshop were unaware of this awesome technology.&lt;/p&gt;

&lt;p&gt;When building out a workshop you should always allow time in the beginning to explain the technologies you’re working with and assume the developer has no knowledge of what those technologies are or how they work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AHSRSgjMbB9y5o46X_sjYHA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AHSRSgjMbB9y5o46X_sjYHA.jpeg"&gt;&lt;/a&gt;Qlik Core — Dockerized Engine&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Test your workshop setup on multiple operating systems&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To show my workshop attendees the app files that were generated when creating a new app in &lt;a href="https://www.qlik.com" rel="noopener noreferrer"&gt;Qlik&lt;/a&gt;, I created a volume for the Docker image that was a directory in the main project folder.&lt;/p&gt;

&lt;p&gt;While this setup worked perfectly on macOS, the same Dockerfile caused problems for my attendees who used Windows, particularly the ones who were using Docker for the first time.&lt;/p&gt;

&lt;p&gt;There was some sort of permissions or sharing issue that left other attendees sitting around while I tried to help these developers get their Docker setup working correctly.&lt;/p&gt;

&lt;p&gt;If I had run through the workshop on a Windows machine, I may have been able to pick up on this beforehand instead of wasting valuable time. Always make sure to test your setup in multiple environments so there aren’t any surprises.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Give your attendees examples of what they’re working towards&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;One of the things I did that I feel was key to a smooth-running workshop was having a codebase for each of the steps in my workshop. The code for my workshop project sits in a &lt;a href="https://github.com/rjriel/forward-workshop" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; repository and I have a separate branch for each learning point.&lt;/p&gt;

&lt;p&gt;There will inevitably be points in your workshop where people have issues getting the project to do what they need it to and you’ll have to move on. Without codebases to get them to the next point, those attendees will be separated from the pack and won’t have a good experience.&lt;/p&gt;

&lt;p&gt;Creating branches will allow users to simply update their project to the next step so they can keep up with the rest of the class.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A8nZCMYBQ2ikeyTv3ZMIVgQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2A8nZCMYBQ2ikeyTv3ZMIVgQ.jpeg"&gt;&lt;/a&gt;An example of set up infrastructure&lt;/p&gt;

&lt;h3&gt;
  
  
  Simplicity is key, advanced topics are for the back pocket
&lt;/h3&gt;

&lt;p&gt;With a powerful engine such as &lt;a href="https://core.qlik.com/" rel="noopener noreferrer"&gt;Qlik Core&lt;/a&gt; there’s so much that can be done, but the last thing developers want to delve into on day one with a piece of tech is the nitty-gritty stuff. Luckily the basics of &lt;a href="https://core.qlik.com/" rel="noopener noreferrer"&gt;Qlik Core&lt;/a&gt; are super simple and easy to pick up, with a bunch of tools created in-house to make developers’ lives easier.&lt;/p&gt;

&lt;p&gt;When building out a workshop to introduce developers to a technology, don’t even bother bringing up the complicated stuff. Trust that the developers who want to know the really advanced stuff will ask for it, and that’s a great thing to leave for the end or to talk about after the workshop.&lt;/p&gt;

&lt;p&gt;Get the basics, build the foundation, and the developer will let you know when they want to start adding the bells and whistles.&lt;/p&gt;




</description>
      <category>devrel</category>
      <category>qlik</category>
      <category>github</category>
      <category>workshops</category>
    </item>
    <item>
      <title>QlikHacks 2018 — An Ottawa Hackathon</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Tue, 02 Oct 2018 20:11:20 +0000</pubDate>
      <link>https://dev.to/qlikbranch/qlikhacks-2018an-ottawa-hackathon-3gbl</link>
      <guid>https://dev.to/qlikbranch/qlikhacks-2018an-ottawa-hackathon-3gbl</guid>
      <description>&lt;h3&gt;
  
  
  QlikHacks 2018 — An Ottawa Hackathon
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vjWQWOgE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AWjbY1YbR-h4zDVZk9qsfaQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vjWQWOgE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AWjbY1YbR-h4zDVZk9qsfaQ.png" alt=""&gt;&lt;/a&gt;Me at the deCODE Hackathon in Ottawa&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Hey, I saw you do a talk at &lt;a href="https://www.meetup.com/Ottawa-JavaScript/events/dwlbtlywpblb/"&gt;Ottawa JS&lt;/a&gt;. You work for that Queue-Lick company, right?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is pretty standard for me in Ottawa and I’m definitely not the only one. &lt;a href="http://qlik.com"&gt;Qlik&lt;/a&gt; has a ton of great employees working in Kanata but very few people in Ottawa know who Qlik is, what Qlik does or even that it’s pronounced “click.”&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We wanted to do something to fix that while connecting the Ottawa developer and designer community AND making the world a better place.&lt;/p&gt;

&lt;p&gt;Our solution?&lt;/p&gt;

&lt;p&gt;A HACKATHON at our Qlik Ottawa office!!! From Friday, October 19th to Sunday, October 21st we are hosting developers and designers of Ottawa for a hackathon that’ll be a ton of fun. 🎉 And rest assured when we say developers and designers, we mean ALL developers and designers are invited.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Regardless of your skill set, whether you’re a student or a professional, we’d like you to get involved. You don’t need to know Qlik or &lt;a href="https://developer.qlik.com"&gt;Qlik Branch&lt;/a&gt;, you don’t need to know JavaScript, you can come in with any skills and have a great time.&lt;/p&gt;

&lt;p&gt;I won’t go into too much detail right now, but at Qlik we take our &lt;a href="https://www.qlik.com/us/company/social-responsibility"&gt;Corporate Social Responsibility&lt;/a&gt; (CSR) very seriously. Because of this, we’re partnering with the United Nations to get the most out of this hackathon and make the 🌍 a better place.&lt;/p&gt;

&lt;h3&gt;
  
  
  Are you in? We hope so. Here’s the schedule of events…
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Friday, October 19
&lt;/h4&gt;

&lt;p&gt;We’ll kick off the hackathon in the evening with a fun meet and greet. We’ll have food, give a couple of talks on who we are and what to expect during the hackathon, then music and mingling so you can meet your fellow hackathon participants and potential teammates.&lt;/p&gt;

&lt;h4&gt;
  
  
  Saturday, October 20
&lt;/h4&gt;

&lt;p&gt;At 9 AM, we’ll begin with a workshop detailing some necessities and then the hard work begins. Breakfast, lunch, dinner, snacks, refreshments and some entertainment will all be provided by us. We’ll give you the whole day and will have mentors on hand to help you with any challenges you may come across. We’ll close up shop around 11 PM to give you time to head home and rest up for the next day.&lt;/p&gt;

&lt;h4&gt;
  
  
  Sunday, October 21
&lt;/h4&gt;

&lt;p&gt;Starting at 9 AM, you’ll have the opportunity to finish up your work before presenting your results and letting the judges decide which team will take home the prize. Yup, that’s right, we even have a prize. 🏆&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We’re super excited to be doing this and hope you’re excited to join. To sign up for this awesome event, please visit &lt;a href="http://bit.ly/QlikHack18"&gt;http://bit.ly/QlikHack18&lt;/a&gt;. If you have any questions at all you can contact me at &lt;a href="mailto:rie@qlik.com"&gt;rie@qlik.com&lt;/a&gt;. See you there!&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>programming</category>
      <category>developer</category>
      <category>javascript</category>
      <category>ottawa</category>
    </item>
    <item>
      <title>RHoK: Hacking Towards a Better Ottawa</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Fri, 13 Jul 2018 12:15:07 +0000</pubDate>
      <link>https://dev.to/qlikbranch/rhok-hacking-towards-a-better-ottawa-23fe</link>
      <guid>https://dev.to/qlikbranch/rhok-hacking-towards-a-better-ottawa-23fe</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F992%2F1%2Ar2sbyEzCFKnoQMPM4C8EGg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F992%2F1%2Ar2sbyEzCFKnoQMPM4C8EGg.jpeg"&gt;&lt;/a&gt;Photo Credit: &lt;a href="https://rhok.ca/events/rhok-8/" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="https://rhok.ca/events/rhok-8/" rel="noopener noreferrer"&gt;https://rhok.ca/events/rhok-8/&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Hardly a week goes by where I get to think of how awesome my job is.
&lt;/h3&gt;

&lt;p&gt;The second weekend of April had me back at the Adobe conference room in Ottawa where, just a week before, &lt;a href="https://forwardjs.com/ottawa" rel="noopener noreferrer"&gt;ForwardJS Ottawa&lt;/a&gt; brought twenty awesome talks to the nation’s capital.&lt;/p&gt;

&lt;p&gt;This time, it was for &lt;a href="http://rhok.ca" rel="noopener noreferrer"&gt;Random Hacks of Kindness&lt;/a&gt; (RHoK), a twice-a-year hackathon where organizations submit projects in need and participants apply their skills to hack out a solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2APLs9Qo2BgLTfOfKKlZv3Ig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2APLs9Qo2BgLTfOfKKlZv3Ig.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rhok.ca/projects/project-1-growing-futures-hydroponic-monitoring-system/" rel="noopener noreferrer"&gt;&lt;strong&gt;Project one&lt;/strong&gt;&lt;/a&gt; involved &lt;a href="https://www.growingfutures.ca/" rel="noopener noreferrer"&gt;Growing Futures&lt;/a&gt;, an organization committed to bettering the next generation by using hydroponics to teach our youth about good food and good business. They were seeking a way to remotely manage the growing system of hydroponic stacks around the city. The resulting solution was a fantastic mix of hardware, software and data visualization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rhok.ca/projects/project-2-rideau-rockcliffe-crc-gifts-in-kind/" rel="noopener noreferrer"&gt;&lt;strong&gt;Project two&lt;/strong&gt;&lt;/a&gt; was submitted by the &lt;a href="https://crcrr.org/en/programs/gifts-in-kind-program" rel="noopener noreferrer"&gt;Gifts in Kind&lt;/a&gt; program, which takes on the massive and highly important task of connecting donors with gift in-kind donations to non-profit organizations in Ottawa. With a limited operating budget, the program needs help capturing usage data of its donations and a smarter way to let donors and recipients connect to the program. The RHoK team for this project did a great job laying the groundwork to an intuitive system that will connect donors and recipients to the program with greater ease.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rhok.ca/projects/project-3-code-my-robot-twitch-integration/" rel="noopener noreferrer"&gt;&lt;strong&gt;Project three&lt;/strong&gt;&lt;/a&gt; saw &lt;a href="http://codemyrobot.ca/" rel="noopener noreferrer"&gt;codemyrobot.ca&lt;/a&gt; seeking help. An awesome program that provides free robots to school libraries while giving students access to some robot coding fun, the current submission process for their robot challenges is highly manual and server intensive. The team tasked with fixing the problem came up with a great solution to not only automate the submission process, but also keep the safety and privacy of students as top priority while ensuring the system could scale with the barrage of videos coming their way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rhok.ca/projects/project-4-isisters-technology-learning-platform/" rel="noopener noreferrer"&gt;&lt;strong&gt;Project four&lt;/strong&gt;&lt;/a&gt; was submitted to help &lt;a href="https://isisters.org/" rel="noopener noreferrer"&gt;iSisters&lt;/a&gt; educate women in need. The iSisters organization was created to mentor women in need at no cost to them. Unfortunately the technology its currently built on is outdated and needs major modernization. Their team did a great job of revamping the site and giving it a crisp facelift and more intuitive UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rhok.ca/projects/project-5-fairvote-ca-canvasing-tool/" rel="noopener noreferrer"&gt;&lt;strong&gt;Project five&lt;/strong&gt;&lt;/a&gt; was about an undertaking by &lt;a href="https://www.fairvote.ca/" rel="noopener noreferrer"&gt;Fair Vote Canada&lt;/a&gt; to try and make every Canadians vote more equal. The organization was looking for software to allow its canvassers an easier time tracking and managing canvassing areas. The resulting project will surely help Fair Vote Canada take great leaps towards achieving its goal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rhok.ca/projects/project-6-carlington-community-health-centre/" rel="noopener noreferrer"&gt;&lt;strong&gt;The final project&lt;/strong&gt;&lt;/a&gt; (which I was a part of) was submitted by the &lt;a href="http://www.carlington.ochc.org/" rel="noopener noreferrer"&gt;Carlington Community Health Centre&lt;/a&gt; to breathe life into an Ottawa Bad Date List that would help the sex-trade workers in the city have a safer working environment. The organization was looking for an app that could be used by phone or desktop that would allow workers to not only see vital information to clients that were posing harm to the trade, but submit their own information quickly to ensure the safety of their fellow workers. The end product in the hackathon was a great step towards achieving that goal.&lt;/p&gt;

&lt;p&gt;The final presentations of the hackathon were live-streamed &lt;a href="https://rhok.ca/hackathon-finale-and-project-presentations/" rel="noopener noreferrer"&gt;here&lt;/a&gt; (audio was missing for the first 13 minutes but they caught all the presentations), but the amazing thing that comes out of this is that these projects never seem to end when the weekend does.&lt;/p&gt;

&lt;h3&gt;
  
  
  Participants of the RHoK hackathons seem to have a habit of continuing on to make sure these projects reach a conclusion and it’ll be exciting to see the results of all the hard work in the real world.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F592%2F1%2A9-Rn3nnJIBhoZlr7ejKy9g.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F592%2F1%2A9-Rn3nnJIBhoZlr7ejKy9g.jpeg"&gt;&lt;/a&gt;Photo Credit: &lt;a href="https://rhok.ca/events/rhok-8/" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="https://rhok.ca/events/rhok-8/" rel="noopener noreferrer"&gt;https://rhok.ca/events/rhok-8/&lt;/a&gt;&lt;/p&gt;




</description>
      <category>hackathon</category>
    </item>
    <item>
      <title>The ForwardJS Battle — Part 2: Ottawa</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Fri, 01 Jun 2018 12:56:32 +0000</pubDate>
      <link>https://dev.to/qlikbranch/the-forwardjs-battle--part-2-ottawa-5h8m</link>
      <guid>https://dev.to/qlikbranch/the-forwardjs-battle--part-2-ottawa-5h8m</guid>
      <description>

&lt;p&gt;With &lt;a href="https://dev.to/qlikbranch/the-forwardjs-battle--part-1-san-fran-4icf-temp-slug-6327752"&gt;ForwardJS San Francisco&lt;/a&gt; behind me, I turned my sights on bringing Forward home with me to &lt;a href="https://forwardjs.com/ottawa"&gt;Ottawa&lt;/a&gt; for the second year in a row.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iJNtCUjy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A1QhTaYQnuOywTusj5758sg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iJNtCUjy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2A1QhTaYQnuOywTusj5758sg.jpeg" alt=""&gt;&lt;/a&gt;Here I am taking ForwardJS Ottawa attendees through Halyard.js.&lt;/p&gt;

&lt;p&gt;ForwardJS Ottawa in 2017 was a cool, calm introduction to the Canadian capital. But this year, we wanted to go bigger…and better.&lt;/p&gt;

&lt;p&gt;The only way to do that was to pack the conference with great workshops and two days of stellar speakers, and boy did we deliver.&lt;/p&gt;

&lt;p&gt;Day 1 saw ForwardJS jump out the gate in full force. With &lt;a href="https://medium.com/u/fadbcd01c7a3"&gt;Andy Mockler&lt;/a&gt; and &lt;a href="https://twitter.com/kristencodes"&gt;Kristin Spencer&lt;/a&gt; starting the show schooling us on soft skills, the day was filled with awesome talks like &lt;a href="https://medium.com/u/624aec3174db"&gt;Adam Daw&lt;/a&gt; touting the &lt;a href="https://medium.com/coventure/js-minus-js-the-future-of-the-javascript-community-is-better-through-transpilation-b980d59eaa93"&gt;power of transpilation&lt;/a&gt;, &lt;a href="https://medium.com/u/6d5313b8bef9"&gt;Jan C. Liz-Fonts&lt;/a&gt; showing how to harness &lt;a href="https://forwardjs.com/ottawa/schedule#lecture-403"&gt;Blockchain in Javascript&lt;/a&gt; and &lt;a href="https://twitter.com/ThisIsMaryCodes"&gt;Mary Snow&lt;/a&gt; decoding the &lt;a href="https://forwardjs.com/ottawa/schedule#lecture-387"&gt;NodeJS Event Loop&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That first day brought so much great knowledge — so the speakers on day 2 really had their work cut out for them. And they were more than up to the task. The day saw &lt;a href="https://twitter.com/VossJenn"&gt;Jenn Voss&lt;/a&gt; bring us &lt;a href="https://forwardjs.com/ottawa/schedule#lecture-391"&gt;Tales from the QA Crypt&lt;/a&gt;, &lt;a href="https://medium.com/u/cf9894fccad7"&gt;Eric Adamski&lt;/a&gt;’s amazing energy with &lt;a href="https://forwardjs.com/ottawa/schedule#lecture-402"&gt;Rx and Async&lt;/a&gt;, &lt;a href="http://kscoult"&gt;Ksenia Coulter&lt;/a&gt; teaching us to &lt;a href="https://forwardjs.com/ottawa/schedule#lecture-399"&gt;Get the Most out of Code Reviews&lt;/a&gt; and &lt;a href="https://twitter.com/_briantavares"&gt;Brian Tavares&lt;/a&gt; closing out with a fantastic &lt;a href="https://forwardjs.com/ottawa/schedule#lecture-388"&gt;React Native&lt;/a&gt; talk.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ee2sr1Jf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AtycsPdCz-1xigrlA5WwMCw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ee2sr1Jf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AtycsPdCz-1xigrlA5WwMCw.jpeg" alt=""&gt;&lt;/a&gt;ForwardJS donuts, anyone?&lt;/p&gt;

&lt;p&gt;I’m more than willing to admit bias and won’t deny it could be due in part to my involvement in organizing or the fact it was in my hometown, but in the battle of San Francisco vs. Ottawa I’d say the “north of the border” contender took the ForwardJS crown this year.&lt;/p&gt;





</description>
      <category>datavisualization</category>
      <category>node</category>
      <category>reactnative</category>
      <category>blockchain</category>
    </item>
    <item>
      <title>The ForwardJS Battle — Part 1: San Fran</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Fri, 11 May 2018 12:54:45 +0000</pubDate>
      <link>https://dev.to/qlikbranch/the-forwardjs-battle--part-1-san-fran-1910</link>
      <guid>https://dev.to/qlikbranch/the-forwardjs-battle--part-1-san-fran-1910</guid>
      <description>&lt;p&gt;February of 2018 saw me leaving the cold winter of Ottawa to bathe in the mild warmth of San Francisco. This wasn’t a vacation. I was tasked with donning the Qlik cap and representing our developer relations team with a talk. With five days full of workshops — I couldn’t just fly in, speak and fly out. I had to take advantage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F301%2F1%2AFJXu3AAyzpu7qLm1-nbJxw%402x.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F301%2F1%2AFJXu3AAyzpu7qLm1-nbJxw%402x.jpeg"&gt;&lt;/a&gt;Donning my Qlik cap while wear a ForwardJS hat&lt;/p&gt;

&lt;p&gt;Day 1 of ForwardJS had me digging into WebAPIs with &lt;a href="https://medium.com/u/c2e120a1a32" rel="noopener noreferrer"&gt;Aysegul Yonet&lt;/a&gt;. A tiny class with fewer than ten attendees meant we could get a lot done quickly and have some hacking fun of our own. While Aysegul covered the basics of a bunch of the different options now available to us devs in the browser, the most fun part was coding a script to see exactly how much of my hard drive IndexedDB would let a site eat up. It’s not very often a workshop has me rooting for a script to win against my machine.&lt;/p&gt;

&lt;p&gt;Day 2 for me was the conference itself, which meant a less technical talk on the downfall of this year’s Ottawa Senators entitled “&lt;a href="https://forwardcourses.com/lectures/346" rel="noopener noreferrer"&gt;My Hockey Team Sucks&lt;/a&gt;” and a day of chilling at the Qlik table with &lt;a href="https://medium.com/u/edeece4a7db5" rel="noopener noreferrer"&gt;Ana Nennig&lt;/a&gt;, pimping our awesome Branch swag and talking about the awesome power of the QIX engine.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-963582133112520704-383" src="https://platform.twitter.com/embed/Tweet.html?id=963582133112520704"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;Day 3 brought me &lt;a href="https://medium.com/u/69ba9847d710" rel="noopener noreferrer"&gt;Brian Holt&lt;/a&gt; and his React + Redux workshop. I was really impressed with how Brian started with the basics, coding each element with React.createElement to see how the process works. A little while later (after getting over that icky feeling in my stomach that came from putting HTML in Javascript) we were introduced to the wonderful world of Redux, reducers and Redux middleware. This was a great workshop that Brian has brought around the world (which he tracks in the repo), so if it’s coming to a conference near you I suggest grabbing a seat.&lt;/p&gt;

&lt;p&gt;The last day of the conference had me sitting in on Webpack 101 with Freddy Rangel. Starting with the basics of initialization and running all the way through tree-shaking, Freddy did a great job of helping me demystify one of those things I usually leave to the Angular CLI to figure out. It’s also always nice when a teacher introduces a new teaching technique, and Freddy’s approach of having a separate GitHub branch for each module made keeping pace a breeze for everyone in the workshop.&lt;/p&gt;

&lt;p&gt;Overall I’d say the San Fran ForwardJS was a pretty great success and I knew I’d have to work extra hard with &lt;a href="https://medium.com/u/150969363672" rel="noopener noreferrer"&gt;Dave Nugent&lt;/a&gt; to make sure its Canadian counterpart lived up to the Forward reputation.&lt;/p&gt;




</description>
      <category>javascript</category>
      <category>react</category>
      <category>angular</category>
      <category>webapi</category>
    </item>
    <item>
      <title>deCODE Hackathon + Qlik = Awesome Results</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Fri, 10 Nov 2017 14:39:59 +0000</pubDate>
      <link>https://dev.to/qlikbranch/decode-hackathon--qlik--awesome-results-53lo</link>
      <guid>https://dev.to/qlikbranch/decode-hackathon--qlik--awesome-results-53lo</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F945%2F1%2AOkHNHWwwPnn9sYBruCdJKQ.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F945%2F1%2AOkHNHWwwPnn9sYBruCdJKQ.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second weekend of October myself and a band of &lt;a href="https://www.qlik.com/us/" rel="noopener noreferrer"&gt;Qlik&lt;/a&gt; developers teamed up with some of the top students in Ottawa for this year’s &lt;a href="http://hackdecode.io/" rel="noopener noreferrer"&gt;deCODE Hackathon&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Challenge
&lt;/h4&gt;

&lt;p&gt;But what challenge should our team tackle? After some discussion with Qlik’s corporate social responsibility (CSR) team, we decided the coolest thing for our team to work on would be the &lt;a href="https://oceanconference.un.org/commitments/#visual" rel="noopener noreferrer"&gt;Data Competition&lt;/a&gt; that the United Nations had at &lt;a href="https://oceanconference.un.org/" rel="noopener noreferrer"&gt;The Oceans Conference&lt;/a&gt; in June. As part of this competition, “participants were encouraged to use their imagination to produce visualizations that highlight insights from all the 1,380 voluntary commitments available in The Ocean Conference Registry of Voluntary Commitments.” The overall goal? &lt;strong&gt;#SAVEOUROCEAN&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Work
&lt;/h4&gt;

&lt;p&gt;With our challenge selected, it was time to get to work. We gave the students access to the data with &lt;a href="http://playground.qlik.com" rel="noopener noreferrer"&gt;Qlik Playground&lt;/a&gt;, taught them &lt;a href="http://help.qlik.com/en-US/sense-developer/3.0/Content/APIs-and-SDKs.htm" rel="noopener noreferrer"&gt;Qlik’s APIs&lt;/a&gt; and let them dive in…&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;And boy did they impress us.&lt;/strong&gt; The first thing I was impressed by was how quickly the students not only learned the Qlik APIs, but realized how powerful they are.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When they learned that they didn’t have to bother with keeping the state of their selections, but simply make a selection call and let the Qlik Associative Engine handle all the data updating — they were blown away. They definitely put that power to use…and quickly in true hackathon style.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Awesome Results
&lt;/h4&gt;

&lt;p&gt;Over the span of just 22 hours the team brought together an awesome page that not only showed off the data, but allowed the individual user to find the answers themselves by filtering based on country, company, target — pretty much whatever they wanted.&lt;/p&gt;

&lt;p&gt;This project even recently made the United Nations’ &lt;a href="https://oceanconference.un.org/OceanAction/2" rel="noopener noreferrer"&gt;Ocean Action Newsletter&lt;/a&gt; (under Updates from Voluntary Commitments). We’re so incredibly proud of this team, especially the hardworking and eager students. This is well-deserved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So where is this awesome visualization?&lt;/strong&gt; Check out the &lt;a href="http://playground.qlik.com/showcase" rel="noopener noreferrer"&gt;Qlik Playground Showcase&lt;/a&gt; and view the “UN Our Oceans Challenge” project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And how can you help us with this important initiative?&lt;/strong&gt; If you want to get involved or play with the code yourself, the project is open sourced on &lt;a href="https://github.com/QlikHackathon/decode-oct-2017" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; as well.&lt;/p&gt;

&lt;p&gt;Enjoy! And let us know what you think in the comments.&lt;/p&gt;




</description>
      <category>data</category>
      <category>datascience</category>
      <category>socialresponsibilit</category>
      <category>hackathons</category>
    </item>
    <item>
      <title>From Custom Blogs to Medium: Our Transition (Part 2)</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Wed, 08 Mar 2017 14:39:12 +0000</pubDate>
      <link>https://dev.to/qlikbranch/from-custom-blogs-to-medium-our-transition-part-2-3dej</link>
      <guid>https://dev.to/qlikbranch/from-custom-blogs-to-medium-our-transition-part-2-3dej</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AIDHG41ZqotgLUpwMj6gqpA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F1%2AIDHG41ZqotgLUpwMj6gqpA.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It didn’t take long to realize that it wasn’t just the APIs, but the RSS feed itself was limited. After publishing a bunch of stories to our publication, we realized that stories were being removed from the Sense search. Would you like to guess why? The RSS feed ONLY GIVES THE LATEST 10 STORIES!!! There’s a lesson in load testing for you.&lt;/p&gt;

&lt;p&gt;I was now stuck with two problems:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Problem 1: How to properly keep the app up to date&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I decided to take another look at possible solutions. My brain wouldn’t stop iterating “there’s got to be a better way than this.”&lt;/p&gt;

&lt;p&gt;After doing some more digging and searching, I learned about &lt;a href="https://en.wikipedia.org/wiki/PubSubHubbub" rel="noopener noreferrer"&gt;PubSubHubbub&lt;/a&gt;. Yes, that’s really the name. And you would assume that is some sort of company or product name, but it’s actually a protocol that’s pretty similar to WebHooks.&lt;/p&gt;

&lt;p&gt;Basically, you subscribe to a PubSubHubbub feed by giving the hub a callback URL. The hub then sends a verification request to that callback URL (per the spec, a GET carrying a hub.challenge value that your endpoint echoes back) to confirm the subscription.&lt;/p&gt;

&lt;p&gt;Once this handshake is done, the hub will notify the callback URL any time new content is published. Sounds perfect!&lt;/p&gt;

&lt;p&gt;This time, however, I was skeptical. The APIs didn’t help. RSS was limiting. Surely something was going to be wrong. My suspicion proved correct after a test run of the process: the notification from Superfeedr (Medium’s PubSubHubbub hub) didn’t actually contain the content of the article, making it useless for us.&lt;/p&gt;

&lt;p&gt;Having learned this, I decided the best thing to do was to stick with the RSS feed at this point and simply &lt;a href="https://github.com/Qlik-Branch/branch-resource-library/blob/67f075a487d2751406cb8a3990230a54284943e1/feed-pull.js" rel="noopener noreferrer"&gt;remove the code that deleted stories from the db&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem 2: How to get the missing stories back
&lt;/h3&gt;

&lt;p&gt;Since the RSS only shows the last 10 stories and the PubSubHubbub method is for future stories, I was left with two possible options. The first was to manually enter the information into the database, which as a coder would go against every impulse in my brain.&lt;/p&gt;

&lt;p&gt;So the only option left was writing a script to download the data through HTTP requests. That involved doing the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find out where Medium lists stories for a publication (&lt;a href="https://medium.com/" rel="noopener noreferrer"&gt;https://medium.com/&lt;/a&gt; followed by the publication’s slug)&lt;/li&gt;
&lt;li&gt;Analyze the HTML to determine how to pull out the link for each story&lt;/li&gt;
&lt;li&gt;Analyze the HTML of the story page to determine how to best get the data from it&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Steps 1 and 2 were pretty easy by themselves, and Medium was very helpful with the third, providing most of the data we needed in meta tags. As for the content itself, it was simply a matter of finding the right div. With all this information, I just needed to find an npm package to help me parse the HTML (thank you &lt;a href="https://www.npmjs.com/package/cheerio" rel="noopener noreferrer"&gt;cheerio&lt;/a&gt;) and write the &lt;a href="https://gist.github.com/rjriel/8ec8cbfb0b87f1d5c65989d2e675873b" rel="noopener noreferrer"&gt;script&lt;/a&gt;.&lt;/p&gt;
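&lt;p&gt;As a rough illustration of step 3, here’s a dependency-free sketch that pulls a value out of an Open Graph meta tag with a regex. In the real script, cheerio’s selectors do this far more robustly; the tag names and sample markup below are just for illustration.&lt;/p&gt;

```javascript
// Pull the content attribute of a meta tag by its property name.
// A real scraper should prefer a proper parser like cheerio; a regex
// only works here because the sample markup is simple and well-formed.
function extractMeta(html, property) {
  const pattern = new RegExp(
    'meta[^>]*property="' + property + '"[^>]*content="([^"]*)"'
  );
  const match = html.match(pattern);
  return match ? match[1] : null;
}

// Sample page head ("\u003c" is the escaped form of a left angle bracket).
// og:title and og:url are standard Open Graph tags that Medium pages emit.
const page =
  '\u003cmeta property="og:title" content="Our Transition" /\u003e' +
  '\u003cmeta property="og:url" content="https://medium.com/some-story" /\u003e';

console.log(extractMeta(page, "og:title")); // "Our Transition"
```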

&lt;h3&gt;
  
  
  So now we’re done, right?
&lt;/h3&gt;

&lt;p&gt;We could be, or we could probably make things better. There still isn’t any code to remove stories that have been deleted on Medium, and there’s probably a way to use PubSubHubbub for a push-based subscription instead of running a script every once in a while.&lt;/p&gt;

&lt;p&gt;If you’d like to have some fun, I encourage you to fork the &lt;a href="https://github.com/Qlik-Branch/branch-resource-library" rel="noopener noreferrer"&gt;repo&lt;/a&gt;, try to solve these problems and submit a pull request if you do.&lt;/p&gt;




</description>
      <category>superfeedr</category>
      <category>media</category>
      <category>webhooks</category>
      <category>html</category>
    </item>
    <item>
      <title>From Custom Blogs to Medium: Our Transition (Part 1)</title>
      <dc:creator>Rey Riel</dc:creator>
      <pubDate>Thu, 23 Feb 2017 20:20:11 +0000</pubDate>
      <link>https://dev.to/qlikbranch/from-custom-blogs-to-medium-our-transition-part-1-4kgp</link>
      <guid>https://dev.to/qlikbranch/from-custom-blogs-to-medium-our-transition-part-1-4kgp</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hzWtY2xv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AIDHG41ZqotgLUpwMj6gqpA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hzWtY2xv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AIDHG41ZqotgLUpwMj6gqpA.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I first signed on to the Dev Relations team at Qlik and began working on our main site &lt;a href="http://branch.qlik.com"&gt;Qlik Branch&lt;/a&gt;, the site had its own custom-built blogs section. While this seemed easy enough for us to start with, there were a few problems with it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Exposure:&lt;/strong&gt; Our blogs were only available on our site, so the world only found out about them through existing users or links we posted elsewhere.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Third Party Tools:&lt;/strong&gt; To allow formatting in our blogs, we needed a third-party content editor. When I came in we were using &lt;a href="http://summernote.org/"&gt;Summernote&lt;/a&gt;, then we tried moving to markdown with &lt;a href="https://simplemde.com/"&gt;SimpleMDE&lt;/a&gt;. While both options were fairly robust, any bugs we found became our problem and customization was a pain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintenance:&lt;/strong&gt; We were in charge of storing the blogs as well as the process for moderating them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After much back and forth within the team and discussion with others, we decided that Medium would be the way to go. While this would solve the problems above, we still had one major hurdle to figure out.&lt;/p&gt;

&lt;p&gt;While we were going to use Medium to host our blogs (hereafter referred to by Medium’s term, “stories”), we still wanted to use the &lt;a href="https://help.qlik.com/en-US/sense-developer/3.2/Subsystems/EngineAPI/Content/Classes/AppClass/App-class-SearchAssociations-method.htm"&gt;Search&lt;/a&gt; capabilities available in the &lt;a href="https://help.qlik.com/en-US/sense-developer/3.0/Subsystems/EngineAPI/Content/introducing-engine-API.htm"&gt;Qlik Engine API&lt;/a&gt;. This meant that any time a story was published, we needed to pull all its information (including the story’s content) so that the Engine could load it into our Sense app.&lt;/p&gt;

&lt;p&gt;The first solution I looked into was the Medium API. As a developer this seemed like the obvious choice, but I quickly ran into a big problem: the Medium API is fairly limited and doesn’t let you retrieve either a list of stories for a publication or the information for an individual story. Without those abilities, the API was pretty much a no-go for me.&lt;/p&gt;

&lt;p&gt;Now I needed to find any source that could list a publication’s stories. After tons of searching, the only solution I could find was good ole RSS. Each publication has an RSS feed of its stories, and when you set the “RSS Feed” setting to &lt;strong&gt;Full&lt;/strong&gt; on a given publication, you get the full content of each story as well. So I wrote a &lt;a href="https://github.com/Qlik-Branch/branch-resource-library/blob/1dbf848215506a0c6be4f02e2cb9fb8dd54d3a8d/feed-pull.js"&gt;quick script&lt;/a&gt; that pings the RSS feed every X minutes, compares each story in the feed against a quick checksum to see if anything changed, and clears out stories that are no longer in the feed (in case any were removed for some reason). Problem solved!!!!!&lt;/p&gt;

&lt;p&gt;Actually, it wasn’t. More to come on that.&lt;/p&gt;




</description>
      <category>qlik</category>
      <category>api</category>
      <category>rss</category>
      <category>media</category>
    </item>
  </channel>
</rss>
