<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: leibole</title>
    <description>The latest articles on DEV Community by leibole (@leibole).</description>
    <link>https://dev.to/leibole</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F103370%2F6b4da674-9358-4c4a-aa23-a5225688735e.png</url>
      <title>DEV Community: leibole</title>
      <link>https://dev.to/leibole</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/leibole"/>
    <language>en</language>
    <item>
      <title>Evaluating Supabase</title>
      <dc:creator>leibole</dc:creator>
      <pubDate>Wed, 21 Oct 2020 08:23:36 +0000</pubDate>
      <link>https://dev.to/leibole/evaluating-supabase-kl4</link>
      <guid>https://dev.to/leibole/evaluating-supabase-kl4</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I'm a freelance web developer working on various projects regularly. I use Firebase a lot when I need to get up and running quickly. I recently heard about "Supabase - the open source Firebase" (in a great podcast episode) and thought I'd try it to see how it lives up to that promise. Tl;dr: in a lot of ways it's already better :)&lt;br&gt;
The nickname "open source Firebase" does it a bit of an injustice. Unlike Firebase, Supabase is built on a SQL database - PostgreSQL (with all the pros and cons that implies). Supabase also offers a great hosted version of its open source project, with a capable management UI, real-time capabilities, and a ready-made JavaScript client library (more clients to come).&lt;/p&gt;

&lt;h1&gt;
  
  
  My Use Case
&lt;/h1&gt;

&lt;p&gt;I tested Supabase with one of my existing projects (built on Firebase's Firestore). It is software for managing zoos, used to keep track of all the animals in a given zoo. The main entities in the DB are "Animals" and "Events". An animal can have many events, and each event is reported for exactly one animal.&lt;br&gt;
The total scale of the project is not big, but each zoo holds quite a lot of data: there are hundreds of zoos in the system, and each zoo can have thousands of animals and tens of thousands of events.&lt;/p&gt;

&lt;h1&gt;
  
  
  Supabase Evaluation
&lt;/h1&gt;

&lt;p&gt;To test Supabase, I focused on a few important criteria: setup, project integration and the management UI. Here are my conclusions on each:&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Setting up a Supabase database is very quick and easy. I got a database up and running in about 5 minutes, along with auto-generated docs containing the details of the new project. It took a few more minutes to create my two tables (animals and events) from the UI and configure their schemas.&lt;/p&gt;
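For reference, a similar schema could also be defined in plain SQL. This is only an illustrative sketch - the column names here are hypothetical, not the ones from my project:

```sql
-- Hypothetical schema sketch; column names are illustrative only.
create table animals (
  id bigint generated by default as identity primary key,
  name text not null,
  species text
);

create table events (
  id bigint generated by default as identity primary key,
  animal_id bigint references animals (id) not null,
  event_type text,
  reported_at timestamptz default now()
);
```

The `references` constraint encodes the one-to-many relationship described above: each event belongs to exactly one animal.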

&lt;h2&gt;
  
  
  Integration
&lt;/h2&gt;

&lt;p&gt;The integration into my existing project was really easy - I copied it directly from the generated docs. It looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;supabaseUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://edvkppzqwycrasvjykbo.supabase.co&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;supabaseKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;&amp;lt;LONG_KEY&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;supabaseUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;supabaseKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;
     &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;events&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
     &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;*&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Unlike most of the projects I work on, this code "just worked" on the first attempt, which was a pleasant surprise. Supabase still has some way to go in terms of authentication - the key used here is only suitable for server environments - but the integration is easy.&lt;/p&gt;

&lt;h2&gt;
  
  
  User Interface
&lt;/h2&gt;

&lt;p&gt;The user interface offered by Supabase is very useful. A user can create, edit and manage tables directly from the UI, as well as run queries and see results. It is still quite glitchy - I ran into a number of bugs in just my short usage. Nonetheless, its capabilities already go far beyond those of the Firebase or Firestore consoles.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Supabase UI
&lt;/h2&gt;

&lt;h1&gt;
  
  
  Performance Evaluation
&lt;/h1&gt;

&lt;p&gt;The main reason I went looking for a Firebase alternative is its lacking performance. I'm sometimes querying for thousands of records at once, which can take a few seconds in Firebase. This hurts the user experience and forces compromises in the UI I implement, to keep these performance issues from showing.&lt;br&gt;
I tested the performance in a few stages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migrating the data
&lt;/h2&gt;

&lt;p&gt;I chose a single zoo and transferred its data, using a script that reads the data from Firebase and writes it to Supabase.&lt;br&gt;
All it took to write 31,666 rows of data to Supabase was this one line (I wrote a few more lines of code for preparing the data):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

   &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;animals&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;insert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;animalsToWrite&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
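The preparation code mostly reshaped Firestore document snapshots into flat row objects. A minimal sketch of that step, with hypothetical field names (toRow and the example fields are mine, not from the actual project):

```javascript
// Hypothetical sketch of the data-preparation step: flatten a Firestore
// document snapshot into a plain row object for Supabase.
const toRow = (doc) => ({
  firebase_id: doc.id, // keep the original document id for traceability
  ...doc.data(),       // copy the document fields as row columns
});

// Example with a fake snapshot-like object:
const fakeDoc = { id: "abc123", data: () => ({ name: "Rex", species: "wolf" }) };
const row = toRow(fakeDoc);
```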

&lt;p&gt;It also ran super fast - around 10-15 seconds for the write to complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing reads in Supabase and Firebase
&lt;/h2&gt;

&lt;p&gt;Here is the code for reading the events rows from Firebase:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;readFirebaseData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;events&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;
   &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zoos&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;example&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;events&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;timeEnd&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And the result: 28 seconds to read 16,753 documents from Firebase:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fb9tswhi3xcw4igqcrq87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fb9tswhi3xcw4igqcrq87.png" alt="Screen Shot 2020-10-21 at 11.20.23"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similarly, the code for testing Supabase was:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;readData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;events&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;events&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;*&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
   &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;timeEnd&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And the result: a whopping 31,666 rows read in 1.5 seconds:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fojy408k3qhwvx7d3bapq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fojy408k3qhwvx7d3bapq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Bonus: Easy BI Integration
&lt;/h1&gt;

&lt;p&gt;As part of my project, the web application is connected to a BI system - Google Data Studio. To use it with Firebase, I first need to copy the entire DB into a structured warehouse like BigQuery, so I run a process once a day that copies all the Firebase data into BigQuery.&lt;br&gt;
With Supabase, copying the data is not needed. Supabase provides each project with a dedicated Postgres DB, whose URL is easily found in the management UI. I just passed this URL to the BI system and voilà - it is connected to a fully-featured BI tool.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Downsides
&lt;/h1&gt;

&lt;p&gt;Like anything, Supabase has its downsides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The real-time functionality cannot yet be used securely from client code.&lt;/li&gt;
&lt;li&gt;Authentication still has a way to go before it's usable from the client.&lt;/li&gt;
&lt;li&gt;The UI is very glitchy and raw. I found numerous annoying bugs in about half an hour of use, and had to connect with my local psql client to work around them.&lt;/li&gt;
&lt;li&gt;The pricing is free for now, which seems odd. I worry that once I reach larger amounts of data I might be limited, or that they will start charging large sums once I'm seriously locked in.&lt;/li&gt;
&lt;li&gt;I didn't see a parallel to Firebase Functions, where I could extend the app's functionality with custom serverless code, triggered by events from the Firebase database.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Conclusions
&lt;/h1&gt;

&lt;p&gt;Supabase looks very promising. An easy-to-set-up PostgreSQL database with great client libraries seems like a no-brainer. The performance is great, and the ease of use is as good as it gets.&lt;br&gt;
Nonetheless, the product is still in alpha, and it shows. I will wait a couple of months for some of the issues to be sorted out; after that, I will definitely attempt to migrate my app to Supabase. &lt;br&gt;
The performance gains alone could be had by moving to any standard managed Postgres DB, but combined with the ease of use Supabase offers, it tips the scales for me.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>postgres</category>
    </item>
    <item>
      <title>React (injected) Chrome extension</title>
      <dc:creator>leibole</dc:creator>
      <pubDate>Sat, 18 Apr 2020 15:04:54 +0000</pubDate>
      <link>https://dev.to/leibole/react-injected-chrome-extension-2bj0</link>
      <guid>https://dev.to/leibole/react-injected-chrome-extension-2bj0</guid>
      <description>&lt;h1&gt;
  
  
  Intro
&lt;/h1&gt;

&lt;p&gt;In this post, I'll quickly run through some very useful information on how to inject a React app into an existing web page. I've used it to extend a specific web app that had no other way of being extended, but it can be useful for many other scenarios. There's also a double bonus: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I'll show how to run the extension in a dev environment.&lt;/li&gt;
&lt;li&gt;We'll see how to auto-reload the extension once the code is changed.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Stage 1: Create React App
&lt;/h1&gt;

&lt;p&gt;It seems like every React how-to tutorial starts with this line, and so does this one. Create a new React app using Create React App. I created mine with TypeScript enabled:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npx create-react-app my-app --template typescript&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Now we have a basic React app with the default content. Let's replace the contents of App.tsx with the most basic content to inject:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import React from 'react';

const App = () =&amp;gt; {
  return &amp;lt;div&amp;gt;Some injected content&amp;lt;/div&amp;gt;
}

export default App;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Stage 2: Extension manifest file
&lt;/h1&gt;

&lt;p&gt;Each extension needs a manifest file (see &lt;a href="https://developer.chrome.com/extensions/manifest"&gt;extension manifest file&lt;/a&gt;). Our file should be located in the public folder, and should look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "Extension name",
  "version": "1.0",
  "manifest_version": 2,
  "browser_action": {
    "default_popup": "index.html"
  },
  "content_security_policy": "script-src 'self' 'sha256-&amp;lt;the extension hash&amp;gt;'; object-src 'self'",
  "background": { "scripts": ["hot-reload.js"] },
  "content_scripts": [
    {
      "matches": ["&amp;lt;all_urls&amp;gt;"],
      "css": ["/static/css/main.css"],
      "js": ["/static/js/main.js"]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Stage 3: Eject Create React App
&lt;/h1&gt;

&lt;p&gt;I always prefer to avoid ejecting a Create React App (CRA) project, but in this case we have to: we want the output files to always be named main.js and main.css, avoiding the random hash that CRA adds to file names by default. So let's run&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm run eject&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;We need to edit the webpack.config.js file and remove the "chunkhash" from the output file names, for both main.js and main.css.&lt;br&gt;
We can now run&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;npm run build&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 and get the built files output. One thing is still missing: the actual injection code.&lt;/p&gt;
&lt;h1&gt;
  
  
  Stage 4: Injecting the React App
&lt;/h1&gt;

&lt;p&gt;Now usually in a normal React App, we'll create a&lt;br&gt;
&lt;br&gt;
 &lt;code&gt;&amp;lt;div id="root"&amp;gt;&amp;lt;/div&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;br&gt;
 inside the index.html file, and then call&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ReactDOM.render(&amp;lt;App /&amp;gt;, document.getElementById("root"));&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;to insert the app.&lt;br&gt;
The injection code is very similar: we choose where to inject the app (for example - the body element), and append a div to it with the id "root":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const app = document.createElement("div");
app.id = "root";
document.body.append(app);
ReactDOM.render(
  &amp;lt;App /&amp;gt;,
  document.getElementById("root")
);
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And just like that, the React App is appended to the body.&lt;/p&gt;

&lt;h1&gt;
  
  
  Bonus #1: Run in Dev mode
&lt;/h1&gt;

&lt;p&gt;The basic usage of the app is now as an injected div. But that means whenever we make a change we have to reload the extension, and even worse, the code is the built code - uglified, minified and unreadable. What can we do in development?&lt;br&gt;
Just have the app run as a normal React app: include the usual root element in the index.html file, and in index.tsx check whether the environment is development; if so, attach the React app to the existing root element:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (process.env.NODE_ENV === "development") {
  ReactDOM.render(
    &amp;lt;App /&amp;gt;,
    document.getElementById("root")
  );
} else {
  const app = document.createElement("div");
  app.id = "root";
  document.body.append(app);
  ReactDOM.render(
    &amp;lt;App /&amp;gt;,
    document.getElementById("root")
  );
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;
  
  
  Bonus #2: Auto reload the extension on file changes
&lt;/h1&gt;

&lt;p&gt;To avoid having to manually reload the extension on each rebuild, I've used a hot-reload code snippet I found in this repo &lt;a href="https://github.com/xpl/crx-hotreload"&gt;https://github.com/xpl/crx-hotreload&lt;/a&gt;. Just copy the file hot-reload.js from this repo into your public folder, and include this line in the manifest file (already included in the example manifest file above):&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;"background": { "scripts": ["hot-reload.js"] }&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;The file from the repo needs a small change to work well with the React ecosystem: a short timeout (~10 seconds) needs to be added before reloading the extension, to allow for the build to complete.&lt;/p&gt;
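For example, the reloading call in hot-reload.js can be wrapped in a timeout along these lines (scheduleReload is my name for this sketch; the actual function in the repo differs):

```javascript
// Sketch: delay the extension reload so the CRA build can finish first.
// chrome.runtime.reload() is the Chrome extensions API call; wrapping it
// in a timeout is the change described above.
const RELOAD_DELAY_MS = 10000;

const scheduleReload = () => {
  setTimeout(() => chrome.runtime.reload(), RELOAD_DELAY_MS);
};
```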

&lt;p&gt;Good luck, you're welcome to comment if you have any questions.&lt;/p&gt;

</description>
      <category>react</category>
      <category>chromeextension</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Working with BigQuery Analytic Functions</title>
      <dc:creator>leibole</dc:creator>
      <pubDate>Mon, 21 Oct 2019 08:05:10 +0000</pubDate>
      <link>https://dev.to/leibole/working-with-bigquery-analytic-functions-4ip4</link>
      <guid>https://dev.to/leibole/working-with-bigquery-analytic-functions-4ip4</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. Introduction&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Most of the day-to-day database users can make do with the standard database functionality. Want to get all the friends of a certain user? Just join the users table with the friends table and there you have it (btw — it’s never that simple). If you’re a more advanced user, you might use aggregations and ordering.&lt;/p&gt;
&lt;p&gt;However, sometimes you can’t get the insights you want with just joins and aggregations. When I faced such a problem, my first instinct was to load all the data into a simple python/javascript/ruby/(favorite dynamic programming language) and calculate it there. But it meant loading a big amount of data into the script, while losing the parallel computation power BigQuery has to offer. Also, it meant more moving parts in the system and more deployment details.&lt;/p&gt;
&lt;p&gt;That’s when a simple google search (I think it was “BigQuery calculate row based on other rows”) led me to BigQuery Analytic Functions (the concept is not specific to BigQuery).&lt;/p&gt;
&lt;p&gt;In this post, I will describe what these functions are, what was the problem I was trying to solve, how I solved it with Analytic functions and my conclusions from the process.&lt;/p&gt;
&lt;p&gt;Ok then…&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. The problem at hand (simplified)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The dataset I was working with is local rain data. It basically has three columns: date, location and amount. Each row represents the amount of rain (in mm) that poured on each date in a certain location.&lt;/p&gt;
&lt;p&gt;What I wanted to extract was an aggregation by “rain events” in a location. A “rain event” is defined as one or more consecutive days of rain. Once there’s a rain-free day, the event is over. That means I would like to aggregate together consecutive rainy days and get the average and total rain for each “rain event”.&lt;/p&gt;
&lt;p&gt;Let’s take a look at a quick example. The dataset:&lt;/p&gt;
&lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--9jws8FqY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AYYpAvRiujTcco5N5q0G91g.png"&gt;Original rain days table&lt;p&gt;We can see three different rain events (consecutive days of rain):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2019–01–01 to 2019–01–03 in New York&lt;/li&gt;
&lt;li&gt;2019–03–31 to 2019–04–01 in New York&lt;/li&gt;
&lt;li&gt;2019–05–25 to 2019–05–27 in LA&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And the resulting table I wished to get from the input dataset should look something like:&lt;/p&gt;
&lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--7f7qzGB2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2AkaYEpqPyJcb9AjPlp4w5Bg.png"&gt;Required results table&lt;p&gt;How can this be calculated with an SQL query? Grouping by month or week won’t work. As far as I saw, the best choice was to use Analytic Functions.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. Analytic functions: description and syntax&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Analytic functions perform operations on groups of rows, same as aggregations (GROUP BY and the likes). There are two major differences:&lt;/p&gt;
&lt;p&gt;a. Analytic functions return a single value for each of the input rows.&lt;/p&gt;
&lt;p&gt;b. The group of rows for the operation is defined using a window.&lt;/p&gt;
&lt;p&gt;Analytic functions are added as part of the SELECT clause, and with them, one has to specify three parameters:&lt;/p&gt;
&lt;p&gt;a. The partition (breaks input table into different groups).&lt;/p&gt;
&lt;p&gt;b. The order (orders the rows in each partition); Order influences the operation’s result, as we’ll see later.&lt;/p&gt;
&lt;p&gt;c. The window — each row is operated on with a group of other rows, which are defined by the window parameter.&lt;/p&gt;
&lt;p&gt;Let’s have a look at a simple example and explain the syntax:&lt;/p&gt;
&lt;blockquote&gt;SELECT&lt;br&gt;SUM(amount) OVER (PARTITION BY location ORDER BY date ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)&lt;br&gt;FROM rain_data&lt;/blockquote&gt;
&lt;p&gt;The partition and order parts are pretty clear, just mention the column names like ordinary GROUP BY and ORDER BY inputs.&lt;/p&gt;
&lt;p&gt;The window part needs some explaining: it states that for each row, the operation should look at a window which includes the row itself, one row before it (&lt;em&gt;1 PRECEDING&lt;/em&gt;), and one row after it (&lt;em&gt;1 FOLLOWING&lt;/em&gt;). So the operation calculates the sum of three rows, but it does that for every row in the input table and gives the fitting result for each row.&lt;/p&gt;
&lt;p&gt;The flow (leaving out the order part) is depicted in the following chart:&lt;/p&gt;
&lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--xJF-TVNX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/0%2ACNVBLvu4Jn-jsh46"&gt;&lt;p&gt;More information on the syntax can be found in the BigQuery documentation (it’s not that bad): &lt;a href="https://cloud.google.com/bigquery/docs/reference/standard-sql/analytic-function-concepts"&gt;&lt;/a&gt;&lt;a href="https://cloud.google.com/bigquery/docs/reference/standard-sql/analytic-function-concepts"&gt;&lt;/a&gt;&lt;a href="https://cloud.google.com/bigquery/docs/reference/standard-sql/analytic-function-concepts"&gt;https://cloud.google.com/bigquery/docs/reference/standard-sql/analytic-function-concepts&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The example query did not solve my problem. What did solve my problem? On to the next section.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4. Solving the problem&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;So how can Analytic functions help? In my case, I wanted to partition by location, order by date, and use a window of two rows.&lt;/p&gt;
&lt;p&gt;Why two rows? For each row we would calculate the date difference between the current row and the previous one. If the difference is larger than a single day, then it belongs to a different rain event.&lt;/p&gt;
&lt;p&gt;Lucky for me, BigQuery has a built-in Analytic function for working with the previous row: the LAG function (for some reason it is called a navigation function). The LAG function references the previous row, according to a provided Partition and Order (a window is not necessary!).&lt;/p&gt;
&lt;p&gt;So let’s see what the initial query looks like:&lt;/p&gt;
&lt;blockquote&gt;&lt;em&gt;SELECT * FROM (&lt;br&gt;SELECT *,&lt;br&gt;DATE_DIFF(date, LAG(date) OVER(PARTITION BY location ORDER BY date), day) AS days_from_last_rain&lt;br&gt;FROM &lt;code&gt;rain_example&lt;/code&gt;&lt;br&gt;) AS t&lt;br&gt;WHERE&lt;br&gt;days_from_last_rain &amp;gt; 1&lt;br&gt;OR days_from_last_rain IS NULL&lt;/em&gt;&lt;/blockquote&gt;
&lt;p&gt;The result of this query (on the example table above):&lt;/p&gt;
&lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--snhVUx3E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/746/0%2A4Uhh6rHss9D6_sXQ"&gt;&lt;p&gt;This is the set of rainy days that were the first in a rain event. We got it by selecting the diff between the date of the previous row and the date of the current row. We then filtered all those with a diff of 1 day, as those have preceding rainy days.&lt;/p&gt;
&lt;p&gt;Now we can connect each rainy day from the original dataset to its rain event, and aggregate each group to get the average and total rain.&lt;/p&gt;
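That final step can be sketched with one more analytic function: flagging event starts and summing the flags cumulatively assigns an event id to every row, which we can then GROUP BY. This is a sketch in the spirit of the queries above, not the exact query I ran:

```sql
-- Sketch: assign an event id by cumulatively counting event starts,
-- then aggregate per event.
WITH flagged AS (
  SELECT *,
    CASE
      WHEN DATE_DIFF(date, LAG(date) OVER (PARTITION BY location ORDER BY date), day) = 1
      THEN 0 ELSE 1
    END AS is_event_start
  FROM rain_example
)
SELECT location,
  MIN(date) AS event_start,
  MAX(date) AS event_end,
  AVG(amount) AS avg_rain,
  SUM(amount) AS total_rain
FROM (
  SELECT *,
    SUM(is_event_start) OVER (PARTITION BY location ORDER BY date) AS event_id
  FROM flagged
)
GROUP BY location, event_id
```

Each resulting group corresponds to one rain event, matching the required results table above.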
&lt;p&gt;&lt;strong&gt;5. Conclusions&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;There’s a bunch of ‘hidden’ functionality when you dive into the BigQuery documentation (or any other database for that matter). So next time before you load data into a script and start hacking, give the documentation a look (or just google it).&lt;/p&gt;
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SZ_Wf4e3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://medium.com/_/stat%3Fevent%3Dpost.clientViewed%26referrerSource%3Dfull_rss%26postId%3Dad1d9f0cba" width="1" height="1"&gt;

</description>
      <category>bigquery</category>
      <category>bigdata</category>
      <category>database</category>
      <category>sql</category>
    </item>
    <item>
      <title>Repost medium article</title>
      <dc:creator>leibole</dc:creator>
      <pubDate>Fri, 21 Sep 2018 22:08:18 +0000</pubDate>
      <link>https://dev.to/leibole/repost-medium-article-4a39</link>
      <guid>https://dev.to/leibole/repost-medium-article-4a39</guid>
      <description>&lt;p&gt;How do I repost an article I wrote on medium to dev.to?&lt;/p&gt;

</description>
      <category>help</category>
    </item>
    <item>
      <title>Real world data processing with Google Cloud Platform</title>
      <dc:creator>leibole</dc:creator>
      <pubDate>Fri, 21 Sep 2018 22:00:25 +0000</pubDate>
      <link>https://dev.to/leibole/real-world-data-processing-with-google-cloud-platform-4go6</link>
      <guid>https://dev.to/leibole/real-world-data-processing-with-google-cloud-platform-4go6</guid>
      <description>

&lt;h4&gt;How I built a data pipeline from an end user’s legacy software to a modern cloud infrastructure using Google Cloud Platform and very little money&lt;/h4&gt;

&lt;h3&gt;Introduction&lt;/h3&gt;

&lt;p&gt;My name is Ido, and I’ve been in the tech industry for quite a while. My latest project is aimed at bringing some modern cloud and data technology into the world of farming. I started with a simple enough goal: deliver the data from present-day (outdated) farm management software into the cloud, where we can conduct advanced analysis and help farmers better manage their farms.&lt;/p&gt;

&lt;p&gt;I am on a very low budget, without outside investors or any spare money. Also, since it’s only me working on this project, I need to keep every solution as simple as possible and avoid wasting time on unnecessary optimisations/features.&lt;/p&gt;

&lt;p&gt;To accommodate these requirements I used Google Cloud Platform’s set of tools. I’ve found that they have a good approach to data processing and analysis. Moreover, they really believe in managed services, which should save me a lot of time. They do have some shortcomings, which I’ll also go into detail about.&lt;/p&gt;

&lt;p&gt;On to the real stuff…&lt;/p&gt;

&lt;h3&gt;High Level Description&lt;/h3&gt;

&lt;p&gt;This is what we set out to do in the project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data files are collected on each farm’s local computer. The computers are connected to the internet and the files are stored locally in .csv format.&lt;/li&gt;
&lt;li&gt;Send these files to the cloud.&lt;/li&gt;
&lt;li&gt;Write these files’ contents into a cloud based database, after (deep) parsing and cleansing.&lt;/li&gt;
&lt;li&gt;Visualise and display the aggregated data on a mobile-friendly web page.&lt;/li&gt;
&lt;li&gt;Make the data accessible for further analysis and whatever data science magic we may want to apply to it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Architecture&lt;/h3&gt;

&lt;p&gt;I’ll start with a diagram that describes the full flow the data makes from farm to its end goal (be it dashboards or data science stuff). Following it I’ll describe each stage and how it’s implemented on Google Cloud:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9jhVa5yR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/517/1%2AEqCQM-_GgAEQ5cBrsrWJ7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9jhVa5yR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/517/1%2AEqCQM-_GgAEQ5cBrsrWJ7g.png" alt=""&gt;&lt;/a&gt;Data pipeline architecture diagram&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Local computer to cloud storage: local .csv files are uploaded to Google Cloud Storage, a normal cloud object storage. Done once a day, and fairly simple.&lt;/li&gt;
&lt;li&gt;Pub/Sub job: a Pub/Sub topic gets a job for each new Google Storage object uploaded. It’s very easy to configure with Google’s CLI tool: you just specify a path prefix for queueing, as well as an HTTP endpoint that will be called each time a new file is created. And of course Google Pub/Sub itself is managed and infinitely scalable.&lt;br&gt;
I did have a couple of (self-inflicted) messes with it: on my first use, I didn’t specify a path prefix, and my triggered HTTP endpoint copied the uploaded file to a new location. This of course triggered a new job, and so on, causing an infinite loop. Oops.
Another thing is that you have to make sure to return status 200 OK to the Pub/Sub service, otherwise it will keep bombarding you with requests. It’s part of that ‘deliver at least once’ promise you receive from the service.&lt;/li&gt;
&lt;li&gt;App Engine: this is where I implemented the HTTP endpoint. It copies each new data file to another cloud storage bucket according to its type (a few types of .csv files can be uploaded) and source (which farm it came from). I used Python for the implementation.&lt;br&gt;
I found App Engine to be an interesting solution for simple work. It’s easier to set up than real servers, but it has a lot of pitfalls that, if you’re not careful, can (and did) cost a lot of time. I’ll go over some quick points:

&lt;ul&gt;
&lt;li&gt;There are two App Engine environments: Standard and Flexible. Basically, standard is like a sandbox version of the programming language you’re using, but it’s supposed to be easier to deploy (it’s not really easier). The biggest difference I saw: the flexible environment is much more expensive than the standard one.
&lt;/li&gt;
&lt;li&gt;When using the standard environment to interact with other Google Cloud services while running your program locally, the standard environment works with ‘mock’ services on your local machine. When I worked with Google’s Datastore service (a managed cloud DB), I expected the changes from my local environment to be reflected in the cloud DB, but instead they were reflected in a local mock that was hard to find.
&lt;/li&gt;
&lt;li&gt;Logging, versioning and a lot of other things work right out of the box, which I have to say is very convenient.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Data parse and cleanse: I did this with a new and cool tool called Google Dataprep (by Trifacta). It’s very useful and simple for parsing and cleaning data; I’ll go over it in the next section (Recommended Tools). It has a native write mechanism to Google BigQuery, which is next in the pipeline.&lt;/li&gt;
&lt;li&gt;Write to cloud DB: I used Google’s BigQuery as my cloud DB. It’s a really awesome tool, and provides very fast data queries. See more in the recommended tools section.&lt;/li&gt;
&lt;li&gt;Visualise the data: I’ve used Google’s Data Studio for connecting to BigQuery and visualising the data. It is a fairly new BI tool. It’s very lean compared to classic BI tools (Tableau, QlikView etc.), but also very clean and intuitive. It’s still in beta, and that definitely shows, with many minor bugs and annoying pitfalls.
Being a simple tool is also good: it is very easy to learn and get by with, and it looks very good and clear. It also has native, direct connectors to BigQuery and integrates very well with it. The data is displayed fast, and is very easy to configure.&lt;/li&gt;
&lt;li&gt;Research the data: I haven’t done much of this, but it’s a very small step to make. Google offers managed Jupyter notebooks that are directly connected to BigQuery, for researching the data and creating production machine learning models.&lt;/li&gt;
&lt;/ol&gt;
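Both Pub/Sub pitfalls above (the self-triggered infinite loop and the mandatory 200 OK acknowledgement) come down to a handler shaped roughly like this. A minimal, framework-free sketch; the `incoming/` prefix and the `process` callback are made up for illustration:

```python
def handle_storage_notification(object_name, process, watched_prefix="incoming/"):
    """Handle a Cloud Storage -> Pub/Sub push notification.

    Returns the HTTP status to answer with. Anything other than a
    success status makes Pub/Sub redeliver the message ('deliver at
    least once'), so we acknowledge with 200 even for files we ignore.
    """
    # Only process objects under the watched prefix. This is what broke
    # my infinite loop: files the handler itself writes also trigger
    # notifications, but they fall outside the prefix and are ignored.
    if object_name.startswith(watched_prefix):
        process(object_name)  # e.g. copy the file by type and source farm
    return 200
```

Wiring this into an actual App Engine endpoint is then just a matter of extracting the object name from the push request body and returning the status code.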

&lt;h3&gt;Recommended Tools&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Dataprep: “An intelligent cloud data service to visually explore, clean, and prepare data for analysis.” (from the product page). It is basically a UI-based parsing and cleansing tool, and it is very good at that. You can define parsing and cleaning through a very comprehensive UI. It saved me a ton of time on writing regexes, applying rules to different types of encodings, deleting empty rows etc.&lt;br&gt;
Some more advantages: it integrates well with Cloud Storage and BigQuery, it has a convenient job scheduling mechanism, and it looks great.
The worst thing about it (it could be an advantage for some): it runs the parsing and cleaning jobs created in the UI with a very big overhead. I think this happens because it’s aimed at much larger-scale data than what I’m using (I won’t go into any deeper details here), but the bottom line is that even the smallest parse job takes about 5 minutes. I considered moving away from it because of this, but the usability and integration with other services are so great that I stuck with it.&lt;/li&gt;
&lt;li&gt;BigQuery: a cloud-hosted (and managed) analytics database. It is relational, and integrates really well with all of Google’s other services. Also, most BI and machine learning tools have good integrations for it, as it has become a really popular tool. It has a good web-based UI for querying and understanding the data. There’s no need to set indexes (it’s supposed to be indexed on every column).
I really enjoyed using it and found that it has mostly advantages. Most queries I ran returned really quickly with results. However, in the odd case where they don’t, it’s a little hard to optimise, as a lot of the database details are abstracted away.&lt;/li&gt;
&lt;/ol&gt;
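To give a feel for what the parse-and-cleanse step involves when done by hand instead of through Dataprep’s UI, here is a rough sketch. The column names and the DD/MM/YYYY date format are made up for illustration:

```python
import csv
import io
from datetime import datetime

def cleanse_csv(raw_text):
    """Drop empty rows, strip whitespace, and normalise dates to ISO format."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_text)):
        cleaned = {k.strip(): (v or "").strip() for k, v in row.items()}
        if not any(cleaned.values()):
            continue  # skip fully empty rows
        if cleaned.get("date"):
            # Normalise a hypothetical DD/MM/YYYY 'date' column to ISO.
            cleaned["date"] = (datetime.strptime(cleaned["date"], "%d/%m/%Y")
                               .date().isoformat())
        rows.append(cleaned)
    return rows

cleaned = cleanse_csv("date,weight\n01/02/2018, 350 \n,\n")
```

Dataprep expresses the same kind of rules visually, and then also handles scheduling and the write to BigQuery.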

&lt;h3&gt;Billing/Costs&lt;/h3&gt;

&lt;p&gt;The billing has been very reasonable, although my customer base is still relatively small (I work with tens of farms). The entire testing and setting up of the environment was within the free tier, so there was no need to commit to any payment before launching.&lt;br&gt;&lt;br&gt;
At the moment I’m supporting tens of farms at a monthly cost of $10–15. A big part of this cost can be ‘optimised away’, but as mentioned, I try to avoid premature optimisation.&lt;/p&gt;

&lt;h3&gt;Conclusions&lt;/h3&gt;

&lt;p&gt;Google Cloud Platform helped me a lot in getting up and running with the project quickly and efficiently. Their managed services approach is great for these purposes. All their services work very well together, and offer a full end-to-end solution for web app development and data pipelines.&lt;br&gt;&lt;br&gt;
Nonetheless, it has its fair share of downsides. The documentation isn’t always comprehensive enough, and just googling for a solution doesn’t work that well, as a lot of the services are new and not that common yet. Moreover, issues in managed services are harder to solve, as a lot of the details are abstracted away.&lt;br&gt;&lt;br&gt;
I left all of the hard-core technical details out, but rest assured that I faced many pitfalls and nitpicks. Feel free to leave comments and questions, and I can dig deep into a lot of the tools I’ve used if you’re interested in anything specific.&lt;/p&gt;


</description>
      <category>businessintelligence</category>
      <category>dataprocessing</category>
      <category>bigdata</category>
      <category>googlecloudplatform</category>
    </item>
    <item>
      <title>Service-ception: how and why we’ve built a service inside a service</title>
      <dc:creator>leibole</dc:creator>
      <pubDate>Thu, 31 May 2018 13:48:38 +0000</pubDate>
      <link>https://dev.to/leibole/service-ception-how-and-why-we-ve-built-a-service-inside-a-service-17g</link>
      <guid>https://dev.to/leibole/service-ception-how-and-why-we-ve-built-a-service-inside-a-service-17g</guid>
      <description>&lt;p&gt;The road to micro-services architecture: creating a service within a service&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here at Yotpo, we run a very high scale (tens of millions of requests per day) web system. We ship our product to eCommerce websites, each with its own scale of shoppers, all of whom we serve.&lt;/p&gt;

&lt;p&gt;One of the products we offer our customers is User Generated Images (images created by shoppers). We recently started working on a new product offering — an Album Widget. In this widget, shop owners can choose images that their shoppers uploaded, and show these images in an onsite widget. Moreover, we will allow them to moderate and add images from Instagram to the album.&lt;/p&gt;

&lt;p&gt;The images, the album, and the album creation experience will all be developed and served by Yotpo. Obviously this product requires a lot of backend entities and code, which will live and engage with the rest of our existing code base.&lt;/p&gt;

&lt;p&gt;In the following post we will try to describe how we kept a service oriented approach in the development process, while using the same code base and the same process. We will discuss what we did, why we did it, and the advantages and disadvantages we found for our approach along the way.&lt;/p&gt;

&lt;h3&gt;
  
  
  The problem
&lt;/h3&gt;

&lt;p&gt;No one can ignore the crazy hype going around ‘micro-services’ architecture these days. Describing the architectural merits is beyond the scope of this post. Suffice it to say that the few services we currently own have grown to massive proportions, and our goal as developers is to avoid adding any new code to them. Instead, it is preferable to write new, small services that are as decoupled as possible from existing services.&lt;/p&gt;

&lt;p&gt;However, creating a new service entails considerable overhead. It means creating a whole new deployment process, repository, database, tooling chain and more. The time constraints on R&amp;amp;D projects are always very pressing, and product teams often have to surrender a lot of feature content to meet them.&lt;/p&gt;

&lt;p&gt;Developers constantly seek to find a way to keep the long term service oriented strategy, while still maintaining a tight schedule.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xuOMRqAf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Asn1L_Ns5eehJaVoPfAFS1A.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xuOMRqAf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/1024/1%2Asn1L_Ns5eehJaVoPfAFS1A.jpeg" alt=""&gt;&lt;/a&gt;How it should look one day&lt;/p&gt;

&lt;h3&gt;
  
  
  The proposed solution
&lt;/h3&gt;

&lt;p&gt;So how can this problem be confronted? For us, a mid-way solution was the way to go: expanding one of the existing services, the “Social media service” (used for fetching media from Instagram).&lt;/p&gt;

&lt;p&gt;But this cannot be achieved effectively just by adding new controllers and models the good old way we know. It is important to make sure all the new behaviour lives in a wholly separate flow, as well as a separate folder structure. The new code should get a new name, and be treated as a completely separate service. This separation might seem ‘semantic’ and imposed, but keeping it helps keep the new ‘service within a service’ as decoupled as possible from the existing service.&lt;/p&gt;

&lt;p&gt;This solution keeps the service oriented architecture as its guiding principle, while avoiding most of the overhead effort tied with creating a new micro service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges
&lt;/h3&gt;

&lt;p&gt;During the development process we obviously faced some unforeseen challenges. The most interesting ones were:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inter-service communication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The new (albums) service and the old (social media) service still need to communicate with each other.&lt;/p&gt;

&lt;p&gt;For example, an album can contain images from the social service. Whenever there’s a request for an album, it will need to return data for its social images. That data exists in the social images service.&lt;/p&gt;

&lt;p&gt;At this point it is very tempting to mix the services. For example, using a model from the social media service inside one of the albums service’s controllers. To avoid that, we introduced a new entity which prevents the coupling — SocialImagesProvider.&lt;/p&gt;

&lt;p&gt;This entity is the only point of communication from the albums service to the social images service. If and when we decide to extract one of the services into a separate microservice, the SocialImagesProvider will be the only entity that needs to be reimplemented.&lt;/p&gt;
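The original code base is Rails, but the provider idea is language-agnostic. A Python-flavoured sketch of the pattern; the data structures and the fetch callback are hypothetical stand-ins, not the actual Yotpo code:

```python
from collections import namedtuple

class SocialImagesProvider:
    """The single point of contact between the albums service and the
    social images service. If either service is later extracted into
    its own microservice, only this class has to be reimplemented
    (for example, swapping an in-process call for an HTTP call)."""

    def __init__(self, fetch_images):
        # fetch_images: callable mapping a list of image ids to image data;
        # today an in-process call, tomorrow possibly an HTTP client.
        self._fetch_images = fetch_images

    def images_for_album(self, album):
        social_ids = [item.image_id for item in album.items
                      if item.source == "social"]
        return self._fetch_images(social_ids)

# Hypothetical usage with stand-in data structures:
Item = namedtuple("Item", "image_id source")
Album = namedtuple("Album", "items")
provider = SocialImagesProvider(lambda ids: ["img-%d" % i for i in ids])
album = Album(items=[Item(1, "social"), Item(2, "ugc"), Item(3, "social")])
social_images = provider.images_for_album(album)
```

The key design point is that the albums service never touches the social service’s models directly; everything flows through this one seam.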

&lt;p&gt;&lt;strong&gt;Outer-Service communication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Communication to external services was also difficult. Just as an album can contain a social image (from Instagram), it can contain a user generated image. So again, when there’s a request for an album containing a user generated image, the album service will need to fetch the image’s data. Only this time, the data isn’t in its brother-service; it’s in one of our other services.&lt;/p&gt;

&lt;p&gt;This basically requires performing an inter-service join to get all the image’s details. This join is complicated, as it spans two services, and is not very reliable. We will need to reconsider this implementation very soon as we continue to scale our application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advantages
&lt;/h3&gt;

&lt;p&gt;There are many advantages to writing the new service inside the old one, amongst them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using a single database: this saves the time of deploying and maintaining a new database. The tables for the new service are new tables that do not relate at all to the existing ones.&lt;/li&gt;
&lt;li&gt;Known and familiar code: the code base, architecture, database and more are all familiar to all the developers on the project, so it’s still possible to move quickly without learning a new programming language/architecture/database.&lt;/li&gt;
&lt;li&gt;Inner-service communication: the two brother-services can communicate efficiently with each other: they basically run in the same process.&lt;/li&gt;
&lt;li&gt;Easy to extract micro-service in the future: the new service is well defined and separated from the existing one, so it is (relatively) easy to extract it to a separate micro-service in the future.&lt;/li&gt;
&lt;li&gt;Decoupled: the new service is decoupled from the old one, so the two services can advance independently. Changes to one can be made without worrying about the other. Moreover, the service’s boundaries are very clear, so it stays small enough to be fully grasped by developers.&lt;/li&gt;
&lt;li&gt;No ‘new service overhead’.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Disadvantages
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Services have to be deployed together: even though a change to one service is independent of the other, deployment is still joint. If one of the services breaks, the other breaks as well.&lt;/li&gt;
&lt;li&gt;Some coupling is inevitable.&lt;/li&gt;
&lt;li&gt;Project still grows larger: the original service still grows, the code base becomes larger and harder to fully grasp, and the database grows significantly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;In this post we have described our approach to developing a meaningful new feature, while trying to maintain a balance between a tight schedule and a new architectural approach — micro-services.&lt;/p&gt;

&lt;p&gt;The path we chose — a service within a service — has advantages and disadvantages, just like any other path. The development and deployment went very smoothly, and the product is up and running to the satisfaction of our customers.&lt;/p&gt;

&lt;p&gt;In our near future road map, the new service will be extracted, and given a life of its own, with a new database, deployment, and everything else that accompanies a decoupled micro-service.&lt;/p&gt;




</description>
      <category>rails</category>
      <category>webdev</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
