<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Georgi Tenev</title>
    <description>The latest articles on DEV Community by Georgi Tenev (@jorotenev).</description>
    <link>https://dev.to/jorotenev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F34854%2F3e66ca0b-dc28-4bb5-ba72-4dd013311c55.jpg</url>
      <title>DEV Community: Georgi Tenev</title>
      <link>https://dev.to/jorotenev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jorotenev"/>
    <language>en</language>
    <item>
      <title>Sibling Docker containers during BitBucket Pipelines - A Kafka use-case</title>
      <dc:creator>Georgi Tenev</dc:creator>
      <pubDate>Sun, 08 Mar 2020 23:05:16 +0000</pubDate>
      <link>https://dev.to/jorotenev/sibling-docker-containers-during-bitbucket-pipelines-a-kafka-use-case-bmd</link>
      <guid>https://dev.to/jorotenev/sibling-docker-containers-during-bitbucket-pipelines-a-kafka-use-case-bmd</guid>
      <description>&lt;p&gt;In this article I will share my team's experience with setting up an infrastructure on BitBucket Pipelines to facilitate integration tests.  &lt;/p&gt;

&lt;p&gt;I will go over the challenges we faced when orchestrating our service's external dependencies as docker containers and making them communicate with each other, both locally and during the CI integration tests.   &lt;/p&gt;

&lt;p&gt;Specifically, I will outline the docker-in-docker problem, how to solve it in each of our environments - locally, locally within docker &amp;amp; in CI - and how to actually put it all together via docker-compose.yml &amp;amp; a bit of Bash. &lt;/p&gt;

&lt;p&gt;Our particular experience was with Kafka, but the broad ideas can also be applied to other stacks.&lt;/p&gt;

&lt;h2&gt;Background&lt;/h2&gt;

&lt;p&gt;For our current (greenfield) project my team is developing an application that processes events from a Kafka stream. To ensure the team's velocity doesn't decrease as the project advances, we wanted to set up a suite of integration tests that run on each commit during the CI testing phase within BitBucket Pipelines.&lt;/p&gt;

&lt;p&gt;We had some specific requirements to ensure we could run the tests easily and as often as possible - not only during the CI phase, but also locally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running the tests straight from our IDEs without too much wizardry was a must - PyCharm lets you click Run on a specific test, which is hard to beat :) Having a proper debugger is integral to us.&lt;/li&gt;
&lt;li&gt;When running the tests locally, we wanted to be able to execute them directly via a native Python interpreter, but also from within a container resembling the one that runs during CI.
PyCharm lets you use an interpreter from a docker container - we do this when we want to run our tests in a CI-like container and still have a proper debugger. For normal development/testing we prefer the native Python interpreter for a smoother development experience.&lt;/li&gt;
&lt;li&gt;Have an environment as close to production as possible, both locally and in CI. I.e. it must be easy to spawn the external dependencies and to have full control over them regardless of the environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before we continue, let me spell out the three environments we'll be addressing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;local - dev laptop using the native Python interpreter (via &lt;a href="https://github.com/pyenv/pyenv"&gt;pyenv&lt;/a&gt; + &lt;a href="https://github.com/pypa/pipenv"&gt;pipenv&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;local docker - again on the dev laptop, but Python runs within docker&lt;/li&gt;
&lt;li&gt;CI - BitBucket Pipeline build which runs our scripts in a dedicated container&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The tech stack&lt;/h2&gt;

&lt;p&gt;Our service is a Python 3.7 app, powered by Robinhood's &lt;a href="https://github.com/robinhood/faust"&gt;Faust&lt;/a&gt; asynchronous framework. We use Kafka as a message broker. To enforce the format of the Kafka messages produced and consumed, we use &lt;a href="https://avro.apache.org/docs/current/spec.html"&gt;Avro schemas&lt;/a&gt;, managed by a &lt;a href="https://github.com/confluentinc/schema-registry"&gt;Schema Registry&lt;/a&gt; service. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ae8xGF2A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4aixk4ys9rnej285cl9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ae8xGF2A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4aixk4ys9rnej285cl9x.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Project structure&lt;/h2&gt;

&lt;p&gt;Here's the project structure, influenced by this handy &lt;a href="https://github.com/marcosschroh/faust-docker-compose-example"&gt;faust skeleton project&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./project-name
  scripts/
    ...
    run-integration-tests.sh
  kafka/ # external services setup
    docker-compose.yml
    docker-compose.ci.override.yml
  project-name/
    src/...
    tests/...
  bitbucket-pipelines.yml 
  docker-compose-test.yml
  ...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;kafka/docker-compose.yml&lt;/code&gt; has the definitions of all the necessary external dependencies of our main application. If run locally via &lt;code&gt;cd kafka &amp;amp;&amp;amp; docker-compose up -d&lt;/code&gt;, the containers will be launched and available on &lt;code&gt;localhost:&amp;lt;service port&amp;gt;&lt;/code&gt; - we can then run the app and tests via our native Python against the containerised services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt; &lt;span class="c1"&gt;# kafka/docker-compose.yml&lt;/span&gt;
 &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.5'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;zookeeper&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;confluentinc/cp-zookeeper"&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zookeeper&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;32181:32181&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ZOOKEEPER_CLIENT_PORT=32181&lt;/span&gt;
  &lt;span class="na"&gt;kafka&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;confluentinc/cp-kafka&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;on-failure"&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka&lt;/span&gt;
    &lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kafka-topics"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--list"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--zookeeper"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;zookeeper:32181"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="na"&gt;start_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;9092:9092&lt;/span&gt; &lt;span class="c1"&gt;# used by other containers launched from this file&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;29092:29092&lt;/span&gt; &lt;span class="c1"&gt;# use outside of the docker-compose network&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;zookeeper&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KAFKA_ZOOKEEPER_CONNECT=zookeeper:32181&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KAFKA_ADVERTISED_LISTENERS=PLAINTEXT_HOST://localhost:29092,PLAINTEXT://kafka:9092&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KAFKA_BROKER_ID=1&lt;/span&gt;

  &lt;span class="na"&gt;schema-registry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;confluentinc/cp-schema-registry&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;schema-registry&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;schema-registry&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kafka&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;zookeeper&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8081:8081"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:32181&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SCHEMA_REGISTRY_HOST_NAME=schema-registry&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SCHEMA_REGISTRY_DEBUG=true&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;A peculiarity when configuring the kafka container is &lt;code&gt;KAFKA_ADVERTISED_LISTENERS&lt;/code&gt; - for a given port you can specify a single listener hostname (e.g. localhost for port 29092). Kafka will accept connections on that port only if the client used &lt;code&gt;kafka://localhost:29092&lt;/code&gt; as the URI (plus &lt;code&gt;kafka://kafka:9092&lt;/code&gt; on port 9092, for the example above). Setting 0.0.0.0 instead of localhost causes kafka to fail with an error during launch - so you need to specify an actual listener hostname. We will see below why this is important. &lt;/p&gt;
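&lt;p&gt;To make the mapping concrete, here is a tiny, hypothetical helper (written purely for illustration - it is not part of kafka or our code) showing how such a listener string breaks down into exactly one hostname per listener name:&lt;/p&gt;

```python
def parse_advertised_listeners(raw):
    """Parse 'NAME://host:port,NAME://host:port' into {name: (host, port)}."""
    listeners = {}
    for entry in raw.split(","):
        name, address = entry.split("://")
        host, port = address.rsplit(":", 1)
        listeners[name] = (host, int(port))
    return listeners

# the value from the compose file above
print(parse_advertised_listeners(
    "PLAINTEXT_HOST://localhost:29092,PLAINTEXT://kafka:9092"))
# {'PLAINTEXT_HOST': ('localhost', 29092), 'PLAINTEXT': ('kafka', 9092)}
```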

&lt;h2&gt;CI BitBucket Pipelines &amp;amp; Sibling Docker containers&lt;/h2&gt;

&lt;p&gt;As per the &lt;a href="https://confluence.atlassian.com/bitbucket/use-services-and-databases-in-bitbucket-pipelines-874786688.html"&gt;Bitbucket documentation&lt;/a&gt;, it's possible to launch external dependencies as containers during a Pipeline execution. This sounded appealing at first, but it would have made it difficult to reproduce the test infrastructure locally, since the &lt;code&gt;bitbucket-pipelines.yml&lt;/code&gt; syntax is custom. Since it was a requirement for us to easily have the whole stack locally and have control over it, we went for the approach of using docker-compose to start the dockerised services. &lt;/p&gt;

&lt;p&gt;To configure the CI we have a &lt;code&gt;bitbucket-pipelines.yml&lt;/code&gt; with, among others, a Testing phase, which is executed in a &lt;code&gt;python:3.7&lt;/code&gt; container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;definitions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;step&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;step-tests&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Tests&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python:3.7-slim-buster&lt;/span&gt;
        &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="c1"&gt;# ...&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;scripts/run-integration-tests.sh&lt;/span&gt; &lt;span class="c1"&gt;# launches external services + runs tests. &lt;/span&gt;
        &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;run-integration-tests.sh&lt;/code&gt; script is executed inside a &lt;code&gt;python:3.7&lt;/code&gt; container when running the tests via docker (locally &amp;amp; on CI) - from this script we launch the kafka &amp;amp; friends containers via docker-compose and then start our integration tests.&lt;br&gt;
However, doing so directly - i.e. launching a container from within a container - would put us in a "docker in docker" situation, which is considered &lt;a href="https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/"&gt;painful&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
A &lt;a href="https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/"&gt;solution&lt;/a&gt; that avoids "docker in docker" is to connect to the host's docker engine when starting additional containers from within a container, instead of to the engine inside the container. By connecting to the host engine, sibling containers are started instead of child ones.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GHoYNiT1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tgrpm0uzsqfdb9zcbpcp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GHoYNiT1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tgrpm0uzsqfdb9zcbpcp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The good news is that this seems to work out of the box when using BitBucket Pipelines! So it's only a concern when we're running locally in docker. &lt;/p&gt;
&lt;h1&gt;The pain &amp;amp; the remedy&lt;/h1&gt;

&lt;p&gt;There are three main problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to use the host Docker engine from within a container, instead of the engine within the container&lt;/li&gt;
&lt;li&gt;how to make two sibling containers communicate with each other&lt;/li&gt;
&lt;li&gt;how to configure our tests &amp;amp; services so that communication can happen in all three of our environments&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Use host Docker engine&lt;/h2&gt;

&lt;p&gt;In the root of our project we have a &lt;code&gt;docker-compose-test.yml&lt;/code&gt; which simply starts a python container and executes &lt;code&gt;scripts/run-integration-tests.sh&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.5'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tty&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./Dockerfile-local&lt;/span&gt; &lt;span class="c1"&gt;# installs docker, copies application code, etc. nothing fancy&lt;/span&gt;
    &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash scripts/run-integration-tests.sh&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock&lt;/span&gt; &lt;span class="c1"&gt;# That's the gotcha&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When the above container launches, it will have docker installed, and the docker commands we run will be directed to our host's docker engine - because we mounted the host's socket file into the container.&lt;/p&gt;

&lt;p&gt;To start this test container we actually combine both &lt;code&gt;kafka/docker-compose.yml&lt;/code&gt; &amp;amp; &lt;code&gt;docker-compose-test.yml&lt;/code&gt; via &lt;code&gt;docker-compose --project-directory=. -f kafka/docker-compose.yml -f ./docker-compose-test.yml -f ... &amp;lt;build|up|...&amp;gt;&lt;/code&gt;. This is just an easy way to extend the compose file containing all the kafka services with an additional service. We keep this in a Makefile to save ourselves from typing too much. E.g. with the Makefile snippet below we can just do &lt;code&gt;make build-test&lt;/code&gt; to build the test container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;docker-compose &lt;span class="nt"&gt;--project-directory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;  &lt;span class="nt"&gt;-f&lt;/span&gt; local_kafka/docker-compose.yml &lt;span class="nt"&gt;-f&lt;/span&gt; ./docker-compose-test.yml
build-test:
    &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; build &lt;span class="nt"&gt;--no-cache&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;Sibling inter-container communication&lt;/h2&gt;

&lt;h3&gt;When running the containers locally&lt;/h3&gt;

&lt;p&gt;It's cool that we can avoid the docker-in-docker situation, but there's one "gotcha" we need to address :)&lt;br&gt;&lt;br&gt;
How do we communicate from within one sibling container to another?  &lt;/p&gt;

&lt;p&gt;Or in our particular case: how do we make the python container with our tests connect to the kafka container? We can't simply use &lt;code&gt;kafka://localhost:29092&lt;/code&gt; within the python container, as localhost will refer to, well... the python container itself.  &lt;/p&gt;

&lt;p&gt;The solution we found is to go through the host, using the port which the kafka container exposes to the host.&lt;br&gt;&lt;br&gt;
In newer versions of Docker, from within a container, &lt;code&gt;host.docker.internal&lt;/code&gt; points to the host which started the container. This is handy: if kafka runs as a container on the host and exposes its 29092 port, then from our python tests container we can use &lt;code&gt;host.docker.internal:29092&lt;/code&gt; to connect to the sibling kafka container. Good stuff! &lt;/p&gt;

&lt;p&gt;So in essence:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;when running the tests from the host's native python we can target &lt;code&gt;localhost:29092&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;and when running the tests from within a container, we need to configure them to use &lt;code&gt;host.docker.internal:29092&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
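&lt;p&gt;In the test configuration this boils down to reading an env var with a sensible default. A minimal sketch (the helper name is made up; KAFKA_BOOTSTRAP_SERVER is the variable our run script exports):&lt;/p&gt;

```python
import os

def kafka_bootstrap_server():
    # containerised runs export KAFKA_BOOTSTRAP_SERVER before invoking pytest;
    # a native local run falls back to the port kafka publishes on the host
    return os.environ.get("KAFKA_BOOTSTRAP_SERVER", "localhost:29092")

os.environ.pop("KAFKA_BOOTSTRAP_SERVER", None)
print(kafka_bootstrap_server())  # localhost:29092

os.environ["KAFKA_BOOTSTRAP_SERVER"] = "host.docker.internal:29092"
print(kafka_bootstrap_server())  # host.docker.internal:29092
```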
&lt;h3&gt;When running the containers on BitBucket&lt;/h3&gt;

&lt;p&gt;Now there's another gotcha, as one might expect. When we are in CI and we start our containers, &lt;code&gt;host.docker.internal&lt;/code&gt; does not work from within a container. Would have been nice if it did, right... The good news is that there is an address we can use to connect to the host (the bitbucket "host", that is).&lt;br&gt;&lt;br&gt;
I found it by accident while inspecting the environment variables passed to our build - the &lt;code&gt;$BITBUCKET_DOCKER_HOST_INTERNAL&lt;/code&gt; environment variable holds the address we can use to connect to the BitBucket "host".&lt;/p&gt;

&lt;p&gt;The above are the significant findings I wanted to share.&lt;br&gt;
In the next section I will go over how to put the files/scripts together so that it all glues together in all three of our envs.&lt;/p&gt;
&lt;h1&gt;Gluing it all together&lt;/h1&gt;

&lt;p&gt;Given the above explanation, the trickiest part, in my opinion, is how to configure the kafka listeners in all three envs &amp;amp; how to point the tests at the correct kafka address (localhost, host.docker.internal or $BITBUCKET_DOCKER_HOST_INTERNAL). We need to configure the kafka listeners accordingly simply because our test kafka client uses a different address to connect to kafka in each environment, so the kafka container must listen on the corresponding address.&lt;/p&gt;

&lt;p&gt;To handle the kafka listener part we make use of docker-compose's file-override functionality. We have the "main" &lt;code&gt;kafka/docker-compose.yml&lt;/code&gt; file, which orchestrates all containers and configures them with settings that work fine when using the containers from our local python via localhost. We also added a simple &lt;code&gt;docker-compose.ci.override.yml&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.5'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kafka&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KAFKA_ADVERTISED_LISTENERS=PLAINTEXT_HOST://${BITBUCKET_DOCKER_HOST_INTERNAL:-host.docker.internal}:29092,PLAINTEXT://kafka:9092&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This file is used when starting kafka &amp;amp; friends for the tests within local docker &amp;amp; CI - if &lt;code&gt;$BITBUCKET_DOCKER_HOST_INTERNAL&lt;/code&gt; is set, its value is used, meaning we are in CI. If it's not set, the default host.docker.internal is used - meaning we are in the local docker environment.&lt;/p&gt;
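&lt;p&gt;The &lt;code&gt;${VAR:-default}&lt;/code&gt; syntax is standard shell parameter expansion, which docker-compose also applies when substituting variables in compose files. Its behaviour can be sketched like so (a toy re-implementation for illustration, not compose's actual code):&lt;/p&gt;

```python
def substitute(var_name, default, env):
    # the ${VAR:-default} rule: use the env value if set and non-empty,
    # otherwise fall back to the default
    value = env.get(var_name)
    return value if value else default

# in CI, BitBucket injects the address of the "host"
print(substitute("BITBUCKET_DOCKER_HOST_INTERNAL", "host.docker.internal",
                 {"BITBUCKET_DOCKER_HOST_INTERNAL": "10.20.30.40"}))
# 10.20.30.40

# in local docker the variable is absent, so the default wins
print(substitute("BITBUCKET_DOCKER_HOST_INTERNAL", "host.docker.internal", {}))
# host.docker.internal
```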




&lt;p&gt;Our tests/app are configured via env vars. Our &lt;code&gt;run-integration-tests.sh&lt;/code&gt; looks something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; docker-compose &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; docker-compose &lt;span class="nt"&gt;-v&lt;/span&gt;
docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; kafka/docker-compose.yml &lt;span class="nt"&gt;-f&lt;/span&gt; kafka/docker-compose.ci.override.yml down
docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; kafka/docker-compose.yml &lt;span class="nt"&gt;-f&lt;/span&gt; kafka/docker-compose.ci.override.yml up &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="c"&gt;# install app dependancies&lt;/span&gt;

&lt;span class="c"&gt;# wait for kafka to start&lt;/span&gt;
&lt;span class="nv"&gt;sleep_max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;15
&lt;span class="nv"&gt;sleep_for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;span class="nv"&gt;slept&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;span class="k"&gt;until&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;docker inspect &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="o"&gt;{{&lt;/span&gt;.State.Health.Status&lt;span class="o"&gt;}}&lt;/span&gt; kafka&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"healthy"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;slept&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;-gt&lt;/span&gt; &lt;span class="nv"&gt;$sleep_max&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Waited for kafka to be up for &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;sleep_max&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;s. Quitting now."&lt;/span&gt;
    &lt;span class="nb"&gt;exit &lt;/span&gt;1
  &lt;span class="k"&gt;fi
  &lt;/span&gt;&lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="nv"&gt;$sleep_for&lt;/span&gt;
  &lt;span class="nv"&gt;slept&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;&lt;span class="nv"&gt;$sleep_for&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nv"&gt;$slept&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"waiting for kafka..."&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;span class="c"&gt;# !! figure out in which environment we are running and configure our tests accordingly&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CI&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_HOST_ADDRESS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"running locally natively"&lt;/span&gt;
  &lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"running locally within docker"&lt;/span&gt;
    &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KAFKA_BOOTSTRAP_SERVER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;host.docker.internal:29092
    &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SCHEMA_REGISTRY_SERVER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://host.docker.internal:8081
  &lt;span class="k"&gt;fi
else
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"running in CI environment"&lt;/span&gt;
  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KAFKA_BOOTSTRAP_SERVER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$BITBUCKET_DOCKER_HOST_INTERNAL&lt;/span&gt;:29092
  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SCHEMA_REGISTRY_SERVER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://&lt;span class="nv"&gt;$BITBUCKET_DOCKER_HOST_INTERNAL&lt;/span&gt;:8081
&lt;span class="k"&gt;fi
&lt;/span&gt;pytest src/tests/integration_tests
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The script can be used in all three of our envs - it sets the KAFKA_BOOTSTRAP_SERVER env var (our tests' configuration) accordingly.&lt;/p&gt;

&lt;p&gt;FYI: in the Makefile example a couple of paragraphs ago, I omitted the ci.override.yml file for brevity.&lt;/p&gt;

&lt;p&gt;Thanks for reading the article - I hope you found useful information in it! &lt;br&gt;
Please let me know if you know better ways to solve the problems above! I couldn't find any, so I wanted to save others from the pain I went through :)&lt;/p&gt;

</description>
      <category>ci</category>
      <category>docker</category>
      <category>bitbucket</category>
      <category>kafka</category>
    </item>
    <item>
      <title>What are the skills a junior DevOps should have </title>
      <dc:creator>Georgi Tenev</dc:creator>
      <pubDate>Fri, 23 Aug 2019 09:27:46 +0000</pubDate>
      <link>https://dev.to/jorotenev/what-are-the-skills-a-junior-devops-should-have-501l</link>
      <guid>https://dev.to/jorotenev/what-are-the-skills-a-junior-devops-should-have-501l</guid>
      <description>&lt;p&gt;Hi folks! :) &lt;br&gt;
I've been working as a DevOps for nearly two years. I was offered to teach a small group of students a DevOps course. The outcome for them should be that they are hireable at a junior level. &lt;br&gt;
What skills and technologies you think are a must for them to know and understand? &lt;/p&gt;

&lt;p&gt;Cheers :)&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Protecting your Git branches on AWS CodeCommit</title>
      <dc:creator>Georgi Tenev</dc:creator>
      <pubDate>Tue, 19 Mar 2019 18:30:31 +0000</pubDate>
      <link>https://dev.to/jorotenev/protecting-your-git-branches-on-aws-codecommit-4kol</link>
      <guid>https://dev.to/jorotenev/protecting-your-git-branches-on-aws-codecommit-4kol</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5smzztl5bm80v2dlcxb2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5smzztl5bm80v2dlcxb2.jpg" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Borovets, Rila mountain, Bulgaria&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;This post gives a quick recipe on how to enable only selected users to merge/commit into CodeCommit Git branches.&lt;/p&gt;
&lt;h1&gt;
  
  
  The problem
&lt;/h1&gt;

&lt;p&gt;Let's consider a common CI/CD scenario:&lt;br&gt;
1) you have a team which pushes to a Git repo with important code &lt;br&gt;
2) when ready for a deploy, someone merges &lt;code&gt;dev&lt;/code&gt; into &lt;code&gt;master&lt;/code&gt;&lt;br&gt;
3) the merge event triggers a preconfigured deployment pipeline which packages the code from the repo and then deploys it to production&lt;/p&gt;

&lt;p&gt;The above is very nice and all, but care must be taken when it comes to who should be allowed to merge into the production branch.&lt;/p&gt;
&lt;h1&gt;
  
  
  The solution
&lt;/h1&gt;

&lt;p&gt;(jump straight to the sample IAM policies)&lt;/p&gt;

&lt;p&gt;Since we are in the context of AWS, the solution lies in using the IAM service (Identity and Access Management).&lt;br&gt;&lt;br&gt;
Here are the main steps of the solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have a generic group for all developers, say &lt;code&gt;Dev-Group&lt;/code&gt;. All developers  within your team belong to this group. Generic CodeCommit actions are allowed  - e.g. &lt;code&gt;pull&lt;/code&gt;, &lt;code&gt;clone&lt;/code&gt;, etc. &lt;/li&gt;
&lt;li&gt;Also attach to the &lt;code&gt;Dev-Group&lt;/code&gt; a policy which specifies the repositories which we want to protect via the &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_notresource.html"&gt;NotResource&lt;/a&gt;. For all other repos developers can have full access, if so desired.&lt;/li&gt;
&lt;li&gt;Have a second IAM Group - e.g. &lt;code&gt;ImportantRepo-PowerUser&lt;/code&gt; - it has a policy which enables the &lt;code&gt;merge&lt;/code&gt;/&lt;code&gt;commit&lt;/code&gt; actions for the &lt;code&gt;ImportantRepo&lt;/code&gt; repo.&lt;/li&gt;
&lt;li&gt;Add the selected few developers who should be able to trigger a build to this &lt;code&gt;ImportantRepo-PowerUser&lt;/code&gt; group&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why bother writing this article - well, it's not that obvious how to implement the above. The tricky part is the &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow"&gt;evaluation logic of IAM&lt;/a&gt;. Due to it, the naïve solution below won't work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;all devs in &lt;code&gt;Dev-Group&lt;/code&gt;, the selected few in &lt;code&gt;SelectedFew&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;use a &lt;code&gt;Deny&lt;/code&gt; statement in a policy in &lt;code&gt;Dev-Group&lt;/code&gt; to deny devs merging into master&lt;/li&gt;
&lt;li&gt;attach a policy to &lt;code&gt;SelectedFew&lt;/code&gt; which allows its users to merge into     &lt;code&gt;master&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If in the &lt;code&gt;Dev-Group&lt;/code&gt; you add a statement which explicitly &lt;strong&gt;forbids&lt;/strong&gt; (with a &lt;code&gt;Deny&lt;/code&gt;) users of the group from merging into master, then it's not possible to allow these actions in a different policy - so it won't be possible to allow our selected few developers to merge.  &lt;/p&gt;

&lt;p&gt;This is where &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_notresource.html"&gt;&lt;code&gt;NotResource&lt;/code&gt;&lt;/a&gt; comes in handy. With it you can exclude the protected repositories from a given statement &lt;strong&gt;without using an explicit &lt;code&gt;Deny&lt;/code&gt;&lt;/strong&gt;. That is, you can give full permissions to all repositories excluding the protected ones. Since we haven't explicitly Deny-ied actions on the protected repositories, we can later add an explicit Allow in a different policy - the one attached to the &lt;code&gt;SelectedFew&lt;/code&gt; group.&lt;/p&gt;
&lt;h2&gt;
  
  
  Show me the code
&lt;/h2&gt;
&lt;h3&gt;
  
  
  dev-group-generic-policy
&lt;/h3&gt;

&lt;p&gt;We need the Dev-Group which gives generic permissions to all devs and also lists the protected repositories.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// the policy attached to the Dev-Group
{
  "Version": "2012-10-17",
  "Statement": [
    // generic permissions for all devs
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:Get*",
        "codecommit:GitPull",
        &amp;lt;whatever you need but not Push/Merge/etc.&amp;gt;
      ],
      "Resource": "*"
    },

    // the gotcha - in the NotResource put the important repos whose branches you want to protect
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPush",
        "codecommit:DeleteBranch",
        "codecommit:Merge*"
      ],
      "NotResource": [
        "arn:aws:codecommit:*:*:&amp;lt;important-repo-name-1&amp;gt;",
        "arn:aws:codecommit:*:*:&amp;lt;important-repo-name-2&amp;gt;"
      ]
    },

    // still allow all devs to push to non-production branches of the important repos
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPush",
        "codecommit:DeleteBranch",
        "codecommit:Merge*"
      ],
      "Resource": [
        "arn:aws:codecommit:*:*:&amp;lt;important-repo-name-1",
        "arn:aws:codecommit:*:*:&amp;lt;important-repo-name-2"
      ],
      "Condition": {
        "StringNotEquals": {
          "codecommit:References": [
            "refs/heads/master",
            "refs/heads/prod",
            "refs/heads/Stg"
          ]
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  important-repo-1-power-users
&lt;/h3&gt;

&lt;p&gt;A group for the power users for a specific important repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// the policy attached to the group for users that can push to all branches for a given important repo 
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPush",
        "codecommit:DeleteBranch",
        "codecommit:Merge*"
      ],
      "Resource": [
        "arn:aws:codecommit:*:*:&amp;lt;important-repo-name-1&amp;gt;",
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make the solution more generic, you can use a naming convention for the repositories - e.g. all repo names starting with &lt;code&gt;xyz-&lt;/code&gt; are considered important and only users in group &lt;code&gt;xyz-power-users&lt;/code&gt; can merge into the prod branch of these repos.&lt;/p&gt;
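&lt;p&gt;Under such a convention, the dev-group statement from above could be written once with a wildcard ARN (a sketch only - the &lt;code&gt;xyz-&lt;/code&gt; prefix is a made-up example):&lt;/p&gt;

```json
{
  "Effect": "Allow",
  "Action": [
    "codecommit:GitPush",
    "codecommit:DeleteBranch",
    "codecommit:Merge*"
  ],
  "NotResource": "arn:aws:codecommit:*:*:xyz-*"
}
```

&lt;p&gt;The matching &lt;code&gt;xyz-power-users&lt;/code&gt; policy would then Allow the same actions on &lt;code&gt;arn:aws:codecommit:*:*:xyz-*&lt;/code&gt;.&lt;/p&gt;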

</description>
      <category>aws</category>
      <category>git</category>
      <category>codecommit</category>
      <category>devops</category>
    </item>
    <item>
      <title>Creating a native mobile app with NativeScript — tips and tricks</title>
      <dc:creator>Georgi Tenev</dc:creator>
      <pubDate>Tue, 20 Mar 2018 10:56:22 +0000</pubDate>
      <link>https://dev.to/jorotenev/creating-a-native-mobile-app-with-nativescripttips-and-tricks-27o5</link>
      <guid>https://dev.to/jorotenev/creating-a-native-mobile-app-with-nativescripttips-and-tricks-27o5</guid>
      <description>&lt;p&gt;A couple of months ago I decided to write an app that I can use to keep track of my expenses. I wanted a contemporary-looking app, with no ads, for free. I wanted to give mobile app development a try. In January I did an AWS training, so I also wanted to apply what I’ve learned, for a backend API.&lt;/p&gt;

&lt;p&gt;In this post I will share some parts of the process of designing, implementing and deploying the Para app, together with some of the difficulties and bugs I’ve encountered along the way.  &lt;/p&gt;

&lt;p&gt;The aim is to share my findings and ultimately help people that are new to mobile app development, and in particular to the NativeScript platform.&lt;/p&gt;

&lt;p&gt;I will focus on the app itself. I’ll talk about the backend API in another post. The API is developed with Python + &lt;a href="http://flask.pocoo.org/" rel="noopener noreferrer"&gt;Flask&lt;/a&gt; &amp;amp; DynamoDB, deployed on AWS via &lt;a href="https://github.com/Miserlou/Zappa" rel="noopener noreferrer"&gt;Zappa&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The code for the mobile &lt;a href="https://github.com/jorotenev/para" rel="noopener noreferrer"&gt;app&lt;/a&gt; (TypeScript) and the &lt;a href="https://github.com/jorotenev/para_api" rel="noopener noreferrer"&gt;backend&lt;/a&gt; (Python) is open-sourced.&lt;/p&gt;

&lt;p&gt;The article starts with discussion about Firebase Auth and how I’ve used it. If you only care about the NativeScript-specific gotchas — scroll down to the &lt;strong&gt;Helpful NativeScript-specific plugins and gotchas&lt;/strong&gt; section.&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;When planning how to make the app, I spent some time researching platforms for cross-platform mobile app development. I wanted to write the code once, and then be able to run Para natively on both Android &amp;amp; iOS. The main candidate platforms were React Native and NativeScript. I already had some experience with JavaScript/TypeScript, but none with React*. NativeScript lets you use pure TypeScript and it’s developed by Telerik, which was originally a Bulgarian company. Also, it seemed that the majority of the comparison articles I found on Google were suggesting that NativeScript is better.&lt;/p&gt;

&lt;p&gt;All of this was enough for me to try NativeScript first.&lt;/p&gt;

&lt;p&gt;Reading this post will definitely be more effective if you’ve already read the official NativeScript &lt;a href="https://docs.nativescript.org/" rel="noopener noreferrer"&gt;Getting Started&lt;/a&gt; guide.&lt;/p&gt;

&lt;p&gt;* I know, I know, it’s next on the list.&lt;/p&gt;

&lt;h1&gt;
  
  
  Definition of done
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;App to help me keep track of my expenses.&lt;/li&gt;
&lt;li&gt;Easy to add new expenses and view existing ones.&lt;/li&gt;
&lt;li&gt;Supports email &amp;amp; social login.&lt;/li&gt;
&lt;li&gt;Offers statistics (“How much I’ve spent this week/month”).&lt;/li&gt;
&lt;li&gt;Costs me $0 per month to run, given that it’s published on an app store and has more users than just me and my dad.&lt;/li&gt;
&lt;li&gt;Runs natively on Android and iOS.&lt;/li&gt;
&lt;li&gt;Can handle 500 users that simultaneously use the app.&lt;/li&gt;
&lt;li&gt;Can handle a user which uses the app on multiple devices.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  10 000 ft. overview
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fhqri3tp4lq1qj8xpk1qe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fhqri3tp4lq1qj8xpk1qe.png"&gt;&lt;/a&gt;&lt;br&gt;
The app is responsible for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;validating &amp;amp; collecting user input (when adding new expenses / updating an expense)&lt;/li&gt;
&lt;li&gt;displaying the user’s expenses + statistics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The API is responsible for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;validate &amp;amp; store expenses to a database&lt;/li&gt;
&lt;li&gt;retrieve expenses and stats about them from the database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Firebase is responsible for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;storing sensitive user auth data&lt;/li&gt;
&lt;li&gt;auth related activities like email confirmation, resetting passwords, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both the app and the API share the same representation of what an expense is (declaratively, by using a &lt;a href="https://spacetelescope.github.io/understanding-json-schema/" rel="noopener noreferrer"&gt;json schema&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;By design, no user credentials (emails, passwords) are stored in the database I manage (the DynamoDB).&lt;/p&gt;
&lt;h2&gt;
  
  
  Authentication and Firebase
&lt;/h2&gt;

&lt;p&gt;The difficulty — how to authenticate the mobile app to the backend API with minimal effort, while offering an adequate solution.  &lt;/p&gt;

&lt;p&gt;I used &lt;a href="https://firebase.google.com/" rel="noopener noreferrer"&gt;Firebase&lt;/a&gt; for auth related activities in the app. Firebase identifies each user of a given project with a &lt;code&gt;user uid&lt;/code&gt; (it’s a long string).&lt;/p&gt;

&lt;p&gt;When a user logs in (either by entering email + password or via Facebook), his &lt;code&gt;user uid&lt;/code&gt; is accessible from within the app code. Currently, in the context of the app, all expenses belong to the currently logged in user, thus the uid doesn’t appear in the schema of expense objects.&lt;/p&gt;

&lt;p&gt;The uid is useful when the app communicates with the backend API though — because the API needs to perform CRUD operations in the context of the correct user.&lt;/p&gt;

&lt;p&gt;Sending a uid directly would not be safe because impersonating users would be possible. Firebase offers the ability to generate &lt;a href="https://firebase.google.com/docs/auth/admin/verify-id-tokens" rel="noopener noreferrer"&gt;ID Tokens&lt;/a&gt;. Here’s my mental model about them. An ID Token is a JSON Web Token (JWT), a string, generated by the Firebase &lt;strong&gt;Client&lt;/strong&gt; SDK (i.e. an SDK that runs on the phone). The string contains the currently logged-in user’s uid, the time the user logged in and some other user info (email, etc.) which Firebase has about the user. The nice thing about this string is that when our backend receives it, the server can verify its integrity — i.e. if the string says that the user with uid XXXYYY sent it, we can be certain that it was indeed this user that sent it.&lt;/p&gt;
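&lt;p&gt;To make that mental model concrete, here is a stdlib-only sketch that decodes the payload of a toy JWT &lt;strong&gt;without verifying anything&lt;/strong&gt; - it only shows the shape of the data riding inside the token. A real backend must verify the signature (e.g. via the Firebase Admin SDK) and never trust a payload decoded like this.&lt;/p&gt;

```python
import base64
import json


def jwt_payload(token: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying its signature (demo only)."""
    payload_b64 = token.split(".")[1]
    # JWTs use URL-safe base64 without padding; re-add it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


def _b64(obj: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")


# A toy, unsigned token with a made-up uid, just for the demo.
token = _b64({"alg": "none"}) + "." + _b64({"user_id": "XXXYYY", "email": "a@b.c"}) + "."
print(jwt_payload(token)["user_id"])  # prints XXXYYY
```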

&lt;p&gt;The general workflow is that when the phone makes an API request to our backend, the ID Token is included in an HTTP header. When the server receives the request, it decodes the token by using the Firebase &lt;strong&gt;Admin&lt;/strong&gt; SDK. Then the &lt;code&gt;user uid&lt;/code&gt; is available to the server and it’s possible to process the request in the correct context.&lt;/p&gt;

&lt;p&gt;Why did I choose this approach? It releases me from the hassle of storing user emails and passwords, and of dealing with email confirmation, forgotten passwords, etc. It is a hassle. And since it’s easy to mess up, I delegated it to Firebase. All I care about is showing the login/signup page to my app users and, once they’ve entered what they should, asking Firebase to give me info about the user (his uid and ID token).&lt;/p&gt;

&lt;p&gt;Another noteworthy aspect of using Firebase is having isolated environments.&lt;/p&gt;

&lt;p&gt;I want to have completely separate development and production environments. Meaning that I can develop locally and not touch the production Firebase project — which has all my users. Achieving this is a matter of creating a per-environment Firebase project and &lt;a href="https://github.com/jorotenev/para/blob/master/hooks/before-prepare/choose_firebase_config.js" rel="noopener noreferrer"&gt;choosing&lt;/a&gt; the correct Firebase configuration file during app build (or “&lt;a href="https://docs.nativescript.org/docs-cli/project/configuration/prepare" rel="noopener noreferrer"&gt;prepare&lt;/a&gt;”) time. I have the production file stored securely and can use it to make production app builds which are uploaded to the app/play store. I can share the development configuration with other collaborators so that they can develop locally. &lt;/p&gt;

&lt;p&gt;Since the backend’s Firebase Admin SDK needs to have credentials to the same Firebase project, I store the credentials as an environment variable — and the correct credentials are used for a given server environment. This is really handy because it makes it impossible for a development app to access a production server — which could happen if I misconfigure the app (e.g. point to the prod API when I develop locally). You can tell I don’t trust myself a lot :)&lt;/p&gt;

&lt;p&gt;That’s it for the 10,000 ft. overview.&lt;/p&gt;

&lt;p&gt;I will show some of the NativeScript-specific plugins I found useful and some problems / gotchas related to them.&lt;/p&gt;
&lt;h1&gt;
  
  
  Helpful NativeScript-specific plugins and gotchas
&lt;/h1&gt;

&lt;p&gt;When using NativeScript, you can benefit from npm packages. E.g. I used the popular &lt;code&gt;moment.js&lt;/code&gt; and &lt;code&gt;underscore&lt;/code&gt; JavaScript libraries without any issues. Here’s a &lt;a href="https://market.nativescript.org/" rel="noopener noreferrer"&gt;page&lt;/a&gt; with verified NativeScript-targeted packages. Here’s a &lt;a href="https://docs.nativescript.org/plugins/plugins" rel="noopener noreferrer"&gt;page&lt;/a&gt; with an overview of how to install and use them. &lt;/p&gt;

&lt;p&gt;Just as a quick note, be careful when installing packages from npm — sometimes the packages assume they’ll run either in a browser or a Node.js environment — and can fail if run within NativeScript (e.g. I saw a few packages failing to import &lt;code&gt;process&lt;/code&gt;).&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;code&gt;nativescript-plugin-firebase&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Instead of using directly the official Firebase Android/iOS Client SDKs, I used the &lt;a href="https://github.com/EddyVerbruggen/nativescript-plugin-firebase" rel="noopener noreferrer"&gt;nativescript-plugin-firebase&lt;/a&gt;. The benefit of that is that I don’t need to write Java and Swift — I write TypeScript, once. As mentioned above, I use Firebase for all things that are auth — registering, signing in and logging out users. Changing their password and recovering their password are also made possible by the plugin. And most importantly, getting the user’s uid and ID Token.&lt;/p&gt;

&lt;p&gt;The one thing that wasn’t immediately obvious to me was how to ensure that once the user logs in, he stays logged in until he explicitly logs out. The problem was that if you log in, then minimize the app and restore it after an hour, auto logging in will succeed; however, the ID Token will be expired — tokens are only valid for an hour since their creation (i.e. since the last time you entered email + password and signed in). And if I try to send an expired token to the backend — the backend will bark.&lt;/p&gt;

&lt;p&gt;The trick that worked for me is to use &lt;code&gt;firebase.getAuthToken({forceRefresh: true})&lt;/code&gt; when auto-signing in the user. This will force the creation of a fresh ID Token. Here’s an example &lt;a href="https://github.com/jorotenev/para/blob/1693b196bd5dda42bc94f7d3734dd7bf899063d8/app/auth/login/login-view.ts#L23" rel="noopener noreferrer"&gt;usage&lt;/a&gt;. In the code I check if the user has already logged in, and if so, request a fresh token immediately. The login page is the default first page of the app and is always shown when you open the app. The check from above ensures that if a user is logged in, he’ll be redirected to the “home” page directly.&lt;/p&gt;
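&lt;p&gt;The flow can be sketched like this (&lt;code&gt;fb&lt;/code&gt; is a hypothetical stand-in for the plugin’s API surface, so the snippet stays self-contained):&lt;/p&gt;

```javascript
// Sketch of the auto-login flow: if a session exists, force-refresh the
// ID Token so we never hand the backend an expired one.
async function autoLogin(fb) {
  const user = await fb.getCurrentUser();
  if (user === null) {
    return null; // no session: stay on the login page
  }
  // A cached token may be past its 1-hour lifetime, so ask for a fresh one.
  return fb.getAuthToken({ forceRefresh: true });
}
```

&lt;p&gt;On the login page, a non-null result means we can redirect straight to the home page.&lt;/p&gt;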
&lt;h2&gt;
  
  
  &lt;code&gt;nativescript-pro-ui&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.nativescript.org/ui-for-nativescript" rel="noopener noreferrer"&gt;pro-ui package&lt;/a&gt; (as of a couple of weeks ago, actually a bundle of packages) provides reusable UI components.&lt;/p&gt;

&lt;p&gt;I use it for three different things in my app — for the list of all expenses, for the side drawer and for a data-form to create/edit expenses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fomj4f1psrp99s2zg6409.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fomj4f1psrp99s2zg6409.png" alt="add new expense"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;code&gt;RadListView&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Why is using the Pro UI’s RadListView indeed rad? Because all I need to do to show the list of expenses from above is: first, say how each expense in the list should be displayed (i.e. given an expense data object, which attributes to show and where to put them; that’s done via XML) and second, provide a collection of expenses to show (it’s called an ObservableArray and you can think of it as a beefed-up array that emits events when you add/delete items).&lt;/p&gt;

&lt;p&gt;The nice thing is that there’s a binding between the collection of expenses I provide and the RadListView — i.e. if I add/remove an expense to/from my collection, the RadListView will automatically update. This means I can focus only on making sure my array has the correct items, not on the UI.&lt;/p&gt;

&lt;p&gt;Apart from that, I get “free” &lt;a href="http://docs.telerik.com/devtools/nativescript-ui/Controls/NativeScript/ListView/pull-to-refresh" rel="noopener noreferrer"&gt;pull-to-refresh&lt;/a&gt; and &lt;a href="http://docs.telerik.com/devtools/nativescript-ui/Controls/NativeScript/ListView/load-on-demand" rel="noopener noreferrer"&gt;loading data in batches&lt;/a&gt; (i.e. 10 expenses per batch are loaded). To get these two I only need to provide functions which actually refresh the data / fetch the next batch of items from the API and update the items in the ObservableArray.&lt;/p&gt;

&lt;p&gt;There’s an important gotcha here. When writing the &lt;a href="https://docs.nativescript.org/ui/basics" rel="noopener noreferrer"&gt;XML markup&lt;/a&gt; for the “list all expenses” page, I wanted to show different things depending on whether the user has any expenses or not. If the user doesn’t have expenses — I show a message together with an “add new” button; if there are expenses — just show them. I implemented this using styles (think CSS) by&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RadListView visibility="{{ hasItems ? 'visible' : 'collapse' }}"&lt;/code&gt;. So I use a boolean variable &lt;code&gt;hasItems&lt;/code&gt; which determines whether we see the list or not.&lt;/p&gt;

&lt;p&gt;The trick is that one might think that since the collection of expenses we provide is of type ObservableArray, the &lt;code&gt;.length&lt;/code&gt; property is also observed. Well, it &lt;a href="https://github.com/NativeScript/NativeScript/issues/5476" rel="noopener noreferrer"&gt;seems&lt;/a&gt; it’s not. So &lt;a href="https://docs.nativescript.org/ui/basics#bindings" rel="noopener noreferrer"&gt;binding&lt;/a&gt; to the expression &lt;code&gt;expenses.length !== 0&lt;/code&gt; is not possible. My workaround was to subscribe to the events of the ObservableArray and adjust the value of &lt;code&gt;hasItems&lt;/code&gt; on each add/remove.&lt;/p&gt;
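&lt;p&gt;The workaround looks roughly like this (a tiny stand-in class replaces NativeScript’s ObservableArray here so the snippet is self-contained; the real one emits change events with the same idea):&lt;/p&gt;

```javascript
// Minimal stand-in for ObservableArray: notifies listeners on add/remove.
class TinyObservableArray {
  constructor() {
    this.items = [];
    this.listeners = [];
  }
  on(cb) { this.listeners.push(cb); }
  notify() { this.listeners.forEach((l) => l()); }
  push(item) { this.items.push(item); this.notify(); }
  pop() { const x = this.items.pop(); this.notify(); return x; }
  get length() { return this.items.length; }
}

// The view-model keeps a plain boolean the XML can bind to, updated on
// every change event instead of binding to `.length` directly.
class ExpensesViewModel {
  constructor() {
    this.hasItems = false;
    this.expenses = new TinyObservableArray();
    this.expenses.on(() => { this.hasItems = this.expenses.length !== 0; });
  }
}
```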

&lt;p&gt;If the above doesn’t make a lot of sense, make sure you’ve read the section about binding in the &lt;a href="https://docs.nativescript.org/ui/basics#bindings" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;code&gt;RadSideDrawer&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The drawer is quite handy — you can put the hamburger on the top left of each screen and when you press it or when you slide from left to right, the side panel drawer appears.&lt;/p&gt;

&lt;p&gt;It’s easy to have a lot of duplicated code this way though — i.e. on each page you want a side drawer, you write essentially &lt;a href="https://github.com/telerik/nativescript-ui-samples/tree/release/sdk/app/sidedrawer/getting-started" rel="noopener noreferrer"&gt;the same code&lt;/a&gt; — the content of the drawer and then the content of the page.&lt;/p&gt;

&lt;p&gt;I extracted the code for the drawer content into a separate component. I found &lt;a href="https://moduscreate.com/blog/custom-components-in-nativescript/" rel="noopener noreferrer"&gt;this article&lt;/a&gt; really informative about how to extract NativeScript UI components and reduce duplication.&lt;/p&gt;

&lt;p&gt;Frankly, I now have the equivalent of “import the side drawer” code in most of my pages which is sort of duplication as well. As a further refactoring I plan to extend the &lt;code&gt;Page&lt;/code&gt; class to &lt;code&gt;PageWithDrawer&lt;/code&gt; and use it as the root element of my pages.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;code&gt;RadDataForm&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Oh, this one’s my favourite. It’s really handy. Essentially, you provide a plain old data object, and optionally a description of its properties, and you get a UI data form. You can register callbacks called during validation of user-entered data, and a callback for when data is committed to the input data object. You also choose whether committing the input happens immediately or only after the user has, say, pressed the Submit button.&lt;/p&gt;

&lt;p&gt;You get different field types out of the box (e.g. text, number, email, etc.) — handy because you get some auto-validation and the proper keyboard is shown.&lt;/p&gt;

&lt;p&gt;The easiest way to configure the visible fields, their type (text, number) and validators (MinLength, NonEmpty, etc.) is via XML markup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;RadDataForm.properties&amp;gt;
    &amp;lt;EntityProperty name="name" displayName="Name" index="0" /&amp;gt;
    &amp;lt;EntityProperty name="age" displayName="Age" index="1"&amp;gt;
      &amp;lt;EntityProperty.editor&amp;gt;
        &amp;lt;PropertyEditor type="Number" /&amp;gt;
      &amp;lt;/EntityProperty.editor&amp;gt;
      &amp;lt;EntityProperty.validators&amp;gt;
        &amp;lt;RangeValidator minimum="1" maximum="150" /&amp;gt;
      &amp;lt;/EntityProperty.validators&amp;gt;
    &amp;lt;/EntityProperty&amp;gt;
&amp;lt;/RadDataForm.properties&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I recommend reading &lt;strong&gt;all&lt;/strong&gt; of &lt;a href="http://docs.telerik.com/devtools/nativescript-ui/Controls/NativeScript/DataForm/dataform-overview" rel="noopener noreferrer"&gt;the docs&lt;/a&gt; on RadDataForm if you plan to use it. It’s not long and it will save you a ton of time.&lt;/p&gt;

&lt;p&gt;During development of the app, I found myself in a situation in which I needed very similar dataforms with the only difference being the action performed after the Submit is pressed. The prime example — creating a new expense or updating an existing one. From the perspective of the DataForm, in both cases the input data object has the same shape, with the difference that during updating an expense, the data object has actual data. When the user presses the Submit button of the form, a different API endpoint is called. But for the rest, the form is the same — validation, field types, etc.&lt;/p&gt;

&lt;p&gt;Thankfully, when creating data forms, instead of using XML markup to describe the properties, you can pass JSON: you pass the source data object together with a “metadata” JSON which describes the properties. Here’s how the above XML can look as JSON metadata:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {
    "name":"name",
    "displayName":"Name",
    "index":0
  },
  {
    "name":"age",
    "displayName":"Age",
    "index":1,
    "editor":"Number",
    "validators":[
      {
        "name":"RangeValidator",
        "params":{
          "minimum":1,
          "maximum":150
        }
      }
    ]
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The examples from NativeScript’s &lt;a href="https://github.com/telerik/nativescript-ui-samples/tree/release/sdk/app" rel="noopener noreferrer"&gt;ui-samples-repo&lt;/a&gt; all seem to read the metadata from a JSON file and pass it directly to the DataForm. What I did instead was to have a function which generates the JSON. This gave me a lot of flexibility when generating the metadata. There’s info in the docs on what you can and cannot do with JSON compared to XML markup.&lt;/p&gt;
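&lt;p&gt;Such a generator can be as simple as this (the &lt;code&gt;propertyAnnotations&lt;/code&gt; wrapper key and the option name are assumptions based on my usage - check the RadDataForm docs for the exact shape):&lt;/p&gt;

```javascript
// Build the RadDataForm JSON metadata programmatically so that similar
// forms (create vs. edit) can share one definition and differ only in
// the parts we parameterise. `maxAge` is a made-up illustrative option.
function expenseFormMetadata(opts) {
  opts = opts || {};
  return {
    propertyAnnotations: [
      { name: "name", displayName: "Name", index: 0 },
      {
        name: "age",
        displayName: "Age",
        index: 1,
        editor: "Number",
        validators: [
          { name: "RangeValidator", params: { minimum: 1, maximum: opts.maxAge || 150 } },
        ],
      },
    ],
  };
}
```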

&lt;p&gt;And now the gotcha! :) I spent a good couple of days chasing a bug related to RadDataForm. Essentially I was validating the form manually and committing the input to the data object manually too. After doing that, I was accessing the data object’s properties because I needed their values. The &lt;a href="https://github.com/telerik/nativescript-ui-feedback/issues/549" rel="noopener noreferrer"&gt;bug&lt;/a&gt; was that if the property was marked as a Number or Decimal in the metadata of the form, getting its value doesn’t return the number but rather some object of unknown type. The workaround is to call the &lt;code&gt;toString()&lt;/code&gt; of this object to get the string representation of the number and then convert it to a proper number. Nasty.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;i18n&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;I only found out last week that it means internationalization :)&lt;/p&gt;

&lt;p&gt;I wanted to make my app available in Bulgarian and English. I googled around for a NativeScript i18n package that is actively developed and came across — &lt;a href="https://github.com/lfabreges/nativescript-localize" rel="noopener noreferrer"&gt;nativescript-localize&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The only gotcha here: when doing debug builds it’s ok to do &lt;code&gt;L(“natural language in english”)&lt;/code&gt; and only define a translation for the string in the non-English languages — it will fall back to the English string automatically if needed. However, when doing a release build, the &lt;a href="http://www.fasteque.com/missingtranslation-issue-for-release-builds/" rel="noopener noreferrer"&gt;lint will bark&lt;/a&gt;. The article mentions some fixes, but as far as I can tell they are not applicable when using the nativescript-localize package, because the package generates the strings.xml at build time. In essence, you might want to use non-user-friendly keys and provide natural-language translations for them for each supported language.&lt;/p&gt;

&lt;p&gt;I recently found out that the popular &lt;a href="https://github.com/mashpie/i18n-node" rel="noopener noreferrer"&gt;i18next&lt;/a&gt; JavaScript package also seems to work — probably because it makes very conservative assumptions about the runtime.&lt;/p&gt;

&lt;p&gt;Using the library within TypeScript/JavaScript doesn’t deviate from the package’s suggested way. However, in NativeScript we want to translate strings in XML pages too. The gotcha is setting the app resources (i.e. making a function available within the XML). Doing it like this seems to work correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.setResources({
    ‘L’ : (…args) =&amp;gt; i18next.t.apply(i18next, args)
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enables us to do stuff like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;Label text="{{'translation for key is ' + L('key')}}"/&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;code&gt;JSON Schema&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Apparently, it’s not that easy to find a package for validating a &lt;a href="https://spacetelescope.github.io/understanding-json-schema/" rel="noopener noreferrer"&gt;JSON Schema&lt;/a&gt; in a NativeScript app.&lt;/p&gt;

&lt;p&gt;After a lot of trial and error, I found a JavaScript package that works well: &lt;a href="https://github.com/geraintluff/tv4" rel="noopener noreferrer"&gt;tv4&lt;/a&gt;. Frankly, I haven’t tried more advanced use-cases, like using schemas from different files, so I can’t say how it behaves there.&lt;/p&gt;
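&lt;p&gt;A minimal sketch of how tv4 is used (the schema here is a made-up example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const tv4 = require("tv4");

// Hypothetical schema for an "expense" object.
const schema = {
    type: "object",
    properties: {
        name: { type: "string" },
        amount: { type: "number", minimum: 0 }
    },
    required: ["name", "amount"]
};

const valid = tv4.validate({ name: "coffee", amount: 2.5 }, schema);
// On failure, tv4.error holds the first validation error.
if (!valid) {
    console.log(tv4.error.message, tv4.error.dataPath);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;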

&lt;h2&gt;
  
  
  &lt;code&gt;Mocking in tests&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;I like mocks a lot when testing, and I tried several mocking frameworks. The problem with most of them is that they assume the tests run in Node or a browser, which doesn’t hold in our case.&lt;/p&gt;

&lt;p&gt;I had success only with using the built-in mocking mechanism of Jasmine. It does what I need, but if you know a different package — I’d be super curious to check it out :)&lt;/p&gt;
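&lt;p&gt;Concretely, the built-in mechanism I mean is Jasmine spies. A rough sketch of a spec, with a made-up &lt;code&gt;api&lt;/code&gt; object (this needs a Jasmine runner to execute):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;describe("submitting an expense", function () {
    it("calls the backend with the entered data", function () {
        // Hypothetical service object; spyOn() replaces the real method
        // with a spy that records its calls.
        const api = { saveExpense: function (expense) { /* real HTTP call */ } };
        spyOn(api, "saveExpense");

        api.saveExpense({ amount: 10 });

        expect(api.saveExpense).toHaveBeenCalledWith({ amount: 10 });
    });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;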

&lt;h2&gt;
  
  
  &lt;code&gt;Misc&lt;/code&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.nativescript.org/nativescript-sidekick" rel="noopener noreferrer"&gt;SideKick&lt;/a&gt; — if you like GUIs it can be helpful. For me the greatest benefit of it are the cloud builds — it lets me build for Android/iOS from my Linux host.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/device-farm/" rel="noopener noreferrer"&gt;AWS Device Farm&lt;/a&gt; — I test my release builds of the app there. There’s a selection of iOS/Android devices. Not the most up-to-dated though. You get 1000/month within AWS’s free tier.&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Overall, I enjoyed developing the app, particularly the fact that I could use npm packages instead of reinventing the wheel for each problem. I haven’t tested the app on iOS yet (I don’t have a dev account yet), but in theory it should work with no code changes.&lt;/p&gt;

&lt;p&gt;I wish I had started using &lt;code&gt;hooks&lt;/code&gt; earlier. I use them to choose/change the app config depending on whether I’m building a development or a release app. I also use them for checks like “Is my API address in a correct format and is it an address I’ve whitelisted?”, to get the current git SHA and put it in the app config so I know the exact code the app runs, and so on.&lt;/p&gt;
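&lt;p&gt;As a rough sketch of the kind of checks a hook can do (all names here are made up, and the exact arguments the NativeScript CLI passes to a hook depend on its version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// hooks/before-prepare/check-config.js (hypothetical path)
const { execSync } = require("child_process");

// Fail the build early if the configured API address is not one we expect.
function assertWhitelisted(apiAddress, whitelist) {
    if (whitelist.indexOf(apiAddress) === -1) {
        throw new Error("API address is not whitelisted: " + apiAddress);
    }
    return apiAddress;
}

// Resolve the current commit so it can be stamped into the app config.
function currentGitSha() {
    return execSync("git rev-parse HEAD").toString().trim();
}

module.exports = function () {
    // e.g. read the app config, run assertWhitelisted() on the API
    // address, set config.gitSha = currentGitSha() and write it back.
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;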

&lt;p&gt;Thanks for reading this. If you have any suggestions or remarks, please share them — I will be grateful.&lt;/p&gt;

</description>
      <category>nativescript</category>
      <category>typescript</category>
      <category>mobile</category>
      <category>app</category>
    </item>
  </channel>
</rss>
