<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Damian Perera</title>
    <description>The latest articles on DEV Community by Damian Perera (@damianperera).</description>
    <link>https://dev.to/damianperera</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F323703%2F0a399ba2-6d57-4f66-b50e-fff667d937c3.jpeg</url>
      <title>DEV Community: Damian Perera</title>
      <link>https://dev.to/damianperera</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/damianperera"/>
    <language>en</language>
    <item>
      <title>Behaviour Driven Testing in Enterprise Applications</title>
      <dc:creator>Damian Perera</dc:creator>
      <pubDate>Wed, 20 May 2020 02:48:51 +0000</pubDate>
      <link>https://dev.to/damianperera/behaviour-driven-testing-in-enterprise-applications-59fa</link>
      <guid>https://dev.to/damianperera/behaviour-driven-testing-in-enterprise-applications-59fa</guid>
      <description>&lt;h4&gt;
  
  
  A look into reducing bug leakages in microservices shared across multiple development teams
&lt;/h4&gt;

&lt;p&gt;I work in a software organization where change is frequent, to say the least. As a developer and part of a larger team building an e-commerce platform, we’ve more or less adapted to frequent changes in project management, development methodologies, testing ideologies and of course now (given the current global pandemic), remote collaboration. This behaviour-driven test strategy was developed within our team as a way to reduce bug leakages in mission-critical systems as our organization grew to include off-shore development teams (with product owners representing different customer interests)—and in order to ensure that the quality of the systems would not be affected by evolving engineering philosophies over the years.&lt;/p&gt;

&lt;h1&gt;
  
  
  Background
&lt;/h1&gt;

&lt;p&gt;When working with an enterprise application powered by microservices you can end up with a lot of sub-systems designed to handle different parts of a user's experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sFaJMRxc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1wt4b4rah5i497wthx5o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sFaJMRxc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1wt4b4rah5i497wthx5o.png" alt="Alt Text" title="A typical e-commerce ecosystem"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In an e-commerce setting these microservices might be scoped to manage users, orders, payment processing, product recommendations, product search, etc. leading to a lot of moving parts under the hood. Depending on the design of the system, each microservice might consume a bunch of others based on a predefined flow to complete a user action.&lt;/p&gt;

&lt;p&gt;An example would be a user wanting to know the delivery ETA of an order while in the order details page. In order to generate an ETA for the delivery (as per the diagram above), the delivery microservice would need to fetch the order stored with the order management service, cross-check its location with the warehouse and logistics services, fetch driver information and estimate a delivery time depending on a GPS marker. That’s four different sub-systems working in unison to populate a label on the user’s screen with the delivery information of an order.&lt;/p&gt;
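
&lt;p&gt;To make that fan-out concrete, here is a purely illustrative sketch of how such a delivery microservice might orchestrate those calls (the client interfaces, class names and the naive ETA calculation below are hypothetical, not code from the actual system):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Purely illustrative sketch of the flow described above; these client interfaces and the
// delivery service class are hypothetical stand-ins, not code from the actual system.
interface OrderClient     { String warehouseIdFor(String orderId); }   // order management service
interface WarehouseClient { double[] locationOf(String warehouseId); } // warehouse service
interface LogisticsClient { String driverIdFor(String orderId); }      // logistics service
interface DriverClient    { double[] positionOf(String driverId); }    // driver / GPS information

class DeliveryEtaService {
    private final OrderClient orders;
    private final WarehouseClient warehouses;
    private final LogisticsClient logistics;
    private final DriverClient drivers;

    DeliveryEtaService(OrderClient orders, WarehouseClient warehouses,
                       LogisticsClient logistics, DriverClient drivers) {
        this.orders = orders;
        this.warehouses = warehouses;
        this.logistics = logistics;
        this.drivers = drivers;
    }

    // One label on the order details page fans out to four upstream systems
    String etaFor(String orderId) {
        String warehouseId = orders.warehouseIdFor(orderId);         // 1. order management service
        double[] origin = warehouses.locationOf(warehouseId);        // 2. warehouse location
        String driverId = logistics.driverIdFor(orderId);            // 3. logistics service
        double[] driver = drivers.positionOf(driverId);              // 4. driver GPS marker
        long minutes = Math.round(distanceKm(origin, driver) / 0.5); // naive estimate at 30 km/h
        return "Arriving in ~" + minutes + " minutes";
    }

    private static double distanceKm(double[] a, double[] b) {
        // Rough planar approximation (1 degree is roughly 111 km), good enough for a sketch
        double dLat = (a[0] - b[0]) * 111.0;
        double dLon = (a[1] - b[1]) * 111.0;
        return Math.sqrt(dLat * dLat + dLon * dLon);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;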

&lt;h1&gt;
  
  
  The Problem
&lt;/h1&gt;

&lt;p&gt;As your product grows, so will the requirements of your target users and, by extension, the demands of your product owner. This is when our software organization started to expand, breaking off into smaller specialized business units (or verticals), each focused on developing one part of a user's experience, like tracking the delivery of an order, rather than taking tickets from a common project backlog in a round-robin manner.&lt;/p&gt;

&lt;p&gt;When you have teams working on different aspects of a customer’s experience you tend to get new features that require changes across many sub-systems — and having many developers building multiple features across the entire stack at the same time is a really big risk if you don’t have a proper test suite.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2lwIElS5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k38sjj11meo3nsr0pgfe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2lwIElS5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k38sjj11meo3nsr0pgfe.png" alt="Alt Text" title="When should I mock? via StackOverflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a unit test suite, the problem with mocking every dependency is that when you change a method’s behaviour, the tests of other methods in its call hierarchy that have already mocked it keep passing, so you won’t know whether functionality outside your own change has been broken. This becomes a significant problem for developers (new and old alike) since it’s difficult to comprehend the entire scope affected by a changeset.&lt;/p&gt;

&lt;p&gt;You might end up breaking an already working flow belonging to a team that handles order placement when implementing a feature related to order tracking in the order management service.&lt;/p&gt;
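
&lt;p&gt;As a contrived sketch (not our actual code), consider the test below. &lt;code&gt;OrderValidator&lt;/code&gt; is mocked, so even if its real implementation changes, say to start rejecting orders without a delivery address, this test keeps passing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Contrived sketch using JUnit 5 and Mockito; Order, OrderValidator and
// OrderPlacementService are hypothetical classes used only for illustration.
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class OrderPlacementServiceTest {

    @Test
    void placesOrderWhenValidatorApproves() {
        // The validator's behaviour is frozen here, regardless of what the real class now does
        OrderValidator validator = mock(OrderValidator.class);
        when(validator.isValid(any(Order.class))).thenReturn(true);

        OrderPlacementService service = new OrderPlacementService(validator);

        // Still green even if the real validator would now reject this order
        assertTrue(service.place(new Order("order-1", /* deliveryAddress */ null)));
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;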

&lt;h3&gt;
  
  
  But isn’t that why we have Integration Tests?
&lt;/h3&gt;

&lt;p&gt;True. But once you’ve included tests for every edge-case, bug, and race-condition in addition to the existing cases in your integration test suite, your test pyramid is going to look a lot like the one we had.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PJQDYNpn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2y3kw0w4alkuqns3rosw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PJQDYNpn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2y3kw0w4alkuqns3rosw.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although we had more than 80% coverage in all our unit test suites, the reason it looks more like an ice-cream cone than a pyramid is that we added so many integration and E2E test cases that, towards the end, some integration suites would take over 60 minutes to complete — that’s 1 hour wasted by a developer waiting to know whether their code change broke another feature in the service. And with the suite running on a CI server, it’s still going to be difficult to understand the entire scope of the broken tests and narrow down the impacted area to debug the faulty logic.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Solution
&lt;/h1&gt;

&lt;p&gt;We came up with an integration test suite that could test entire API flows within a microservice while co-existing with the unit tests and executing automatically alongside them. Each test is driven from the service method that the REST controller delegates to, while mocking only the external database and API calls.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HGE4vo3D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7kp4enjsjvcipmbjhjye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HGE4vo3D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7kp4enjsjvcipmbjhjye.png" alt="Alt Text" title="Real Flows vs Behaviour Driven Test Flows"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown above, we only mock what is outside the control of the service, i.e. the database (using TestContainers) and external API calls (using WireMock). Unlike normal integration tests, this allows us to assert not only the response object sent from the service but also the state of the database and the API requests that were sent to the stubs. We can also configure the database and stubs to return different results to the service in order to recreate successful, buggy, and edge-case flows and assert the correct behaviour in each of those scenarios.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2zPxn9ez--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/srtpy5kg5ljnjo3wmrnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2zPxn9ez--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/srtpy5kg5ljnjo3wmrnr.png" alt="Alt Text" title="BDT Assertions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since this strategy allows you to control and assert all the entry and exit points of your service, you have the ability to write test cases to verify the exact behaviour of the relevant API endpoint.&lt;/p&gt;
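
&lt;p&gt;A minimal sketch of such a test is shown below, assuming JUnit 5, Testcontainers (with the PostgreSQL module and JDBC driver) and WireMock are on the classpath; &lt;code&gt;DeliveryService&lt;/code&gt;, &lt;code&gt;DeliveryEta&lt;/code&gt; and the table names are hypothetical stand-ins for the real service code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch only: DeliveryService, DeliveryEta and the table names are hypothetical
// stand-ins; Testcontainers, WireMock and JUnit 5 are used with their real APIs.
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.getRequestedFor;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import static org.junit.jupiter.api.Assertions.assertEquals;

import com.github.tomakehurst.wiremock.WireMockServer;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;

class DeliveryServiceBehaviourTest {

    // The only things we "mock": a real database in a throwaway container and a stubbed upstream API
    static PostgreSQLContainer&lt;?&gt; database = new PostgreSQLContainer&lt;&gt;("postgres:12");
    static WireMockServer logisticsStub = new WireMockServer(8089);

    @BeforeAll
    static void start() throws Exception {
        database.start();
        logisticsStub.start();
        try (Connection conn = connect(); Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, warehouse_id TEXT)");
            stmt.execute("CREATE TABLE eta_cache (order_id TEXT, eta_minutes INT)");
        }
    }

    @AfterAll
    static void stop() {
        logisticsStub.stop();
        database.stop();
    }

    @Test
    void shouldReturnDeliveryEtaForDispatchedOrder() throws Exception {
        // Arrange: seed the real (containerised) database and stub the upstream logistics call
        try (Connection conn = connect(); Statement stmt = conn.createStatement()) {
            stmt.execute("INSERT INTO orders VALUES ('order-1', 'warehouse-7')");
        }
        logisticsStub.stubFor(get(urlEqualTo("/driver/order-1"))
                .willReturn(aResponse().withStatus(200).withBody("{\"lat\":51.5,\"lon\":-0.12}")));

        // Act: invoke the same service method the REST controller delegates to (hypothetical class)
        DeliveryService service = new DeliveryService(database.getJdbcUrl(),
                database.getUsername(), database.getPassword(), logisticsStub.baseUrl());
        DeliveryEta eta = service.getDeliveryEta("order-1");

        // Assert 1: the response object returned to the controller
        assertEquals("order-1", eta.orderId());

        // Assert 2: the state the service left behind in the database (hypothetical eta_cache table)
        try (Connection conn = connect(); Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM eta_cache WHERE order_id = 'order-1'")) {
            rs.next();
            assertEquals(1, rs.getInt(1));
        }

        // Assert 3: the exact request that went out to the stubbed upstream service
        logisticsStub.verify(getRequestedFor(urlEqualTo("/driver/order-1")));
    }

    private static Connection connect() throws Exception {
        return DriverManager.getConnection(database.getJdbcUrl(), database.getUsername(), database.getPassword());
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because the stub and the containerised database live entirely inside the test, the same setup can flip the stub to return errors or timeouts and assert the degraded behaviour instead.&lt;/p&gt;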

&lt;h3&gt;
  
  
  Naming Conventions
&lt;/h3&gt;

&lt;p&gt;This is where BDT gets its name. Remember that you are testing a complete flow from the viewpoint of a downstream service. A test case for the previous example, which returns the delivery ETA of an order (&lt;code&gt;GET /order/:orderId/location&lt;/code&gt;) and also needs to simulate the failure of an upstream service, would look like the one shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kQjAtwac--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/28zbso3i2vskfuvxqira.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kQjAtwac--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/28zbso3i2vskfuvxqira.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, the name of the test case should represent the expectation of the service that consumes that method via an API invocation, as per the contract. We also try to make sure that the name clearly identifies the test scope, so that a developer can look at a test case and immediately know what behaviour is expected.&lt;/p&gt;
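
&lt;p&gt;As a purely illustrative continuation of the earlier sketch (the actual test in the screenshot above differs), a behaviour-named case for that upstream-failure scenario might read like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative only - the method name encodes the behaviour the consuming service relies on,
// and the upstream failure is simulated purely through the WireMock stub.
@Test
void getOrderLocation_shouldStillReturnAnEta_whenLogisticsServiceIsUnavailable() {
    logisticsStub.stubFor(get(urlEqualTo("/driver/order-1"))
            .willReturn(aResponse().withStatus(503))); // upstream logistics service is down

    DeliveryService service = new DeliveryService(database.getJdbcUrl(),
            database.getUsername(), database.getPassword(), logisticsStub.baseUrl());
    DeliveryEta eta = service.getDeliveryEta("order-1");

    // Assert whatever the contract promises consumers in this scenario,
    // e.g. a cached or fallback ETA rather than a propagated error
    assertEquals("order-1", eta.orderId());
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;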

&lt;h1&gt;
  
  
  The Results
&lt;/h1&gt;

&lt;p&gt;Since these new tests execute along with the existing unit tests, and since the database and external APIs are mocked, we’ve seen around 1500 test cases completing within a couple of minutes — that’s around 95% faster than what our previous integration test suite running on a CI server would have taken while more or less covering the same test cases.&lt;/p&gt;

&lt;p&gt;While we didn’t completely flip around our test ice-cream cone, we were able to grow our unit test layer to include many test cases that would otherwise have required real database and third-party service calls, leaving our integration and E2E test suites to cover only the most vital flows (the happy and bad paths).&lt;/p&gt;

&lt;p&gt;Now if a developer working from anywhere in the world was building a feature or fixing a bug and inadvertently broke any established API behaviours, they would see the exact impact of the change in their local environment itself and fix it without waiting for a quality gate failure or a bug reported in a cloud environment — which significantly reduces the number of bug leakages as well as the time to completely release a ticket.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;: The testing strategy described here is not meant to be a replacement for unit tests.&lt;/p&gt;

</description>
      <category>java</category>
      <category>microservices</category>
      <category>testing</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Why Docker?</title>
      <dc:creator>Damian Perera</dc:creator>
      <pubDate>Thu, 20 Feb 2020 03:28:24 +0000</pubDate>
      <link>https://dev.to/damianperera/why-docker-ggd</link>
      <guid>https://dev.to/damianperera/why-docker-ggd</guid>
      <description>&lt;h5&gt;
  
  
  Learn how Docker is transforming the way we code
&lt;/h5&gt;

&lt;p&gt;Ever since Docker went live in early 2013 it’s had a love-hate relationship with programmers and sysadmins. While some ‘experienced’ developers that I’ve talked to have a strong dislike for containerization in general (more on that later), there’s a reason why a lot of major companies including eBay, Twitter, Spotify and Lyft have reportedly adopted Docker in their production environments.&lt;/p&gt;

&lt;h1&gt;
  
  
  So what exactly does Docker do?
&lt;/h1&gt;

&lt;p&gt;Ever worked with VMware, VirtualBox, Parallels or any other virtualization software? Well, Docker’s pretty much the same (albeit without a fancy GUI): it creates a virtual machine with an operating system of your choice, bundled with only your web application and its dependencies.&lt;/p&gt;

&lt;h1&gt;
  
  
  But aren’t virtual machines slow?
&lt;/h1&gt;

&lt;p&gt;Virtualization is what drives the cloud computing revolution, and I like to call Docker the last step of virtualization which actually executes the business logic that you’ve developed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hhDYobHI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bble3b925qg1t0mrcbft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hhDYobHI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bble3b925qg1t0mrcbft.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But you’re right — typical virtual machines are slow, and what Docker does cannot be entirely categorized as virtualization. Instead, Docker provides an abstraction on top of the kernel’s support for different process namespaces, device namespaces, etc. by using &lt;a href="https://github.com/opencontainers/runc"&gt;runc&lt;/a&gt; (maintained by the &lt;a href="https://www.opencontainers.org/"&gt;Open Containers Initiative&lt;/a&gt;), which allows it to share a lot of the host system’s resources. Since there isn’t an additional virtualization layer between the Docker container and the host machine’s kernel, a container manages to provide nearly identical performance to your host.&lt;/p&gt;

&lt;p&gt;A fully virtualized system gets its own set of resources allocated to it and does minimal sharing (if any) which results in more isolation, but it’s heavier (requiring more resources) — however, with Docker, you get less isolation but the containers are pretty lightweight (requiring fewer resources).&lt;/p&gt;

&lt;p&gt;If you need to run a system where you absolutely require full isolation with guaranteed resources (e.g. a gaming server) then a virtual machine based on KVM or OpenVZ is probably the way to go. But, if you just want to isolate separate processes from each other and run a bunch of them on a reasonably sized host without breaking the bank, then Docker is for you.&lt;br&gt;
If you want to learn more about the performance aspects of running a containerized system, here’s a great research paper from IBM that does a sound comparison of virtual machines and containers: &lt;a href="http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/%24File/rc25482.pdf"&gt;An Updated Performance Comparison of Virtual Machines and Linux Containers&lt;/a&gt; (Felter et al., 2014).&lt;/p&gt;
&lt;h1&gt;
  
  
  Can’t I simply upload my application straight on to a bunch of cloud servers?
&lt;/h1&gt;

&lt;p&gt;Well you can, if you don’t care about stuff like infrastructure, environment consistency, scalability or availability.&lt;/p&gt;

&lt;p&gt;Imagine for a moment the following scenario: you manage a dozen Java services and deploy them on separate servers running Ubuntu with Java 8 for your Dev, QA, Staging and Production environments. Even if you haven’t made your applications highly available that’s a minimum of 48 servers that you need to manage (12 services x 4 environments).&lt;/p&gt;

&lt;p&gt;Now imagine your team spearheads an organization-wide policy requiring you to upgrade your runtimes to Java 11. That’s 48 servers that you need to log in to and manually update. Even using tools like Chef or Puppet, that’s a lot of work.&lt;/p&gt;
&lt;h1&gt;
  
  
  Here’s a simpler solution
&lt;/h1&gt;

&lt;p&gt;Docker lets you create a snapshot of the operating system you need and install only the required dependencies on it. One aspect of this that I love is that you control exactly how much ‘bloatware’ (if any) ends up in the image. You could use a minimal installation of Linux (I recommend Alpine Linux, although for the purpose of this article I’ll continue with Ubuntu) and install only Java 8 on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--URqLEYD6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8x0ayxxuop6lbgwcuq5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--URqLEYD6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8x0ayxxuop6lbgwcuq5t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the time comes to update, simply edit your Java image’s Dockerfile to use Java 11, build and push it to a container repository (like Docker Hub or Amazon ECR), after which all you need to do is change your application containers’ base-image tag to reference the new snapshot and re-deploy them.&lt;/p&gt;



&lt;p&gt;Here's a &lt;a href="https://gist.github.com/damianperera/24da4e626681e7c7c027b28585d058dd#file-dockerfile"&gt;Gist&lt;/a&gt; of a sample Docker container built on top of the Ubuntu 18.04 minimal operating system.&lt;/p&gt;

&lt;p&gt;I would build and push this image to the Docker Hub account &lt;code&gt;damian&lt;/code&gt; using the tag &lt;code&gt;oracle-jdk-ubuntu-18.04:1.8.0_191&lt;/code&gt; and then use it to build another container for my services to run on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Instructs Docker to build this container on top of this snapshot
FROM damian/oracle-jdk-ubuntu-18.04:1.8.0_191

# Copies the application JAR into the container
COPY build/hello-world.jar hello-world.jar

# Executes this command when the container starts
CMD java -jar hello-world.jar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if I needed to update my services to Java 11, all I need to do is publish a new version of my Java snapshot with a compatible JRE installed, and update the tag in the FROM declaration of each service’s Dockerfile, instructing the container to use the new base image. Voila, with your next deployment you’ll have all your services up-to-date with the latest updates from Ubuntu and Java.&lt;/p&gt;

&lt;h1&gt;
  
  
  But how would this help me during development?
&lt;/h1&gt;

&lt;p&gt;Good question.&lt;/p&gt;

&lt;p&gt;I recently started using Docker in unit tests. Imagine you’ve got thousands of test cases (and if you do, believe me, I feel your pain) which connect to a database, where each test class needs a fresh copy of the database and its individual test cases perform CRUD operations on the data. Normally one would reset the database after each test using something like &lt;a href="https://flywaydb.org/"&gt;Flyway&lt;/a&gt; by Redgate, but this means that your tests would have to run sequentially and would take a lot of time (I’ve seen unit test suites that take as long as 20 minutes to complete because of this).&lt;/p&gt;

&lt;p&gt;With Docker, you could easily create an image of your database (I recommend &lt;a href="https://www.testcontainers.org/"&gt;TestContainers&lt;/a&gt;), run a database instance per test class inside a container, and then run your entire test suite in parallel. Since the test classes are linked to separate databases, they can all run on the same host at the same time and finish in a flash (assuming your CPU can handle it).&lt;/p&gt;
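
&lt;p&gt;Here is a minimal sketch of that setup using JUnit 5 and Testcontainers (the schema is a placeholder, and the PostgreSQL JDBC driver is assumed to be on the classpath); enabling JUnit’s parallel execution then lets every test class spin up its own database container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Each test class gets its own throwaway PostgreSQL container; a fresh schema is created once
// for the class and destroyed with the container when the class finishes.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;

class OrderRepositoryTest {

    static PostgreSQLContainer&lt;?&gt; db = new PostgreSQLContainer&lt;&gt;("postgres:12");

    @BeforeAll
    static void setUp() throws Exception {
        db.start();
        try (Connection conn = connect(); Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE orders (id TEXT PRIMARY KEY)"); // placeholder schema
        }
    }

    @AfterAll
    static void tearDown() {
        db.stop();
    }

    @Test
    void insertsAndReadsAnOrder() throws Exception {
        try (Connection conn = connect(); Statement stmt = conn.createStatement()) {
            stmt.execute("INSERT INTO orders VALUES ('order-1')");
            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
                rs.next();
                assertEquals(1, rs.getInt(1));
            }
        }
    }

    private static Connection connect() throws Exception {
        return DriverManager.getConnection(db.getJdbcUrl(), db.getUsername(), db.getPassword());
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With JUnit 5’s parallel execution switched on (e.g. &lt;code&gt;junit.jupiter.execution.parallel.enabled=true&lt;/code&gt; and the class execution mode set to concurrent in &lt;code&gt;junit-platform.properties&lt;/code&gt;), classes like this run side by side, each against its own throwaway database.&lt;/p&gt;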

&lt;p&gt;Another place I find myself using Docker is when coding in Golang (whose configuration and dependency management I find to be messy) — instead of directly installing Go on my development machine, I follow a method similar to &lt;a href="https://levelup.gitconnected.com/how-to-live-reload-code-for-golang-and-docker-without-third-parties-ee90721ef641"&gt;Konstantin Darutkin&lt;/a&gt;’s, maintaining a Dockerfile with my Go installation and dependencies, configured to live-reload my project when I make a change to a source file.&lt;/p&gt;

&lt;p&gt;This way, since my project and its Dockerfile are version-controlled, if I ever need to change or reformat my development machine, all I need to do is reinstall Docker to continue from where I left off.&lt;/p&gt;

&lt;h1&gt;
  
  
  To sum up…
&lt;/h1&gt;

&lt;p&gt;If you are a startup undecided on what’s going to power your new tech stack, or an established service provider thinking of containerizing your Prod and NonProd environments but fear sailing on ‘untested’ waters (smirk), consider for a moment the following.&lt;/p&gt;

&lt;h3&gt;
  
  
  Consistency
&lt;/h3&gt;

&lt;p&gt;You might have the best developers in the entire industry, but with all the different operating systems out in the wild, everyone prefers their own setup. If you’ve got your local environment properly configured with Docker, all a new developer needs to do is install it, spawn a container with your application and kick off.&lt;/p&gt;

&lt;h3&gt;
  
  
  Debugging
&lt;/h3&gt;

&lt;p&gt;You can easily isolate and eliminate issues with the environment across your team without needing to know how each machine is set up. A good example of this is when we once had to fix some time synchronization issues on our servers by migrating from ntpd to Chrony — and all we did was update our base image, with our developers being none the wiser.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automation
&lt;/h3&gt;

&lt;p&gt;Most CI/CD tools including Jenkins, CircleCI, TravisCI etc. are now fully integrated with Docker, which makes propagating your changes from environment to environment a breeze.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud Support
&lt;/h3&gt;

&lt;p&gt;Containers need to be monitored and controlled, or else you will have no idea what’s running on your servers. DataDog, a cloud-monitoring company, had this to say about Docker:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Containers’ short lifetimes and increased density have significant implications for infrastructure monitoring. They represent an order-of-magnitude increase in the number of things that need to be individually monitored.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The solution to this monstrous endeavour is available in self-managed cloud orchestration tools such as &lt;a href="https://docs.docker.com/engine/swarm/"&gt;Docker Swarm&lt;/a&gt; and &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; as well as vendor-managed tools such as AWS’s &lt;a href="https://aws.amazon.com/ecs/"&gt;Elastic Container Service&lt;/a&gt; and the &lt;a href="https://cloud.google.com/kubernetes-engine/"&gt;Google Kubernetes Engine&lt;/a&gt; which monitor and manage container clustering and scheduling.&lt;/p&gt;




&lt;p&gt;With the widespread use of Docker and its tight integration with cloud service providers like AWS and Google Cloud, it’s quickly becoming a no-brainer to dockerize your new or existing application.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>programming</category>
      <category>devops</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
