<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karl Eriksson</title>
    <description>The latest articles on DEV Community by Karl Eriksson (@keaeriksson).</description>
    <link>https://dev.to/keaeriksson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F333323%2F54a91dbc-c189-4205-b786-b14633854087.png</url>
      <title>DEV Community: Karl Eriksson</title>
      <link>https://dev.to/keaeriksson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/keaeriksson"/>
    <language>en</language>
    <item>
      <title>Scraping HackerNews with GPT-4</title>
      <dc:creator>Karl Eriksson</dc:creator>
      <pubDate>Fri, 03 Nov 2023 10:34:46 +0000</pubDate>
      <link>https://dev.to/keaeriksson/scraping-hackernews-with-chatgpt-240c</link>
      <guid>https://dev.to/keaeriksson/scraping-hackernews-with-chatgpt-240c</guid>
      <description>&lt;p&gt;I wanted to share a project I recently created - An automated scraper for that can scrape any website and store the text content as JSON using GPT. I thought this might be helpful for anyone interested in scraping data or working with APIs.&lt;/p&gt;

&lt;h2&gt;
  Step 1: Tech Stack
&lt;/h2&gt;

&lt;p&gt;After exploring various ways to achieve this, I opted for a no-code solution. In the end I chose the no-code platform &lt;a href="https://clevis.app"&gt;Clevis&lt;/a&gt; to cobble together the required steps and automate the process by running it on a daily schedule.&lt;/p&gt;

&lt;h2&gt;
  Step 2: Scraping the content
&lt;/h2&gt;

&lt;p&gt;By using an HTTP Request step in Clevis, I can make a GET request to any website and scrape the text content.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PVqaaqEL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qd25e1fuzuj44sydushd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PVqaaqEL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qd25e1fuzuj44sydushd.png" alt="HTTP Request" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Step 3: Using ChatGPT to parse the content as JSON
&lt;/h2&gt;

&lt;p&gt;Next, I use the scraped text from HackerNews and prompt ChatGPT to create a JSON object with a schema that I provide in the prompt. In this screenshot, the scraped text is referenced as &lt;code&gt;steps.scrape.output&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pq1vD4Mt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tahzzycjxsa0wownsmnp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pq1vD4Mt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tahzzycjxsa0wownsmnp.png" alt="ChatGPT" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Step 4: Storing the result
&lt;/h2&gt;

&lt;p&gt;Now that ChatGPT has provided me with the result, I can store it in my own database with another HTTP Request step that calls an API I built.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FdZZKW-I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rztfkl57x40plnvn1vwf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FdZZKW-I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rztfkl57x40plnvn1vwf.png" alt="Workflow" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Step 5: Scheduling
&lt;/h2&gt;

&lt;p&gt;By enabling a schedule in Clevis, I can have this run daily to store the top HackerNews posts for later curation.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Personalized Newsletters Using AI</title>
      <dc:creator>Karl Eriksson</dc:creator>
      <pubDate>Thu, 26 Oct 2023 15:50:33 +0000</pubDate>
      <link>https://dev.to/keaeriksson/personalized-newsletters-using-ai-5dpp</link>
      <guid>https://dev.to/keaeriksson/personalized-newsletters-using-ai-5dpp</guid>
      <description>&lt;p&gt;I recently explored how to build my own personalized newsletter to avoid having to sift through multiple news websites every day. The app that I created fetches the latest news about a subject of my choosing from a news API, summarizes and filters the data with the help of ChatGPT, and sends it to my email. In this post, I'll walk you through how I accomplished this project.&lt;/p&gt;

&lt;h2&gt;
  Step 1: Tech Stack
&lt;/h2&gt;

&lt;p&gt;After exploring various ways to achieve this, I opted for a no-code solution. In the end I chose the no-code platform &lt;a href="https://clevis.app"&gt;Clevis&lt;/a&gt; to cobble together the required steps and automate the daily email sendout.&lt;/p&gt;

&lt;h2&gt;
  Step 2: Data Gathering
&lt;/h2&gt;

&lt;p&gt;I started by finding a suitable news API. There are several options available, but I settled on one that provides up-to-date information about startups - NewsAPI.org. The API offered a range of endpoints and parameters to customize my data retrieval. In Clevis, I was able to call this API to fetch data about any subject for the current day.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--up4ErFNO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/burcjmp28rrf9zsxuysh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--up4ErFNO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/burcjmp28rrf9zsxuysh.png" alt="Image description" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Step 3: ChatGPT Integration
&lt;/h2&gt;

&lt;p&gt;In Clevis, I took the result of the NewsAPI.org call and prompted ChatGPT to summarize the news in the form of an HTML-formatted newsletter.&lt;/p&gt;

&lt;p&gt;Here is the prompt I used, which works surprisingly well given its simplicity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IbgoJaup--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ogz2gsjswid0dr2ztj4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IbgoJaup--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ogz2gsjswid0dr2ztj4z.png" alt="Image description" width="800" height="886"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Step 4: Sending the email
&lt;/h2&gt;

&lt;p&gt;Clevis comes with built-in functionality for sending emails, so I simply configured it to send the HTML-formatted email that ChatGPT produced for me. Here is an example of how the email looked. ChatGPT even included images from the API response. I believe the formatting could be improved further, either by tweaking the prompt or by using my own email template.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q6dGkiW---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8pywbhbtq30zvf46p5q4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q6dGkiW---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8pywbhbtq30zvf46p5q4.png" alt="Image description" width="800" height="937"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is how the finished app looks in the Clevis editor:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CCXs34_g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94aiyrzk5799yj45euq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CCXs34_g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94aiyrzk5799yj45euq4.png" alt="Image description" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Going forward
&lt;/h2&gt;

&lt;p&gt;I believe I will continue to iterate on this project to use data from multiple news sources and websites. One challenge is the token limit of ChatGPT, which I may need to find a way around.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>CI/CD for SaaS Products</title>
      <dc:creator>Karl Eriksson</dc:creator>
      <pubDate>Sun, 16 Oct 2022 13:39:31 +0000</pubDate>
      <link>https://dev.to/keaeriksson/cicd-for-saas-products-52hh</link>
      <guid>https://dev.to/keaeriksson/cicd-for-saas-products-52hh</guid>
      <description>&lt;p&gt;Volca is a SaaS template that comes with a smooth CI/CD setup powered by GitHub Actions out of the box. Ship features faster for your SaaS using an automated deployment strategy.&lt;/p&gt;

&lt;p&gt;Learn how we designed our CI/CD setup with the goal of shipping changes in a fast, reliable and developer friendly way.&lt;/p&gt;

&lt;h2&gt;
  Branching strategy
&lt;/h2&gt;

&lt;p&gt;The first step in designing a CI/CD setup for your SaaS is to define a branching strategy. There are many ways of working with branches to make sure you work in a reliable and developer-friendly way. Building Volca, we have focused on simplicity and developer experience while maintaining a reliable deployment flow. That is why we went with a trunk-based strategy where a single branch, the &lt;code&gt;main&lt;/code&gt; branch, is the one that all developers branch off from.&lt;/p&gt;

&lt;h3&gt;
  Environments
&lt;/h3&gt;

&lt;p&gt;To be able to test your code before it reaches your customers, you need isolated environments in which different versions of your code are running.&lt;/p&gt;

&lt;p&gt;In complex enterprise setups, there might be a large number of environments. When building a SaaS from scratch however, you need to keep things simple to be able to ship new features fast. That is why Volca comes with two environments with automated deployments out of the box: &lt;code&gt;staging&lt;/code&gt; and &lt;code&gt;production&lt;/code&gt;. In addition, developers have their own &lt;code&gt;local&lt;/code&gt; environments where code is tested during development.&lt;/p&gt;

&lt;h4&gt;
  Local
&lt;/h4&gt;

&lt;p&gt;When running a Volca-backed application on your machine, you make constant changes to the code which are instantly reflected in the local environment. Once a developer decides their changes are ready to ship, they are pushed to a feature branch and a pull request is created towards the &lt;code&gt;main&lt;/code&gt; branch. Other developers (or just you, if you are a solo founder) can then review the changes and finally merge them to the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;

&lt;h4&gt;
  Staging
&lt;/h4&gt;

&lt;p&gt;Once the changes have been merged to the &lt;code&gt;main&lt;/code&gt; branch, the updated code will be deployed to the &lt;code&gt;staging&lt;/code&gt; environment. Here, you can test your code running in a similar environment to &lt;code&gt;production&lt;/code&gt; and make sure nothing breaks. It is recommended to test all important features and make sure they work as expected before moving to &lt;code&gt;production&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  Production
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;production&lt;/code&gt; environment is the one that your customers use and it is critical that it is fully functional at all times. This environment is deployed by manually triggering a deploy in the GitHub interface. This is to make sure you do not ship code that has not yet been tested in the &lt;code&gt;staging&lt;/code&gt; environment.&lt;/p&gt;

&lt;h3&gt;
  Workflow
&lt;/h3&gt;

&lt;p&gt;As a developer, more branches to switch between means a higher risk of something going wrong and time wasted. That is why we chose to use a single branch for all development: the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When building a feature, developers branch off from the &lt;code&gt;main&lt;/code&gt; branch and run the application locally.&lt;/li&gt;
&lt;li&gt;Once the feature is ready to merge, the developer creates a PR towards the &lt;code&gt;main&lt;/code&gt; branch&lt;/li&gt;
&lt;li&gt;When the PR has been merged, a GitHub Action is triggered that deploys to the &lt;code&gt;staging&lt;/code&gt; environment&lt;/li&gt;
&lt;li&gt;Once the feature has been tested in &lt;code&gt;staging&lt;/code&gt;, a deployment to &lt;code&gt;production&lt;/code&gt; can be triggered manually&lt;/li&gt;
&lt;li&gt;Once a production deployment is triggered, a tag is created for the commit that was deployed into production&lt;/li&gt;
&lt;/ul&gt;
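&lt;p&gt;The two automated triggers in this flow map naturally onto GitHub Actions events. A minimal workflow sketch (the job body and script path are illustrative, not Volca's actual pipeline):&lt;/p&gt;

```yaml
name: deploy
on:
  push:
    branches: [main]       # merge to main deploys to staging
  workflow_dispatch: {}    # manual trigger in the GitHub UI deploys to production
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        run: ./scripts/deploy.sh    # illustrative deploy script
```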

</description>
      <category>saas</category>
      <category>cicd</category>
      <category>programming</category>
    </item>
    <item>
      <title>How We Used Mock APIs to Supercharge Our Microservice Testing</title>
      <dc:creator>Karl Eriksson</dc:creator>
      <pubDate>Fri, 04 Sep 2020 22:09:35 +0000</pubDate>
      <link>https://dev.to/keaeriksson/how-we-used-mock-apis-to-supercharge-our-microservice-testing-4h2d</link>
      <guid>https://dev.to/keaeriksson/how-we-used-mock-apis-to-supercharge-our-microservice-testing-4h2d</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This is a case study from an earlier project that we developed for a client. Some details have been left out or modified for confidentiality.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  Introduction
&lt;/h2&gt;

&lt;p&gt;After a long time developing and managing a huge monolithic application, we decided to travel down the inevitable route of breaking it down into small, independent microservices. This came with quite a few challenges, many of them related to testing. We no longer managed one large application but many small ones, each of which had to be deployed and tested individually.&lt;/p&gt;

&lt;p&gt;In this article we will describe our microservice journey and what we did to accelerate our testing strategy by leveraging mock APIs.&lt;/p&gt;

&lt;h3&gt;
  Our Architecture 🗺️
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Farchitecture.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Farchitecture.svg" title="Monolith Architecture" alt="Monolith Architecture"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Monolithic architecture with a single application using loads of different AWS services such as Lambda, API Gateway, DynamoDB, Kinesis, ElasticSearch and Cognito, all deployed through CloudFormation.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The legacy architecture that we were moving away from was in fact not really ancient. We were deploying everything in AWS using infrastructure as code (AWS CloudFormation), we were running serverless functions (AWS Lambda) instead of managing VMs or container clusters and from the outside things looked pretty good. However, when you dug into the code and saw all of the entities and different services within you quickly understood that we had created a huge monolith with code dependencies that looked like a spider web across the different parts of the application.&lt;/p&gt;

&lt;p&gt;The main challenges with this architecture were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time-consuming deployments:&lt;/strong&gt; A release took a lot of time and required careful monitoring to make sure nothing broke. We needed to sync everything up and deploy during a specific time window to leave room for corrections. Many precious developer hours went into monitoring and fixing release issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer experience:&lt;/strong&gt; Since the code base had grown so large, it was very hard to modify the existing code without knowing about the dependencies between the different parts of the application. Running the application locally was a strict no-go, and deploying to the cloud to test your code could take up to 20 minutes because of the ever-increasing number of resources we had to deploy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing:&lt;/strong&gt; For a developed feature to be merged into a release branch, all tests for the entire monolith had to pass. These tests were time-consuming and could take up to 30 minutes to complete. Waiting for the tests to pass after a small change lowered development and review speed. This is what we will focus on for the rest of the article.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Microservices were an obvious choice for us since we could...&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have small, fast and independent deployments of each microservice&lt;/li&gt;
&lt;li&gt;Split the monolith into smaller, graspable code repositories with a clear purpose that developers could understand more quickly&lt;/li&gt;
&lt;li&gt;Test each service individually with small test suites that finish faster. No need to run the entire suite before releasing a microservice.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  Our Integration Test Strategy 📈
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Ftesting.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Ftesting.svg" title="Test Strategy" alt="Integration Test Strategy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Our testing workflow which in theory was functioning well but in practice was hugely time consuming.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One large pain point was the time-consuming integration tests that had to be run before a new feature could be merged to a release branch. These tests ran HTTP requests towards our REST API and made sure that the response was OK for each endpoint. In the monolith days, we would spin up the entire monolith for the feature branch and run the tests each night. This would then mark the PR as approved in GitHub.&lt;/p&gt;

&lt;p&gt;That was quite time-consuming because of the HTTP overhead and all of the test data setup required to run the requests. We decided to trim the tests down to cover only the happy path. Unfortunately, that sometimes resulted in issues with faulty error messages and requests with non-standard parameters that did not respond according to the specification.&lt;/p&gt;

&lt;p&gt;All in all, while we were not happy with the performance of these tests, we liked the flow itself. That is why we set out to improve the performance of the tests while keeping the workflow intact in our microservice architecture.&lt;/p&gt;

&lt;h3&gt;
  The Breakout 💥
&lt;/h3&gt;

&lt;p&gt;For each new feature that we developed in the monolith, we formed a strategy on how to break out the related application component into a new microservice. Once a developer picked up the ticket, they started developing a new service in a new repository with its own deployment pipeline and no shared code with the monolith. This was fairly time-consuming but helped us keep up with regular feature development in parallel with the microservice transformation work.&lt;/p&gt;

&lt;h2&gt;
  How We Supercharged our Testing ⚡
&lt;/h2&gt;

&lt;p&gt;With our old way of running tests, we had only one option for the integration tests: once a PR is created, spin up all our microservices, run the tests on all of them to make sure that they work together, and then spin them down again.&lt;/p&gt;

&lt;p&gt;However, this would make the tests even more time-consuming. Deploying all of the individual services would be more complex and slower than deploying a single monolith instance. After some thinking, we concluded that there is no actual reason to deploy all services when the change is introduced in only one of them. However, sometimes the service under test had dependencies on other services. So what to do then? Deploy them anyway?&lt;/p&gt;

&lt;p&gt;We researched different solutions to this problem and started investigating using mock APIs instead of the real services. The philosophy behind this was that as long as the external services respond correctly, there is no need for them to be real deployed services. This way we did not have to wait for the external dependencies to deploy, and we wouldn't even have to pay for their infrastructure. We could also control the response structure to catch corner cases and make sure that we had good test data. All this without having to spend time and resources to set it up programmatically before running the tests.&lt;/p&gt;

&lt;p&gt;Running mock APIs for the external dependencies also meant that we could run the entire stack locally in a very lightweight fashion. Sometimes you see teams running Docker containers with 40 microservices that are really heavy and tedious to get running. When using lightweight mock APIs without any real infrastructure at all we could run everything locally easily even on weaker workstations. By tweaking the responses during development we could also test edge cases and run the integration tests locally which sped up the feedback loop considerably.&lt;/p&gt;

&lt;h3&gt;
  Service Tests vs. End-to-End Tests ⚖️
&lt;/h3&gt;

&lt;p&gt;One challenge we had was deciding what to mock and what not to mock. We split the tests up into two different suites. The first suite we called service tests. These had all external dependencies completely mocked and were required to pass before a new feature was merged. This made sure that the test subject worked when the external services were happily returning the data that we expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Fservice_test.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Fservice_test.svg" title="Service Test" alt="Service Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Illustration of what we call "Service Tests"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;End-to-end tests were the other tier and ran as soon as a feature branch was merged to develop. Here we had a dedicated environment up and running with all the real services fully integrated. If these tests failed (which was rare), a developer would try to ship a fix as soon as possible. This made us certain that our fleet worked as it should.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Fe2e_test.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Fe2e_test.svg" title="End-to-End Test" alt="End-to-end Test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Illustration of what we call "End-to-End Tests"&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  Tools 🛠️
&lt;/h3&gt;

&lt;p&gt;At first we ran tiny Node.js applications built with Express as mock APIs. These were quick to set up but did not really scale once we had to manage them individually for all of our services. This meant having to deploy them, keep them in sync with the real services and maintain their infrastructure.&lt;/p&gt;

&lt;p&gt;We then looked into the different tools available to support our use case. We found many services that could generate simple HTTP responses, but not quite what we needed. We wanted something that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Worked well for microservices with many small APIs&lt;/li&gt;
&lt;li&gt;Required as little manual work as possible to set up&lt;/li&gt;
&lt;li&gt;Was version controlled&lt;/li&gt;
&lt;li&gt;Could run both locally and as hosted mocks, kept in sync&lt;/li&gt;
&lt;li&gt;Was easy to keep in sync with the real API&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is why we decided to build our own mocking tool. The initial idea was to configure our mocks through a configuration file that we could manage in the GitHub repositories of each individual service. As soon as you pushed a change to the configuration, the mock would update and be ready to use. This way we did not have to spend any time keeping things in sync or worrying about different versions of the mocks. Each branch and version had its own mock that was always available. Next up was the test data needed to keep the consuming services happy. This was quickly integrated into the configuration file so that we could easily add the test data we needed and generate realistic fake data for the consumers.&lt;/p&gt;
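&lt;p&gt;To give an idea of the approach, a mock definition of this kind can be as small as the snippet below. The format here is made up for illustration; see the Mocki documentation for the real configuration schema:&lt;/p&gt;

```python
import json

# Hypothetical mock configuration, version-controlled next to the service
CONFIG = json.loads("""
{
  "name": "user-service-mock",
  "endpoints": [
    {
      "method": "GET",
      "path": "/users/1",
      "response": {"status": 200, "body": {"id": 1, "name": "Test User"}}
    }
  ]
}
""")


def resolve(config: dict, method: str, path: str) -> dict:
    """Look up the canned response for a request, as a mock server would."""
    for endpoint in config["endpoints"]:
        if endpoint["method"] == method and endpoint["path"] == path:
            return endpoint["response"]
    return {"status": 404, "body": {"error": "no mock for this route"}}
```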

&lt;p&gt;This was the start and when we decided to build the public tool that is Mocki today we added features such as realistic test data generation as well as simulated failures and delays. If you are interested in trying out a similar setup head over to our &lt;a href="https://mocki.io" rel="noopener noreferrer"&gt;start page&lt;/a&gt; to learn more or dive straight into the &lt;a href="https://dev.to/docs"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  Final Solution 💡
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;New feature:&lt;/strong&gt; Service tests are run on features once there is a PR up. External dependencies are hosted mocks. This made the test suite a lot faster to finish and required less setup to get started. By using mock services we are also able to test corner cases with generated test data in the mock.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature merged:&lt;/strong&gt; When merging to develop, a new test run is triggered with real deployed dependencies. If a test fails here, developers are notified in Slack and someone will take a look at it. This makes sure that the service works with all external dependencies being the real deal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development:&lt;/strong&gt; While developing, we run the microservice locally, integrated with mock services that are also running locally. Since we no longer required the external dependencies to be up and running to develop features, we got an increase in developer productivity. We also no longer needed to spin up our real services locally to test things out. We could simply use our lightweight mocks running locally, or the hosted ones which are always available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Fdevelopment.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Fdevelopment.svg" title="Development" alt="Development"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note on costs:&lt;/strong&gt; Not only did we achieve higher developer efficiency, we also managed to lower the costs for our environments significantly, thanks to the infrastructure cost we save by not having to deploy each service for each feature environment.&lt;/p&gt;

&lt;h3&gt;
  Further Work 🏗️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Error handling and simulation&lt;/strong&gt; - We have not utilized Mocki's capabilities in error handling and failure simulation yet, but that could be something to try out in the future to investigate how our services behave with failing dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load testing&lt;/strong&gt; - Using Mocki, it is also possible to simulate the delays you would typically see when a service is overloaded. In the future, we will run chaos engineering experiments to see which services are most affected by external dependencies being overloaded and how we can remedy the risk of that affecting our users.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Mock APIs can be utilized to save costs and boost developer productivity&lt;/li&gt;
&lt;li&gt;There are tools that can help you in your journey to testing microservices efficiently&lt;/li&gt;
&lt;li&gt;There are many possibilities to use mock APIs to stress test your application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you so much for taking the time to read this! Feel free to reach out if you have questions or need advice on how to get similar benefits 🚀&lt;/p&gt;

</description>
      <category>startup</category>
      <category>showdev</category>
      <category>microservices</category>
      <category>testing</category>
    </item>
    <item>
      <title>How we used Open Data APIs to go from $0 to $2000 MRR within weeks 🚀</title>
      <dc:creator>Karl Eriksson</dc:creator>
      <pubDate>Fri, 04 Sep 2020 22:06:09 +0000</pubDate>
      <link>https://dev.to/keaeriksson/how-we-used-open-data-apis-to-go-from-0-to-2000-mrr-within-weeks-gl8</link>
      <guid>https://dev.to/keaeriksson/how-we-used-open-data-apis-to-go-from-0-to-2000-mrr-within-weeks-gl8</guid>
      <description>&lt;h2&gt;
  
  
  Why Should You Read This?
&lt;/h2&gt;

&lt;p&gt;Are you in the process of starting a SaaS, an e-commerce store or a local business? Then I think this story can help you develop a strategy to get your first hundred dollars of MRR without spending tons of money on marketing. Let's begin.&lt;/p&gt;

&lt;h3&gt;
  We Had a Product Without Traffic, What to Do?
&lt;/h3&gt;

&lt;p&gt;One of the first products we developed was an online preparation course for the Swedish SAT test. We created thousands of practice questions, hired math experts to write tutorials and spent a lot of time creating optimized landing pages. However, the traffic never came. This was our start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Fhpskolan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Fhpskolan.png" title="Swedish SAT" alt="Swedish SAT"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the stage we were at, I think a lot of developers creating their own products give up. Developers can spend an insane amount of time creating a product &lt;strong&gt;THEY&lt;/strong&gt; like. But unfortunately, we are not good at selling our products or showing them to users in a way that makes them interested.&lt;/p&gt;

&lt;p&gt;We started thinking about what people actually search for in regards to the Swedish SAT and college education. After doing some analysis, we noticed that a lot of people used search queries like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What SAT score is required to get into Law at Stockholm university?&lt;/li&gt;
&lt;li&gt;What SAT score is required to get into any Law school in Sweden?&lt;/li&gt;
&lt;li&gt;What SAT score is required to get into Computer Science at Linköping university?&lt;/li&gt;
&lt;li&gt;Which education in Sweden was the hardest to get into?&lt;/li&gt;
&lt;li&gt;Which education had the most applications in 2015?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You see, these are all related to education. And we figured out that a lot of people searching for these things are interested in improving their SAT score. So how did we use this? We could have started writing articles like crazy about each education programme, but that would be really time consuming. Instead we approached the problem like developers and started to look at how we could generate one page with this kind of data for each education in Sweden.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open Data APIs - the Golden Egg 🥚
&lt;/h3&gt;

&lt;p&gt;This is where things start to become interesting. Open data APIs! At Mocki we are very supportive of governments and organizations opening up their data publicly. It's an amazing way to let normal companies or people develop applications that never would have been developed otherwise. Our interest in open data started that day when we got our hands on application data for every education in Sweden.&lt;/p&gt;

&lt;p&gt;For a specific education programme we were able to see:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The number of applicants.&lt;/li&gt;
&lt;li&gt;The number of accepted people.&lt;/li&gt;
&lt;li&gt;The required SAT score to get accepted.&lt;/li&gt;
&lt;li&gt;The percentage of women/men.&lt;/li&gt;
&lt;li&gt;The percentage of Swedish/non-Swedish people.&lt;/li&gt;
&lt;li&gt;And a lot more...&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Suddenly we were able to answer all the questions mentioned earlier. We also realised that we could create a website out of this. So that's what we did. At that time there wasn't much competition for these search queries either, which was great.&lt;/p&gt;

&lt;p&gt;This is a screenshot of some of the data for med school at Lund University, one of the most popular universities in Sweden. It may be hard for a non-Swede to understand what the data means, but that's not important here. The important thing is that we created pages with hundreds of words, graphs and tables, all automatically generated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Fantagningspoang.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmocki.io%2Fantagningspoang.png" title="Med school Lund" alt="Swedish SAT"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Could This Be Considered Spam?
&lt;/h3&gt;

&lt;p&gt;If you use this tactic the wrong way, with randomized data or otherwise useless information, it will not end well. But we knew that this was data people really wanted to find! All the generated texts were useful and provided great user value. And modern day search engine optimization really comes down to providing great value for a &lt;strong&gt;user&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This was all confirmed when Google started picking up the pages and loved them. We quickly started ranking for all sorts of keywords related to the SAT. In just a couple of weeks we reached around 10 000 users per month, and traffic kept steadily increasing each day. All user metrics were amazing: long average time on page and many interactions with the graphs. We also noticed people visiting several different pages because they got hooked on seeing all the different graphs we provided.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Final Step
&lt;/h3&gt;

&lt;p&gt;So we had suddenly created our own traffic source with relevant users. We started putting up banners on the site with the SAT data, leading directly to our SAT prep SaaS. The results were great! We got a lot of signups and had to go back to working on our course platform since the users discovered bugs and other problems. But we now had a steady flow of new paying customers. That's where you want to be, because that is when a SaaS becomes fun.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Can You Use Open Data APIs to Do the Same?
&lt;/h3&gt;

&lt;p&gt;This strategy may not be applicable to every service out there, but I will try to give one more example where I think it could work.&lt;/p&gt;

&lt;p&gt;Let's say you own a snow shoveling business and you are currently expanding to a lot of different cities. Then I think you could combine this strategy with local SEO.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use an open API to get all cities in your country with more than X inhabitants.&lt;/li&gt;
&lt;li&gt;Try to find a weather API providing detailed data about whether it's going to snow.&lt;/li&gt;
&lt;li&gt;Try to find historical snow data for every city.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So what can we do with this data? We can generate something like the following text:&lt;/p&gt;

&lt;p&gt;"What is the snow situation town X right now? - Snow shoveling forecast",&lt;/p&gt;

&lt;p&gt;"Town with X with Y inhabitants usually has around Z number of snow days per year. The temperature in September to February is between K degrees. On our snow and ice scale town X has the score of S (scould be calculated by an algorithm that you create). You will probably have to shovel snow around D days per winter."&lt;/p&gt;

&lt;p&gt;"It's currently heavily snowing in town X, so be prepared to shovel snow the next D days."&lt;/p&gt;

&lt;p&gt;Each page would be both useful and fairly unique. It's important to have enough variables, as Google might otherwise consider it duplicate content. Once you have created these pages you have a local snow shoveling page for a lot of cities. Put up your phone number or a link to your booking system and start scaling your snow shoveling business.&lt;/p&gt;
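&lt;p&gt;As a sketch, generating such a text from the data could look like the following. The field names and the example numbers are made up for illustration; they do not come from any specific open data or weather API.&lt;/p&gt;

```javascript
// Hypothetical city record; in practice this would come from an open data
// API (population) combined with a weather API (snow statistics).
const city = {
  name: 'Kiruna',
  population: 17000,
  snowDaysPerYear: 110,
  shovelDays: 45,
};

// Generate a unique, data-driven paragraph for the city's landing page.
function snowPageText(c) {
  return (
    `Town ${c.name} with ${c.population} inhabitants usually has around ` +
    `${c.snowDaysPerYear} snow days per year. You will probably have to ` +
    `shovel snow around ${c.shovelDays} days per winter.`
  );
}

console.log(snowPageText(city));
```

&lt;p&gt;Run this once per city and you have a unique page body for every city in the country.&lt;/p&gt;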

&lt;p&gt;There are probably a lot of better examples, but I hope I gave you enough to get thinking :)&lt;/p&gt;

&lt;h3&gt;
  
  
  Create an Open API and Start Sharing Your Data!
&lt;/h3&gt;

&lt;p&gt;If you are a government, an organization or just a company with a great data set, you should think about sharing it. In some cases the data is your business, and then it's of course not the best idea. But in a lot of cases there is not much to lose. In Scandinavia, where Mocki was founded, governments are currently making great efforts to make data publicly available. There are a lot of applications that will never be developed by governments and state organizations, but that companies or hobby programmers would love to create.&lt;/p&gt;

&lt;p&gt;Let's say your government released all the data on every traffic accident in your country, including location and severity. Then you could create an app warning about historically dangerous roads. These are the types of things that become possible when data is released to the public.&lt;/p&gt;

&lt;p&gt;With Mocki it's very easy to create a simple API with mock data or real data. You can for example use Mocki if you want to make some of your data available for a hackathon or another event. It's a good and safe start, as you wouldn't need to expose any of your own infrastructure or sensitive data sources. If that sounds interesting you can sign up with GitHub or send us a message at &lt;a href="https://mocki.io" rel="noopener noreferrer"&gt;Mocki.io&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>startup</category>
      <category>saas</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>Hosting a Hugo site on AWS S3 and CloudFront</title>
      <dc:creator>Karl Eriksson</dc:creator>
      <pubDate>Fri, 27 Mar 2020 16:50:39 +0000</pubDate>
      <link>https://dev.to/keaeriksson/hosting-a-hugo-site-on-aws-s3-and-cloudfront-2li9</link>
      <guid>https://dev.to/keaeriksson/hosting-a-hugo-site-on-aws-s3-and-cloudfront-2li9</guid>
      <description>&lt;p&gt;In the early days we used Wordpress as our go-to-CMS. Over the years however we have seen examples of these blazing fast lightweight sites built on top of static site frameworks such as Hugo. These sites had the following advantages over Wordpress:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Super fast (no server side logic at all)&lt;/li&gt;
&lt;li&gt;Secure (there is no login system to brute force - it's just static files)&lt;/li&gt;
&lt;li&gt;Easy to version control&lt;/li&gt;
&lt;li&gt;Easy and cheap to host (no need for a database or backend hosting)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So for the site you are looking at right now (Mocki.io) we chose to go with Hugo. We picked it up quickly and started building out the site, still amazed at the speed it serves content. There were however some challenges. Our tech stack is hosted 100% on AWS and we quite like it. Naturally, we chose to deploy this site to AWS as well, and the clear choice was S3, a static file hosting service. For the CDN we chose AWS CloudFront, which is super powerful. Deploying the site was quick, and S3 + CloudFront served the content at blazing speed.&lt;/p&gt;

&lt;p&gt;Of course there were challenges... our previous site did not have trailing slashes in the URL (example.com/page) while Hugo forced us to have trailing slashes (example.com/page/). For SEO reasons we wanted the exact same URL structure as the previous site, but the Hugo documentation showed no support for that. We also wanted to redirect &lt;a href="http://www.example.com"&gt;www.example.com&lt;/a&gt; to example.com to make sure there was only one domain you could reach our site on.&lt;/p&gt;

&lt;p&gt;To solve this we used Lambda@Edge. An &lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda function&lt;/a&gt; is a piece of code that is deployed and run in the cloud. Lambda@Edge is a Lambda function that is triggered when someone accesses your site through CloudFront. So we wrote a Lambda function that removes the trailing slash and redirects the www domain to the non-www domain. It also routes the requests from CloudFront to S3 so that the correct HTML file is fetched from S3.&lt;/p&gt;
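&lt;p&gt;As a rough sketch, the logic of that Lambda@Edge function could look like the following. This is not the actual code from our repository; the rules (www redirect, trailing-slash removal, routing to the HTML file in S3) follow the description above, but names and details are illustrative.&lt;/p&gt;

```javascript
// Pure routing rules, kept separate from the CloudFront event plumbing
// so they are easy to unit test.
function rewriteRequest(uri, host) {
  // Redirect www.example.com to example.com
  if (host.startsWith('www.')) {
    return { redirect: 'https://' + host.slice(4) + uri };
  }
  // Strip Hugo's trailing slash, except for the root path
  if (uri.endsWith('/')) {
    if (uri !== '/') {
      return { redirect: uri.slice(0, -1) };
    }
    return { uri: '/index.html' };
  }
  // Map /page to the HTML object stored in S3
  return { uri: uri + '.html' };
}

// Deployed as the Lambda handler on a CloudFront origin-request trigger.
async function handler(event) {
  const request = event.Records[0].cf.request;
  const result = rewriteRequest(request.uri, request.headers.host[0].value);
  if (result.redirect) {
    return {
      status: '301',
      statusDescription: 'Moved Permanently',
      headers: { location: [{ key: 'Location', value: result.redirect }] },
    };
  }
  request.uri = result.uri;
  return request;
}
```

&lt;p&gt;Returning a response object from the handler short-circuits the request, while mutating and returning the request forwards it to the S3 origin with the rewritten URI.&lt;/p&gt;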

&lt;p&gt;We put the entire setup in a CloudFormation template that you can deploy in one single command. You can access the code to deploy your own Hugo site on AWS here: &lt;a href="https://github.com/keaeriksson/hugo-s3-cloudfront"&gt;https://github.com/keaeriksson/hugo-s3-cloudfront&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The result solved most of the SEO challenges we had while still keeping the advantages of building the site with Hugo.&lt;/p&gt;

&lt;p&gt;Feel free to reach out if you need more advice on hosting static sites on AWS using the chat in the bottom right.&lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
      <category>hugo</category>
      <category>aws</category>
      <category>s3</category>
      <category>cloudfront</category>
    </item>
    <item>
      <title>Mocking AWS DynamoDB</title>
      <dc:creator>Karl Eriksson</dc:creator>
      <pubDate>Wed, 18 Mar 2020 20:11:54 +0000</pubDate>
      <link>https://dev.to/keaeriksson/mocking-aws-dynamodb-39m0</link>
      <guid>https://dev.to/keaeriksson/mocking-aws-dynamodb-39m0</guid>
      <description>&lt;p&gt;We all love living in the cloud, right? Yes we do! As long as we have an internet connection and the services in the cloud are available and accessible.&lt;/p&gt;

&lt;p&gt;At times we do not want to rely on having access to cloud services. For example when developing locally or when running tests that should only verify the behaviour of our application and not the behaviour of its external dependencies. In this guide I will show you how you can mock the DynamoDB API so that you no longer need access to the internet or AWS itself while developing.&lt;/p&gt;

&lt;p&gt;Let's start off with a simple example where we interact with DynamoDB to fetch some data from a table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const AWS = require('aws-sdk');

// Use our mock DynamoDB API instead of AWS
AWS.config.dynamodb = { endpoint: 'http://localhost:3001', region: 'eu-west-1' };

const dynamoDbClient = new AWS.DynamoDB.DocumentClient();

dynamoDbClient.get({TableName: 'TasksTable', Key: 'someTask'})
    .promise()
    .then(result =&amp;gt; {
        console.log(result.Item)
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assuming that there is a table called &lt;code&gt;TasksTable&lt;/code&gt; with an item that has the key &lt;code&gt;Gardening&lt;/code&gt;, we should see something like the following printed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ Type: 'Gardening', Description: 'Mow the lawn', DueDay: 'Saturday' }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we want to create a mock version of DynamoDB that we can use locally.&lt;/p&gt;

&lt;p&gt;To do this we will use Mocki to &lt;a href="https://mocki.io"&gt;mock the API&lt;/a&gt;. Install the tool in your project to get started:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install mocki --save-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next up we will create a configuration file that we will call  &lt;code&gt;dynamo-mock.yml&lt;/code&gt;. This will define the mocked DynamoDB service and its responses. Use the below configuration or set up your own by referencing the &lt;a href="https://mocki.io/docs"&gt;documentation&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: dynamodb-mock
port: 3001
endpoints:
  - path: /
    method: post
    behavior: conditional
    responses:
      - statusCode: 200
        condition:
          operator: eq
          comparand: headers.x-amz-target
          value: DynamoDB_20120810.GetItem
        body:
          Item:
            Type:
              S: Gardening
            Description:
              S: Mow the lawn
            DueDay:
              S: Saturday
        headers:
          - name: content-type
            value: application/json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will return the object that we expect to get from Dynamo.&lt;/p&gt;

&lt;p&gt;Run your mock by running the following command in your project directory: &lt;code&gt;npx mocki run --path dynamo-mock.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With that running, we can point our code at the mock instead of the AWS service by modifying it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const AWS = require('aws-sdk');

// Use our mock DynamoDB API instead of AWS
AWS.config.dynamodb = { endpoint: 'http://localhost:3001', region: 'eu-west-1' };

const dynamoDbClient = new AWS.DynamoDB.DocumentClient();

dynamoDbClient.get({TableName: 'TasksTable', Key: 'Gardening'})
    .promise()
    .then(result =&amp;gt; {
        console.log(result.Item)
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This points the AWS SDK at our mock, so requests go there instead of to the real DynamoDB service.&lt;/p&gt;

&lt;p&gt;Let's try it out by running the code; you should get the same result as when we started out. However, this time we are not interacting with AWS at all: everything is happening locally, without HTTP calls over the internet. Pretty cool, huh?&lt;/p&gt;

&lt;p&gt;It is possible to use the same approach to mock all of the AWS services. Under the hood they are all just regular APIs accessed with HTTP requests. In upcoming articles I will guide you through mocking many more AWS and other cloud services. Stay tuned!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dynamodb</category>
      <category>mock</category>
      <category>api</category>
    </item>
    <item>
      <title>Why GraphQL is the new REST</title>
      <dc:creator>Karl Eriksson</dc:creator>
      <pubDate>Mon, 10 Feb 2020 15:18:35 +0000</pubDate>
      <link>https://dev.to/keaeriksson/why-graphql-is-the-new-rest-4537</link>
      <guid>https://dev.to/keaeriksson/why-graphql-is-the-new-rest-4537</guid>
      <description>&lt;p&gt;Today most web services that I interact with in my work as a developer follow the REST standard (as close as they can). All is usually fine interacting with a REST service as long as there is proper documentation available. Usually working for enterprise client however you will be happy if you receive it in a PDF-file within days of starting to integrate the service...&lt;/p&gt;

&lt;p&gt;After getting your hands on a reasonably fresh copy of the documentation you are free to POST, PUT, GET and DELETE away as long as the service is functional.&lt;/p&gt;

&lt;p&gt;So - what is the problem I have with REST you might think?&lt;/p&gt;

&lt;p&gt;I asked a GraphQL-worshipping senior developer on one of my first consulting gigs what advantages it has over REST. My brain was going something like: REST works, REST is widely used, REST is simple, there are many tools that support it and... well, it's not SOAP at least. His response was something along the lines of "everything". I usually avoid listening to someone who is that uncompromising about something, so I figured I would at least investigate it when I had the time.&lt;/p&gt;

&lt;p&gt;A few months passed and a new side project idea dawned on me (as they do a couple of times a year). I was going to build a &lt;a href="https://mocki.io"&gt;mock API tool&lt;/a&gt;. It was going to be great. It was going to be GraphQL.&lt;/p&gt;

&lt;p&gt;So as I set out to build something with this magical new tool the truth slowly dawned on me. This is what is going to replace REST. Okay let's not wait any further here is why:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Fetch related data on demand
&lt;/h3&gt;

&lt;p&gt;GraphQL enables you to fetch the related data you want in only one HTTP call. Let's say you have a movie database API and want to fetch the movie and the actors in the movie. In REST that might look something like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;GET /movies/star-wars&lt;/code&gt;&lt;br&gt;
and then&lt;br&gt;
&lt;code&gt;GET /movies/star-wars/actors&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Two HTTP calls =&amp;gt; more overhead.&lt;/p&gt;

&lt;p&gt;In GraphQL you would execute something like the following query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

  getMovie(id: "star-wars") {
    releaseDate
    name
    actors {
      name
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A single HTTP call, with the possibility to fetch related data as many levels down as you want.&lt;/p&gt;
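&lt;p&gt;The JSON response mirrors the shape of the query. With made-up data for illustration, it could look like this:&lt;/p&gt;

```javascript
// Illustrative response for the getMovie query above; the values are made up.
const response = {
  data: {
    getMovie: {
      releaseDate: '1977-05-25',
      name: 'Star Wars',
      actors: [{ name: 'Mark Hamill' }, { name: 'Harrison Ford' }],
    },
  },
};

// You get back exactly the fields you asked for, nothing more.
console.log(response.data.getMovie.name);
console.log(response.data.getMovie.actors.length);
```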

&lt;h3&gt;
  
  
  2. Automated documentation
&lt;/h3&gt;

&lt;p&gt;As mentioned before, documentation is in my opinion key to interacting successfully with a REST API. What is the path? Which method? What query parameters can I pass in? What should the body look like? What headers should I pass?&lt;/p&gt;

&lt;p&gt;Of course there are tools that produce beautiful documentation for REST APIs and make all of this very clear. However, it takes work for developers to create that type of documentation. Time that many of us do not have. That is how you end up with a PDF file sent a couple of days too late over email.&lt;/p&gt;

&lt;p&gt;In GraphQL you will get automated documentation that you can use to explore and experiment with the API through a GUI such as GraphiQL which you can check out &lt;a href="https://graphql.org/swapi-graphql"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Built in validation and type checking
&lt;/h3&gt;

&lt;p&gt;In a REST API you will have to choose a library to handle your input validation. Or, worse yet, invent your own. In GraphQL you define your schema and types, and both server and client side validation are handled for you. This is an example of a type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Movie {
  name: String!

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;name&lt;/code&gt; property is of type &lt;code&gt;String&lt;/code&gt;, and because of the &lt;code&gt;!&lt;/code&gt; it is required. This means the server will reject incoming requests that do not contain a name, and consuming clients will stop outgoing requests. Neat.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Effective caching
&lt;/h3&gt;

&lt;p&gt;Using libraries such as Apollo on the frontend, there are built-in smart caching mechanisms that make sure you do not fetch anything unless it has been updated. Let's say you log in to your application and thereby fetch the logged-in user in an API call. Instead of re-fetching the user from the API on each page reload, Apollo will cache the user until you update it, for example by changing the email address.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. It keeps evolving
&lt;/h3&gt;

&lt;p&gt;GraphQL is a growing ecosystem and the community around it is great. It is still young and will keep improving over the coming years.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;These are the top five reasons I think GraphQL wins over REST. However, as I mentioned, I avoid listening to people who see the world in black and white, so I don't think you should start ripping out REST everywhere you see it from now on. I do think, however, that you should give GraphQL a try, and maybe some day you will find the perfect use case for it.&lt;/p&gt;

&lt;p&gt;Let me know if you have a GraphQL project that you want to share!&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>rest</category>
      <category>api</category>
    </item>
  </channel>
</rss>
