<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Eric Hacke</title>
    <description>The latest articles on DEV Community by Eric Hacke (@ehacke).</description>
    <link>https://dev.to/ehacke</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F399812%2Faa5b80bb-3785-4d88-aa87-5f3143649739.jpeg</url>
      <title>DEV Community: Eric Hacke</title>
      <link>https://dev.to/ehacke</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ehacke"/>
    <language>en</language>
    <item>
      <title>Transparent Caching Wrapper for Node</title>
      <dc:creator>Eric Hacke</dc:creator>
      <pubDate>Tue, 21 Jul 2020 16:32:20 +0000</pubDate>
      <link>https://dev.to/ehacke/transparent-caching-wrapper-for-node-l91</link>
      <guid>https://dev.to/ehacke/transparent-caching-wrapper-for-node-l91</guid>
      <description>&lt;p&gt;A simple transparent caching wrapper for Node. Wrap a function with it and call it like normal. And the cache stays warm with background updates, so it's always fast.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Available on &lt;a href="https://github.com/ehacke/transparent-cache"&gt;GitHub&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Previously I covered a more sophisticated &lt;a href="https://asserted.io/posts/simplified-firestore-with-redis"&gt;caching solution for Firestore&lt;/a&gt;. However, you don't always need something that complex.&lt;/p&gt;

&lt;p&gt;Sometimes you just want an expensive function call to be cached for 5 or 10 minutes to reduce load. This is often the case for read-focused operations where it's ok if the results are a little stale: things like search results, image caching, certain computationally expensive operations, etc.&lt;/p&gt;

&lt;p&gt;For that purpose I built this transparent caching wrapper.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The cache is periodically updated in the background without blocking the primary call. So it's always fast.&lt;/li&gt;
&lt;li&gt;Simplicity. Just wrap any function and it becomes cached on the next call.&lt;/li&gt;
&lt;li&gt;Includes both a local LRU cache and a Redis cache level. This improves speed, and as a bonus, minor network interruptions don't affect serving from the local cache.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use
&lt;/h2&gt;

&lt;p&gt;In the most basic case, you can just supply the Redis configuration and then wrap the function.&lt;/p&gt;
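&lt;p&gt;To illustrate the idea (this is a self-contained sketch of the pattern, not the library's actual API), a stale-while-revalidate wrapper can be as simple as:&lt;/p&gt;

```javascript
// Minimal sketch of a transparent caching wrapper (not the library's real API).
// wrap(fn, { ttlMs }) returns a function with the same signature; results are
// cached by argument key, and stale entries are refreshed in the background.
function wrap(fn, { ttlMs = 60000 } = {}) {
  const cache = new Map();

  return async (...args) => {
    const key = JSON.stringify(args);
    const entry = cache.get(key);

    if (entry) {
      // Stale entry: serve it immediately, then refresh in the background
      if (Date.now() - entry.at > ttlMs) {
        fn(...args)
          .then((value) => cache.set(key, { value, at: Date.now() }))
          .catch(() => {}); // keep serving the stale value if the refresh fails
      }
      return entry.value;
    }

    // Cache miss: call through and store the result
    const value = await fn(...args);
    cache.set(key, { value, at: Date.now() });
    return value;
  };
}
```

&lt;p&gt;The real library adds LRU eviction and a Redis level on top of this, but the core behavior is the same: calls are answered from cache, and stale entries trigger a background refresh instead of blocking the caller.&lt;/p&gt;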


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Beyond that, you can specify global defaults for cache sizes and TTL.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;And you can override any defaults at the moment the function is wrapped.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;That's it! A simple caching solution for read-heavy operations.&lt;/p&gt;

</description>
      <category>node</category>
      <category>redis</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Kubernetes Cluster for Node API with Socket.io and SSL</title>
      <dc:creator>Eric Hacke</dc:creator>
      <pubDate>Tue, 14 Jul 2020 20:34:24 +0000</pubDate>
      <link>https://dev.to/ehacke/kubernetes-cluster-for-node-api-with-socket-io-and-ssl-31i9</link>
      <guid>https://dev.to/ehacke/kubernetes-cluster-for-node-api-with-socket-io-and-ssl-31i9</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;All code and configuration available on &lt;a href="https://github.com/ehacke/node-gke-cluster"&gt;GitHub&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As a disclaimer, I'm not claiming this is a perfect fit for everyone. Different applications have different technical requirements, and different uptime or availability standards. &lt;/p&gt;

&lt;p&gt;But I aim to outline the basics for an inexpensive GKE cluster with Node microservices in mind. &lt;a href="https://asserted.io"&gt;Asserted&lt;/a&gt; uses a configuration similar to this to run all of its microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cluster Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;preemptible nodes to reduce cost (optional)&lt;/li&gt;
&lt;li&gt;automatic SSL management with &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs"&gt;Google managed certificates&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;ingress websocket stickiness&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why a cluster at all? Why not just a VM?
&lt;/h3&gt;

&lt;p&gt;If your only consideration is price at the cost of everything else, then it's probably cheaper to just use a VM. However, deploying into a cluster offers a number of advantages for not that much more money.&lt;/p&gt;

&lt;p&gt;A GKE cluster gives you tons of stuff for free that you would otherwise have to do without or engineer yourself. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dockerized applications ensure portable and reproducible builds&lt;/li&gt;
&lt;li&gt;Deployments are automatically health-checked as they roll out and stop if something is broken&lt;/li&gt;
&lt;li&gt;Failing instances are automatically taken off the load balancer and restarted&lt;/li&gt;
&lt;li&gt;Ingress controllers can automatically provision and update your SSL certs&lt;/li&gt;
&lt;li&gt;Resource management becomes much easier as individual applications can be limited by CPU or memory, and distributed optimally over machines&lt;/li&gt;
&lt;li&gt;New applications can be deployed with minimal complexity&lt;/li&gt;
&lt;li&gt;High availability becomes a matter of how much you want to pay rather than an engineering problem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my mind the only real argument against any of this is the cost of a cluster. But properly configured, a simple cluster can be deployed for minimal cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  High (ish) Availability
&lt;/h3&gt;

&lt;p&gt;In this scenario I need my cluster to be able to perform deployments and node updates with no downtime as those two events are likely to be relatively frequent.&lt;/p&gt;

&lt;p&gt;That said, I don't need and can't afford 100% uptime. I don't need multi-zone redundancy, and definitely not multi-cloud failover. I can tolerate the risk of up to a minute or so of unexpected downtime once a month or so if it reduces my costs significantly. &lt;/p&gt;

&lt;p&gt;If you design all of your services to be stateless and make use of Cloud PubSub to queue work instead of directly calling other services over HTTP, it's possible to have an entire microservice worth of pods become unavailable for a minute or two without any lasting (or maybe even noticeable) impact.&lt;/p&gt;

&lt;h3&gt;
  
  
  Preemptible Nodes
&lt;/h3&gt;

&lt;p&gt;This is an optional step, but one where a lot of the cost savings comes from. A preemptible e2-small costs 30% of a standard VM, but comes with &lt;a href="https://cloud.google.com/compute/docs/instances/preemptible#limitations"&gt;some caveats&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Preemptible nodes can be killed at any time, even within minutes of starting (though rarely, in my experience)&lt;/li&gt;
&lt;li&gt;Google claims they always restart instances within 24 hrs, though I've found this isn't always the case&lt;/li&gt;
&lt;li&gt;Preemptible nodes may not always be available. This seems to be more of an issue for larger VMs; I've never seen it myself with smaller ones.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your services are stateless, this should not be much of an issue. The only real problem happens if the lifetime of the nodes is synchronized and Google decides to kill all of them at the same time. This risk can be minimized by running something like &lt;a href="https://github.com/estafette/estafette-gke-preemptible-killer"&gt;preemptible-killer&lt;/a&gt;, but I haven't found it necessary yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Cluster
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cluster Details
&lt;/h3&gt;

&lt;p&gt;The cluster is created with a single gcloud command. If the cluster already exists, you can create a new node pool with similar arguments.&lt;/p&gt;
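&lt;p&gt;As a rough sketch (the cluster name, zone, machine type, and node count here are illustrative placeholders; adjust for your project), the command looks something like:&lt;/p&gt;

```shell
# Illustrative only: names, zone, and sizes are placeholders
gcloud container clusters create test-cluster \
  --zone us-central1-a \
  --machine-type e2-small \
  --preemptible \
  --num-nodes 3
```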


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Once this command is run, it will take a few minutes to complete.&lt;/p&gt;

&lt;h3&gt;
  
  
  API Implementation
&lt;/h3&gt;

&lt;p&gt;The example API is only a few lines, but has a fair bit going on to demonstrate the various cluster features.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h3&gt;
  
  
  Namespace
&lt;/h3&gt;

&lt;p&gt;Create the namespace first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cluster/namespace.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Deploy Redis
&lt;/h3&gt;

&lt;p&gt;Redis is only included as an in-cluster deployment for the purposes of this example. In a production environment, if Redis is required, you likely wouldn't want it on a preemptible instance.&lt;/p&gt;

&lt;p&gt;A better choice is to use a node selector or node affinity to deploy it onto a non-preemptible VM, or even just substitute with Redis Memorystore if the budget allows. A minimal Redis Memorystore instance is a bit costly, but worth it in my opinion.&lt;/p&gt;

&lt;p&gt;That said, if you design your microservices to treat Redis as an ephemeral nice-to-have global cache, and have connections fail gracefully if it's gone, you could run it in the cluster on preemptible nodes. Again, it depends on your application, cost sensitivity, and uptime requirements.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cluster/redis
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Create the API IP Address
&lt;/h3&gt;

&lt;p&gt;Create a public external API IP to bind to the ingress.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud compute addresses create test-api-ip --global
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;Configure your DNS provider to point to the IP.&lt;/p&gt;
&lt;h3&gt;
  
  
  ConfigMap and API Deployment
&lt;/h3&gt;

&lt;p&gt;The configMap and deployment are mostly pretty standard, but I’ll highlight the important details.&lt;/p&gt;

&lt;p&gt;The deploy.yml specifies pod anti-affinity to spread the API pods as widely as possible across the nodes. The &lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels"&gt;topologyKey&lt;/a&gt; allows the deployment to determine if a given pod is co-located on the same resource as another.&lt;/p&gt;
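&lt;p&gt;The relevant fragment looks roughly like this (the &lt;em&gt;app: api&lt;/em&gt; label is an assumption for illustration):&lt;/p&gt;

```yaml
# Illustrative fragment of deploy.yml: prefer spreading API pods across nodes
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: api
```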


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;Apply the configMap and the API deployment and wait until they are up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cluster/api/configMap.yml
kubectl apply -f cluster/api/deploy.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  BackendConfig
&lt;/h3&gt;

&lt;p&gt;The BackendConfig is a less widely &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#configuring_ingress_features_through_backendconfig_parameters"&gt;documented configuration&lt;/a&gt; option in GKE, but it’s essential to making websockets load-balance correctly across multiple nodes.&lt;/p&gt;

&lt;p&gt;The BackendConfig itself looks like this:&lt;/p&gt;
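&lt;p&gt;A minimal version (resource names and the timeout here are illustrative) would be:&lt;/p&gt;

```yaml
# Illustrative BackendConfig: IP-based stickiness plus connection draining
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-backend
  namespace: test
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  connectionDraining:
    drainingTimeoutSec: 60
```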


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;This configures the load-balancer to have session stickiness based on IP so that connections are not constantly round-robined to every API pod. Without that, socket.io won't be able to maintain a connection while polling.&lt;/p&gt;

&lt;p&gt;The connectionDraining option just increases the amount of time allowed to drain connections as old API pods are replaced with new ones. The default is 0, which can cause connections to be severed early.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cluster/api/backend.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;p&gt;This BackendConfig is then referenced by both the &lt;em&gt;service.yml&lt;/em&gt; and the &lt;em&gt;ingress.yml&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  API Service
&lt;/h3&gt;

&lt;p&gt;The service creates an external load-balancer that connects to each API pod.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;The important extra details in this case are the annotations and sessionAffinity in the spec.&lt;br&gt;
&lt;/p&gt;
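&lt;p&gt;A sketch of the service (names here are illustrative; the annotation ties it to the BackendConfig):&lt;/p&gt;

```yaml
# Illustrative service.yml: the annotation links the service to the BackendConfig
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: test
  annotations:
    cloud.google.com/backend-config: '{"default": "api-backend"}'
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
```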

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cluster/api/service.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  ManagedCertificate and Ingress
&lt;/h3&gt;

&lt;p&gt;The ingress terminates SSL and connects the service and the load balancer to the fixed external IP.&lt;/p&gt;
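&lt;p&gt;An illustrative version (the cert and IP names must match the resources created earlier):&lt;/p&gt;

```yaml
# Illustrative ingress.yml: annotations bind the static IP and managed cert
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  namespace: test
  annotations:
    kubernetes.io/ingress.global-static-ip-name: test-api-ip
    networking.gke.io/managed-certificates: api-cert
spec:
  backend:
    serviceName: api-service
    servicePort: 80
```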


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;The important extra details here are the annotations again. They link the ingress to the correct cert, IP, and backend, and also enable websocket load-balancing; without that, websocket connections will not work.&lt;/p&gt;

&lt;p&gt;The managed certificate attempts to create an SSL cert for the domain specified in its config. Everything before this must be deployed and working before the managed certificate will switch to active.&lt;/p&gt;
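&lt;p&gt;The ManagedCertificate itself is tiny (the name and domain here are placeholders):&lt;/p&gt;

```yaml
# Illustrative managedCert.yml: replace the domain with your own
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: api-cert
  namespace: test
spec:
  domains:
    - api.example.com
```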


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Create the cert and the ingress.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cluster/api/managedCert.yml
kubectl apply -f cluster/api/ingress.yml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It’ll take up to 20 minutes to create the managed certificate. You can monitor the cert creation and the ingress creation by running the following separately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;watch kubectl describe managedcertificate
watch kubectl get ingress
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Success!
&lt;/h2&gt;

&lt;p&gt;Once everything is up, you should be able to navigate to the URL you bound to the external IP, and see this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0ZVZHcwd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pm1mqgndbsskqopjjm1q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0ZVZHcwd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pm1mqgndbsskqopjjm1q.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you refresh, the connected hostname should not change which indicates that socket.io and the session affinity are working.&lt;/p&gt;

&lt;p&gt;You now have all the basic configuration you need for a Kubernetes cluster with automatic SSL and websocket/socket.io support!&lt;/p&gt;

</description>
      <category>node</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Analyzing Weird Spikes in Cloud Function Require Latency</title>
      <dc:creator>Eric Hacke</dc:creator>
      <pubDate>Thu, 09 Jul 2020 20:05:08 +0000</pubDate>
      <link>https://dev.to/ehacke/analyzing-weird-spikes-in-cloud-function-require-latency-44dp</link>
      <guid>https://dev.to/ehacke/analyzing-weird-spikes-in-cloud-function-require-latency-44dp</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Code to reproduce this is available on &lt;a href="https://github.com/ehacke/cloud-function-latency-spikes" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The whole idea of Asserted is that it allows you to &lt;a href="https://asserted.io/features/uptime-as-code" rel="noopener noreferrer"&gt;run custom test code against your application&lt;/a&gt;. At the time I started building it, I figured the fastest and easiest way to do that was using GCP Cloud Functions. Cloud Functions have been around for years, and have well known performance and security characteristics, so it seemed like a safe bet.&lt;/p&gt;

&lt;p&gt;At its core, the implementation was simple: copy code into a Cloud Function, and then use child_process to execute it safely with a timeout.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;This seemed to work great at first. Relatively low-latency, and easy to maintain.&lt;/p&gt;

&lt;p&gt;But this code runs continuously, as often as every minute, forever. Within less than a day, I got a timeout on the child_process.exec. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Mystery Begins
&lt;/h2&gt;

&lt;p&gt;Logically, I assumed it was my fault, because most things are. &lt;/p&gt;

&lt;p&gt;The code I was executing was calling API endpoints and maybe they were holding the connection open too long or something. I ignored it first, but then I noticed that when I ran the code locally on my machine for extended periods, the timeouts didn't happen. So it wasn't the code exactly, and it wasn't the API I was calling from inside that code.&lt;/p&gt;

&lt;p&gt;I started investigating. I did the usual debugging steps of basically adding console.log statements everywhere to see where the holdup was, and set the exec to inherit stdio so I could easily see the logs.&lt;/p&gt;

&lt;p&gt;I added some around child_process.exec:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;And others inside the user code itself:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;After running the function a number of times, I looked into GCP Logging where I could see the log lines and the time they occurred.&lt;/p&gt;

&lt;p&gt;I was surprised to see that the delay wasn't happening within the bulk of the user code; it was happening between the exec starting and when the require statements finished.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There is a huge variance in how long the require statements take to finish. Sometimes they would complete within 100 ms; other times they took over 2 seconds, or didn't complete before the timeout.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That definitely seemed weird. These aren't weird esoteric dependencies. They are some of the most commonly used libraries on NPM. &lt;/p&gt;

&lt;p&gt;Profiling these require statements on my own machine showed negligible impact, so maybe it was something about Cloud Functions itself that was weird? &lt;/p&gt;

&lt;p&gt;I decided to come up with a more formal test to see if I could track it down.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Experiment
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Environments
&lt;/h3&gt;

&lt;p&gt;I had tried out Cloud Run around the same time and knew that I didn't see the issue there, only in Cloud Functions. So I decided to do a three-way comparison. I would run the same code in three environments and compare the results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Function - 2048 MB Memory - single 2.4 GHz CPU&lt;/li&gt;
&lt;li&gt;Cloud Run - 2048 MB Memory - single vCPU&lt;/li&gt;
&lt;li&gt;Local Docker - 2048 MB Memory - single CPU&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Code
&lt;/h3&gt;

&lt;p&gt;In terms of the code I was running, I didn't want to rely on a specific pre-existing library. While that's where I originally noticed it, I didn't want to introduce the idea that for some reason this specific dependency was an issue.&lt;/p&gt;

&lt;p&gt;So I wrote a bit of code that randomly generates node modules, each containing a single object with up to 100 randomly created properties.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Then I used that to create a folder containing 1000 randomly generated libraries, and a single index.js file that requires all of those libraries and exports them in a single giant object.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;1000 dependencies might sound like a lot, but if you run &lt;em&gt;ls -al node_modules | wc -l&lt;/em&gt; inside an arbitrary Node project, you'll see that it's actually pretty reasonable. Maybe even conservative.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As mentioned at the beginning of the post, you can see the full codebase for this experiment &lt;a href="https://github.com/ehacke/cloud-function-latency-spikes" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scenarios
&lt;/h3&gt;

&lt;p&gt;Beyond just calling require on 1000 dependencies, I wanted to contrast it with a few different scenarios to give some context to the issue. So I came up with three scenarios that I'd run in each of the three environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Normal Require - Load 1000 dependencies from the default directory&lt;/li&gt;
&lt;li&gt;Regenerate and Require - Regenerate and load 1000 dependencies in /tmp&lt;/li&gt;
&lt;li&gt;CPU - Just eat CPU for 1 second&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The idea here is that Cloud Functions loads the code you provide from a read-only directory. I don't know much at all about the underlying implementation of Cloud Functions, but I wanted to control for the fact that this read-only directory may be somehow affecting things. So I added a second scenario where I regenerate all of the dependencies during the request into /tmp, and then load them from there.&lt;/p&gt;

&lt;p&gt;And the last scenario is a simple control group, where I just spin in place for 1000 ms and then exit.&lt;/p&gt;
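&lt;p&gt;The control is just a busy loop, roughly:&lt;/p&gt;

```javascript
// Burn CPU for roughly durationMs milliseconds, then report the elapsed time
function spin(durationMs) {
  const start = Date.now();
  const end = start + durationMs;
  while (end - Date.now() > 0) {
    // busy loop: no I/O, no allocation, just CPU
  }
  return Date.now() - start;
}
```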

&lt;h2&gt;
  
  
  The Results
&lt;/h2&gt;

&lt;p&gt;I ran each of these scenarios 1000 times in each of the three environments and collected the results. The times shown in all of these charts are not the HTTP request latency, but the amount of time it takes for the child_process.exec to complete loading the giant dependency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Require Time
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fulcod42clxj2um3kb4cx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fulcod42clxj2um3kb4cx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the chart, there is a huge variation in the amount of time it takes for the fake dependencies to load within the Cloud Function. From 2.5 seconds to well over 10 seconds.&lt;/p&gt;

&lt;p&gt;The Cloud Run instance shows some variation, but it's quite reasonable. And the local Docker instance is basically unchanged, which is what you'd expect.&lt;/p&gt;

&lt;p&gt;Statistics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Function - Standard Deviation: 862 ms - Median: 4015 ms&lt;/li&gt;
&lt;li&gt;Cloud Run - Standard Deviation: 207 ms - Median: 2265 ms&lt;/li&gt;
&lt;li&gt;Local Docker - Standard Deviation: 30 ms - Median: 1213 ms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flpzhkoqbpwe5hadp37i7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flpzhkoqbpwe5hadp37i7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The chart above shows a distribution of the latencies with the outlier 1% stripped. The local Docker is very tight, Cloud Run has some variation, and the Cloud Function shows a wide variation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Regenerate and Require Time
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fns9wtajw4m4v0856hc2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fns9wtajw4m4v0856hc2v.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This scenario has more going on, so the numbers are bigger, but the pattern is essentially the same. Cloud Function performs worst, Cloud Run has some variation but is reasonable, and local Docker is tight.&lt;/p&gt;

&lt;p&gt;Statistics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Function - Standard Deviation: 1664 ms - Median: 7198 ms&lt;/li&gt;
&lt;li&gt;Cloud Run - Standard Deviation: 524 ms - Median: 5895 ms&lt;/li&gt;
&lt;li&gt;Local Docker - Standard Deviation: 36 ms - Median: 3245 ms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5mqpafy1jlbzhrqfn9nh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5mqpafy1jlbzhrqfn9nh.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The distribution is similar to the simpler require scenario. The local Docker is tight, Cloud Run wider (with an outlier), and the Cloud Function has an even wider distribution.&lt;/p&gt;

&lt;h3&gt;
  
  
  CPU Time (control)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr9nqhvmvftpthxs4yo2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr9nqhvmvftpthxs4yo2c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The vertical axis on this chart has been adjusted to match the first scenario to give a better visual comparison. &lt;/p&gt;

&lt;p&gt;You can see that when it's just doing straight CPU work, all environments are close to the same. There are some spikes in the Cloud Function times, but nothing significant.&lt;/p&gt;

&lt;p&gt;Statistics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Function - Standard Deviation: 23 ms - Median: 1172 ms&lt;/li&gt;
&lt;li&gt;Cloud Run - Standard Deviation: 20 ms - Median: 1095 ms&lt;/li&gt;
&lt;li&gt;Local Docker - Standard Deviation: 2 ms - Median: 1045 ms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frwby8e95olr3lllibqd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frwby8e95olr3lllibqd6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I could not seem to adjust the horizontal axis in this case, but note that the overall variation shown here is narrow, even if the Cloud Function's is broader than the other two.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;You: This is interesting Eric, but what does this mean?&lt;br&gt;
Me: I have no idea.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I don't know enough about how Cloud Functions are implemented to speculate about why this is happening. &lt;/p&gt;

&lt;p&gt;At a glance, it seems likely that for some reason, large reads from disk (or disk-in-memory?) for Cloud Functions seem to have unpredictable performance characteristics. &lt;/p&gt;

&lt;p&gt;I can't say why exactly this is happening. But I can say that it was a big enough problem for me that I switched everything over to using Cloud Run instead. &lt;/p&gt;

&lt;p&gt;I'd be really curious to know if any Google people have a guess as to why this might be the case, and I'd definitely post it here if I hear anything.&lt;/p&gt;

</description>
      <category>node</category>
      <category>serverless</category>
      <category>devops</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>Monitoring gRPC Uptime</title>
      <dc:creator>Eric Hacke</dc:creator>
      <pubDate>Thu, 09 Jul 2020 12:33:01 +0000</pubDate>
      <link>https://dev.to/ehacke/monitoring-grpc-uptime-197l</link>
      <guid>https://dev.to/ehacke/monitoring-grpc-uptime-197l</guid>
      <description>&lt;p&gt;Uptime monitoring for gRPC can't be done with traditional HTTP checks. With Asserted you can write sophisticated external health checks using the gRPC client and Mocha.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Example on &lt;a href="https://github.com/assertedio/grpc-uptime"&gt;GitHub&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;gRPC is an open source, high performance RPC framework that uses protocol buffers to efficiently serialize structured data between servers and clients in dozens of languages.&lt;/p&gt;

&lt;p&gt;The specialized interchange format that makes it highly performant also means that a gRPC server usually requires a specialized client to communicate; regular HTTP libraries won't work. As a result, Asserted is uniquely suited to providing the custom environment needed to externally monitor gRPC uptime and stability.&lt;/p&gt;

&lt;p&gt;The example code used in this walk-through is heavily based on the Node example provided &lt;a href="https://github.com/grpc/grpc/tree/v1.30.0/examples/node"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example Server
&lt;/h2&gt;

&lt;p&gt;The proto definitions that this server will use are shown below, and are taken directly from the official gRPC Node example mentioned above.&lt;/p&gt;
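&lt;p&gt;Abridged (message definitions omitted), the service definition is:&lt;/p&gt;

```protobuf
// Abridged from the official route_guide.proto in the gRPC Node examples
syntax = "proto3";

package routeguide;

service RouteGuide {
  // Simple RPC: get the feature at a given point
  rpc GetFeature(Point) returns (Feature) {}
  // Server-side streaming: list features within a rectangle
  rpc ListFeatures(Rectangle) returns (stream Feature) {}
  // Client-side streaming: record a route and get back a summary
  rpc RecordRoute(stream Point) returns (RouteSummary) {}
  // Bidirectional streaming: exchange notes along a route
  rpc RouteChat(stream RouteNote) returns (stream RouteNote) {}
}
```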


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The gRPC server that the example tests will run against is described in &lt;a href="https://github.com/assertedio/grpc-uptime/blob/master/route_guide/route_guide_server.js"&gt;this file&lt;/a&gt;. It's too large to show here in its entirety, but I'll summarize the major elements.&lt;/p&gt;

&lt;p&gt;The server exposes four RPCs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a simple RPC for getting a single object&lt;/li&gt;
&lt;li&gt;a server-side streaming RPC to retrieve a list&lt;/li&gt;
&lt;li&gt;a client-side streaming RPC to record a series of events&lt;/li&gt;
&lt;li&gt;and a bi-directional RPC to provide a simple chat function&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Routine Configuration
&lt;/h2&gt;

&lt;p&gt;The routine.json makes use of &lt;a href="https://docs.asserted.io/reference/included-dependencies#custom-dependencies"&gt;custom dependencies&lt;/a&gt;. Custom dependencies are available on paid plans, and here we're using that option to include the gRPC client libraries in our tests.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Routine package.json
&lt;/h2&gt;

&lt;p&gt;The package.json for the routine (inside the .asserted directory) is slightly different than the default in this case because of the custom dependencies. In this case we're adding a few convenience libraries as well as @grpc/proto-loader, google-protobuf, and grpc.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Continuous Integration Tests
&lt;/h2&gt;

&lt;p&gt;To start off, we create a gRPC client based on the same proto as the server.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Then we use that client to perform tests against each of the four RPCs we defined earlier.&lt;/p&gt;

&lt;p&gt;The simple RPC just retrieves a single object and asserts that it matches what is expected.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;From the client side, the list RPC (streaming on the server-side) looks fairly similar to the simple RPC above.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The client-side streaming RPC sends a number of points to the server, and asserts the result.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;And finally the bidirectional RPC sends a series of notes to the server which responds once the call is ended.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;While the example shown &lt;a href="https://github.com/assertedio/grpc-uptime"&gt;here&lt;/a&gt; can be cloned and run locally without an account, you'll need to do a few extra steps if you want to create your own Asserted routine to integration test your API in production.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an &lt;a href="https://asserted.io"&gt;Asserted&lt;/a&gt; account. It's free and easy.&lt;/li&gt;
&lt;li&gt;Complete the 2 minute onboarding to ensure that your environment is ready. You can also reference the docs &lt;a href="https://docs.asserted.io"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Start writing and running tests in prod!&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>node</category>
      <category>devops</category>
      <category>testing</category>
    </item>
    <item>
      <title>Monitoring Socket.IO Uptime</title>
      <dc:creator>Eric Hacke</dc:creator>
      <pubDate>Wed, 08 Jul 2020 12:55:51 +0000</pubDate>
      <link>https://dev.to/ehacke/monitoring-socket-io-uptime-4hn</link>
      <guid>https://dev.to/ehacke/monitoring-socket-io-uptime-4hn</guid>
      <description>&lt;p&gt;Monitoring the health and availability of Socket.IO APIs can be complex. With Asserted you can write sophisticated uptime tests using the Socket.IO client library.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Example on &lt;a href="https://github.com/assertedio/socketio-uptime"&gt;GitHub&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Socket.IO is a library that leverages websockets and standard HTTP to enable real-time, bi-directional communication. Depending on your use case, Socket.IO is often faster to implement and less error-prone than raw websockets as it supports things like broadcast and protocol fallback out of the box.&lt;/p&gt;

&lt;p&gt;The example I'm going to work with is a modified version of the demo provided &lt;a href="https://github.com/socketio/socket.io/tree/master/examples/chat"&gt;here&lt;/a&gt;. It's an extremely simple example of a chat app using Socket.IO.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example Server
&lt;/h2&gt;

&lt;p&gt;The server that the Asserted tests will run against contains two primary files. &lt;/p&gt;

&lt;p&gt;The first is the Socket.IO logic that handles new connections and responds to messages emitted from the client.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;This allows users to join and disconnect, as well as broadcast messages to other users.&lt;/p&gt;

&lt;p&gt;The second file is where the Socket.IO logic is connected to the server.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Routine Configuration
&lt;/h2&gt;

&lt;p&gt;The routine.json is slightly different this time, only in that it makes use of &lt;a href="https://docs.asserted.io/reference/included-dependencies#custom-dependencies"&gt;custom dependencies&lt;/a&gt;. Custom dependencies are available on paid plans, and here we're using that option to include the Socket.IO client library in our tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Routine package.json
&lt;/h2&gt;

&lt;p&gt;The package.json for the routine (inside the .asserted directory) is slightly different than the default in this case because of the custom dependencies. On top of adding socket.io-client, we can prune out all the other dependencies we don't need.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Continuous Integration Tests
&lt;/h2&gt;

&lt;p&gt;We create two different clients in this case: one acts as a new user joining the chat and sending a message, and the other observes the new user joining and the message arriving.&lt;/p&gt;

&lt;p&gt;The new user client is recreated for every test case.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The before and after hooks ensure that things are cleaned up properly, which is important if this is running continuously in production or staging.&lt;/p&gt;

&lt;p&gt;The tests themselves check that the appropriate events are emitted to the appropriate clients when the new user joins, and when they send a message.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;With tests similar to these you can continuously monitor your Socket.IO APIs in production and track uptime accurately.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;While the example shown &lt;a href="https://github.com/assertedio/socketio-uptime"&gt;here&lt;/a&gt; can be cloned and run locally without an account, you'll need to do a few extra steps if you want to create your own Asserted routine to integration test your API in production.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an &lt;a href="https://asserted.io"&gt;Asserted account&lt;/a&gt;. It's free and easy.&lt;/li&gt;
&lt;li&gt;Complete the 2 minute onboarding to ensure that your environment is ready. You can also reference the docs &lt;a href="https://docs.asserted.io/"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Start writing and running tests in prod!&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>node</category>
      <category>devops</category>
      <category>testing</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Monitoring GraphQL Uptime</title>
      <dc:creator>Eric Hacke</dc:creator>
      <pubDate>Tue, 07 Jul 2020 13:02:39 +0000</pubDate>
      <link>https://dev.to/ehacke/monitoring-graphql-uptime-11pl</link>
      <guid>https://dev.to/ehacke/monitoring-graphql-uptime-11pl</guid>
      <description>&lt;p&gt;Monitoring the uptime of a GraphQL application can't be done by just checking status codes. Asserted lets you write sophisticated uptime tests and even lets you use your client of choice if you prefer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Example on &lt;a href="https://github.com/assertedio/graphql-uptime"&gt;GitHub&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you are unfamiliar with GraphQL, or just need a refresher on it, I strongly recommend reading through this &lt;a href="https://medium.com/naresh-bhatia/graphql-concepts-i-wish-someone-explained-to-me-a-year-ago-514d5b3c0eab"&gt;blog post series&lt;/a&gt;. It provides a suitably complicated example to demonstrate most of the features of GraphQL and how you would build a production application with it. I also used modified versions of the &lt;a href="https://github.com/nareshbhatia/graphql-bookstore"&gt;code from that post&lt;/a&gt; in my example below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example Server
&lt;/h2&gt;

&lt;p&gt;The full example GraphQL server (even the simplified version for this example) is too large and complicated to be completely shown here. I recommend cloning the repo to take a look at the code, but I'll include snippets where I can.&lt;/p&gt;

&lt;p&gt;The core of this example is a books model that has associated authors and publishers. The &lt;a href="https://github.com/assertedio/graphql-uptime/blob/master/src/graphql/typedefs/book.graphql"&gt;book-related type definitions&lt;/a&gt; can be seen below.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;These are handled by the &lt;a href="https://github.com/assertedio/graphql-uptime/blob/master/src/graphql/resolvers/book-resolvers.ts"&gt;book resolvers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And the resolvers connect to the &lt;a href="https://github.com/assertedio/graphql-uptime/blob/master/src/datasources/book-service.ts"&gt;book service&lt;/a&gt;, which is too large to include here.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/assertedio/graphql-uptime/blob/master/src/index.ts"&gt;server itself&lt;/a&gt; is just a straightforward &lt;a href="https://www.apollographql.com/server/"&gt;ApolloServer&lt;/a&gt;. I did not include any authentication in this example in the interests of simplicity, but you can see that in the &lt;a href="https://asserted.io/posts/node-api-uptime-tests"&gt;Node API post&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Routine Configuration
&lt;/h2&gt;

&lt;p&gt;As with the Node API example, the GraphQL routine doesn't require any special dependencies, so just the fixed dependencies are used.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;If you wanted to include an Apollo client in the tests, or some other GraphQL specific libraries, you would need to upgrade to a paid plan to use the custom dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous Integration Tests
&lt;/h2&gt;

&lt;p&gt;In these tests we do not have any special environment variables that we need to load, and we're just using the &lt;a href="https://www.npmjs.com/package/got"&gt;got&lt;/a&gt; client to execute our requests. &lt;/p&gt;

&lt;p&gt;We create a unique book name at the beginning of the test, just to ensure we don't conflict with other books that may already exist in our theoretical production system.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;All of the tests we wrote can be seen &lt;a href="https://github.com/assertedio/graphql-uptime/blob/master/.asserted/simple.asrtd.js"&gt;here&lt;/a&gt;, but I'll list a few specific examples.&lt;/p&gt;

&lt;p&gt;This test uses a more sophisticated query to get all of the other books written by a specific author. &lt;/p&gt;

&lt;p&gt;By being able to write arbitrarily sophisticated queries, you can deeply test all of the resolvers in your API.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Beyond just queries, we can create, update, and remove books as well.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;By adding before and after hooks, we could further ensure that anything created during the test is wiped from production before the test exits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;While the example shown &lt;a href="https://github.com/assertedio/graphql-uptime"&gt;here&lt;/a&gt; can be cloned and run locally without an account, you'll need to do a few extra steps if you want to create your own Asserted routine to integration test your API in production.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an &lt;a href="https://asserted.io"&gt;Asserted&lt;/a&gt; account. It's free and easy.&lt;/li&gt;
&lt;li&gt;Complete the 2 minute onboarding to ensure that your environment is ready. You can also reference the docs &lt;a href="https://docs.asserted.io"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Start writing and running tests in prod!&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>graphql</category>
      <category>node</category>
      <category>testing</category>
      <category>devops</category>
    </item>
    <item>
      <title>Node Typescript API Template with Dependency Injection</title>
      <dc:creator>Eric Hacke</dc:creator>
      <pubDate>Mon, 06 Jul 2020 12:56:40 +0000</pubDate>
      <link>https://dev.to/ehacke/node-typescript-api-template-with-dependency-injection-31eg</link>
      <guid>https://dev.to/ehacke/node-typescript-api-template-with-dependency-injection-31eg</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Available on &lt;a href="https://github.com/ehacke/ts-di-starter"&gt;GitHub&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Dependency injected everything, so it's all modular and unit testable&lt;/li&gt;
&lt;li&gt;Typescript everything&lt;/li&gt;
&lt;li&gt;Everything testable with emulators and Docker, many examples&lt;/li&gt;
&lt;li&gt;Express API with dependency injected routes, controllers and middleware&lt;/li&gt;
&lt;li&gt;Firestore with transparent validation and caching&lt;/li&gt;
&lt;li&gt;Websockets driven by distributed events service&lt;/li&gt;
&lt;li&gt;Fail-safe and centralized configuration loading and validation&lt;/li&gt;
&lt;li&gt;Flexible and configurable rate limiting&lt;/li&gt;
&lt;li&gt;Flexibility over magic&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Folder Structure
&lt;/h2&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Why Dependency Injection?
&lt;/h2&gt;

&lt;p&gt;For those of you that have not heard the term before, &lt;a href="https://en.wikipedia.org/wiki/Dependency_injection"&gt;dependency injection&lt;/a&gt; (or inversion of control) is a pattern wherein an object or function is passed its dependencies by the caller instead of requesting them directly. This improves modularity and reuse, and makes testing much easier.&lt;/p&gt;

&lt;p&gt;Without dependency injection, any class you create would directly require its dependencies. This tightly binds one class to another, and means that when you are writing tests you either have to spin up the entire dependency tree and deal with all that complexity, or you have to intercept the require call.&lt;/p&gt;

&lt;p&gt;Intercepting require calls is possible and commonly done, but not without caveats and side effects. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If your test blows up in the wrong way, mocked require calls may not be restored correctly before the next test.&lt;/li&gt;
&lt;li&gt;Even in normal use, mocked require calls can easily contaminate other tests if not done and undone perfectly.&lt;/li&gt;
&lt;li&gt;Intercepting require calls deep in the structure can be difficult and break easily and non-obviously if files are moved.&lt;/li&gt;
&lt;li&gt;In the event that require-mocking fails, or mocks the wrong thing, the code will fail over to using the real instance instead of failing safe, and this can cause problems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my opinion, using dependency injection is just simpler for both implementation and testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Major Components
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Talking about code is like dancing about architecture. It's better to just &lt;a href="https://github.com/ehacke/ts-di-starter"&gt;read/use the code&lt;/a&gt;. But...&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'll briefly describe each major component, and then how they all fit together.&lt;/p&gt;

&lt;h3&gt;
  
  
  Services
&lt;/h3&gt;

&lt;p&gt;Services all follow the same signature which you can see examples of in the &lt;a href="https://github.com/ehacke/ts-di-starter/tree/e72e27f893e39a501dfc7e1abfeef7e9ae2e9b9d/src/lib/services"&gt;services/&lt;/a&gt; folder.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The constructor for every service takes a map of other services this service class depends on, and a configuration object with the properties relevant to this service.&lt;/p&gt;

&lt;p&gt;I usually make the services and config args specific to each individual service class. You can make them the same for all services to reduce boilerplate, but I find that gets confusing and just moves all that detail to the already busy &lt;a href="https://github.com/ehacke/ts-di-starter/blob/e72e27f893e39a501dfc7e1abfeef7e9ae2e9b9d/src/app/serviceManager.ts"&gt;serviceManager&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You don't have to pass in all of the dependencies, but my rule is that I pass in any external libraries that make an async call or do serious work; or any other services. Things like lodash or simple utilities I don't generally inject.&lt;/p&gt;
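&lt;p&gt;As a minimal sketch of that signature (the service and property names here are invented for illustration), a service receives its collaborators and its config through the constructor, so a test can hand it stubs instead of real clients:&lt;/p&gt;

```javascript
// Hypothetical service following the constructor(services, config) pattern.
class EmailService {
  constructor(services, config) {
    this.log = services.log;        // injected logger
    this.mailer = services.mailer;  // injected external client
    this.fromAddress = config.fromAddress;
  }

  async send(to, subject) {
    this.log.info('sending email to ' + to);
    return this.mailer.send({ from: this.fromAddress, to, subject });
  }
}
```

&lt;p&gt;In a unit test, the whole dependency tree collapses into a couple of in-memory stubs, which is the payoff of the pattern.&lt;/p&gt;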

&lt;h3&gt;
  
  
  Models
&lt;/h3&gt;

&lt;p&gt;As covered in the posts on &lt;a href="https://asserted.io/posts/type-safe-models-in-node"&gt;validated models&lt;/a&gt; and &lt;a href="https://asserted.io/posts/simplified-firestore-with-redis"&gt;firebase caching&lt;/a&gt;, models hold state and validate their contents. They differ from Requests below, in that they are primarily used to transfer state internally and save it to the db.&lt;/p&gt;

&lt;p&gt;In this template I've included a few more concrete examples in &lt;a href="https://github.com/ehacke/ts-di-starter/tree/e72e27f893e39a501dfc7e1abfeef7e9ae2e9b9d/src/lib/models"&gt;models/&lt;/a&gt; and made use of them throughout the code.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;You can see in the above example that in addition to the same sort of structure I've outlined in other posts, it also includes a &lt;em&gt;generateId&lt;/em&gt; and &lt;em&gt;create&lt;/em&gt; function.&lt;/p&gt;

&lt;p&gt;Wherever possible I try to generate model IDs deterministically based on immutable properties of that model. &lt;/p&gt;

&lt;h3&gt;
  
  
  Requests
&lt;/h3&gt;

&lt;p&gt;Requests are very similar to models, with the minor difference of being principally used to transfer state externally. In a lot of cases I end up moving all request models into a dedicated repo and NPM package that is shared with the frontend.&lt;/p&gt;

&lt;h3&gt;
  
  
  Controllers
&lt;/h3&gt;

&lt;p&gt;Controllers are one of the few places in this repo that contain a bit of hidden functionality. Examples in &lt;a href="https://github.com/ehacke/ts-di-starter/tree/e72e27f893e39a501dfc7e1abfeef7e9ae2e9b9d/src/lib/express/controllers"&gt;controllers/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Controllers are simple classes that translate raw incoming JSON into requests or models, and then invoke service calls with those requests or models. They serve as the minimal translation layer between the outside world and the services within the API.&lt;/p&gt;

&lt;p&gt;They generally look like this:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;A couple of things to note in here.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I use &lt;a href="https://www.npmjs.com/package/auto-bind"&gt;autoBind&lt;/a&gt; in the constructor. This is just to make referencing the attached functions easier in the route definitions.&lt;/li&gt;
&lt;li&gt;I pull a user model out of request.locals. This is the user model attached to the request upstream by a middleware when the token is validated and matched to a user.&lt;/li&gt;
&lt;li&gt;I don't call response methods anywhere in here.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The reason that I don't call response methods explicitly is because all controllers and middleware in this API are automatically wrapped with an outer function that handles this for you. It's done by &lt;a href="https://github.com/ehacke/ts-di-starter/blob/e72e27f893e39a501dfc7e1abfeef7e9ae2e9b9d/src/lib/express/responseBuilder.ts"&gt;ResponseBuilder&lt;/a&gt;. ResponseBuilder takes whatever is returned by any controller functions and wraps it in a standard response format.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Additionally, any exceptions that are thrown anywhere during the request are caught by ResponseBuilder. If the exception has an attached code property, that is used as the HTTP code, otherwise it's treated as a 500.&lt;/p&gt;
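&lt;p&gt;The wrapping can be sketched in a few lines (a simplification of the real ResponseBuilder linked above, with an assumed envelope shape): call the controller, wrap the return value, and map thrown errors to HTTP codes.&lt;/p&gt;

```javascript
// Simplified sketch of wrapping a controller for Express-style (req, res).
function wrap(controllerFn) {
  return async function (req, res) {
    try {
      const data = await controllerFn(req);
      res.status(200).json({ data }); // standard success envelope
    } catch (err) {
      // An attached code property becomes the HTTP status, otherwise 500.
      res.status(err.code || 500).json({ message: err.message });
    }
  };
}
```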

&lt;h3&gt;
  
  
  Middleware
&lt;/h3&gt;

&lt;p&gt;Middleware classes have the same structure and wrapper as controllers; the only difference is that they typically attach something to the locals property of the request, and then call next.&lt;/p&gt;

&lt;h3&gt;
  
  
  ServiceManager
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/ehacke/ts-di-starter/blob/e72e27f893e39a501dfc7e1abfeef7e9ae2e9b9d/src/app/serviceManager.ts"&gt;serviceManager&lt;/a&gt; is where everything is stitched together. In a dependency injected pattern this is often referred to as the &lt;a href="https://blog.ploeh.dk/2011/07/28/CompositionRoot/"&gt;composition root&lt;/a&gt;. Here all the clients (redis and firestore clients, etc), services, controllers, and middleware are created; and passed into each other to resolve their dependencies in the right order. &lt;a href="https://github.com/ehacke/ts-di-starter/blob/e72e27f893e39a501dfc7e1abfeef7e9ae2e9b9d/src/app/serviceManager.ts"&gt;Take a look at it&lt;/a&gt; to see what I mean, it's too big to post an example here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Events
&lt;/h3&gt;

&lt;p&gt;One of the services I included is the &lt;a href="https://github.com/ehacke/ts-di-starter/blob/e72e27f893e39a501dfc7e1abfeef7e9ae2e9b9d/src/lib/services/events.ts"&gt;events service&lt;/a&gt;. This service exists to serve as a way of notifying other services, API containers, or the UI of changes to a given model. It uses &lt;a href="https://www.npmjs.com/package/eventemitter2"&gt;eventemitter2&lt;/a&gt; and &lt;a href="https://redis.io/topics/pubsub"&gt;redis pubsub&lt;/a&gt; to do this in a distributed way, so depending on the event type, you can listen for events in your node, or any node in the cluster.&lt;/p&gt;

&lt;p&gt;Sending an event is simple:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Socket.IO
&lt;/h2&gt;

&lt;p&gt;One place events are used heavily is to communicate with the UI via socket.io.&lt;/p&gt;

&lt;p&gt;My socket.io API has controllers and middleware just like the express API. The middleware mediates authentication and the controller sends out events and responds.&lt;/p&gt;

&lt;p&gt;In the case of this template, the controller just relays events for the authenticated user.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rate Limiting
&lt;/h2&gt;

&lt;p&gt;The rate limiting sub-system should probably be its own post at some point, but the examples are included for &lt;a href="https://github.com/ehacke/ts-di-starter/tree/e72e27f893e39a501dfc7e1abfeef7e9ae2e9b9d/src/lib/services/limiters"&gt;reference&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;They allow multiple overlapping limits to be implemented, and the associated middleware will enforce the limits and attach the headers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So that is it for now in this series. If you have questions, hit me up in the &lt;a href="https://github.com/ehacke/ts-di-starter/issues"&gt;issues&lt;/a&gt; of this repo.&lt;/p&gt;

</description>
      <category>node</category>
      <category>typescript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Simplified Firestore with Redis</title>
      <dc:creator>Eric Hacke</dc:creator>
      <pubDate>Wed, 03 Jun 2020 16:11:52 +0000</pubDate>
      <link>https://dev.to/ehacke/simplified-firestore-with-redis-ak6</link>
      <guid>https://dev.to/ehacke/simplified-firestore-with-redis-ak6</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Library available on &lt;a href="https://github.com/ehacke/simple-cached-firestore"&gt;github&lt;/a&gt; and &lt;a href="https://www.npmjs.com/package/simple-cached-firestore"&gt;npm&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've used either Firestore or Datastore on four different large projects now (including my &lt;a href="https://roleup.io"&gt;onboarding app&lt;/a&gt; RoleUp and &lt;a href="https://asserted.io"&gt;uptime testing service&lt;/a&gt; asserted), and over time I've been able to refine and improve my own wrapper. &lt;/p&gt;

&lt;h2&gt;
  
  
  Isn't this better?
&lt;/h2&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;simple-cached-firestore&lt;/em&gt; offers a number of key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;transparent, no-effort redis caching to improve speed and limit costs&lt;/li&gt;
&lt;li&gt;model validation (optional, suggest using &lt;a href="https://github.com/ehacke/validated-base"&gt;validated-base&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;simplified API to reduce boilerplate&lt;/li&gt;
&lt;li&gt;still have access to the underlying firestore client if you need custom functionality&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why build an API when using Firestore?
&lt;/h2&gt;

&lt;p&gt;Obviously one of the biggest and most popular features of Firebase/Firestore is that it can be used entirely serverless. With the correct configuration it can be securely accessed directly from the web or a native app without having to write your own API.&lt;/p&gt;

&lt;p&gt;But that comes with a few big sacrifices that I wasn't willing to make.&lt;/p&gt;

&lt;h3&gt;
  
  
  Validation
&lt;/h3&gt;

&lt;p&gt;You can't easily validate your data models without an API. There is a capability to &lt;a href="https://firebase.google.com/docs/rules/data-validation"&gt;write rules&lt;/a&gt;, but I really don't want to spend hours writing complicated validation logic in this DSL:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Furthermore, in some cases it's just not possible. If you have any kind of complicated validation logic, or even something as simple as wanting to use constants from a library, you're out of luck.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sanitization
&lt;/h3&gt;

&lt;p&gt;Additionally, the rules merely determine whether or not to allow a write to occur. &lt;/p&gt;

&lt;p&gt;What if the properties you are checking are valid, but the user has messed with the Javascript and is saving extra arbitrary properties within the same object? Or much more likely, what if you accidentally attach properties you don't mean to save? In either case you only have limited control over what gets written to your db.&lt;/p&gt;

&lt;h3&gt;
  
  
  Caching
&lt;/h3&gt;

&lt;p&gt;Caching can serve both as a circuit breaker and as insurance against malice or bugs, which is why it's unfortunate that caching also cannot be implemented in a serverless setup without a lot of complexity.&lt;/p&gt;

&lt;p&gt;When implemented well, caching provides significant benefits in terms of cost-reduction and responsiveness. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;With the simple-cached-firestore wrapper, I regularly see cache hit rates of 95-98%, amounting to a huge reduction in Firestore read costs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Usage
&lt;/h2&gt;

&lt;p&gt;Moving on to the subject at hand, we'll look at how I've addressed the shortcomings above with an API and &lt;em&gt;simple-cached-firestore&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;Each instance of &lt;em&gt;simple-cached-firestore&lt;/em&gt; is responsible for all reads and writes to a specific collection, and it's assumed that all elements of that collection can be represented by the same model.&lt;/p&gt;

&lt;p&gt;To create an instance of &lt;em&gt;simple-cached-firestore&lt;/em&gt;, we must first create the model that will exist in the collection.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Model
&lt;/h3&gt;

&lt;p&gt;At minimum, the model has to fulfill the following interface:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The easiest way to do this is to just extend &lt;a href="https://www.npmjs.com/package/validated-base"&gt;validated-base&lt;/a&gt; (the subject of the post on &lt;a href="https://asserted.io/posts/type-safe-models-in-node"&gt;validated models&lt;/a&gt;) and use that.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Now that we have a model to work with, let's create an instance of &lt;em&gt;simple-cached-firestore&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create simple-cached-firestore
&lt;/h3&gt;

&lt;p&gt;As mentioned above, a single instance is responsible for reading and writing to a specific Firestore collection. &lt;/p&gt;

&lt;p&gt;Reads are cached for the configured TTL, and writes update the cache. Because all reads and writes pass through this layer, cache invalidation is not an issue. We have perfect knowledge of what is written, so the only real limit on the cache TTL is how big of a Redis instance you want to pay for.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;You may not want to do all of these operations in one place like this, but this is the general idea.&lt;/p&gt;

&lt;p&gt;The validated class we created above serves as both validation of anything that's passed to it, and a way to translate the object to and from the db (and the cache) into a class instance with known properties.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic CRUD Operations
&lt;/h2&gt;

&lt;p&gt;You can see the breakdown of the basic operations &lt;a href="https://github.com/ehacke/simple-cached-firestore/tree/82507ebd021e1d2bf3ee3cebd79debf47242abf5#crud-api"&gt;here&lt;/a&gt;; they include the expected &lt;em&gt;create&lt;/em&gt;, &lt;em&gt;get&lt;/em&gt;, &lt;em&gt;patch&lt;/em&gt;, &lt;em&gt;update&lt;/em&gt;, and &lt;em&gt;remove&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;To give you an idea of how these CRUD operations are implemented, here is an example of how simple-cached-firestore implements the get operation. It's actually more complicated than this, but this is just to show the major details.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;The full implementation is &lt;a href="https://github.com/ehacke/simple-cached-firestore/blob/82507ebd021e1d2bf3ee3cebd79debf47242abf5/src/firestore.ts#L288"&gt;here&lt;/a&gt;, and includes some extra work with timestamps to avoid race conditions contaminating the cache. But basically the process is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check cache and return if cache exists&lt;/li&gt;
&lt;li&gt;Otherwise get snapshot and convert into a model instance&lt;/li&gt;
&lt;li&gt;Update cache before returning if a value is found&lt;/li&gt;
&lt;/ul&gt;
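&lt;p&gt;The bullets above can be sketched with an in-memory Map standing in for Redis (the real implementation adds TTLs, serialization, and the timestamp handling mentioned):&lt;/p&gt;

```javascript
// In-memory sketch of the cached get flow; a Map stands in for Redis.
function createCachedGetter(cache, db) {
  return async function get(id) {
    // 1. Check cache and return if the value is present
    if (cache.has(id)) return cache.get(id);

    // 2. Otherwise hit the db and convert into a model instance
    const instance = await db.get(id);

    // 3. Warm the cache before returning, if a value was found
    if (instance) cache.set(id, instance);
    return instance;
  };
}
```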

&lt;p&gt;Pretty straightforward, and you can imagine the write operations working in a similar way.&lt;/p&gt;

&lt;p&gt;Depending on the problem you're solving, and if you're careful about how you design all of the data models for your project, you can actually do a large portion of the regular tasks with just the basic CRUD operations. &lt;/p&gt;

&lt;p&gt;This is great if you can manage it because it not only minimizes costs in normal operation, but thanks to the cache, means that you'll almost never have to hit the Firestore itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Query Operations
&lt;/h2&gt;

&lt;p&gt;At some point, some type of query operation is usually required in most projects, even if it's just a list operation with a single filter. In Firestore this is done by chaining operations, often in a specific order. In order to abstract and simplify this, I created a simpler query abstraction that looks like this:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;In use, the query objects look like this:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
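&lt;p&gt;Internally, an abstraction like this just needs to be translated back into Firestore's chained calls. A rough sketch (the query object's property names are assumed from the examples):&lt;/p&gt;

```javascript
// Translate the simplified query object into Firestore-style chained calls.
function applyQuery(collection, query) {
  let ref = collection;

  (query.filters || []).forEach(function (filter) {
    ref = ref.where(filter.property, filter.operator, filter.value);
  });

  if (query.sort) ref = ref.orderBy(query.sort.property, query.sort.direction);
  if (query.limit) ref = ref.limit(query.limit);

  return ref;
}
```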


&lt;p&gt;One important thing to note is that while queries are cached, due to the complexity of the query logic, accurate invalidation is hard. As a result, the cache for queries within a given collection is invalidated on every write to that collection. This makes it not very useful by default, so if you want effective caching of queries, that should be implemented on a case-by-case basis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom Functionality
&lt;/h2&gt;

&lt;p&gt;If the CRUD and query functionality don't work for you in a specific case, you can always access the underlying Firestore client or cache instance with:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
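&lt;p&gt;As a sketch of the idea (the property names here are hypothetical, check the library's README for the real ones), the wrapper simply exposes its underlying services:&lt;/p&gt;

```typescript
// Hypothetical sketch of a caching wrapper exposing its underlying
// services as an escape hatch. Property names are illustrative only.
class CachedFirestore<T> {
  constructor(
    // The raw Firestore client (stubbed here as a minimal interface)
    readonly client: { collection: (name: string) => unknown },
    // The cache service (stubbed here as a Map)
    readonly cache: Map<string, T>,
  ) {}
}

// Anything the wrapper doesn't support can go straight to the client:
//   wrapped.client.collection('widgets') ...
// but writes made this way bypass the cache, so wrapped.cache must be
// updated or invalidated manually to avoid inconsistencies.
```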


&lt;p&gt;But keep in mind that any modifications you make directly to objects in Firestore will not be captured by the cache unless you update it manually, which can result in inconsistencies if you don't do it properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next
&lt;/h2&gt;

&lt;p&gt;From here, I'll describe how validated models and simple-cached-firestore can be integrated in a dependency-injected Node microservice architecture.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>redis</category>
      <category>firebase</category>
      <category>node</category>
    </item>
    <item>
      <title>Type Safe Models in Node</title>
      <dc:creator>Eric Hacke</dc:creator>
      <pubDate>Mon, 01 Jun 2020 18:54:06 +0000</pubDate>
      <link>https://dev.to/ehacke/type-safe-models-in-node-18d4</link>
      <guid>https://dev.to/ehacke/type-safe-models-in-node-18d4</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Library available on &lt;a href="https://github.com/ehacke/validated-base"&gt;github&lt;/a&gt; and &lt;a href="https://www.npmjs.com/package/validated-base"&gt;npm&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  In the Beginning
&lt;/h2&gt;

&lt;p&gt;Many years ago, before I ever got started with Node, I used to write a fair bit of C and C++. While those languages have the benefit of type safety in some circumstances, relatively common patterns like pointer casting are still unsafe. Making unchecked assumptions about your data at runtime can have fun effects, &lt;a href="https://www.androidauthority.com/android-wallpaper-crash-1124577/"&gt;like a wallpaper that bootloops your phone&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As a result, from early on I developed a kind of paranoia about including runtime checks and assertions in my code, as a way of ensuring that if something unexpected happened, the code would explode in a useful way rather than a confusing one, or worse, silently corrupt data.&lt;/p&gt;

&lt;p&gt;You can add testing (or just raw self-confidence) to try to avoid these checks, but in my experience some level of runtime checking is more useful than it is expensive.&lt;/p&gt;

&lt;p&gt;A simple check would look something like this:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
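&lt;p&gt;Something along these lines:&lt;/p&gt;

```typescript
// A simple runtime guard: explode early with a useful message instead of
// silently misbehaving later.
function setAge(age: number): number {
  if (typeof age !== 'number' || Number.isNaN(age)) {
    throw new TypeError(`age must be a number, got: ${age}`);
  }
  return age;
}
```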


&lt;p&gt;Or you can make it a bit more concise with Node assert.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Of course this only really works for non-object parameters. Asserting all of the properties of an object parameter quickly becomes a mess.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  The Javascript Solution
&lt;/h2&gt;

&lt;p&gt;So I came up with a solution that seemed to work pretty well without being overly verbose. I'd create a class that validates its members before construction completes, and then I could pass instances of that class around and just assert that an argument was an instance of that class. &lt;/p&gt;

&lt;p&gt;It's not perfect; technically you could still mutate the instance outside of the constructor, but it was good enough for my purposes in a pre-Typescript world.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Some features of this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It centralises the validation of a given data model within a single model file, keeping it DRY&lt;/li&gt;
&lt;li&gt;Validation happens only once, at construction, and the rest of the code can essentially trust the instance based on its type&lt;/li&gt;
&lt;li&gt;Extra properties are silently stripped off at construction (which may be a problem depending on how strict you want to be)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are further ways to improve this that I won't get into deeply. The biggest improvement is that instead of writing assert statements inside the constructor, it's nicer to use something like &lt;a href="https://github.com/ajv-validator/ajv"&gt;ajv&lt;/a&gt; with &lt;a href="https://json-schema.org/understanding-json-schema/"&gt;JSON Schema&lt;/a&gt; to do the validation. This standardizes the validation, and adds a ton of strictness if that's what you're going for.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Model?
&lt;/h2&gt;

&lt;p&gt;In my implementations, and in this blog going forward, a model is a (mostly) immutable instance of a class that validates its member variables at construction, and can be assumed to contain only valid data from that point forward. &lt;/p&gt;

&lt;p&gt;This allows you to pass model instances from service to service without re-checking all of the internal state, and serves as a centralised place to put all the validation logic associated with a given concept. In my designs, models are created anytime data crosses a system boundary (API to UI, or UI to API, or API to DB, etc), and this way you can be sure that everything is expecting the same data structure with the same constraints.&lt;/p&gt;

&lt;p&gt;Creating new instances of classes at boundaries like this does have a computational cost, but that's usually minimal, and I'll talk later about what to do when it isn't.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For me, models are the fundamental, passive, immutable block of state that all other active abstractions use to communicate with each other. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Moving to Typescript
&lt;/h2&gt;

&lt;p&gt;So at some point in the last year I saw the light and took &lt;a href="https://www.typescriptlang.org/"&gt;Typescript&lt;/a&gt; into my heart. I had resisted it because of the time-penalty during development caused by the compile step, but on the whole it's been a large improvement.&lt;/p&gt;

&lt;p&gt;For those that haven't made the transition, my biggest points would be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Significantly fewer dumb-level bugs with less testing&lt;/li&gt;
&lt;li&gt;Way faster refactoring in a good IDE like Intellij&lt;/li&gt;
&lt;li&gt;Enums, interfaces, and abstract classes offer a big improvement in standardized expressiveness that I had been missing since my C#/C++ days. I had hacked together my own interface concept in Javascript, but Typescript standardizes and improves it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So beyond the benefits of Typescript as a whole, Typescript also offered the opportunity to rethink and refine the validated model approach I had built in Javascript above.&lt;/p&gt;

&lt;p&gt;Of course the gotcha with Typescript is that all of that fancy type-safety stuff completely evaporates at runtime, by design. That's not to say it isn't useful in finding and fixing bugs during development, but it's not helping you in production. My non-typescript approach had been trying to address both, making development faster with better errors, and making production safer with validation. So switching entirely to Typescript types and abandoning runtime checks was not an option for me.&lt;/p&gt;

&lt;p&gt;At the same time, I didn't want to duplicate my work by implementing both runtime and compile time type-checks everywhere. This seems like a waste.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;So, as with all good engineering solutions, I settled on a compromise. I'd validate at runtime within my models, and let Typescript do the rest of the work everywhere else. Sure, that's not perfect, but good enough was good enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Implementation
&lt;/h2&gt;

&lt;p&gt;There are a number of libraries and options for translating Typescript types to runtime checks, but I didn't really like any of them. They seemed like a lot of verbosity and work, basically re-implementing a runtime version of Typescript for every model.&lt;/p&gt;

&lt;p&gt;Eventually I found &lt;a href="https://github.com/typestack/class-validator"&gt;class-validator&lt;/a&gt; and that proved to be the thing I needed. Create a regular Typescript class however you like, and then attach decorators with the validation and constraints to the member definitions. Before exiting the constructor, validate what you have initialised.&lt;/p&gt;

&lt;p&gt;To make this easier, I created a base class that contains the validation logic that I extend for every instance of every model in my system. The core of the base class looks like this:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
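&lt;p&gt;As a dependency-free sketch of the idea (the real base class delegates to class-validator's &lt;code&gt;validateSync&lt;/code&gt;; here a &lt;code&gt;checkErrors&lt;/code&gt; hook stands in so the sketch runs on its own):&lt;/p&gt;

```typescript
// Sketch of a validated base class. The real implementation delegates to
// class-validator's validateSync; here each subclass supplies its own
// checks so the sketch has no external dependencies.
class ValidationError extends Error {
  constructor(message: string, readonly statusCode = 400) {
    super(message);
  }
}

abstract class ValidatedBase {
  // Subclasses return a list of human-readable constraint violations
  protected abstract checkErrors(): string[];

  // Called at the end of each concrete constructor
  protected validate(): void {
    const errors = this.checkErrors();
    if (errors.length > 0) {
      // Collect, format, and throw with an attached HTTP status code
      throw new ValidationError(errors.join('; '), 400);
    }
  }
}

// A hypothetical concrete model extending the base
class Widget extends ValidatedBase {
  constructor(readonly name: string) {
    super();
    this.validate();
  }

  protected checkErrors(): string[] {
    return typeof this.name === 'string' && this.name.length > 0 ? [] : ['name must be a non-empty string'];
  }
}
```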


&lt;p&gt;I omitted some details for brevity, but the full implementation of the class is &lt;a href="https://github.com/ehacke/validated-base/blob/f187ab2f770f273c1fb9620bffcf115d03c99b3d/index.ts"&gt;here&lt;/a&gt;. Or check out &lt;a href="https://github.com/ehacke/validated-base"&gt;GitHub&lt;/a&gt; or &lt;a href="https://www.npmjs.com/package/validated-base"&gt;npm&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This does a few things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;uses class-validator to validate the concrete class&lt;/li&gt;
&lt;li&gt;if there are any errors, collects them, formats them, and throws them with an attached HTTP status code (I catch and relay this in my controller)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An example implementation of this class would look like:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;With this class defined, you can just create an instance of it, and omit asserting the types of function parameters.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
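&lt;p&gt;For example (with inline checks standing in for the class-validator decorators, so the sketch stands on its own):&lt;/p&gt;

```typescript
// With validation done at construction, downstream functions can trust the
// type alone. Inline checks stand in for class-validator decorators here.
class Article {
  readonly title: string;
  readonly views: number;

  constructor(params: { title: string; views: number }) {
    if (typeof params.title !== 'string' || params.title.length === 0) {
      throw new Error('title must be a non-empty string');
    }
    if (!Number.isInteger(params.views) || params.views < 0) {
      throw new Error('views must be a non-negative integer');
    }
    this.title = params.title;
    this.views = params.views;
  }
}

// No per-property assertions needed: the Article type is the guarantee
function summarize(article: Article): string {
  return `${article.title} (${article.views} views)`;
}
```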


&lt;p&gt;And that's it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Next
&lt;/h2&gt;

&lt;p&gt;From here I'll move onto the next level, using these validated models in connection with the DB.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>node</category>
      <category>validation</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
