<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lochie</title>
    <description>The latest articles on DEV Community by Lochie (@ldenholm).</description>
    <link>https://dev.to/ldenholm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F410656%2Fc2d53a80-3c87-487f-8534-08587cabfcb8.jpg</url>
      <title>DEV Community: Lochie</title>
      <link>https://dev.to/ldenholm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ldenholm"/>
    <language>en</language>
    <item>
      <title>Notes on Resilient Services</title>
      <dc:creator>Lochie</dc:creator>
      <pubDate>Tue, 05 Oct 2021 08:45:01 +0000</pubDate>
      <link>https://dev.to/ldenholm/notes-on-resilient-services-n0g</link>
      <guid>https://dev.to/ldenholm/notes-on-resilient-services-n0g</guid>
      <description>&lt;p&gt;&lt;strong&gt;Resilience&lt;/strong&gt; = a systems capability to withstand errors and faults and the effectiveness at recovering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reliability&lt;/strong&gt; = a measure of a system's ability to behave as expected over a given time interval.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt; = how well a system's internal states can be inferred from knowledge of its external outputs. A system can be considered observable when it's possible to quickly and consistently ask novel questions about it with minimal prior knowledge.&lt;/p&gt;

&lt;p&gt;Detecting and debugging problems is a fundamental requirement if we wish to produce a robust and painless system. Distributed systems are complex, and even locating where a problem lies can be time-consuming.&lt;/p&gt;

&lt;p&gt;The number of possible failure states for any given system is proportional to the product of the number of possible partial and complete failure states of each of its components, and it's impossible to predict them all.&lt;/p&gt;

&lt;h2&gt;Scaling&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Vertical&lt;/strong&gt; = upsizing (or downsizing) resources such as RAM or CPU. Technically it is relatively straightforward, but a system can only be scaled vertically so far.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Horizontal&lt;/strong&gt; = adding (or removing) service instances, for example adding more service nodes behind a load balancer, or containers in Kubernetes/ECS. More replicas mean greater design and management complexity, and not all services can be horizontally scaled.&lt;/p&gt;

&lt;h2&gt;Concurrency is not Parallelism&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Parallelism&lt;/strong&gt; refers to the execution of multiple processes simultaneously. The processes must be executing at the same instant, which is impossible on a single core. Multi-core processors allow for parallel execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concurrency&lt;/strong&gt; refers to the composition of independently executing procedures. This idea is about structuring a set of managed processes to increase performance, regardless of whether the processes actually execute in parallel.&lt;/p&gt;

&lt;p&gt;We can use ACP (the algebra of communicating processes) to describe what concurrency might look like. In this framework, processes are treated as control mechanisms for the manipulation of data.&lt;/p&gt;

&lt;p&gt;For more in-depth reading on process algebra:&lt;br&gt;
&lt;a href="http://www.few.vu.nl/%7Ewanf/BOOKS/procalg.pdf"&gt;http://www.few.vu.nl/~wanf/BOOKS/procalg.pdf&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Reasons to adopt gRPC in existing systems</title>
      <dc:creator>Lochie</dc:creator>
      <pubDate>Tue, 21 Sep 2021 04:28:32 +0000</pubDate>
      <link>https://dev.to/ldenholm/reasons-to-adopt-grpc-in-microservices-architecture-45ph</link>
      <guid>https://dev.to/ldenholm/reasons-to-adopt-grpc-in-microservices-architecture-45ph</guid>
      <description>&lt;h2&gt;Microservices&lt;/h2&gt;

&lt;p&gt;Advances in cloud computing have shown that microservices architecture is an effective solution to issues introduced by monolithic systems. The previous iteration of this idea was called SOA, or Service-Oriented Architecture, a term coined in the 1990s.&lt;/p&gt;

&lt;p&gt;The general design pattern is not new, but the rise of highly scalable infrastructure provisioning technologies (AWS/Azure/Google Cloud) has pushed this instrumental pattern back into the mainstream. Leading companies have adopted microservices to decrease the computational costs of their systems.&lt;/p&gt;

&lt;h2&gt;Challenges with microservices&lt;/h2&gt;

&lt;p&gt;There are three critical challenges with a microservices implementation. The first and most immediate is that where we previously passed objects around in memory, we now pass them across the wire. Sending information over the wire requires a significant amount of work. The typical journey involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Serialize a request into a message that we can send as bits across the wire.&lt;/li&gt;
&lt;li&gt;Send the request via a network card.&lt;/li&gt;
&lt;li&gt;Translate the data into packets.&lt;/li&gt;
&lt;li&gt;Receive the packets on the other end.&lt;/li&gt;
&lt;li&gt;Deserialize and finally turn the data into an in-memory object the requester can use.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this incurs a performance penalty, and we pay it every time we send a message. Modern microservice implementations comprise many layers and operations, such as fan-out, so the cumulative cost of serialization and deserialization grows as the system evolves.&lt;/p&gt;

&lt;p&gt;The second major challenge is network contention. When dealing with unreliable networks and large payloads, system performance suffers.&lt;/p&gt;

&lt;p&gt;The third challenge arises from the growing number of services in a given system. Current microservice systems often dedicate a single machine to a single service. Newer systems with a higher volume of services add two requirements: running multiple services on a single machine, and running a single service across multiple machines.&lt;/p&gt;

&lt;h2&gt;Answers to the challenges&lt;/h2&gt;

&lt;p&gt;gRPC =&amp;gt; a collection of libraries and tools that allow us to create APIs (clients and servers) in many different languages. It relies on Protocol Buffers (protobufs), a strongly typed and binary-efficient mechanism for serializing and deserializing messages (syntactically similar to Go).&lt;/p&gt;
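&lt;p&gt;A minimal sketch of a protobuf service definition (the service, message, and field names here are invented for illustration):&lt;/p&gt;

```proto
syntax = "proto3";

package orders;

// Hypothetical message types, for illustration only.
message OrderRequest {
  int64 id = 1;
}

message OrderReply {
  string item = 1;
}

service OrderService {
  // A simple unary RPC; gRPC also supports streaming RPCs.
  rpc GetOrder(OrderRequest) returns (OrderReply);
}
```

&lt;p&gt;The protoc compiler generates strongly typed client and server stubs from a definition like this for each target language.&lt;/p&gt;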

&lt;p&gt;gRPC is built on HTTP/2 as its transport, giving it access to features like bidirectional streaming.&lt;/p&gt;

&lt;p&gt;gRPC can control retries, flow control, and rate management: all things required for building a robust client.&lt;/p&gt;

&lt;p&gt;The language of choice = Go. Go has a small runtime footprint, which is essential when services are small and there are lots of them. Every business has an economic incentive to minimize computational costs. Being able to run a process with as little memory and system overhead as possible means we can pack more processes into a finite amount of compute.&lt;/p&gt;

&lt;h2&gt;What is next?&lt;/h2&gt;

&lt;p&gt;Next, I will explore the structure of a gRPC microservice.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>go</category>
      <category>cloud</category>
      <category>grpc</category>
    </item>
  </channel>
</rss>
