<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shane Jarman</title>
    <description>The latest articles on DEV Community by Shane Jarman (@sjarman91).</description>
    <link>https://dev.to/sjarman91</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F716585%2Fe441a3bf-f51b-4df1-b6e4-d3c7cb83b5ba.jpeg</url>
      <title>DEV Community: Shane Jarman</title>
      <link>https://dev.to/sjarman91</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sjarman91"/>
    <language>en</language>
    <item>
      <title>Why GraphQL Is Perfect For Microservices</title>
      <dc:creator>Shane Jarman</dc:creator>
      <pubDate>Wed, 13 Oct 2021 00:04:36 +0000</pubDate>
      <link>https://dev.to/sjarman91/why-graphql-is-perfect-for-microservices-2lfh</link>
      <guid>https://dev.to/sjarman91/why-graphql-is-perfect-for-microservices-2lfh</guid>
      <description>&lt;p&gt;A well-designed microservices architecture is an amazing thing. When each service is responsible for its own specific domain and can be deployed independently, it allows for unparalleled flexibility and productivity for backend teams. However, very few decisions are all benefit with no cost - including our decision to choose microservices over a monolithic backend. &lt;/p&gt;

&lt;p&gt;Before we migrated our backend to GraphQL, one trade-off we encountered was the additional complexity that microservices imposed on our client-side networking code - especially when we had to build views that combined data from multiple services.&lt;/p&gt;

&lt;p&gt;For example, our application had an "appointment list" view that looked a little something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--73cmZRkr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oty3u5gwy95r182vhwxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--73cmZRkr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oty3u5gwy95r182vhwxa.png" alt="Appointment List View"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a monolithic architecture, the networking code for this page could be trivial. It would be very uncommon to look at an appointment without provider, patient, and clinic details, so you probably would want to JOIN those tables in the appointments query and return them every time.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[MONOLITH]: /companies/id/appointments

{ result: [Appointment] } &amp;lt;--- joined with provider/patient info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, this JOIN was not possible in our microservices architecture, since the Appointment Service does not have access to provider and patient data. &lt;/p&gt;

&lt;p&gt;Our client applications now had to query three different services via four different endpoints instead of querying one resource that returned all the data they needed. Additionally, the client applications did not have access to the IDs of the required providers, patients, and clinics until after we loaded the initial appointments query!&lt;/p&gt;

&lt;p&gt;So the client side networking logic ended up looking like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- [APPOINTMENT SERVICE] - GET /companies/id/appointments

.. wait for data ...

{ result: [Appointment] }

THEN:
- [PATIENT SERVICE] - GET /patients/id x 5
- [COMPANY SERVICE] - GET /clinics/id x 5
- [COMPANY SERVICE] - GET /providers/id x 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not ideal! Each individual request was fast, but the view as a whole loaded almost twice as slowly as the monolithic version would have, since we had to wait for the initial appointments request to complete before fetching the rest of the associated details.&lt;/p&gt;

&lt;h2&gt;GraphQL To The Rescue&lt;/h2&gt;

&lt;p&gt;We worked around these limitations for a while, but realized that eventually something had to give. The microservices infrastructure that the backend team loved so much was making the developer experience for our frontend and mobile teams extremely frustrating. We needed a solution, and we needed one fast.&lt;/p&gt;

&lt;p&gt;Luckily, there was a brand-new addition to the GraphQL ecosystem that was the perfect solution to our problem - Apollo Federation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Kq1h1uw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whdb7h64hf5a5hvtwz54.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Kq1h1uw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whdb7h64hf5a5hvtwz54.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Apollo Federation allowed each microservice to manage its own "subgraph" while abstracting that complexity from our client applications. All our subgraphs were stitched together in the GraphQL Gateway to create one seamless Federated Graph that could easily be accessed via a single endpoint.&lt;/p&gt;
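&lt;p&gt;To make that concrete, here is a sketch of what an Appointment subgraph schema could look like - the entity and field names are illustrative, not our production schema:&lt;/p&gt;

```graphql
# Appointment subgraph (field names illustrative): it owns Appointment
# and references Patient as a federated entity it does not resolve itself.
type Appointment @key(fields: "id") {
  id: ID!
  startTime: String!
  patient: Patient!
}

# Stub of an entity owned by the Patient subgraph
extend type Patient @key(fields: "id") {
  id: ID! @external
}

type Query {
  companyAppointments: [Appointment!]!
}
```

&lt;p&gt;The Gateway composes this with the Patient and Company subgraphs, so clients see one unified graph.&lt;/p&gt;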

&lt;p&gt;The new request for the company appointment list was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- [GRAPHQL GATEWAY] - POST /graphql

query {
  companyAppointments { &amp;lt;--- [APPOINTMENT]
    startTime
    patient { &amp;lt;--- [PATIENT]
      firstName
      lastName
    }
    provider { &amp;lt;--- [COMPANY]
      firstName
      lastName
    }
    clinic { &amp;lt;--- [COMPANY]
      clinicName
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our clients did not need to know which backend service was resolving the data they were querying, and they only had to make one request to complete this view. This new query was also much more efficient because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GraphQL only returns the specific fields the client requests, making the payloads much smaller&lt;/li&gt;
&lt;li&gt;The Gateway gathers all the data it needs via inter-service communication before responding to the client, eliminating the need for the follow-up requests&lt;/li&gt;
&lt;/ul&gt;
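&lt;p&gt;Under the hood, a subgraph resolver only has to return a typed reference for entities it does not own. A minimal sketch in TypeScript, with hypothetical field names:&lt;/p&gt;

```typescript
// Hypothetical Appointment subgraph resolvers (Apollo Federation style).
// The appointment record only stores patientId; the resolver returns a
// typed reference, and the gateway asks the Patient subgraph for the rest.
interface AppointmentRecord {
  id: string;
  startTime: string;
  patientId: string;
}

const resolvers = {
  Appointment: {
    // Return a reference; the Patient subgraph resolves the other fields.
    patient(appointment: AppointmentRecord) {
      return { __typename: "Patient", id: appointment.patientId };
    },
  },
};
```

&lt;p&gt;The Gateway takes that reference and fetches the remaining Patient fields from the owning subgraph, all within a single client request.&lt;/p&gt;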

&lt;p&gt;Thanks to GraphQL Federation, we were able to have our cake and eat it too! The backend team was able to continue development with all the benefits that microservices provide, while our frontend and mobile teams became even more efficient thanks to the flexibility of GraphQL. &lt;/p&gt;

&lt;p&gt;We were extremely happy with the outcome of our migration to Federated GraphQL, and while microservices can be good with REST, they are better with GraphQL!&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>microservices</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Architecting a Real-Time Scheduling Integration with DynamoDB</title>
      <dc:creator>Shane Jarman</dc:creator>
      <pubDate>Thu, 30 Sep 2021 16:32:40 +0000</pubDate>
      <link>https://dev.to/sjarman91/architecting-a-realtime-scheduling-integration-with-dynamodb-215a</link>
      <guid>https://dev.to/sjarman91/architecting-a-realtime-scheduling-integration-with-dynamodb-215a</guid>
      <description>&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;At BetterHealthcare, we are focused on building the best digital front door in healthcare. To do that, we needed to develop an integrated scheduling product that accurately displayed provider availability in real-time.&lt;/p&gt;

&lt;h3&gt;Considerations &amp;amp; Thought Process&lt;/h3&gt;

&lt;p&gt;Our solution needed to be scalable, event-driven, and able to support multiple EMRs (each of which could have calendar data formatted in slightly different ways). The best way to brainstorm a solution to a big problem like this is to break it down into a few smaller problems. &lt;/p&gt;

&lt;p&gt;In this case, we discussed the following questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What will the scheduling data look like? How will we initially process the scheduling data when we receive it?&lt;/li&gt;
&lt;li&gt;Once data from a specific EMR has entered the system, how will we transform it into our standard availability model?&lt;/li&gt;
&lt;li&gt;After the data is transformed into a standard format, how will we store it?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After answering those questions and considering them all in the context of our broader platform and architecture, we came up with the following solution.&lt;/p&gt;

&lt;h2&gt;The BetterScheduling Solution&lt;/h2&gt;

&lt;p&gt;We decided that the best move was to leverage AWS's managed services to build our availability data flow using AWS Lambda, DynamoDB, and Kinesis streams. &lt;/p&gt;

&lt;h3&gt;Step 1 - Processing EMR Data Feeds&lt;/h3&gt;

&lt;p&gt;Each scheduling event that we receive contains just a single piece of the availability puzzle. It could be an appointment cancellation, an updated vacation day, or a change to the 'working hours' of a provider. Scheduling events can arrive in rapid bursts (e.g. when the integration is activated for a new customer) or one at a time as provider schedules update throughout the day.&lt;/p&gt;

&lt;p&gt;Because the quantity of events we are receiving at any given time can be highly variable, we decided to publish each one to an AWS Kinesis Stream when it is received. The &lt;code&gt;scheduling-event-stream&lt;/code&gt; then has an AWS Lambda Consumer that creates or updates the corresponding scheduling item in the &lt;code&gt;scheduling-items&lt;/code&gt; DynamoDB Table.&lt;/p&gt;
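&lt;p&gt;A minimal sketch of that consumer's decoding step - the event shape and names are hypothetical, and the real handler then writes each item to DynamoDB via the AWS SDK:&lt;/p&gt;

```typescript
// Sketch of the scheduling-event-stream consumer (names are hypothetical).
// Kinesis delivers each record payload base64-encoded; the Lambda decodes
// them and upserts the corresponding items into the scheduling-items table.
interface KinesisRecord {
  kinesis: { data: string }; // base64-encoded JSON payload
}

interface SchedulingEvent {
  providerId: string;
  eventType: string; // e.g. "APPOINTMENT_CANCELLED"
}

function decodeRecords(records: KinesisRecord[]): SchedulingEvent[] {
  return records.map((record) =>
    JSON.parse(Buffer.from(record.kinesis.data, "base64").toString("utf8"))
  );
}

// The real handler would then issue a DynamoDB PutItem per decoded event,
// with failed batches retried from the stream rather than dropped.
```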

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasa7185og91wsfp1paar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasa7185og91wsfp1paar.png" alt="Diagram of API Gateway, Kinesis, Lambda, DynamoDB all linked"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The benefit of using the &lt;code&gt;scheduling-event-stream&lt;/code&gt; is that if the data feed from an EMR partner exceeds our write capacity, we can hold the data in the stream temporarily and retry processing the events! If our API simply rejected the requests instead, we would force our partners to "try again" and resend the data - which would be less reliable and lead to a lot of errors.&lt;/p&gt;

&lt;p&gt;Our Lambda, Kinesis, and Dynamo event intake architecture allows us to have the benefits of 'real-time' event processing without the risk of overloading the system when volume increases.&lt;/p&gt;

&lt;h3&gt;Step 2 - Transforming Scheduling Items&lt;/h3&gt;

&lt;p&gt;OK - so we now have a table full of scheduling items updated in real-time. Great! But these items are formatted differently for each EMR, and "scheduling items" are not "available slots". We still have more work to do to get to the "real-time availabilities" that we need.&lt;/p&gt;

&lt;p&gt;We solved this data 'transformation' issue using DynamoDB Streams. Using pattern-matching, we are able to match specific scheduling item updates to a corresponding "EMR Data Transform" Lambda. When a scheduling item is modified, the appropriate Lambda immediately calculates new, properly standardized availability for whichever providers and dates were impacted. &lt;/p&gt;
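&lt;p&gt;In spirit, the dispatch looks something like this - the EMR names and data shapes here are invented for illustration:&lt;/p&gt;

```typescript
// Sketch of the "EMR Data Transform" dispatch (all names hypothetical).
// Each EMR registers a function that turns its raw scheduling item into
// standardized available spans for the impacted provider and date.
interface SchedulingItem {
  emrType: string;
  providerId: string;
  date: string;
  raw: unknown; // EMR-specific payload
}

interface AvailableSpan {
  providerId: string;
  date: string;
  startTime: string;
  endTime: string;
}

type Transform = (item: SchedulingItem) => AvailableSpan[];

const transforms: { [emrType: string]: Transform } = {
  // Toy transform: a real one would parse item.raw into concrete spans
  "emr-a": (item) => [
    { providerId: item.providerId, date: item.date, startTime: "09:00", endTime: "17:00" },
  ],
};

function transformItem(item: SchedulingItem): AvailableSpan[] {
  const transform = transforms[item.emrType];
  if (!transform) {
    throw new Error("No transform registered for EMR: " + item.emrType);
  }
  return transform(item);
}
```

&lt;p&gt;Adding support for a new EMR then means registering one new transform, without touching the intake or storage layers.&lt;/p&gt;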

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l7cdmpc40fagvgxdueb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l7cdmpc40fagvgxdueb.png" alt="Expanded Architecture Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our 'availability data' is now ready to be stored!&lt;/p&gt;

&lt;h3&gt;Step 3 - Store Standardized Available Spans&lt;/h3&gt;

&lt;p&gt;Now we have a day (or multiple days) of standardized available spans ready to be saved. But how should we store them?&lt;/p&gt;

&lt;p&gt;We decided to save the final, standardized data in DynamoDB. There are pros and cons to using a 'wide-column' NoSQL datastore like Dynamo, but in our case the benefits clearly outweighed the costs. &lt;/p&gt;

&lt;p&gt;Our access patterns were clearly defined, the Dynamo API works splendidly with AWS Lambda, and we would not need to manage database connections (as we would with a SQL solution). As we add EMR partners and integrated customers, Dynamo allows us to scale our reads and writes horizontally while "paying as we go" to handle bursts of onboarding throughout the day.&lt;/p&gt;
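&lt;p&gt;To illustrate what "clearly defined access patterns" buys us - with key shapes invented for this sketch, not our production schema - availability lookups become a single-partition query keyed by provider and date:&lt;/p&gt;

```typescript
// Hypothetical key design for the available-spans table: partition by
// provider, sort by date, so "availability for provider X on date Y"
// is a single-item lookup and a date range is one Query per partition.
function availableSpanKey(providerId: string, date: string) {
  return {
    pk: "PROVIDER#" + providerId, // partition key
    sk: "DATE#" + date, // sort key
  };
}
```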

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5dllgkuejfds0lw2hyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5dllgkuejfds0lw2hyi.png" alt="Final Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So - our real-time availabilities are saved in our &lt;code&gt;available-spans&lt;/code&gt; DynamoDB table, which our clients are able to query through our GraphQL API.&lt;/p&gt;

&lt;p&gt;NOTE: In the final architecture diagram above, you'll notice that there is an additional Lambda connected to our &lt;code&gt;available-spans&lt;/code&gt; DynamoDB table. That is the final piece of a slightly shorter 'pre-calculated availability' data flow. It is there to show an example of additional data flows that we support, but it is not required for the provided example to function.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;We hope that you enjoyed this high-level overview of our scheduling infrastructure. We plan on posting more in-depth posts on each of the three steps above over the next couple weeks, so stay tuned!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
