<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sanket Barapatre</title>
    <description>The latest articles on DEV Community by Sanket Barapatre (@sanket2021).</description>
    <link>https://dev.to/sanket2021</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F270149%2F9cfd186c-f364-4aa6-9078-62f10a0b019c.jpeg</url>
      <title>DEV Community: Sanket Barapatre</title>
      <link>https://dev.to/sanket2021</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sanket2021"/>
    <language>en</language>
    <item>
      <title>Introduction to Consumer Driven Contract Tests</title>
      <dc:creator>Sanket Barapatre</dc:creator>
      <pubDate>Thu, 26 May 2022 18:49:19 +0000</pubDate>
      <link>https://dev.to/sanket2021/introduction-to-consumer-driven-contract-tests-2203</link>
      <guid>https://dev.to/sanket2021/introduction-to-consumer-driven-contract-tests-2203</guid>
      <description>&lt;h2&gt;
  
  
  CDC is short for Consumer-Driven Contract Tests.
&lt;/h2&gt;

&lt;p&gt;They involve writing test cases for the other systems that we communicate with. Since there is a contract involved in event-driven communication, it is imperative to ensure that this contract stays intact during feature development, refactoring, or any other kind of code change on the other system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why we need CDCs
&lt;/h3&gt;

&lt;p&gt;Let's say we are System A and we consume messages from System B using a contract. Our functionality therefore depends on System B.&lt;/p&gt;

&lt;p&gt;We need to ensure that if System B unintentionally changes their code in a way that affects the contract, they KNOW that System A will fail, since it will no longer be able to consume the messages/events once the contract has changed.&lt;/p&gt;

&lt;p&gt;An effective way to ensure this is to write CDCs in our project that are provided to System B to run. If, during development, System B changes the contract and tries to build, the CDC written by System A will fail, and System B will know to roll back the change or to inform System A to handle it on our side. Hence, CDCs help maintain proper communication between multiple systems, ensuring contract changes are properly communicated and handled.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to implement?
&lt;/h4&gt;

&lt;p&gt;We write CDCs in our code base, as simple as a Java unit test. These are then wrapped into a Docker image so they can be easily shared via a Docker registry. The other system is informed that they have to run this CDC, since we are consuming their contract.&lt;/p&gt;

&lt;p&gt;Hence, on their side, in their build pipeline, they pull this Docker image from the registry and run it. If it runs successfully, we know the contract is intact; if not, they need to inform us so we can incorporate the change.&lt;/p&gt;
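
&lt;p&gt;A minimal sketch of what such a CDC can look like as a plain Java test. The payload shape and field names here are illustrative, not from a real contract: the consumer asserts only the fields it actually relies on, so any provider change that removes or renames them fails the build.&lt;/p&gt;

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical consumer-driven contract check written by System A.
// System B runs this against a sample of the payload it produces.
public class OrderEventContractTest {

  // The fields System A (the consumer) depends on; names are illustrative.
  static final String[] REQUIRED_FIELDS = {"eventId", "version", "data"};

  // Returns true only if the provider payload still satisfies the contract.
  static boolean satisfiesContract(Map<String, Object> payload) {
    for (String field : REQUIRED_FIELDS) {
      if (!payload.containsKey(field) || payload.get(field) == null) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // A sample payload as System B currently produces it.
    Map<String, Object> sample = new HashMap<>();
    sample.put("eventId", "some-event-id-1");
    sample.put("version", "1.0");
    sample.put("data", "payload body");
    if (!satisfiesContract(sample)) {
      throw new AssertionError("Contract broken: a required field is missing");
    }
    System.out.println("contract holds");
  }
}
```

&lt;p&gt;In a real setup this would be a JUnit test, packaged into the Docker image described above so the provider's pipeline can run it.&lt;/p&gt;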

&lt;p&gt;If multiple systems are consuming our contract and we want to change it, we know whom to communicate the change to.&lt;/p&gt;

</description>
      <category>cdc</category>
      <category>eventdriven</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Some AWS Networking concepts</title>
      <dc:creator>Sanket Barapatre</dc:creator>
      <pubDate>Thu, 26 May 2022 18:44:01 +0000</pubDate>
      <link>https://dev.to/sanket2021/some-aws-networking-concepts-57e6</link>
      <guid>https://dev.to/sanket2021/some-aws-networking-concepts-57e6</guid>
      <description>&lt;ol&gt;
&lt;li&gt;&lt;p&gt;VPC is like a container for holding multiple resources together, bound by a private CIDR. It can span multiple AZs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Subnet is a sub-part of a VPC for isolating resources within the VPC, like an EC2 instance or a DB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A VPC can have one CIDR, and each of its subnets uses a part of it; all subnet CIDRs should be non-overlapping.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VPC has security groups as a firewall (which require config only for one-way traffic, since reply traffic is allowed automatically) and route tables to connect to a NAT, IG, other VPCs, or even intra-VPC traffic. Security group rules are all evaluated together and have no rule-number ordering.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A subnet has its own route table, which inherits the VPC's main route table if not specified; it has to allow at least intra-VPC traffic by default. A subnet also has a Network Access Control List (NACL) as its own firewall, where you have to configure reply traffic as well, unlike the VPC's security groups. NACL rules are evaluated in ascending rule-number order, so always add the local-traffic rule first.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NAT (Network Address Translation) is used when connecting a VPC to the outside world. It translates internal IP addresses to a public IP address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Elastic IP: when we reboot an instance, its public IP changes; hence we use an Elastic IP to give it a static IP address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IG: an internet gateway allows access to the Internet. It has to be configured in the route tables as well.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VPC peering: creates a connection for VPC-to-VPC communication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NAT instance: an EC2 instance that sits in a public subnet and performs NAT for a private subnet, allowing the private subnet to reach the outside world. We use a similar setup for a bastion host, or jump host, to connect to a DB in a private subnet.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The RFC 1918 specification recommends using private ranges such as 10.0.0.0/8 or 172.16.0.0/12 for the VPC CIDR.&lt;/p&gt;
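
&lt;p&gt;Point 3 above (subnets carve up the VPC CIDR without overlapping) can be checked with plain integer arithmetic. A small sketch, with no AWS calls involved:&lt;/p&gt;

```java
// Sketch: verify that subnet CIDRs fit inside a VPC CIDR and do not overlap.
public class CidrCheck {

  // Parse "a.b.c.d/len" into the {first, last} addresses of the block.
  static long[] range(String cidr) {
    String[] parts = cidr.split("/");
    String[] octets = parts[0].split("\\.");
    long base = 0;
    for (String o : octets) base = (base << 8) | Long.parseLong(o);
    int len = Integer.parseInt(parts[1]);
    long mask = len == 0 ? 0 : (0xFFFFFFFFL << (32 - len)) & 0xFFFFFFFFL;
    long start = base & mask;
    long end = start | (~mask & 0xFFFFFFFFL);
    return new long[]{start, end};
  }

  // True if the subnet block lies entirely inside the VPC block.
  static boolean contains(String vpcCidr, String subnetCidr) {
    long[] vpc = range(vpcCidr), sub = range(subnetCidr);
    return sub[0] >= vpc[0] && sub[1] <= vpc[1];
  }

  // True if the two blocks share any addresses.
  static boolean overlaps(String a, String b) {
    long[] ra = range(a), rb = range(b);
    return ra[0] <= rb[1] && rb[0] <= ra[1];
  }

  public static void main(String[] args) {
    System.out.println(contains("10.0.0.0/16", "10.0.1.0/24")); // true
    System.out.println(overlaps("10.0.1.0/24", "10.0.2.0/24")); // false
  }
}
```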

</description>
      <category>aws</category>
      <category>vpc</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Distributing and re-using your private maven artefact using github registry</title>
      <dc:creator>Sanket Barapatre</dc:creator>
      <pubDate>Thu, 26 May 2022 18:40:39 +0000</pubDate>
      <link>https://dev.to/sanket2021/distrbuting-and-re-using-your-private-maven-artefact-using-github-registry-3kni</link>
      <guid>https://dev.to/sanket2021/distrbuting-and-re-using-your-private-maven-artefact-using-github-registry-3kni</guid>
      <description>&lt;p&gt;&lt;strong&gt;The Why?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While working on a Maven project, or with any other build tool like Gradle, we come across a lot of dependencies while building a project.&lt;br&gt;
The idea is to not write boilerplate code and re-invent the wheel, but to utilize existing dependencies in a project.&lt;br&gt;
While working in an organization, we come across libraries privately owned by the organization, or something we developed ourselves and want only specific teams or groups of people to use.&lt;/p&gt;

&lt;p&gt;We could share these build artifacts (for Maven/Gradle build tools) using privately owned collaboration tools, but then distributing updates, patches, and improvements for the same artifact across teams becomes cumbersome and hard to maintain.&lt;/p&gt;

&lt;p&gt;A simple idea is to upload the artefacts in a secure place where dependent services can download these using some credentials.&lt;/p&gt;

&lt;p&gt;Here I will discuss two approaches to do this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GitHub Registry
GitHub provides an artifact registry where, among other things, you can build and deploy your artifact so it can be used by everyone as a public artifact, or kept privately owned.
Here are the steps to do so:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;a. Create the library that you want to deploy.&lt;/p&gt;

&lt;p&gt;b. Add distributionManagement attributes to your Project Object Model file, pom.xml, specifying which repository you want your artifact to be put into.&lt;/p&gt;
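
&lt;p&gt;A sketch of what that distributionManagement section can look like for the GitHub registry (OWNER and REPOSITORY are placeholders for your GitHub account and repository):&lt;/p&gt;

```xml
<distributionManagement>
  <repository>
    <id>github</id>
    <name>GitHub Packages</name>
    <url>https://maven.pkg.github.com/OWNER/REPOSITORY</url>
  </repository>
</distributionManagement>
```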

&lt;p&gt;c. Run mvn deploy to deploy your artifact as a dependency.&lt;/p&gt;

&lt;p&gt;d. You should see your artifact deployed in the corresponding package.&lt;/p&gt;

&lt;p&gt;e. Restrict the use of the artifact by making your repo private, or keep it public. For private repos, you can restrict access to specific users or user groups.&lt;/p&gt;

&lt;p&gt;How to use the above artifacts:&lt;/p&gt;

&lt;p&gt;a. Create a service depending on the library.&lt;/p&gt;

&lt;p&gt;b. Add remote repository attributes in pom.xml, specifying the remote repository where the above artifact can be found.&lt;/p&gt;
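
&lt;p&gt;For example (a sketch; the id must match the server id in your settings.xml, and OWNER/REPOSITORY are placeholders):&lt;/p&gt;

```xml
<repositories>
  <repository>
    <id>github</id>
    <url>https://maven.pkg.github.com/OWNER/REPOSITORY</url>
  </repository>
</repositories>
```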

&lt;p&gt;c. In case your artifact is public, mvn compile or any of the other goals should be able to download the dependency, a JAR in our case.&lt;/p&gt;

&lt;p&gt;d. For a private build artifact, we need to create a GitHub access token with at least the read:packages permission, which we can use to download the artifact.&lt;/p&gt;

&lt;p&gt;e. With the GitHub access token added to your settings.xml, with the id param matching that of the remote repository tag, and provided you have (at least read) access to the repo, mvn compile or a similar goal should download the artifact.&lt;/p&gt;
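
&lt;p&gt;A sketch of the matching settings.xml server entry (USERNAME and TOKEN are placeholders; the id matches the repository id in pom.xml):&lt;/p&gt;

```xml
<settings>
  <servers>
    <server>
      <id>github</id>
      <username>USERNAME</username>
      <password>TOKEN</password>
    </server>
  </servers>
</settings>
```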

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;S3-backed build artifact registry&lt;/p&gt;

&lt;p&gt;We can also use the AWS S3 storage service to store our artifacts securely. This lets you maintain finer control over your artifacts by specifying an access policy, either limiting by VPC or by any other IAM policy.&lt;/p&gt;

&lt;p&gt;Also, you can make it region-specific, or have an S3 policy restricted to the same VPC as your build tool (like GoCD or Jenkins) so only your pipeline can access it. For local builds, a VPN into the VPC could help. This gives secure access, finer control over package visibility, and no modification of settings.xml.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;For deploying artifacts to an S3 bucket, we need a special dependency,
namely &lt;code&gt;maven-s3-wagon&lt;/code&gt;; this allows you to deploy a Maven artifact to an S3 bucket.&lt;/li&gt;
&lt;li&gt;Add distributionManagement attributes to your Project Object Model file, pom.xml, specifying which S3 bucket you want your artifact to be put into. (We can have only a single distributionManagement repo, at most two for snapshot and release separately.)&lt;/li&gt;
&lt;li&gt;Run mvn deploy to deploy your artifact as a dependency.&lt;/li&gt;
&lt;li&gt;You should see your artifact deployed at the corresponding bucket path.&lt;/li&gt;
&lt;li&gt;Restrict the use of the artifact by making your bucket private, or keep it public.
    For private buckets, you can restrict access to specific users or user groups.&lt;/li&gt;
&lt;/ol&gt;
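
&lt;p&gt;A sketch of the pom.xml additions for steps 1 and 2, assuming the com.github.seahen fork of &lt;code&gt;maven-s3-wagon&lt;/code&gt; (the coordinates, version, and bucket name are illustrative):&lt;/p&gt;

```xml
<build>
  <extensions>
    <extension>
      <groupId>com.github.seahen</groupId>
      <artifactId>maven-s3-wagon</artifactId>
      <version>1.3.3</version>
    </extension>
  </extensions>
</build>

<distributionManagement>
  <repository>
    <id>s3-release</id>
    <url>s3://my-artifact-bucket/release</url>
  </repository>
</distributionManagement>
```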

&lt;p&gt;For using the above artifact:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Specify your S3 artifact registry in the remote repositories tag (we can have multiple remote repositories).&lt;/li&gt;
&lt;li&gt;If your AWS credentials are configured, you should be able to download the dependencies with mvn compile.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both approaches are good enough, with a few tradeoffs.&lt;br&gt;
GitHub Registry is free, and control can be given to specific users/groups with read and write access.&lt;/p&gt;

&lt;p&gt;The same goes for the S3 bucket as registry, where you can control access via the VPC, limiting it to certain instances and, via IAM, to certain resources and principals.&lt;br&gt;
S3 cost depends on your AWS usage, but it is very inexpensive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comparing approaches&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;GitHub Registry puts your artifact closer to your code and has better visibility from a code perspective: all code and dependencies in one place.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;S3, meanwhile, gives you better security if you want existing IAM roles and policies applied to your artifacts as well.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Registry will throw an error if you do not have write access while pushing, or if the version already exists.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An S3 bucket will allow an authorized push but overwrites the previous artifact with the same version.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can find the previous artifact in the S3 bucket's versions section, but it won't be downloadable by users directly.&lt;br&gt;
This could be a major flaw if somebody tried to update the same version: it could affect all users at once.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both are highly available, well-performing services that manage different versions and propagate updates properly.&lt;/p&gt;

</description>
      <category>maven</category>
      <category>pom</category>
      <category>artifactmanagement</category>
      <category>s3</category>
    </item>
    <item>
      <title>AWS SQS with spring boot &amp; Localstack with Junit Testing</title>
      <dc:creator>Sanket Barapatre</dc:creator>
      <pubDate>Mon, 17 May 2021 09:31:02 +0000</pubDate>
      <link>https://dev.to/sanket2021/aws-sqs-with-spring-boot-localstack-with-junit-testing-8p</link>
      <guid>https://dev.to/sanket2021/aws-sqs-with-spring-boot-localstack-with-junit-testing-8p</guid>
      <description>&lt;h2&gt;
  
  
  Preface
&lt;/h2&gt;

&lt;p&gt;Building a microservices architecture often involves creating microservices that communicate over a message bus or some other loosely coupled means, such as AWS Simple Queue Service, dearly called AWS SQS.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are we building
&lt;/h2&gt;

&lt;p&gt;Complete Code: &lt;a href="https://github.com/sanket0612/spring-boot-localstack"&gt;spring-boot-localstack&lt;/a&gt;&lt;br&gt;
Here is a step-by-step guide to setting up a simple Spring Boot web application talking to AWS SQS, using localstack to mock the AWS environment.&lt;/p&gt;

&lt;p&gt;This includes the bare minimum configuration required to create a web app communicating via SQS only.&lt;/p&gt;
&lt;h2&gt;
  
  
  Basic Definitions:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://github.com/localstack/localstack"&gt;localstack&lt;/a&gt;: Simply a tool to mock AWS Cloud Provider in your local environment, to help develop cloud applications.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://junit.org/junit5/"&gt;Junit5&lt;/a&gt;: A testing framework for Java application based on Java8.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/awaitility/awaitility"&gt;awaitability&lt;/a&gt;: A tool to express expectations for asynchronous system in an easy and concise manner.&lt;/li&gt;
&lt;li&gt;Docker: Run any process or application in a containerized manner.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Pre-requisites:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Basic knowledge of Java and Spring Boot.&lt;/li&gt;
&lt;li&gt;An environment set up for running Docker, e.g. Docker for Mac, or just a happy Linux system.&lt;/li&gt;
&lt;li&gt;Optionally, the &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html"&gt;AWS CLI&lt;/a&gt;, a command line utility to interact with AWS services, set up to play around with the application.&lt;/li&gt;
&lt;li&gt;A familiar IDE.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Setup Basic Project
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Get on to the second-best website on the internet: &lt;a href="https://start.spring.io/"&gt;Spring Initializr&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create a Spring project, preferably with Spring Boot version 2.3, and the following dependencies

&lt;ul&gt;
&lt;li&gt;Spring web&lt;/li&gt;
&lt;li&gt;Lombok (a Java utility to avoid writing boilerplate code)&lt;/li&gt;
&lt;li&gt;AWS Simple Queue Service&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Also add the following dependency externally in pom.xml
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
    &amp;lt;groupId&amp;gt;org.awaitility&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;awaitility&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;3.1.3&amp;lt;/version&amp;gt;
    &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt;
&amp;lt;/dependency&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Create simple event and data models to send and receive messages:
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
public class SampleEvent {

  private String eventId;
  private String version;
  private String type;
  private ZonedDateTime eventTime;
  private EventData data;
}

@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
public class EventData {

  private String name;
  private int age;
  private String description;
  private EventType eventType;

  public enum EventType {
    CREATED, PROCESSED
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Create a simple controller annotated with &lt;em&gt;@SqsListener&lt;/em&gt; to listen to a queue.
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@SqsListener(value = "${cloud.aws.sqs.incoming-queue.url}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
  private void consumeFromSQS(SampleEvent sampleEvent) {
    log.info("Receive message {}", sampleEvent);
    //do some processing
    sampleEvent.setEventTime(ZonedDateTime.now());
    sampleEvent.getData().setEventType(EventData.EventType.PROCESSED);
    amazonSQSAsync.sendMessage(outgoingQueueUrl, mapper.writeValueAsString(sampleEvent));
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Spring property configurations:
&lt;/h3&gt;

&lt;p&gt;Set up application.yml with AWS SQS properties, such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;localstack:
  host: localhost

cloud:
  aws:
    credentials:
      access-key: some-access-key
      secret-key: some-secret-key
    sqs:
      incoming-queue:
        url: http://localhost:4576/queue/incoming-queue
        name: incoming-queue
      outgoing-queue:
        name: outgoing-queue
        url: http://localhost:4576/queue/outgoing-queue
    stack:
      auto: false
    region:
      static: eu-central-1

logging:
  level:
    com:
      amazonaws:
        util:
          EC2MetadataUtils: error
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Notes&lt;/strong&gt;:&lt;br&gt;
a. AWS credentials can also be set up as environment variables or in a .aws/credentials file (further read 😉).&lt;br&gt;
b. Define the name and URL of the queues so the application can listen to and write to them. Localstack runs the SQS service on port 4576.&lt;br&gt;
c. The logging property above is to avoid multiple error lines where the application tries to connect to localstack's EC2 metadata endpoint. (workaround 😏)&lt;/p&gt;
&lt;h3&gt;
  
  
  AWS Local SQS configs:
&lt;/h3&gt;

&lt;p&gt;Inside the Java configuration for SQS, we create some beans to allow our application to talk to the SQS service provided by localstack. You could add a profile for each config when deploying the app in production, i.e. the actual AWS environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Bean
//endpoint config for connecting to localstack and not actual aws environment.
  public AwsClientBuilder.EndpointConfiguration endpointConfiguration(){
    return new AwsClientBuilder.EndpointConfiguration("http://localhost:4576", region);
  }

  @Bean
  @Primary
//This bean will be used for communicating to AWS SQS
  public AmazonSQSAsync amazonSQSAsync(final AwsClientBuilder.EndpointConfiguration endpointConfiguration){
    AmazonSQSAsync amazonSQSAsync = AmazonSQSAsyncClientBuilder
        .standard()
        .withEndpointConfiguration(endpointConfiguration)
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials(awsAccesskey, awsSecretKey)
        ))
        .build();
    createQueues(amazonSQSAsync, "incoming-queue");
    createQueues(amazonSQSAsync, "outgoing-queue");
    return amazonSQSAsync;
  }
//create initial queue so our application can talk to it
  private void createQueues(final AmazonSQSAsync amazonSQSAsync,
                            final String queueName){
    amazonSQSAsync.createQueue(queueName);
    var queueUrl = amazonSQSAsync.getQueueUrl(queueName).getQueueUrl();
    amazonSQSAsync.purgeQueueAsync(new PurgeQueueRequest(queueUrl));
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use QueueMessagingTemplate for sending and receiving messages from AWS SQS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; @Bean
  public QueueMessagingTemplate queueMessagingTemplate(AmazonSQSAsync amazonSQSAsync){
    return new QueueMessagingTemplate(amazonSQSAsync);
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Also set up a QueueMessageHandlerFactory so it can convert incoming SQS messages from String to the actual object you want, in this case SampleEvent, using an ObjectMapper.&lt;/em&gt; &lt;br&gt;
You can configure the ObjectMapper separately: add your custom deserializer by registering your module, or add custom datetime conversion.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Bean
  public QueueMessageHandlerFactory queueMessageHandlerFactory(MessageConverter messageConverter) {

    var factory = new QueueMessageHandlerFactory();
    factory.setArgumentResolvers(singletonList(new PayloadArgumentResolver(messageConverter)));
    return factory;
  }

  @Bean
  protected MessageConverter messageConverter(ObjectMapper objectMapper) {

    var converter = new MappingJackson2MessageConverter();
    converter.setObjectMapper(objectMapper);
    // Serialization support:
    converter.setSerializedPayloadClass(String.class);
    // Deserialization support: (suppress "contentType=application/json" header requirement)
    converter.setStrictContentTypeMatch(false);
    return converter;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, add this docker-compose YAML asking Docker to create a localstack container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.0'

services:
  localstack:
    image: localstack/localstack:0.10.7
    environment:
      - DEFAULT_REGION=eu-central-1
      - SERVICES=sqs
    ports:
      - "4576:4576"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Starting the application
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Start localstack using: &lt;strong&gt;docker-compose up&lt;/strong&gt; (in the same directory as the docker-compose file)&lt;/li&gt;
&lt;li&gt;Run the application using: &lt;strong&gt;mvn spring-boot:run&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Send a message to SQS using the AWS CLI, e.g.:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --endpoint="http://localhost:4576" --region=eu-central-1 sqs send-message --queue-url http://localhost:4576/queue/incoming-queue --message-body '{
  "eventId": "some-event-id-1",
  "eventTime": "2016-09-03T16:35:13.273Z",
  "type": "some-type",
  "version": "1.0",
  "data": {
    "name": "Dev. to",
    "age": 20,
    "description": "User created",
    "eventType": "CREATED"
  }
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;You should see log messages of the event being received and forwarded.&lt;/li&gt;
&lt;li&gt;A JUnit test is also included, which uses the same configuration as the local run.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Thank you!!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>testing</category>
      <category>java</category>
    </item>
    <item>
      <title>Getting Asynchronous with SQS and SNS</title>
      <dc:creator>Sanket Barapatre</dc:creator>
      <pubDate>Mon, 17 May 2021 03:52:15 +0000</pubDate>
      <link>https://dev.to/sanket2021/getting-asynchronous-with-sqs-and-sns-147k</link>
      <guid>https://dev.to/sanket2021/getting-asynchronous-with-sqs-and-sns-147k</guid>
      <description>&lt;h2&gt;
  
  
  Preface
&lt;/h2&gt;

&lt;p&gt;Developing a microservices architecture is a really good way to ensure that we get independently deployable services which are easier to manage, scale, and develop, and which are fault tolerant.&lt;/p&gt;

&lt;p&gt;A business functionality is generally achieved through meaningful and validated communication between multiple microservices, which in turn interact with their respective database systems to persist meaningful state.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Why
&lt;/h2&gt;

&lt;p&gt;Thinking of its disadvantages, the bottlenecks and points of failure, we can pinpoint several areas, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A microservice failing to process a request due to business logic not being updated, request validation failure, unhandled conditions, faulty business logic, an internal feature being disabled, or failure of its internal connection to the database.&lt;/li&gt;
&lt;li&gt;Availability of a microservice during restarts or re-deployments, e.g. pod restarts, or new features or enhancements being deployed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Having synchronous communication between such microservices can be dangerous if:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Any deployed microservice restarts or is re-deployed.&lt;/li&gt;
&lt;li&gt;There is high latency, a network failure in reaching the microservice, or general non-availability of the microservice.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A chain of REST API calls from one microservice to another could then lead to ultimate failure of the corresponding business functionality.&lt;/p&gt;

&lt;h2&gt;
  
  
  The What
&lt;/h2&gt;

&lt;p&gt;To give microservices some slack, so that they can restart or take time to respond, one way is to make the architecture loosely coupled and have services talk to each other using events rather than typical request-response HTTP calls.&lt;/p&gt;

&lt;p&gt;HTTP calls can occasionally time out due to network failure, high latency, a microservice being reloaded, rate limits being exceeded, or some other general downtime of a microservice.&lt;/p&gt;

&lt;h2&gt;
  
  
  The How
&lt;/h2&gt;

&lt;p&gt;A simple mechanism to achieve this is to use AWS Simple Queue Service (SQS) and Simple Notification Service (SNS) as the means of communication.&lt;br&gt;
In a nutshell:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A microservice sends a meaningful, short message to an SNS topic.&lt;/li&gt;
&lt;li&gt;An SNS topic can be subscribed to by various SQS queues, each listened to by a microservice; hence the fan-out of a message.&lt;/li&gt;
&lt;li&gt;A microservice can listen for incoming messages using an SQS listener module.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Such a system also makes fault tolerance easy, since we can monitor the complete flow of a message from generation at the source to final consumption by a microservice.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Benefits
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Traceability &amp;amp; Fault Tolerance
&lt;/h3&gt;

&lt;p&gt;Traceability is ensured by using the following properties in the envelope of a message:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;eventId: each message (or event) generated by a microservice has a unique id, preferably a UUID, which ensures the uniqueness of an event when we store it.&lt;/li&gt;
&lt;li&gt;traceId: each business flow involves multiple messages generated in a single flow, e.g. place order, cart checkout, payment processing, order placed. In such a flow, a message generated in response to an incoming message carries the same traceId, so that the complete flow can be traced. A traceId is generated at the source and passed on to the new messages generated as a result.&lt;/li&gt;
&lt;li&gt;spanId: a spanId is similar to a traceId, except it need not cover all messages. It is an additional safety net that spans two events which are linked together; e.g. if a microservice consumes message A and sends out message B, they have the same spanId, so we know these messages are linked.&lt;/li&gt;
&lt;li&gt;version: every message that we consume can have a version. In case of a breaking change in a message, we can upgrade the version, allowing it to be processed differently, and also use the version as a means of communicating breaking, major, or even minor changes in an event.&lt;/li&gt;
&lt;li&gt;context: the business context which gives a rough idea of, and a meaning to, the event. For example, ORDER_PROCESSING could be the business context for the messages in the flow where the customer selects a means of payment, the actual payment is processed, and the payment status is updated.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is what an event can look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"eventId":"518c9aac-b6b9-11eb-b34e-8f1bd39e6f13",
"traceId":"57df4c56-b6b9-11eb-a0a2-f72362d4cbcb",
"spanId":"5e9245a8-b6b9-11eb-9ced-935329a9daeb",
"version":"1.1.0",
"context":"ORDER_PROCESSING",
"data":{
        "name":"customer name",
        "items":"I bought this"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testable
&lt;/h3&gt;

&lt;p&gt;Having SQS and SNS as the means of event-driven communication ensures testability, since each microservice is a black box consuming one message and producing another. Hence, multiple scenarios can be tested using different incoming messages and verifying the various outgoing messages.&lt;br&gt;
 Also, AWS provides a means of sending a test message to SNS or SQS, so that we can test a deployed application as well as replay some messages which may have failed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flexible with microservice availability:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Introducing DLQs&lt;/strong&gt;: Dead Letter Queues&lt;/p&gt;

&lt;p&gt;A microservice can have some downtime, and all the messages that it was supposed to consume are stored in an AWS SQS queue. Once the service is up, it can start consuming messages from where it left off and keep on working.&lt;br&gt;
If a microservice takes time to come up, messages from SQS, after being retried a configurable number of times, are sent to a Dead Letter Queue (or DLQ).&lt;br&gt;
This DLQ stores all failed messages, which can be pushed back to the main queue to be replayed.&lt;br&gt;
Also, if a microservice is not able to process a message because a feature is currently unavailable or the message is broken or invalid, such a message is retried and then moved to the DLQ. There, we can later analyse the message and update how we handle it in our microservice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Having asynchronous communication using AWS Simple Queue Service and Simple Notification Service makes microservices de-coupled and is an easy way to get an &lt;strong&gt;Event Driven Architecture&lt;/strong&gt;, with events as the means of communication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;p&gt;What I have described above is an Event Driven Architecture, without actually using the term. Since this is a big topic in itself, I have refrained from going into it. This post just gives a different perspective on how microservices talk to each other.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eventdriven</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
