<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ben Sooraj</title>
    <description>The latest articles on DEV Community by Ben Sooraj (@bensooraj).</description>
    <link>https://dev.to/bensooraj</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F233790%2F09c6a075-022d-4a27-8b91-2868810665b7.jpeg</url>
      <title>DEV Community: Ben Sooraj</title>
      <link>https://dev.to/bensooraj</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bensooraj"/>
    <language>en</language>
    <item>
      <title>Ossa: A node.js server-side module (powered by Redis) for sending scheduled messages</title>
      <dc:creator>Ben Sooraj</dc:creator>
      <pubDate>Tue, 12 May 2020 17:37:39 +0000</pubDate>
      <link>https://dev.to/bensooraj/ossa-a-node-js-server-side-module-powered-by-redis-for-sending-scheduled-messages-1dhp</link>
      <guid>https://dev.to/bensooraj/ossa-a-node-js-server-side-module-powered-by-redis-for-sending-scheduled-messages-1dhp</guid>
      <description>&lt;p&gt;&lt;em&gt;Disclaimer:&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;This is a side project! It is by no means complete, but it does get the job done.&lt;/li&gt;
&lt;li&gt;This module is not production-ready!&lt;/li&gt;
&lt;li&gt;It is best suited to small projects/apps.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At work, I had to solve a problem where we send reminders nudging users to prioritise a task pending with them (I can't go into more detail than that). These reminders had to be configurable, that is, users get to choose when they want to be reminded: in 2 days, in 1 hour, on a specific date and time, and so on.&lt;/p&gt;

&lt;p&gt;After quite a bit of digging around the internet, I stumbled upon &lt;a href="https://redis.io/topics/notifications"&gt;&lt;strong&gt;Redis Keyspace Notifications&lt;/strong&gt;&lt;/a&gt;. There are many other ways to implement this (&lt;em&gt;cron jobs&lt;/em&gt;, for example), but I decided to go with this approach. Since Redis was already part of our stack, it was a viable option worth trying out.&lt;/p&gt;
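&lt;p&gt;For the curious, here is the underlying pattern in a nutshell (a minimal sketch, assuming a local Redis; the &lt;code&gt;notify-keyspace-events&lt;/code&gt; flags and the channel name are real Redis conventions):&lt;/p&gt;

```shell
# 1. Tell Redis to publish keyevent notifications for expirations
#    ("E" = keyevent events, "x" = expired events):
#      redis-cli config set notify-keyspace-events Ex
# 2. Expirations in a given database are published on a well-known channel:
db=0
channel="__keyevent@${db}__:expired"
echo "${channel}"
# 3. The scheduler SETs a key with a TTL equal to the delay; a subscriber
#    on the channel above receives the key name when it expires, then
#    loads the payload from a separate, non-expiring key.
```

&lt;p&gt;That last step matters because an expired key's value is gone by the time the notification arrives, so the payload has to live under its own key.&lt;/p&gt;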

&lt;p&gt;I went ahead and wrapped up this feature into a module called &lt;a href="https://www.npmjs.com/package/ossa"&gt;&lt;strong&gt;Ossa&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Pretty easy to get started.&lt;/p&gt;

&lt;p&gt;Create an &lt;code&gt;ossa&lt;/code&gt; instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Ossa&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ossa&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ossa&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Ossa&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ossa&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Default&lt;/span&gt;
    &lt;span class="na"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;localhost&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Default&lt;/span&gt;
        &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;6379&lt;/span&gt; &lt;span class="c1"&gt;// Default&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Default&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;  &lt;span class="c1"&gt;// 0 =&amp;gt; Send and receive (Default) | 1 =&amp;gt; Send only&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To schedule a notification/message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;notificationID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ossa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sendNotification&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;in&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;10s&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c1"&gt;// on: moment().utc().add(30, 'seconds'),&lt;/span&gt;
        &lt;span class="c1"&gt;// on: '2020-05-02 03:23:00',&lt;/span&gt;
        &lt;span class="c1"&gt;// on: '2020-05-01T21:59:16Z',&lt;/span&gt;
        &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Ben&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;age&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;notificationID: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;notificationID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Output:&lt;/span&gt;
&lt;span class="c1"&gt;// notificationID:  ossa::f1799e87-6740-4394-bf5e-d6e55eae3914&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To receive the scheduled message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;ossa&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;notification-received&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;notificationID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;notificationPayload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Process the payload received&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;notificationPayload: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;notificationPayload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;table&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;notificationID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;notificationPayload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Output:&lt;/span&gt;
&lt;span class="c1"&gt;// notificationPayload:  { in: '10s', message: '{"name":"Ben","age":1000}' }&lt;/span&gt;
&lt;span class="c1"&gt;// ┌─────────┬──────────────────────────────────────────────┬─────────────────────────────┐&lt;/span&gt;
&lt;span class="c1"&gt;// │ (index) │                notificationID                │           message           │&lt;/span&gt;
&lt;span class="c1"&gt;// ├─────────┼──────────────────────────────────────────────┼─────────────────────────────┤&lt;/span&gt;
&lt;span class="c1"&gt;// │    0    │ 'ossa::f1799e87-6740-4394-bf5e-d6e55eae3914' │ '{"name":"Ben","age":1000}' │&lt;/span&gt;
&lt;span class="c1"&gt;// └─────────┴──────────────────────────────────────────────┴─────────────────────────────┘&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Do check out the &lt;a href="https://github.com/bensooraj/ossa#readme"&gt;README&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;I have also created a &lt;a href="https://github.com/bensooraj/ossa/tree/master/examples"&gt;ready-to-run example&lt;/a&gt; using Docker, so you can get started even faster.&lt;/p&gt;

&lt;p&gt;So, go ahead and check it out, and let me know your thoughts. I'd love some community feedback.&lt;/p&gt;

&lt;p&gt;Thanks!&lt;/p&gt;

</description>
      <category>node</category>
      <category>webdev</category>
      <category>redis</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Accessing Amazon RDS From AWS EKS</title>
      <dc:creator>Ben Sooraj</dc:creator>
      <pubDate>Wed, 09 Oct 2019 07:08:11 +0000</pubDate>
      <link>https://dev.to/bensooraj/accessing-amazon-rds-from-aws-eks-2pc3</link>
      <guid>https://dev.to/bensooraj/accessing-amazon-rds-from-aws-eks-2pc3</guid>
      <description>&lt;h3&gt;
  
  
  Contents
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; Premise
&lt;/li&gt;
&lt;li&gt; Setup the MySQL Database - Amazon RDS

&lt;ol&gt;
&lt;li&gt; Create the VPC
&lt;/li&gt;
&lt;li&gt; Create the subnets
&lt;/li&gt;
&lt;li&gt; Create the DB subnet group
&lt;/li&gt;
&lt;li&gt; Create the VPC security group
&lt;/li&gt;
&lt;li&gt; Create a DB instance in the VPC
&lt;/li&gt;
&lt;li&gt; Amazon RDS setup diagram
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Setup the EKS cluster
&lt;/li&gt;

&lt;li&gt; Let's build the bridge!

&lt;ol&gt;
&lt;li&gt; Create and Accept a VPC Peering Connection
&lt;/li&gt;
&lt;li&gt; Update the EKS cluster VPC's route table
&lt;/li&gt;
&lt;li&gt; Update the RDS VPC's route table
&lt;/li&gt;
&lt;li&gt; Update the RDS instance's security group
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; Test the connection
&lt;/li&gt;

&lt;/ol&gt;

&lt;h3&gt;
  
  
  1. Premise &lt;a id="premise"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;When moving your services to the Kubernetes ecosystem for the first time, it is best practice to port only the stateless parts to begin with.&lt;/p&gt;

&lt;p&gt;Here's the problem I had to solve: our service uses &lt;strong&gt;Amazon RDS for MySQL&lt;/strong&gt;. Both the RDS instance(s) and the EKS cluster reside within their own dedicated VPCs. How do resources running within AWS EKS communicate with the database?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F9f54sopkrm2za71fhvl5.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F9f54sopkrm2za71fhvl5.jpeg" alt="Problem visualised"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's dive right in!&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Setup the MySQL Database (Amazon RDS) &lt;a id="setup-the-mysql-database"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We will use the AWS CLI to set up the MySQL database.&lt;/p&gt;

&lt;h4&gt;
  
  
  2.1 Create the VPC &lt;a id="rds-create-the-vpc"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;We will first create a VPC with the CIDR block &lt;code&gt;10.0.0.0/24&lt;/code&gt;, which accommodates 254 hosts in all. This is &lt;strong&gt;more than enough&lt;/strong&gt; to host our RDS instance.&lt;/p&gt;
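&lt;p&gt;A quick sanity check of that figure, using nothing but shell arithmetic (the 254 is classic IPv4 maths; note that AWS additionally reserves the first four and the last address in each subnet, so a subnet's usable count is a little lower):&lt;/p&gt;

```shell
# A /24 prefix leaves 32 - 24 = 8 host bits, i.e. 2^8 = 256 addresses;
# subtracting the network and broadcast addresses leaves 254 hosts.
prefix=24
bits=$(( 32 - prefix ))
addresses=1
i=0
while [ $i -lt $bits ]; do
    addresses=$(( addresses * 2 ))
    i=$(( i + 1 ))
done
hosts=$(( addresses - 2 ))
echo "${hosts}"
```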

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;aws ec2 create-vpc &lt;span class="nt"&gt;--cidr-block&lt;/span&gt; 10.0.0.0/24 | jq &lt;span class="s1"&gt;'{VpcId:.Vpc.VpcId,CidrBlock:.Vpc.CidrBlock}'&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"VpcId"&lt;/span&gt;: &lt;span class="s2"&gt;"vpc-0cf40a5f6db5eb3cd"&lt;/span&gt;,
    &lt;span class="s2"&gt;"CidrBlock"&lt;/span&gt;: &lt;span class="s2"&gt;"10.0.0.0/24"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Export the RDS VPC ID for easy reference in the subsequent commands&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;RDS_VPC_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vpc-0cf40a5f6db5eb3cd


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  2.2 Create the subnets &lt;a id="rds-create-subnets"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;RDS instances launched in a VPC must have a DB subnet group. DB subnet groups are a collection of subnets within a VPC. Each DB subnet group should have subnets in at least two &lt;code&gt;Availability Zones&lt;/code&gt; in a given &lt;code&gt;AWS Region&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We will divide the RDS VPC (&lt;code&gt;RDS_VPC_ID&lt;/code&gt;) into two equal subnets: &lt;code&gt;10.0.0.0/25&lt;/code&gt; and &lt;code&gt;10.0.0.128/25&lt;/code&gt;.&lt;/p&gt;
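&lt;p&gt;The split is easy to verify: half of the /24's 256 addresses go to each /25, which also tells us where the second block starts:&lt;/p&gt;

```shell
# Each /25 half gets 256 / 2 = 128 addresses (126 usable after the
# network and broadcast addresses), so the second half begins at 10.0.0.128.
half=$(( 256 / 2 ))
echo "10.0.0.0/25 and 10.0.0.${half}/25, ${half} addresses each"
```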

&lt;p&gt;So, let's create the first subnet in the availability zone &lt;code&gt;ap-south-1b&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;aws ec2 create-subnet &lt;span class="nt"&gt;--availability-zone&lt;/span&gt; &lt;span class="s2"&gt;"ap-south-1b"&lt;/span&gt; &lt;span class="nt"&gt;--vpc-id&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RDS_VPC_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--cidr-block&lt;/span&gt; 10.0.0.0/25 | jq &lt;span class="s1"&gt;'{SubnetId:.Subnet.SubnetId,AvailabilityZone:.Subnet.AvailabilityZone,CidrBlock:.Subnet.CidrBlock,VpcId:.Subnet.VpcId}'&lt;/span&gt;
&lt;span class="c"&gt;# Response:&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"SubnetId"&lt;/span&gt;: &lt;span class="s2"&gt;"subnet-042a4bee8e92287e8"&lt;/span&gt;,
  &lt;span class="s2"&gt;"AvailabilityZone"&lt;/span&gt;: &lt;span class="s2"&gt;"ap-south-1b"&lt;/span&gt;,
  &lt;span class="s2"&gt;"CidrBlock"&lt;/span&gt;: &lt;span class="s2"&gt;"10.0.0.0/25"&lt;/span&gt;,
  &lt;span class="s2"&gt;"VpcId"&lt;/span&gt;: &lt;span class="s2"&gt;"vpc-0cf40a5f6db5eb3cd"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;and the second one in the availability zone &lt;code&gt;ap-south-1a&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;aws ec2 create-subnet &lt;span class="nt"&gt;--availability-zone&lt;/span&gt; &lt;span class="s2"&gt;"ap-south-1a"&lt;/span&gt; &lt;span class="nt"&gt;--vpc-id&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RDS_VPC_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--cidr-block&lt;/span&gt; 10.0.0.128/25 | jq &lt;span class="s1"&gt;'{SubnetId:.Subnet.SubnetId,AvailabilityZone:.Subnet.AvailabilityZone,CidrBlock:.Subnet.CidrBlock,VpcId:.Subnet.VpcId}'&lt;/span&gt;
&lt;span class="c"&gt;# Response:&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"SubnetId"&lt;/span&gt;: &lt;span class="s2"&gt;"subnet-0c01a5ba480b930f4"&lt;/span&gt;,
  &lt;span class="s2"&gt;"AvailabilityZone"&lt;/span&gt;: &lt;span class="s2"&gt;"ap-south-1a"&lt;/span&gt;,
  &lt;span class="s2"&gt;"CidrBlock"&lt;/span&gt;: &lt;span class="s2"&gt;"10.0.0.128/25"&lt;/span&gt;,
  &lt;span class="s2"&gt;"VpcId"&lt;/span&gt;: &lt;span class="s2"&gt;"vpc-0cf40a5f6db5eb3cd"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each VPC has an implicit router which controls where network traffic is directed, and each subnet in a VPC is associated with a route table, which controls the routing for the subnet. A subnet that is not explicitly associated with a route table uses the VPC's main route table.&lt;/p&gt;

&lt;p&gt;Let's go ahead and associate the two subnets we created with the VPC's route table:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Fetch the route table information&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;aws ec2 describe-route-tables &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vpc-id,Values&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RDS_VPC_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; | jq &lt;span class="s1"&gt;'.RouteTables[0].RouteTableId'&lt;/span&gt;
&lt;span class="s2"&gt;"rtb-0e680357de97595b1"&lt;/span&gt;

&lt;span class="c"&gt;# For easy reference&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;RDS_ROUTE_TABLE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rtb-0e680357de97595b1

&lt;span class="c"&gt;# Associate the first subnet with the route table&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;aws ec2 associate-route-table &lt;span class="nt"&gt;--route-table-id&lt;/span&gt; rtb-0e680357de97595b1 &lt;span class="nt"&gt;--subnet-id&lt;/span&gt; subnet-042a4bee8e92287e8
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"AssociationId"&lt;/span&gt;: &lt;span class="s2"&gt;"rtbassoc-02198db22b2d36c97"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Associate the second subnet with the route table&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;aws ec2 associate-route-table &lt;span class="nt"&gt;--route-table-id&lt;/span&gt; rtb-0e680357de97595b1 &lt;span class="nt"&gt;--subnet-id&lt;/span&gt; subnet-0c01a5ba480b930f4
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"AssociationId"&lt;/span&gt;: &lt;span class="s2"&gt;"rtbassoc-0e5c3959d360c92ab"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  2.3 Create DB Subnet Group &lt;a id="rds-create-db-subnet-group"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Now that we have two subnets spanning two availability zones, we can go ahead and create the &lt;strong&gt;DB subnet group&lt;/strong&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;aws rds create-db-subnet-group &lt;span class="nt"&gt;--db-subnet-group-name&lt;/span&gt;  &lt;span class="s2"&gt;"DemoDBSubnetGroup"&lt;/span&gt; &lt;span class="nt"&gt;--db-subnet-group-description&lt;/span&gt; &lt;span class="s2"&gt;"Demo DB Subnet Group"&lt;/span&gt; &lt;span class="nt"&gt;--subnet-ids&lt;/span&gt; &lt;span class="s2"&gt;"subnet-042a4bee8e92287e8"&lt;/span&gt; &lt;span class="s2"&gt;"subnet-0c01a5ba480b930f4"&lt;/span&gt; | jq &lt;span class="s1"&gt;'{DBSubnetGroupName:.DBSubnetGroup.DBSubnetGroupName,VpcId:.DBSubnetGroup.VpcId,Subnets:.DBSubnetGroup.Subnets[].SubnetIdentifier}'&lt;/span&gt;
&lt;span class="c"&gt;# Response:&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"DBSubnetGroupName"&lt;/span&gt;: &lt;span class="s2"&gt;"demodbsubnetgroup"&lt;/span&gt;,
  &lt;span class="s2"&gt;"VpcId"&lt;/span&gt;: &lt;span class="s2"&gt;"vpc-0cf40a5f6db5eb3cd"&lt;/span&gt;,
  &lt;span class="s2"&gt;"Subnets"&lt;/span&gt;: &lt;span class="s2"&gt;"subnet-0c01a5ba480b930f4"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"DBSubnetGroupName"&lt;/span&gt;: &lt;span class="s2"&gt;"demodbsubnetgroup"&lt;/span&gt;,
  &lt;span class="s2"&gt;"VpcId"&lt;/span&gt;: &lt;span class="s2"&gt;"vpc-0cf40a5f6db5eb3cd"&lt;/span&gt;,
  &lt;span class="s2"&gt;"Subnets"&lt;/span&gt;: &lt;span class="s2"&gt;"subnet-042a4bee8e92287e8"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  2.4 Create a VPC Security Group &lt;a id="rds-create-vpc-security-group"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;The &lt;em&gt;penultimate&lt;/em&gt; step before creating the DB instance is creating a VPC security group: an instance-level virtual firewall with &lt;em&gt;rules&lt;/em&gt; to control inbound and outbound traffic.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ aws ec2 create-security-group --group-name DemoRDSSecurityGroup --description "Demo RDS security group" --vpc-id ${RDS_VPC_ID}
{
    "GroupId": "sg-06800acf8d6279971"
}

# Export the RDS VPC Security Group ID for easy reference in the subsequent commands
$ export RDS_VPC_SECURITY_GROUP_ID=sg-06800acf8d6279971


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We will use this security group later to set an &lt;code&gt;inbound&lt;/code&gt; rule allowing traffic from the EKS cluster to reach the RDS instance.&lt;/p&gt;
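&lt;p&gt;As a preview, the rule added in step 4.4 would look roughly like this (an illustrative sketch: &lt;code&gt;192.168.0.0/16&lt;/code&gt; is the default VPC CIDR eksctl assigns, as seen in the eksctl output further down; this variant is scoped to MySQL's port 3306 rather than opening all traffic):&lt;/p&gt;

```shell
# The EKS cluster's VPC CIDR (eksctl's default; adjust for your cluster)
EKS_VPC_CIDR=192.168.0.0/16
echo "allow tcp/3306 from ${EKS_VPC_CIDR}"
# With AWS credentials configured, the actual call would be roughly:
#   aws ec2 authorize-security-group-ingress \
#     --group-id ${RDS_VPC_SECURITY_GROUP_ID} \
#     --protocol tcp --port 3306 \
#     --cidr ${EKS_VPC_CIDR}
```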

&lt;h4&gt;
  
  
  2.5 Create a DB Instance in the VPC &lt;a id="rds-create-db-instance-in-the-vpc"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;aws rds create-db-instance &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--db-name&lt;/span&gt; demordsmyqldb &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--db-instance-identifier&lt;/span&gt; demordsmyqldbinstance &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--allocated-storage&lt;/span&gt; 10 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--db-instance-class&lt;/span&gt; db.t2.micro &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--engine&lt;/span&gt; mysql &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--engine-version&lt;/span&gt; &lt;span class="s2"&gt;"5.7.26"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--master-username&lt;/span&gt; demoappuser &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--master-user-password&lt;/span&gt; demoappuserpassword &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--no-publicly-accessible&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--vpc-security-group-ids&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RDS_VPC_SECURITY_GROUP_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--db-subnet-group-name&lt;/span&gt; &lt;span class="s2"&gt;"demodbsubnetgroup"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--availability-zone&lt;/span&gt; ap-south-1b &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--port&lt;/span&gt; 3306 | jq &lt;span class="s1"&gt;'{DBInstanceIdentifier:.DBInstance.DBInstanceIdentifier,Engine:.DBInstance.Engine,DBName:.DBInstance.DBName,VpcSecurityGroups:.DBInstance.VpcSecurityGroups,EngineVersion:.DBInstance.EngineVersion,PubliclyAccessible:.DBInstance.PubliclyAccessible}'&lt;/span&gt;

&lt;span class="c"&gt;# Respone:&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"DBInstanceIdentifier"&lt;/span&gt;: &lt;span class="s2"&gt;"demordsmyqldbinstance"&lt;/span&gt;,
  &lt;span class="s2"&gt;"Engine"&lt;/span&gt;: &lt;span class="s2"&gt;"mysql"&lt;/span&gt;,
  &lt;span class="s2"&gt;"DBName"&lt;/span&gt;: &lt;span class="s2"&gt;"demordsmyqldb"&lt;/span&gt;,
  &lt;span class="s2"&gt;"VpcSecurityGroups"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;"VpcSecurityGroupId"&lt;/span&gt;: &lt;span class="s2"&gt;"sg-06800acf8d6279971"&lt;/span&gt;,
      &lt;span class="s2"&gt;"Status"&lt;/span&gt;: &lt;span class="s2"&gt;"active"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;]&lt;/span&gt;,
  &lt;span class="s2"&gt;"EngineVersion"&lt;/span&gt;: &lt;span class="s2"&gt;"5.7.26"&lt;/span&gt;,
  &lt;span class="s2"&gt;"PubliclyAccessible"&lt;/span&gt;: &lt;span class="nb"&gt;false&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can verify that the DB instance has been created in the UI as well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F5reunqhdinqcp3vlrbq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F5reunqhdinqcp3vlrbq4.png" alt="RDS MySQL DB Instance Details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2.6 Amazon RDS setup diagram &lt;a id="rds-setup-diagram"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Ffeg2ujod4ja6vtudique.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Ffeg2ujod4ja6vtudique.jpeg" alt="AWS RDS Setup Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Setup the EKS cluster &lt;a id="setup-the-eks-cluster"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Spinning up an EKS cluster on AWS is as simple as:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;eksctl create cluster &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;demo-eks-cluster &lt;span class="nt"&gt;--nodes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="nt"&gt;--region&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ap-south-1
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  using region ap-south-1
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  setting availability zones to &lt;span class="o"&gt;[&lt;/span&gt;ap-south-1a ap-south-1c ap-south-1b]
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  subnets &lt;span class="k"&gt;for &lt;/span&gt;ap-south-1a - public:192.168.0.0/19 private:192.168.96.0/19
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  subnets &lt;span class="k"&gt;for &lt;/span&gt;ap-south-1c - public:192.168.32.0/19 private:192.168.128.0/19
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  subnets &lt;span class="k"&gt;for &lt;/span&gt;ap-south-1b - public:192.168.64.0/19 private:192.168.160.0/19
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  nodegroup &lt;span class="s2"&gt;"ng-ae09882f"&lt;/span&gt; will use &lt;span class="s2"&gt;"ami-09c3eb35bb3be46a4"&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;AmazonLinux2/1.12]
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  creating EKS cluster &lt;span class="s2"&gt;"demo-eks-cluster"&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"ap-south-1"&lt;/span&gt; region
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  will create 2 separate CloudFormation stacks &lt;span class="k"&gt;for &lt;/span&gt;cluster itself and the initial nodegroup
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  &lt;span class="k"&gt;if &lt;/span&gt;you encounter any issues, check CloudFormation console or try &lt;span class="s1"&gt;'eksctl utils describe-stacks --region=ap-south-1 --name=demo-eks-cluster'&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  2 sequential tasks: &lt;span class="o"&gt;{&lt;/span&gt; create cluster control plane &lt;span class="s2"&gt;"demo-eks-cluster"&lt;/span&gt;, create nodegroup &lt;span class="s2"&gt;"ng-ae09882f"&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  building cluster stack &lt;span class="s2"&gt;"eksctl-demo-eks-cluster-cluster"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  deploying stack &lt;span class="s2"&gt;"eksctl-demo-eks-cluster-cluster"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  building nodegroup stack &lt;span class="s2"&gt;"eksctl-demo-eks-cluster-nodegroup-ng-ae09882f"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  &lt;span class="nt"&gt;--nodes-min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 was &lt;span class="nb"&gt;set &lt;/span&gt;automatically &lt;span class="k"&gt;for &lt;/span&gt;nodegroup ng-ae09882f
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  &lt;span class="nt"&gt;--nodes-max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 was &lt;span class="nb"&gt;set &lt;/span&gt;automatically &lt;span class="k"&gt;for &lt;/span&gt;nodegroup ng-ae09882f
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  deploying stack &lt;span class="s2"&gt;"eksctl-demo-eks-cluster-nodegroup-ng-ae09882f"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;✔]  all EKS cluster resource &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s2"&gt;"demo-eks-cluster"&lt;/span&gt; had been created
&lt;span class="o"&gt;[&lt;/span&gt;✔]  saved kubeconfig as &lt;span class="s2"&gt;"/Users/Bensooraj/.kube/config"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  adding role &lt;span class="s2"&gt;"arn:aws:iam::account_number:role/eksctl-demo-eks-cluster-nodegroup-NodeInstanceRole-1631FNZJZTDSK"&lt;/span&gt; to auth ConfigMap
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  nodegroup &lt;span class="s2"&gt;"ng-ae09882f"&lt;/span&gt; has 0 node&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  waiting &lt;span class="k"&gt;for &lt;/span&gt;at least 2 node&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt; to become ready &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"ng-ae09882f"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  nodegroup &lt;span class="s2"&gt;"ng-ae09882f"&lt;/span&gt; has 2 node&lt;span class="o"&gt;(&lt;/span&gt;s&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  node &lt;span class="s2"&gt;"ip-192-168-30-190.ap-south-1.compute.internal"&lt;/span&gt; is ready
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  node &lt;span class="s2"&gt;"ip-192-168-92-207.ap-south-1.compute.internal"&lt;/span&gt; is ready
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  kubectl &lt;span class="nb"&gt;command &lt;/span&gt;should work with &lt;span class="s2"&gt;"/Users/Bensooraj/.kube/config"&lt;/span&gt;, try &lt;span class="s1"&gt;'kubectl get nodes'&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;✔]  EKS cluster &lt;span class="s2"&gt;"demo-eks-cluster"&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"ap-south-1"&lt;/span&gt; region is ready



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We will create a kubernetes &lt;code&gt;Service&lt;/code&gt; named &lt;code&gt;mysql-service&lt;/code&gt; of type &lt;code&gt;ExternalName&lt;/code&gt; aliasing the RDS endpoint &lt;code&gt;demordsmyqldbinstance.cimllxgykuy3.ap-south-1.rds.amazonaws.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;kubectl apply -f mysql-service.yaml&lt;/code&gt; to create the service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# mysql-service.yaml &lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql-service&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;externalName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demordsmyqldbinstance.cimllxgykuy3.ap-south-1.rds.amazonaws.com&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql-service&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ExternalName&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;loadBalancer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, clients running inside the pods within the cluster can connect to the RDS instance using &lt;code&gt;mysql-service&lt;/code&gt;.&lt;/p&gt;
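An ExternalName Service involves no proxying at all: cluster DNS simply answers lookups for the Service name with a CNAME record pointing at the configured externalName. A rough Python sketch of that indirection (the SERVICES dict is an illustrative stand-in, not a real Kubernetes API):

```python
# Sketch of how cluster DNS treats an ExternalName Service: resolving
# the Service name yields a CNAME to externalName, nothing more.
# SERVICES is a stand-in for the cluster's service registry.
SERVICES = {
    "mysql-service": {
        "type": "ExternalName",
        "externalName": "demordsmyqldbinstance.cimllxgykuy3.ap-south-1.rds.amazonaws.com",
    }
}

def resolve(name):
    svc = SERVICES[name]
    if svc["type"] == "ExternalName":
        return ("CNAME", svc["externalName"])
    raise NotImplementedError("other Service types resolve to a ClusterIP A record")

print(resolve("mysql-service"))
```

Because it is pure DNS, no ports or selectors are actually enforced by the Service itself; the client still has to dial port 3306 on the resolved name.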

&lt;p&gt;Let's test the connection using a throwaway &lt;code&gt;busybox&lt;/code&gt; pod:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--tty&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; debug &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;busybox &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never &lt;span class="nt"&gt;--&lt;/span&gt; sh
If you don&lt;span class="s1"&gt;'t see a command prompt, try pressing enter.
/ # nc mysql-service 3306
^Cpunt!



&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;nc&lt;/code&gt; call hangs until we interrupt it: the pod is clearly unable to get through! Let's solve the problem now.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Let's build the bridge! &lt;a id="lets-build-the-bridge"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We are going to create a &lt;a href="https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html" rel="noopener noreferrer"&gt;VPC Peering Connection&lt;/a&gt; to facilitate communication between the resources in the two VPCs. According to the documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A &lt;strong&gt;VPC peering connection&lt;/strong&gt; is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection).&lt;/p&gt;
&lt;/blockquote&gt;
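One prerequisite worth verifying before creating the connection: the two VPC CIDR blocks must not overlap, or AWS will reject the peering. A quick check with Python's ipaddress module, using the two CIDRs from this setup:

```python
import ipaddress

# The two CIDR blocks used throughout this post; peering requires
# that they do not overlap.
eks_vpc = ipaddress.ip_network("192.168.0.0/16")  # EKS cluster VPC
rds_vpc = ipaddress.ip_network("10.0.0.0/24")     # RDS VPC

print(eks_vpc.overlaps(rds_vpc))  # False -> safe to peer
```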

&lt;h4&gt;
  
  
  4.1 Create and Accept a VPC Peering Connection &lt;a id="create-and-accept-vpc-peering-connections"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;To create a VPC peering connection, navigate to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;VPC console: &lt;a href="https://console.aws.amazon.com/vpc/" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/vpc/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;Peering Connections&lt;/code&gt; and click on &lt;code&gt;Create Peering Connection&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Configure the details as follows (select the EKS VPC as the &lt;code&gt;Requester&lt;/code&gt; and the RDS VPC as the &lt;code&gt;Accepter&lt;/code&gt;):
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F5a2ilp9pzgiih073mlvp.png" alt="Configuration"&gt;
&lt;/li&gt;
&lt;li&gt;Click on &lt;code&gt;Create Peering Connection&lt;/code&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F9yw3pc84bwo31nihvcvg.jpg" alt="Confirmation page"&gt;
&lt;/li&gt;
&lt;li&gt;Select the &lt;code&gt;Peering Connection&lt;/code&gt; that we just created. Click on &lt;code&gt;Actions&lt;/code&gt; =&amp;gt; &lt;code&gt;Accept&lt;/code&gt;. Again, in the confirmation dialog box, click on &lt;code&gt;Yes, Accept&lt;/code&gt;.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fi3pgejmth4xmhb88fybn.jpg" alt="Yes, Accept"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Don't forget to export the VPC Peering Connection ID:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ssh"&gt;&lt;code&gt;

&lt;span class="err"&gt;$&lt;/span&gt; &lt;span class="k"&gt;export&lt;/span&gt; VPC_PEERING_CONNECTION_ID=pcx-0cc408e65493fe197


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  4.2 Update the EKS cluster VPC's route table &lt;a id="update-eks-cluster-vpc-route-table"&gt;&lt;/a&gt;
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Fetch the route table associated with the 3 public subnets of the VPC created by `eksctl`:&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;aws ec2 describe-route-tables &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="nv"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"tag:aws:cloudformation:logical-id"&lt;/span&gt;,Values&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"PublicRouteTable"&lt;/span&gt; | jq &lt;span class="s1"&gt;'.RouteTables[0].RouteTableId'&lt;/span&gt;
&lt;span class="s2"&gt;"rtb-06103bd0704b3a9ee"&lt;/span&gt;

&lt;span class="c"&gt;# For easy reference&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;EKS_ROUTE_TABLE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rtb-06103bd0704b3a9ee

&lt;span class="c"&gt;# Add route: All traffic to (destination) the RDS VPC CIDR block is via the VPC Peering Connection (target)&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;aws ec2 create-route &lt;span class="nt"&gt;--route-table-id&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;EKS_ROUTE_TABLE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--destination-cidr-block&lt;/span&gt; 10.0.0.0/24 &lt;span class="nt"&gt;--vpc-peering-connection-id&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_PEERING_CONNECTION_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Return"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
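The reason this route does what we want: a route table always picks the most specific (longest-prefix) route that contains the destination IP. A simplified sketch of that selection, seeded with the EKS route table's entries (the local route exists in every VPC route table by default; the internet-gateway default route is an assumption about the public route table):

```python
import ipaddress

# Simplified route-table selection: choose the most specific
# (longest prefix) route whose destination contains the target IP.
ROUTES = [
    ("192.168.0.0/16", "local"),               # intra-VPC traffic (default entry)
    ("0.0.0.0/0", "internet-gateway"),         # assumed default route of the public table
    ("10.0.0.0/24", "pcx-0cc408e65493fe197"),  # RDS VPC via the peering connection
]

def pick_route(dest_ip):
    ip = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in ROUTES
               if ip in ipaddress.ip_network(cidr)]
    net, target = max(matches, key=lambda m: m[0].prefixlen)
    return target

print(pick_route("10.0.0.25"))       # traffic to the RDS VPC -> peering connection
print(pick_route("192.168.30.190"))  # traffic to a worker node -> local
```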
&lt;h4&gt;
  
  
  4.3 Update the RDS VPC's route table &lt;a id="update-rds-vpc-route-table"&gt;&lt;/a&gt;
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Add route: All traffic to (destination) the EKS cluster CIDR block is via the VPC Peering Connection (target)&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;aws ec2 create-route &lt;span class="nt"&gt;--route-table-id&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RDS_ROUTE_TABLE_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--destination-cidr-block&lt;/span&gt; 192.168.0.0/16 &lt;span class="nt"&gt;--vpc-peering-connection-id&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;VPC_PEERING_CONNECTION_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"Return"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  4.4 Update the RDS instance's security group &lt;a id="update-rds-instance-security-group"&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;p&gt;Allow all ingress traffic from the EKS cluster to the RDS instance on port &lt;code&gt;3306&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;aws ec2 authorize-security-group-ingress &lt;span class="nt"&gt;--group-id&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RDS_VPC_SECURITY_GROUP_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--protocol&lt;/span&gt; tcp &lt;span class="nt"&gt;--port&lt;/span&gt; 3306 &lt;span class="nt"&gt;--cidr&lt;/span&gt; 192.168.0.0/16


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
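Conceptually, the rule we just added matches each incoming connection on protocol, port and source CIDR. A simplified sketch of that check (real security groups are stateful and evaluate every attached rule, not just this one):

```python
import ipaddress

# Sketch of how the single ingress rule added above admits traffic:
# protocol and port must match, and the source IP must fall inside
# the rule's CIDR (the EKS VPC's 192.168.0.0/16).
RULE = {"protocol": "tcp", "port": 3306, "cidr": "192.168.0.0/16"}

def allows(protocol, port, source_ip, rule=RULE):
    return (protocol == rule["protocol"]
            and port == rule["port"]
            and ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["cidr"]))

print(allows("tcp", 3306, "192.168.92.207"))  # a worker node's IP -> allowed
print(allows("tcp", 3306, "172.16.0.5"))      # outside the EKS VPC -> denied
```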
&lt;h3&gt;
  
  
  5. Test the connection &lt;a id="test-the-connection"&gt;&lt;/a&gt;
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="nt"&gt;--tty&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; debug &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;busybox &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Never &lt;span class="nt"&gt;--&lt;/span&gt; sh
If you don&lt;span class="s1"&gt;'t see a command prompt, try pressing enter.
/ # nc mysql-service 3306
N
5.7.26-logR&amp;amp;=lk`xTH???mj   _5#K)&amp;gt;mysql_native_password
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We can see that &lt;code&gt;busybox&lt;/code&gt; can now successfully talk to the RDS instance using the service &lt;code&gt;mysql-service&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That said, this is what our final setup looks like (a lot of hard work, guys):&lt;br&gt;
&lt;img src="https://thepracticaldev.s3.amazonaws.com/i/1ba38e5zu8i36egibtvc.jpeg" alt="Final setup"&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This setup allows all pods in the EKS cluster to access the RDS instance. Depending on your use case, this may or may not be ideal for your architecture. To implement more fine-grained access control, consider setting up a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noopener noreferrer"&gt;&lt;code&gt;NetworkPolicy&lt;/code&gt;&lt;/a&gt; resource.&lt;/p&gt;

&lt;p&gt;Useful resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="http://www.davidc.net/sites/default/subnets/subnets.html" rel="noopener noreferrer"&gt;Visual Subnet Calculator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/stedolan/jq" rel="noopener noreferrer"&gt;jq - Command-line JSON processor&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/index.html" rel="noopener noreferrer"&gt;AWS CLI Command Reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html" rel="noopener noreferrer"&gt;AWS VPC Peering&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>database</category>
      <category>devops</category>
    </item>
    <item>
      <title>Up And Running With Kafka On AWS EKS Using Strimzi</title>
      <dc:creator>Ben Sooraj</dc:creator>
      <pubDate>Thu, 03 Oct 2019 10:20:27 +0000</pubDate>
      <link>https://dev.to/bensooraj/up-and-running-with-kafka-on-aws-eks-using-strimzi-25ga</link>
      <guid>https://dev.to/bensooraj/up-and-running-with-kafka-on-aws-eks-using-strimzi-25ga</guid>
      <description>&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;: &lt;em&gt;This is not a tutorial per se, instead, this is me recording my observations as I setup a Kafka cluster for the first time on a Kubernetes platform using Strimzi.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Contents
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Configure the AWS CLI&lt;/li&gt;
&lt;li&gt;Create the EKS cluster&lt;/li&gt;
&lt;li&gt;Enter Kubernetes&lt;/li&gt;
&lt;li&gt;Install and configure Helm&lt;/li&gt;
&lt;li&gt;Install the Strimzi Kafka Operator&lt;/li&gt;
&lt;li&gt;Deploying the Kafka cluster&lt;/li&gt;
&lt;li&gt;Analysis&lt;/li&gt;
&lt;li&gt;Test the Kafka cluster with Node.js clients&lt;/li&gt;
&lt;li&gt;Clean up!&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;Let's get right into it, then!&lt;/p&gt;

&lt;p&gt;We will be using &lt;a href="https://eksctl.io/" rel="noopener noreferrer"&gt;&lt;code&gt;eksctl&lt;/code&gt;&lt;/a&gt;, the official CLI for Amazon EKS, to spin up our K8s cluster.&lt;/p&gt;
&lt;h3&gt;
  
  
  Configure the AWS CLI &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Ensure that the AWS CLI is &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html" rel="noopener noreferrer"&gt;configured&lt;/a&gt;. To view your configuration:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;aws configure list
      Name                    Value             Type    Location
      &lt;span class="nt"&gt;----&lt;/span&gt;                    &lt;span class="nt"&gt;-----&lt;/span&gt;             &lt;span class="nt"&gt;----&lt;/span&gt;    &lt;span class="nt"&gt;--------&lt;/span&gt;
   profile                &amp;lt;not &lt;span class="nb"&gt;set&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;             None    None
access_key     &lt;span class="k"&gt;****************&lt;/span&gt;7ONG shared-credentials-file    
secret_key     &lt;span class="k"&gt;****************&lt;/span&gt;lbQg shared-credentials-file    
    region               ap-south-1      config-file    ~/.aws/config


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Note: The AWS CLI config and credentials are usually stored at &lt;code&gt;~/.aws/config&lt;/code&gt; and &lt;code&gt;~/.aws/credentials&lt;/code&gt; respectively.&lt;/p&gt;
&lt;h3&gt;
  
  
  Create the EKS cluster &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


$ eksctl create cluster --name=kafka-eks-cluster --nodes=4 --region=ap-south-1

[ℹ]  using region ap-south-1
[ℹ]  setting availability zones to [ap-south-1b ap-south-1a ap-south-1c]
[ℹ]  subnets for ap-south-1b - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for ap-south-1a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for ap-south-1c - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "ng-9f3cbfc7" will use "ami-09c3eb35bb3be46a4" [AmazonLinux2/1.12]
[ℹ]  creating EKS cluster "kafka-eks-cluster" in "ap-south-1" region
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-south-1 --name=kafka-eks-cluster'
[ℹ]  2 sequential tasks: { create cluster control plane "kafka-eks-cluster", create nodegroup "ng-9f3cbfc7" }
[ℹ]  building cluster stack "eksctl-kafka-eks-cluster-cluster"
[ℹ]  deploying stack "eksctl-kafka-eks-cluster-cluster"
[ℹ]  building nodegroup stack "eksctl-kafka-eks-cluster-nodegroup-ng-9f3cbfc7"
[ℹ]  --nodes-min=4 was set automatically for nodegroup ng-9f3cbfc7
[ℹ]  --nodes-max=4 was set automatically for nodegroup ng-9f3cbfc7
[ℹ]  deploying stack "eksctl-kafka-eks-cluster-nodegroup-ng-9f3cbfc7"
[✔]  all EKS cluster resource for "kafka-eks-cluster" had been created
[✔]  saved kubeconfig as "/Users/Bensooraj/.kube/config"
[ℹ]  adding role "arn:aws:iam::account_number:role/eksctl-kafka-eks-cluster-nodegrou-NodeInstanceRole-IG63RKPE03YQ" to auth ConfigMap
[ℹ]  nodegroup "ng-9f3cbfc7" has 0 node(s)
[ℹ]  waiting for at least 4 node(s) to become ready in "ng-9f3cbfc7"
[ℹ]  nodegroup "ng-9f3cbfc7" has 4 node(s)
[ℹ]  node "ip-192-168-25-34.ap-south-1.compute.internal" is ready
[ℹ]  node "ip-192-168-50-249.ap-south-1.compute.internal" is ready
[ℹ]  node "ip-192-168-62-231.ap-south-1.compute.internal" is ready
[ℹ]  node "ip-192-168-69-95.ap-south-1.compute.internal" is ready
[ℹ]  kubectl command should work with "/Users/Bensooraj/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "kafka-eks-cluster" in "ap-south-1" region is ready



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
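The subnet layout in the output above comes from eksctl carving the VPC's 192.168.0.0/16 CIDR into /19 blocks: the first three become the public subnets and the next three the private ones. The split can be reproduced with Python's ipaddress module:

```python
import ipaddress

# eksctl splits the 192.168.0.0/16 VPC CIDR into /19 subnets; the
# first six match the public/private subnets in the eksctl output.
vpc = ipaddress.ip_network("192.168.0.0/16")
subnets = [str(s) for s in vpc.subnets(new_prefix=19)]

print(subnets[:3])   # public:  192.168.0.0/19, 192.168.32.0/19, 192.168.64.0/19
print(subnets[3:6])  # private: 192.168.96.0/19, 192.168.128.0/19, 192.168.160.0/19
```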
&lt;p&gt;A k8s cluster named &lt;strong&gt;kafka-eks-cluster&lt;/strong&gt; will be created with 4 nodes (instance type: &lt;a href="https://aws.amazon.com/ec2/instance-types/" rel="noopener noreferrer"&gt;m5.large&lt;/a&gt;) in the Mumbai region (ap-south-1). You can view these in the AWS Console UI as well:&lt;/p&gt;

&lt;p&gt;EKS:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fxamksw0rnxsxjohb8zkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fxamksw0rnxsxjohb8zkh.png" alt="AWS EKS UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudFormation UI:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F8gj6kw97f6bxkm6u9a44.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F8gj6kw97f6bxkm6u9a44.png" alt="Cloudformation UI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, after the cluster is created, the appropriate kubernetes configuration will be added to your kubeconfig file (defaults to &lt;code&gt;~/.kube/config&lt;/code&gt;). The path to the kubeconfig file can be overridden using the &lt;code&gt;--kubeconfig&lt;/code&gt; flag.&lt;/p&gt;
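For reference, that lookup order can be sketched as follows (simplified: the real client can also merge several colon-separated paths from KUBECONFIG):

```python
import os

# Simplified sketch of where kubectl looks for its kubeconfig: the
# KUBECONFIG environment variable wins, otherwise ~/.kube/config.
# (The real client can also merge several colon-separated paths.)
def kubeconfig_path(environ=os.environ):
    return environ.get("KUBECONFIG") or os.path.join(
        os.path.expanduser("~"), ".kube", "config")

print(kubeconfig_path({"KUBECONFIG": "/tmp/custom-config"}))  # /tmp/custom-config
```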
&lt;h3&gt;
  
  
  Enter Kubernetes &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Listing all k8s resources returns only the default &lt;code&gt;kubernetes&lt;/code&gt; service. This confirms that &lt;code&gt;kubectl&lt;/code&gt; is properly configured to point to the cluster that we just created.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
service/kubernetes   ClusterIP   10.100.0.1   &amp;lt;none&amp;gt;        443/TCP   19m


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  Install and configure Helm &lt;a&gt;&lt;/a&gt;
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://helm.sh" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; is a package manager and application management tool for Kubernetes that packages multiple Kubernetes resources into a single logical deployment unit called Chart.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I use &lt;em&gt;Homebrew&lt;/em&gt;, so the installation was pretty straightforward: &lt;code&gt;brew install kubernetes-helm&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Alternatively, to install &lt;code&gt;helm&lt;/code&gt;, run the following:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/eks-kafka-strimzi

&lt;span class="nv"&gt;$ &lt;/span&gt;curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; get_helm.sh

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x get_helm.sh

&lt;span class="nv"&gt;$ &lt;/span&gt;./get_helm.sh


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Read through their &lt;a href="https://helm.sh/docs/using_helm/#installing-helm" rel="noopener noreferrer"&gt;installation guide&lt;/a&gt;, if you are looking for more options.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do not run &lt;code&gt;helm init&lt;/code&gt; yet.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Helm&lt;/code&gt; relies on a service called &lt;strong&gt;&lt;code&gt;tiller&lt;/code&gt;&lt;/strong&gt; that requires special permissions on the kubernetes cluster, so we need to create a &lt;strong&gt;&lt;code&gt;Service Account&lt;/code&gt;&lt;/strong&gt; (with RBAC access) for &lt;strong&gt;&lt;code&gt;tiller&lt;/code&gt;&lt;/strong&gt; to use.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;rbac.yaml&lt;/code&gt; file would look like the following:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tiller&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tiller&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-admin&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tiller&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Apply this to the &lt;code&gt;kafka-eks-cluster&lt;/code&gt; cluster:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

&lt;span class="c"&gt;# Verify (listing only the relevant ones)&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sa,clusterrolebindings &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kube-system
NAME                        SECRETS   AGE
&lt;span class="nb"&gt;.&lt;/span&gt;
serviceaccount/tiller       1         5m22s
&lt;span class="nb"&gt;.&lt;/span&gt;

NAME                                                                                                AGE
&lt;span class="nb"&gt;.&lt;/span&gt;
clusterrolebinding.rbac.authorization.k8s.io/tiller                                                 5m23s
&lt;span class="nb"&gt;.&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now, run &lt;strong&gt;&lt;code&gt;helm init&lt;/code&gt;&lt;/strong&gt; using the service account we set up. This installs tiller into the cluster, which gives it access to manage resources in your cluster.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;helm init &lt;span class="nt"&gt;--service-account&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tiller

&lt;span class="nv"&gt;$HELM_HOME&lt;/span&gt; has been configured at /Users/Bensooraj/.helm.

Tiller &lt;span class="o"&gt;(&lt;/span&gt;the Helm server-side component&lt;span class="o"&gt;)&lt;/span&gt; has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure &lt;span class="s1"&gt;'allow unauthenticated users'&lt;/span&gt; policy.

To prevent this, run &lt;span class="sb"&gt;`&lt;/span&gt;helm init&lt;span class="sb"&gt;`&lt;/span&gt; with the &lt;span class="nt"&gt;--tiller-tls-verify&lt;/span&gt; flag.

For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Install the Strimzi Kafka Operator &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Add the Strimzi repository and install the Strimzi Helm Chart:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Add the repo&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;helm repo add strimzi http://strimzi.io/charts/
&lt;span class="s2"&gt;"strimzi"&lt;/span&gt; has been added to your repositories

&lt;span class="c"&gt;# Search for all Strimzi  charts&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;helm search strim
NAME                            CHART VERSION   APP VERSION DESCRIPTION                
strimzi/strimzi-kafka-operator  0.14.0          0.14.0      Strimzi: Kafka as a Service

&lt;span class="c"&gt;# Install the kafka operator&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;strimzi/strimzi-kafka-operator
NAME:   bulging-gnat
LAST DEPLOYED: Wed Oct  2 15:23:45 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; v1/ClusterRole
NAME                                 AGE
strimzi-cluster-operator-global      0s
strimzi-cluster-operator-namespaced  0s
strimzi-entity-operator              0s
strimzi-kafka-broker                 0s
strimzi-topic-operator               0s

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; v1/ClusterRoleBinding
NAME                                              AGE
strimzi-cluster-operator                          0s
strimzi-cluster-operator-kafka-broker-delegation  0s

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; v1/Deployment
NAME                      READY  UP-TO-DATE  AVAILABLE  AGE
strimzi-cluster-operator  0/1    1           0          0s

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; v1/Pod&lt;span class="o"&gt;(&lt;/span&gt;related&lt;span class="o"&gt;)&lt;/span&gt;
NAME                                       READY  STATUS             RESTARTS  AGE
strimzi-cluster-operator-6667fbc5f8-cqvdv  0/1    ContainerCreating  0         0s

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; v1/RoleBinding
NAME                                                 AGE
strimzi-cluster-operator                             0s
strimzi-cluster-operator-entity-operator-delegation  0s
strimzi-cluster-operator-topic-operator-delegation   0s

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; v1/ServiceAccount
NAME                      SECRETS  AGE
strimzi-cluster-operator  1        0s

&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; v1beta1/CustomResourceDefinition
NAME                                AGE
kafkabridges.kafka.strimzi.io       0s
kafkaconnects.kafka.strimzi.io      0s
kafkaconnects2is.kafka.strimzi.io   0s
kafkamirrormakers.kafka.strimzi.io  0s
kafkas.kafka.strimzi.io             1s
kafkatopics.kafka.strimzi.io        1s
kafkausers.kafka.strimzi.io         1s

NOTES:
Thank you &lt;span class="k"&gt;for &lt;/span&gt;installing strimzi-kafka-operator-0.14.0

To create a Kafka cluster refer to the following documentation.

https://strimzi.io/docs/0.14.0/#kafka-cluster-str


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;List all the kubernetes objects created again:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get all
NAME                                            READY   STATUS    RESTARTS   AGE
pod/strimzi-cluster-operator-6667fbc5f8-cqvdv   1/1     Running   0          9m25s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;   AGE
service/kubernetes   ClusterIP   10.100.0.1   &amp;lt;none&amp;gt;        443/TCP   90m

NAME                                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/strimzi-cluster-operator   1         1         1            1           9m25s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/strimzi-cluster-operator-6667fbc5f8   1         1         1       9m26s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Deploying the Kafka cluster &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We will now create a Kafka cluster with 3 brokers. The YAML file (&lt;code&gt;kafka-cluster.Kafka.yaml&lt;/code&gt;) for creating the Kafka cluster would look like the following:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka.strimzi.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Kafka&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka-cluster&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;kafka&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2.3.0&lt;/span&gt; &lt;span class="c1"&gt;# Kafka version&lt;/span&gt;
    &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt; &lt;span class="c1"&gt;# Replicas specifies the number of broker nodes.&lt;/span&gt;
    &lt;span class="na"&gt;listeners&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# Listeners configure how clients connect to the Kafka cluster&lt;/span&gt;
      &lt;span class="na"&gt;plain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt; &lt;span class="c1"&gt;# 9092&lt;/span&gt;
      &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt; &lt;span class="c1"&gt;# 9093&lt;/span&gt;
    &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;offsets.topic.replication.factor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
      &lt;span class="na"&gt;transaction.state.log.replication.factor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
      &lt;span class="na"&gt;transaction.state.log.min.isr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="na"&gt;log.message.format.version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2.3"&lt;/span&gt;
      &lt;span class="na"&gt;delete.topic.enable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;persistent-claim&lt;/span&gt;
      &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10Gi&lt;/span&gt;
      &lt;span class="na"&gt;deleteClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;zookeeper&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;persistent-claim&lt;/span&gt; &lt;span class="c1"&gt;# Persistent storage backed by AWS EBS&lt;/span&gt;
      &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10Gi&lt;/span&gt;
      &lt;span class="na"&gt;deleteClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;entityOperator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;topicOperator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt; &lt;span class="c1"&gt;# Operator for topic administration&lt;/span&gt;
    &lt;span class="na"&gt;userOperator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Apply the above YAML file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; kafka-cluster.Kafka.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Analysis &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This is where things get interesting. We will now analyse &lt;strong&gt;some&lt;/strong&gt; of the k8s resources which the &lt;code&gt;strimzi kafka operator&lt;/code&gt; has created for us under the hood.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get statefulsets.apps,pod,deployments,svc
NAME                                       DESIRED   CURRENT   AGE
statefulset.apps/kafka-cluster-kafka       3         3         78m
statefulset.apps/kafka-cluster-zookeeper   3         3         79m

NAME                                                 READY   STATUS    RESTARTS   AGE
pod/kafka-cluster-entity-operator-54cb77fd9d-9zbcx   3/3     Running   0          77m
pod/kafka-cluster-kafka-0                            2/2     Running   0          78m
pod/kafka-cluster-kafka-1                            2/2     Running   0          78m
pod/kafka-cluster-kafka-2                            2/2     Running   0          78m
pod/kafka-cluster-zookeeper-0                        2/2     Running   0          79m
pod/kafka-cluster-zookeeper-1                        2/2     Running   0          79m
pod/kafka-cluster-zookeeper-2                        2/2     Running   0          79m
pod/strimzi-cluster-operator-6667fbc5f8-cqvdv        1/1     Running   0          172m

NAME                                                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/kafka-cluster-entity-operator   1         1         1            1           77m
deployment.extensions/strimzi-cluster-operator        1         1         1            1           172m

NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
service/kafka-cluster-kafka-bootstrap    ClusterIP   10.100.177.177   &amp;lt;none&amp;gt;        9091/TCP,9092/TCP,9093/TCP   78m
service/kafka-cluster-kafka-brokers      ClusterIP   None             &amp;lt;none&amp;gt;        9091/TCP,9092/TCP,9093/TCP   78m
service/kafka-cluster-zookeeper-client   ClusterIP   10.100.199.128   &amp;lt;none&amp;gt;        2181/TCP                     79m
service/kafka-cluster-zookeeper-nodes    ClusterIP   None             &amp;lt;none&amp;gt;        2181/TCP,2888/TCP,3888/TCP   79m
service/kubernetes                       ClusterIP   10.100.0.1       &amp;lt;none&amp;gt;        443/TCP                      4h13m


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Points to note:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The StatefulSet &lt;code&gt;kafka-cluster-zookeeper&lt;/code&gt; has created 3 pods - &lt;code&gt;kafka-cluster-zookeeper-0&lt;/code&gt;, &lt;code&gt;kafka-cluster-zookeeper-1&lt;/code&gt; and &lt;code&gt;kafka-cluster-zookeeper-2&lt;/code&gt;. The headless service &lt;code&gt;kafka-cluster-zookeeper-nodes&lt;/code&gt; gives these 3 pods (the 3 Zookeeper nodes) stable network identities.&lt;/li&gt;
&lt;li&gt;The StatefulSet &lt;code&gt;kafka-cluster-kafka&lt;/code&gt; has created 3 pods - &lt;code&gt;kafka-cluster-kafka-0&lt;/code&gt;, &lt;code&gt;kafka-cluster-kafka-1&lt;/code&gt; and &lt;code&gt;kafka-cluster-kafka-2&lt;/code&gt;. The headless service &lt;code&gt;kafka-cluster-kafka-brokers&lt;/code&gt; gives these 3 pods (the 3 Kafka brokers) stable network identities.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Persistent volumes are dynamically provisioned:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                    STORAGECLASS   REASON   AGE
persistentvolume/pvc-7ff2909f-e507-11e9-91df-0a1e73fdd786   10Gi       RWO            Delete           Bound    default/data-kafka-cluster-zookeeper-1   gp2                     11h
persistentvolume/pvc-7ff290c4-e507-11e9-91df-0a1e73fdd786   10Gi       RWO            Delete           Bound    default/data-kafka-cluster-zookeeper-2   gp2                     11h
persistentvolume/pvc-7ffd1d22-e507-11e9-a775-029ce0835b96   10Gi       RWO            Delete           Bound    default/data-kafka-cluster-zookeeper-0   gp2                     11h
persistentvolume/pvc-a5997b77-e507-11e9-91df-0a1e73fdd786   10Gi       RWO            Delete           Bound    default/data-kafka-cluster-kafka-0       gp2                     11h
persistentvolume/pvc-a599e52b-e507-11e9-91df-0a1e73fdd786   10Gi       RWO            Delete           Bound    default/data-kafka-cluster-kafka-1       gp2                     11h
persistentvolume/pvc-a59c6cd2-e507-11e9-91df-0a1e73fdd786   10Gi       RWO            Delete           Bound    default/data-kafka-cluster-kafka-2       gp2                     11h

NAME                                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-kafka-cluster-kafka-0       Bound    pvc-a5997b77-e507-11e9-91df-0a1e73fdd786   10Gi       RWO            gp2            11h
persistentvolumeclaim/data-kafka-cluster-kafka-1       Bound    pvc-a599e52b-e507-11e9-91df-0a1e73fdd786   10Gi       RWO            gp2            11h
persistentvolumeclaim/data-kafka-cluster-kafka-2       Bound    pvc-a59c6cd2-e507-11e9-91df-0a1e73fdd786   10Gi       RWO            gp2            11h
persistentvolumeclaim/data-kafka-cluster-zookeeper-0   Bound    pvc-7ffd1d22-e507-11e9-a775-029ce0835b96   10Gi       RWO            gp2            11h
persistentvolumeclaim/data-kafka-cluster-zookeeper-1   Bound    pvc-7ff2909f-e507-11e9-91df-0a1e73fdd786   10Gi       RWO            gp2            11h
persistentvolumeclaim/data-kafka-cluster-zookeeper-2   Bound    pvc-7ff290c4-e507-11e9-91df-0a1e73fdd786   10Gi       RWO            gp2            11h


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;You can view the provisioned AWS EBS volumes in the UI as well:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fzys2wmubcb42glzzp23m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fzys2wmubcb42glzzp23m.png" alt="EBS UI"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create topics
&lt;/h3&gt;

&lt;p&gt;Before we get started with the clients, we need to create a &lt;strong&gt;topic&lt;/strong&gt; (with 3 partitions and a replication factor of 3), over which our &lt;code&gt;producer&lt;/code&gt; and &lt;code&gt;consumer&lt;/code&gt; will produce and consume messages, respectively.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka.strimzi.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;KafkaTopic&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-topic&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;strimzi.io/cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka-cluster&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;partitions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Apply the YAML to the k8s cluster:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; create-topics.yaml
kafkatopic.kafka.strimzi.io/test-topic created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Test the Kafka cluster with Node.js clients &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The multi-broker Kafka cluster that we deployed is backed by &lt;code&gt;statefulset&lt;/code&gt;s and their corresponding headless &lt;code&gt;service&lt;/code&gt;s.&lt;/p&gt;

&lt;p&gt;Since each Pod (Kafka broker) now has a network identity, clients can connect to the Kafka brokers via a combination of the pod name and service name: &lt;code&gt;$(podname).$(governing service domain)&lt;/code&gt;. In our case, these would be the following URLs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;kafka-cluster-kafka-0.kafka-cluster-kafka-brokers&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kafka-cluster-kafka-1.kafka-cluster-kafka-brokers&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kafka-cluster-kafka-2.kafka-cluster-kafka-brokers&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If the Kafka cluster is deployed in a different namespace, you will have to expand it a little further: &lt;code&gt;$(podname).$(service name).$(namespace).svc.cluster.local&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Alternatively, the clients can connect to the Kafka cluster using the service &lt;code&gt;kafka-cluster-kafka-bootstrap:9092&lt;/code&gt;. It distributes connections across the three broker-specific endpoints listed above. Since clients no longer need to keep track of individual broker endpoints, this approach works well when scaling the number of brokers in the cluster up or down.&lt;/li&gt;
&lt;/ol&gt;
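&lt;p&gt;The broker addresses above follow directly from the StatefulSet naming convention (&lt;code&gt;$(statefulset name)-$(ordinal)&lt;/code&gt;) plus the headless service name. As a quick sketch (the helper below is hypothetical, not part of the repo), you could derive them programmatically:&lt;/p&gt;

```javascript
// Derive the per-broker DNS addresses for a Strimzi Kafka cluster.
// Pod names follow the StatefulSet convention $(cluster)-kafka-$(ordinal),
// and the headless service is named $(cluster)-kafka-brokers.
// Pass `namespace` only when clients run in a different namespace.
// NOTE: hypothetical helper, shown for illustration.
function brokerAddresses(cluster, replicas, { namespace, port = 9092 } = {}) {
  const service = `${cluster}-kafka-brokers`;
  const suffix = namespace ? `.${namespace}.svc.cluster.local` : "";
  return Array.from(
    { length: replicas },
    (_, i) => `${cluster}-kafka-${i}.${service}${suffix}:${port}`
  );
}

console.log(brokerAddresses("kafka-cluster", 3));
// e.g. [ 'kafka-cluster-kafka-0.kafka-cluster-kafka-brokers:9092', ... ]
```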

&lt;p&gt;First, clone this repo: &lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/bensooraj" rel="noopener noreferrer"&gt;
        bensooraj
      &lt;/a&gt; / &lt;a href="https://github.com/bensooraj/strimzi-kafka-aws-eks" rel="noopener noreferrer"&gt;
        strimzi-kafka-aws-eks
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Create the configmap, which contains details such as the broker DNS names, topic name and consumer group ID&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nb"&gt;test&lt;/span&gt;/k8s/config.yaml
configmap/kafka-client-config created

&lt;span class="c"&gt;# Create the producer deployment&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nb"&gt;test&lt;/span&gt;/k8s/producer.Deployment.yaml
deployment.apps/node-test-producer created

&lt;span class="c"&gt;# Expose the producer deployment via a service of type LoadBalancer (backed by the AWS Elastic Load Balancer). This just makes it easy for me to curl from postman&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nb"&gt;test&lt;/span&gt;/k8s/producer.Service.yaml
service/node-test-producer created

&lt;span class="c"&gt;# Finally, create the consumer deployment&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nb"&gt;test&lt;/span&gt;/k8s/consumer.Deployment.yaml
deployment.apps/node-test-consumer created



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you list the producer service that we created, you will notice a &lt;code&gt;URL&lt;/code&gt; under EXTERNAL-IP:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get svc
NAME                             TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nb"&gt;.&lt;/span&gt;
node-test-producer               LoadBalancer   10.100.145.203   ac5f3d0d1e55a11e9a775029ce0835b9-2040242746.ap-south-1.elb.amazonaws.com   80:31231/TCP                 55m



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The URL &lt;code&gt;ac5f3d0d1e55a11e9a775029ce0835b9-2040242746.ap-south-1.elb.amazonaws.com&lt;/code&gt; is an &lt;code&gt;AWS ELB&lt;/code&gt;-backed public endpoint that we will query to produce messages to the Kafka cluster.&lt;/p&gt;

&lt;p&gt;Also, you can see that there is 1 producer pod and 3 consumer pods (one for each partition of the topic &lt;code&gt;test-topic&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pod
NAME                                             READY   STATUS    RESTARTS   AGE
node-test-consumer-96b44cbcb-gs2km               1/1     Running   0          125m
node-test-consumer-96b44cbcb-ptvjd               1/1     Running   0          125m
node-test-consumer-96b44cbcb-xk75j               1/1     Running   0          125m
node-test-producer-846d9c5986-vcsf2              1/1     Running   0          125m


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The producer app exposes 3 URLs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;/kafka-test/green/:message&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/kafka-test/blue/:message&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;/kafka-test/cyan/:message&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here, &lt;code&gt;:message&lt;/code&gt; can be any valid string. Each of these URLs produces a &lt;strong&gt;message&lt;/strong&gt;, along with the &lt;strong&gt;colour&lt;/strong&gt; information, to the topic &lt;code&gt;test-topic&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The consumer group (the 3 consumer pods that we spun up) listens for incoming messages on the topic &lt;code&gt;test-topic&lt;/code&gt;, receives them, and prints them to the console according to the colour instruction.&lt;/p&gt;
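&lt;p&gt;The colour handling on the consumer side can be sketched with ANSI escape codes (a hypothetical helper; the actual repo's formatting may differ):&lt;/p&gt;

```javascript
// Wrap a message in the ANSI escape code for its colour so the
// consumer can print it in colour on the console.
// (Hypothetical helper; the repo's actual formatting may differ.)
const ANSI = { green: "\x1b[32m", blue: "\x1b[34m", cyan: "\x1b[36m" };
const RESET = "\x1b[0m";

function colourise({ colour, message }) {
  const code = ANSI[colour];
  return code ? `${code}${message}${RESET}` : message;
}

console.log(colourise({ colour: "green", message: "hello from test-topic" }));
```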

&lt;p&gt;I &lt;code&gt;curl&lt;/code&gt; each URL 3 times. From the following GIF you can see how message consumption is distributed across the 3 consumers in a &lt;code&gt;round-robin&lt;/code&gt; manner:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fa3b19iryt7pxff3z8ust.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fa3b19iryt7pxff3z8ust.gif" alt="Producer and Consumer Visualisation"&gt;&lt;/a&gt;&lt;/p&gt;
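&lt;p&gt;Conceptually, with unkeyed messages the producer cycles through the 3 partitions, and since each consumer in the group owns exactly one partition, message &lt;code&gt;i&lt;/code&gt; effectively lands on consumer &lt;code&gt;i % 3&lt;/code&gt;. A minimal sketch of that assignment (illustrative only):&lt;/p&gt;

```javascript
// Distribute `messageCount` unkeyed messages round-robin across
// `partitions` partitions; with one consumer per partition this is
// also the per-consumer assignment. Illustrative only.
function assignRoundRobin(messageCount, partitions) {
  const byPartition = Array.from({ length: partitions }, () => []);
  Array.from({ length: messageCount }, (_, i) => i).forEach((i) =>
    byPartition[i % partitions].push(i)
  );
  return byPartition;
}

console.log(assignRoundRobin(9, 3));
// partition 0 → [0, 3, 6], partition 1 → [1, 4, 7], partition 2 → [2, 5, 8]
```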

&lt;h3&gt;
  
  
  Clean Up! &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;


&lt;span class="c"&gt;# Delete the test producer and consumer apps:&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nb"&gt;test&lt;/span&gt;/k8s/
configmap &lt;span class="s2"&gt;"kafka-client-config"&lt;/span&gt; deleted
deployment.apps &lt;span class="s2"&gt;"node-test-consumer"&lt;/span&gt; deleted
deployment.apps &lt;span class="s2"&gt;"node-test-producer"&lt;/span&gt; deleted
service &lt;span class="s2"&gt;"node-test-producer"&lt;/span&gt; deleted

&lt;span class="c"&gt;# Delete the Kafka cluster&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete kafka kafka-cluster
kafka.kafka.strimzi.io &lt;span class="s2"&gt;"kafka-cluster"&lt;/span&gt; deleted

&lt;span class="c"&gt;# Delete the Strimzi cluster operator&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete deployments. strimzi-cluster-operator
deployment.extensions &lt;span class="s2"&gt;"strimzi-cluster-operator"&lt;/span&gt; deleted

&lt;span class="c"&gt;# Manually delete the persistent volumes&lt;/span&gt;
&lt;span class="c"&gt;# Kafka&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pvc data-kafka-cluster-kafka-0
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pvc data-kafka-cluster-kafka-1
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pvc data-kafka-cluster-kafka-2
&lt;span class="c"&gt;# Zookeeper&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pvc data-kafka-cluster-zookeeper-0
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pvc data-kafka-cluster-zookeeper-1
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete pvc data-kafka-cluster-zookeeper-2


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, delete the EKS cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;eksctl delete cluster kafka-eks-cluster
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  using region ap-south-1
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  deleting EKS cluster &lt;span class="s2"&gt;"kafka-eks-cluster"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;✔]  kubeconfig has been updated
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  2 sequential tasks: &lt;span class="o"&gt;{&lt;/span&gt; delete nodegroup &lt;span class="s2"&gt;"ng-9f3cbfc7"&lt;/span&gt;, delete cluster control plane &lt;span class="s2"&gt;"kafka-eks-cluster"&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;async] &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  will delete stack &lt;span class="s2"&gt;"eksctl-kafka-eks-cluster-nodegroup-ng-9f3cbfc7"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  waiting &lt;span class="k"&gt;for &lt;/span&gt;stack &lt;span class="s2"&gt;"eksctl-kafka-eks-cluster-nodegroup-ng-9f3cbfc7"&lt;/span&gt; to get deleted
&lt;span class="o"&gt;[&lt;/span&gt;ℹ]  will delete stack &lt;span class="s2"&gt;"eksctl-kafka-eks-cluster-cluster"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;✔]  all cluster resources were deleted


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Hope this helped!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>devops</category>
      <category>node</category>
    </item>
  </channel>
</rss>
