<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jonathan Monette</title>
    <description>The latest articles on DEV Community by Jonathan Monette (@jmoney).</description>
    <link>https://dev.to/jmoney</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F305993%2F0e4e4fae-0793-4492-bad3-7eddea95b9a9.jpeg</url>
      <title>DEV Community: Jonathan Monette</title>
      <link>https://dev.to/jmoney</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jmoney"/>
    <language>en</language>
    <item>
      <title>Tips for project management</title>
      <dc:creator>Jonathan Monette</dc:creator>
      <pubDate>Tue, 19 May 2020 05:17:57 +0000</pubDate>
      <link>https://dev.to/jmoney/tips-for-project-management-4cpp</link>
      <guid>https://dev.to/jmoney/tips-for-project-management-4cpp</guid>
      <description>&lt;p&gt;Anytime a new project is spun up the first thing that is done is normally to ask how long will this take. Some folks are explicit on this ask and some ask other questions to tease it out but everyone wants to know how long a project will take. Some engineers are better at this than others when put on the spot to give an ETA and some just say they will get back to you.  Here are some tips for proper expectation setting when working on a project. &lt;/p&gt;

&lt;h2&gt;Think through admin tasks&lt;/h2&gt;

&lt;p&gt;This one is normally the hardest. Every engineer can think through the coding tasks that need to be done to ship a feature, but we rarely take into consideration the admin tasks required to make that happen.&lt;/p&gt;

&lt;p&gt;However, please do not bring up every one of these details during your agile ceremonies when breaking down stories or grooming the backlog. This is something to keep in mind when providing estimates, but there is no need to go into the messy details unless asked to explain. Keep the explanation clear and concise.&lt;/p&gt;

&lt;h3&gt;Documentation&lt;/h3&gt;

&lt;p&gt;A big one that gets forgotten is documentation. Not allotting time to update READMEs, wikis, and runbooks can easily make the simplest task get horribly off schedule. Engineers really underestimate the time it takes to build out a good README.&lt;/p&gt;

&lt;h3&gt;Git management&lt;/h3&gt;

&lt;p&gt;Another one that gets forgotten is the time spent managing git. It seems small, but have you ever accounted for the time spent on code reviews or conflict resolution, or just on discovering which repos need to be updated and obtaining access to them? The folks you need code reviews from may not be working on the same feature you are, and will be distracted by their own tasks. Everyone could be working in a monorepo where one engineer decides to “refactor” a large chunk of code, which steps on everyone touching the same code base. Git is sometimes a massive time sink.&lt;/p&gt;

&lt;h2&gt;Write it all down&lt;/h2&gt;

&lt;p&gt;If you are like me, your memory is terrible. I cannot remember anything I did yesterday unless it is recorded in some way. Lately, I use a simple markdown file in git with the tasks I have completed (and when) as well as the tasks I still need to complete, roughly ordered by priority. I write everything down for a project just so I can remember all the admin gotchas above.&lt;/p&gt;

&lt;p&gt;Just by writing things down, you start to think in a more structured way. When someone asks you “how long will this take,” you can quickly recall a similar feature and all the tasks it took to get it done, down to whatever level of detail you want. This can go a long way toward estimating an ETA.&lt;/p&gt;

&lt;h2&gt;Communicate early and communicate often&lt;/h2&gt;

&lt;p&gt;I usually tell junior engineers I work with on projects that I do not care how long a task takes as long as I know they are not blocked. If you are blocked, or think you are blocked, come find me and let me know immediately; we will help get you unblocked ASAP. It is always best to communicate the status of a project to stakeholders. I do not mean letting them know about every little commit on every little ticket, but let them know when the feature hits pre-prod, when it has passed or failed QA testing, and especially when it is blocked. Stakeholders always want to know when things are blocked and when they become unblocked, so they can determine for themselves whether they should step in and help prioritize the work to unblock it, or just understand the impact on the timeline.&lt;/p&gt;

&lt;h1&gt;Wrap up&lt;/h1&gt;

&lt;p&gt;This is not an exhaustive list. There are many other tricks and frameworks out there that help you break down problems better. These are just things I constantly have to remind myself of, and have built a personal process around, so that they are always top of mind when working through projects.&lt;/p&gt;

</description>
      <category>engineering</category>
      <category>timemanagement</category>
    </item>
    <item>
      <title>Deploying an AWS PrivateLink for a Kafka Cluster</title>
      <dc:creator>Jonathan Monette</dc:creator>
      <pubDate>Sat, 25 Apr 2020 03:38:36 +0000</pubDate>
      <link>https://dev.to/jmoney/deploying-an-aws-privatelink-for-a-kafka-cluster-86p</link>
      <guid>https://dev.to/jmoney/deploying-an-aws-privatelink-for-a-kafka-cluster-86p</guid>
      <description>&lt;h1&gt;
  
  
  Deploying an AWS PrivateLink for a Kafka Cluster
&lt;/h1&gt;

&lt;p&gt;Kafka is a massively scalable way of delivering events to a multitude of systems. In the cloud, however, Kafka is not always readily available across the same networks. This is typically a problem for any type of application: connecting privately between cloud networks, without exposing anything to the internet, has been difficult. PrivateLink is a feature that recently came out from AWS that allows you to connect AWS Virtual Private Clouds together privately. The solution leverages an AWS Network Load Balancer in the provider account that consumers in the consumer account bind to. In this approach, there are IPs in the consumer account that can be routed to, and AWS handles the NAT translation to the IPs of the Network Load Balancer in the provider account. This works great for stateless applications, where it does not matter which instance of a service behind the Network Load Balancer serves the request, but for stateful applications, especially in a distributed system, it tends to matter which host processes the request.&lt;/p&gt;

&lt;p&gt;One such service is Kafka. Kafka has a leader/follower design in which the leader must process all produce requests, while the followers, and the leader, can be used for consuming data. The way producing works is that you provide a list of bootstrap servers. These bootstrap servers, which can be any of the servers in the Kafka cluster, are used for discovering the rest of the cluster as well as the metadata of the topics: they respond with metadata about the rest of the cluster so that the client can connect to the other brokers directly. This is true for both producing data to and consuming data from the cluster.&lt;/p&gt;

&lt;p&gt;To circumvent the stateful load-balancing problem, we must craft rules so that the load balancer always sends each request to the specific node that needs to respond.&lt;/p&gt;

&lt;h2&gt;Approach&lt;/h2&gt;

&lt;p&gt;The producer and consumer use cases are exactly the same in this solution.&lt;br&gt;
The solution relies on routing traffic, based on port numbers, through a Network Load Balancer with cross-zone load balancing enabled. There is a target group per broker instance defined on the Network Load Balancer, each with a unique port number as the source. This means that when traffic arrives on a specific port, the NLB routes it to a specific instance in the Kafka cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ca-FjMVp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zougma1u9eqt00jdr1ei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ca-FjMVp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zougma1u9eqt00jdr1ei.png" alt="Kafka Broker request path via a PrivateLink" width="880" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above diagram, the important piece is that we map port 6000 to port 9090 on Broker 1, port 6001 to port 9090 on Broker 2, and port 6002 to port 9090 on Broker 3. This mapping scheme is the key to making Kafka connectivity across the PrivateLink work. Now we just have to configure the individual Kafka brokers to respond with the right connection information.&lt;/p&gt;
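&lt;p&gt;The mapping follows a simple pattern: PrivateLink port 6000 + (broker id - 1) forwards to listener port 9090 on that broker. As a hypothetical sketch (the broker ids and DNS name simply mirror the configs that follow), the advertised listener lines can be generated like this:&lt;/p&gt;

```python
# Sketch of the port-mapping scheme: NLB listener port 6000 + (broker_id - 1)
# routes to listener port 9090 on that broker. Values are illustrative and
# mirror the broker configs shown in this post.

BASE_VPCE_PORT = 6000          # first NLB listener port exposed over PrivateLink
BROKER_LISTENER_PORT = 9090    # VPCE listener port on every broker
VPCE_DNS = "kafka.vpce.dev"

def vpce_port(broker_id: int) -> int:
    """NLB listener port whose target group forwards to this broker's port 9090."""
    return BASE_VPCE_PORT + (broker_id - 1)

def advertised_listeners(broker_id: int) -> str:
    """The advertised.listeners line for one broker under this scheme."""
    return (f"advertised.listeners=INTERNAL_PLAINTEXT://:9092,"
            f"VPCE://{VPCE_DNS}:{vpce_port(broker_id)}")

for broker_id in (1, 2, 3):
    print(advertised_listeners(broker_id))
```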

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Broker 1&lt;/span&gt;
advertised.listeners&lt;span class="o"&gt;=&lt;/span&gt;INTERNAL_PLAINTEXT://:9092,VPCE://kafka.vpce.dev:6000
&lt;span class="nv"&gt;listeners&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;INTERNAL_PLAINTEXT://:9092,VPCE://:9090
inter.broker.listener.name&lt;span class="o"&gt;=&lt;/span&gt;INTERNAL_PLAINTEXT
listener.security.protocol.map&lt;span class="o"&gt;=&lt;/span&gt;VPCE:SSL,INTERNAL_PLAINTEXT:PLAINTEXT
&lt;span class="c"&gt;# Broker 2&lt;/span&gt;
advertised.listeners&lt;span class="o"&gt;=&lt;/span&gt;INTERNAL_PLAINTEXT://:9092,VPCE://kafka.vpce.dev:6001
&lt;span class="nv"&gt;listeners&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;INTERNAL_PLAINTEXT://:9092,VPCE://:9090
inter.broker.listener.name&lt;span class="o"&gt;=&lt;/span&gt;INTERNAL_PLAINTEXT
listener.security.protocol.map&lt;span class="o"&gt;=&lt;/span&gt;VPCE:SSL,INTERNAL_PLAINTEXT:PLAINTEXT
&lt;span class="c"&gt;# Broker 3&lt;/span&gt;
advertised.listeners&lt;span class="o"&gt;=&lt;/span&gt;INTERNAL_PLAINTEXT://:9092,VPCE://kafka.vpce.dev:6002
&lt;span class="nv"&gt;listeners&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;INTERNAL_PLAINTEXT://:9092,VPCE://:9090
inter.broker.listener.name&lt;span class="o"&gt;=&lt;/span&gt;INTERNAL_PLAINTEXT
listener.security.protocol.map&lt;span class="o"&gt;=&lt;/span&gt;VPCE:SSL,INTERNAL_PLAINTEXT:PLAINTEXT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are several things going on in this configuration for which the Kafka broker configuration manual should be consulted, but the overview is that we are telling the brokers to advertise themselves as &lt;code&gt;kafka.vpce.dev:600[0-2]&lt;/code&gt; back to clients during the bootstrap phase of connecting. This setup does have an implication: kafka.vpce.dev must resolve to the consumer-side IPs of the VPC Endpoint in the PrivateLink. That means the consumer end of the VPC Endpoint must host/own/agree upon the advertised DNS names used in the listeners. This can be accomplished as easily as setting up a private hosted zone, vpce.dev, in Route53 of your AWS account, where you add the various VPC Endpoint consumers as alias A records. In this case, kafka.vpce.dev would be an alias A record to the VPC Endpoint consumer.&lt;/p&gt;
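&lt;p&gt;From the client side, a minimal configuration under this setup might look like the sketch below. The bootstrap addresses are the advertised VPCE endpoints from above; the truststore path and password are placeholders you would replace with your own.&lt;/p&gt;

```shell
# Hypothetical client.properties for a producer or consumer connecting over
# the PrivateLink. Any of the three advertised endpoints can serve bootstrap.
bootstrap.servers=kafka.vpce.dev:6000,kafka.vpce.dev:6001,kafka.vpce.dev:6002
# The VPCE listener is mapped to SSL in listener.security.protocol.map,
# so clients must connect with SSL. Truststore values are placeholders.
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=changeit
```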

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Connecting Kafka across VPCs can be overly complex and not straightforward, but as with all complex problems it can be fun! However, the setup will certainly be easier to debug if running Kafka over PrivateLink is avoided; using AWS Transit Gateway or good ol' VPC peering would definitely be preferable.&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>privatelink</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
