<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rohith Kunnath</title>
    <description>The latest articles on DEV Community by Rohith Kunnath (@rohithmenon89).</description>
    <link>https://dev.to/rohithmenon89</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F8842%2F49df94db-f97d-45a1-85a4-2ecadaf13ae5.jpg</url>
      <title>DEV Community: Rohith Kunnath</title>
      <link>https://dev.to/rohithmenon89</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rohithmenon89"/>
    <language>en</language>
    <item>
      <title>Anyone tried PR reviews together as a team every day?</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Thu, 07 Jul 2022 13:06:56 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/anyone-tried-pr-reviews-together-as-a-team-every-day-7g8</link>
      <guid>https://dev.to/rohithmenon89/anyone-tried-pr-reviews-together-as-a-team-every-day-7g8</guid>
      <description>&lt;p&gt;Anyone tried PR reviews together as a team every day instead of individual team members reviewing them? Is it effective?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>codereview</category>
    </item>
    <item>
      <title>Why Spring Boot and not Dropwizard?</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Wed, 18 May 2022 08:25:23 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/why-springboot-and-not-dropwizard--81d</link>
      <guid>https://dev.to/rohithmenon89/why-springboot-and-not-dropwizard--81d</guid>
      <description></description>
      <category>java</category>
      <category>framework</category>
    </item>
    <item>
      <title>Making Share My Trip Feature much more robust.</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Wed, 02 Feb 2022 09:00:34 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/making-share-my-trip-feature-much-more-robust-2lgl</link>
      <guid>https://dev.to/rohithmenon89/making-share-my-trip-feature-much-more-robust-2lgl</guid>
      <description>&lt;p&gt;Working with one of the top urban mobility providers in Germany was definitely a high point in my career. A lot of learning and unlearning happens with every career switch. One of them, for me, was refactoring, or rather rewriting, the Share My Trip feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Share My Trip feature?
&lt;/h2&gt;

&lt;p&gt;Anyone taking a ride in the app can share the entire trip route, plus information about the vehicle they are riding in, via a web URL. This is normally used to keep someone expecting your arrival informed about your current location. However, it can also be regarded as a security feature in our fast-moving world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Old Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m4_cba6k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70mfgq98jbbslng2tf97.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m4_cba6k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70mfgq98jbbslng2tf97.jpg" alt="Legacy Arch" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The old architecture was a microservice serving users real-time data collected from multiple data sources. The cons far outweighed the pros in this design. The ones that caught my eye are listed below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Increased exposure to Distributed Denial of Service (DDoS) attacks.&lt;/strong&gt; The URL is public, and it queries the big legacy monolith database with a lot of table joins, external driver-service API calls, and who knows how many more things to be added there in the future. I would argue it is easy to trigger a DDoS attack.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Inefficient resource usage.&lt;/strong&gt; Location Service stores its data in distributed Redis clusters, and Redis eats nothing but memory. As a company with thousands of vehicles on the road, we are talking about location updates at a rate of 100k/min. Features like Share My Trip hitting this resource on every request is inefficient resource usage.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Questionable availability and reliability.&lt;/strong&gt; Keeping our service available to end users 99.9% of the time, and keeping it reliable, depends on the health of our mothership (the legacy database), a lot of microservices, and the monolith service that connects to the database.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  New Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZaOWc4LB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vks2timke9s2oomgvu5v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZaOWc4LB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vks2timke9s2oomgvu5v.jpg" alt="New Arch" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The new architecture is purely event-based, using a hybrid of the Pub/Sub and Producer/Consumer models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stack:
&lt;/h3&gt;

&lt;p&gt;Database - MongoDB. Why? MongoDB helps us easily absorb schema changes coming from a lot of independent services, and it lets us maintain a TTL for each booking.&lt;br&gt;
Service - Spring Boot.&lt;br&gt;
Producer/Consumer - RabbitMQ.&lt;br&gt;
Pub/Sub - Redis.&lt;/p&gt;

&lt;p&gt;This is easiest to understand by walking through a user journey.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A user initiates the search for a cab to travel from position A to position B.&lt;/li&gt;
&lt;li&gt;Booking Service receives the request and publishes an event to RabbitMQ via a fanout exchange.&lt;/li&gt;
&lt;li&gt;Our new service holds consumers that listen to the above events and save local copies to our database.&lt;/li&gt;
&lt;li&gt;Once a driver is available and allocated to the booking, the Driver Service publishes another event to RabbitMQ via a fanout exchange.&lt;/li&gt;
&lt;li&gt;Our service updates the driver information for the respective booking with a TTL of the estimated driving time plus a buffer of 15 minutes.&lt;/li&gt;
&lt;li&gt;At the same time, our service creates a new subscription on Location Service for the respective driver ID. This way we receive location updates only from drivers who are currently on a trip, not from every driver.&lt;/li&gt;
&lt;li&gt;For every further booking update, we receive an event from Booking Service, and the same is updated in our database.&lt;/li&gt;
&lt;li&gt;From the moment we receive the booking event with the status “PASSENGER_CARRY”, the API responds with a 200 status and the respective body content.&lt;/li&gt;
&lt;li&gt;From the moment we receive the booking event with the status “PASSENGER_DROPPED”, the Redis subscription for the driver ID is deleted and the TTL for the booking is set to 5 minutes. This makes sure the data exists for only 5 more minutes for anyone to track.&lt;/li&gt;
&lt;/ol&gt;
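The TTL rules from steps 5 and 9 above can be sketched as follows. This is a minimal illustration, not code from the actual service; the function and constant names are my own.

```python
from datetime import timedelta

# Illustrative TTL rules; names are not from the actual service.
DRIVE_BUFFER = timedelta(minutes=15)
DROP_OFF_TTL = timedelta(minutes=5)

def booking_ttl(status: str, estimated_driving_time: timedelta) -> timedelta:
    """TTL to set on the booking document after each status event."""
    if status == "PASSENGER_DROPPED":
        # Keep the trip trackable for only 5 more minutes after drop-off.
        return DROP_OFF_TTL
    # While the trip is active: estimated driving time plus a 15-minute buffer.
    return estimated_driving_time + DRIVE_BUFFER
```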

&lt;h3&gt;
  
  
  Learnings:
&lt;/h3&gt;

&lt;p&gt;Make sure you set a max length on your queues to avoid RabbitMQ blowing up when your consumers are erroring out.&lt;br&gt;
Sharding is a priority if you need to scale RabbitMQ. It helps us handle events much faster and perform better during our daily peak times (8 am - 10 am).&lt;br&gt;
Using REST APIs within an event-based architecture is not a great idea. If it is unavoidable, then latency-related complexities, proper re-queuing techniques, and the related error handling need to be evaluated.&lt;br&gt;
Create proper indexes in the document database.&lt;/p&gt;
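The first learning, capping queue length, is usually applied as broker arguments when the queue is declared. A hedged sketch using RabbitMQ's standard x-arguments; the cap value here is made up for illustration, not the one used in production:

```python
# Arguments applied at queue declaration so the queue stays bounded even when
# consumers are erroring out. The 100_000 cap is an illustrative value.
def bounded_queue_arguments(max_messages: int = 100_000) -> dict:
    return {
        "x-max-length": max_messages,     # cap on queued messages
        "x-overflow": "reject-publish",   # refuse new publishes once full
    }
```

Whether to prefer `reject-publish` over the default drop-head overflow behaviour depends on whether losing the oldest or the newest events hurts more for the use case.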

</description>
      <category>eventdriven</category>
      <category>redis</category>
      <category>rabbitmq</category>
    </item>
    <item>
      <title>Customise the swagger-models</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Wed, 26 May 2021 09:52:16 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/customise-the-swagger-models-31gn</link>
      <guid>https://dev.to/rohithmenon89/customise-the-swagger-models-31gn</guid>
      <description>&lt;p&gt;I was trying to customise the Go swagger model generation tool. &lt;/p&gt;

&lt;p&gt;I came up with something that lets us configure our own formats. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/jango89/custom-go-swagger-model-gen"&gt;Custom Golang swagger model generator&lt;/a&gt; &lt;/p&gt;

</description>
      <category>go</category>
      <category>swagger</category>
      <category>codegen</category>
    </item>
    <item>
      <title>Catch before it burst.</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Wed, 26 May 2021 09:47:27 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/catch-before-it-burst-1h7d</link>
      <guid>https://dev.to/rohithmenon89/catch-before-it-burst-1h7d</guid>
      <description>&lt;h2&gt;
  
  
  Redis TTL monitoring and alerting Service
&lt;/h2&gt;

&lt;p&gt;A service that monitors Redis keys created without any Time To Live (TTL) and alerts based on configurations provided by different teams is always good to have. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;:&lt;br&gt;
&lt;strong&gt;NOT FOR PRODUCTION ENVIRONMENTS&lt;/strong&gt;. Preferred only on Testing/Staging environments.&lt;/p&gt;
&lt;h3&gt;
  
  
  Motivation
&lt;/h3&gt;

&lt;p&gt;To stop developers from persisting keys with no TTL to Redis, or at least to alert them soon after.&lt;/p&gt;

&lt;p&gt;Most of the time, the culprit is keys with no Time To Live (TTL). That means the data exists forever until we delete it explicitly.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to use
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Duplicate &lt;code&gt;env.sample&lt;/code&gt; file and fill with values.&lt;/li&gt;
&lt;li&gt;Move this file to &lt;code&gt;services&lt;/code&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;sh run_from_bash.sh&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;A cron job runs once a day; the schedule can be configured in the file called &lt;code&gt;crontab&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Each file in &lt;code&gt;services&lt;/code&gt; folder is picked.&lt;/li&gt;
&lt;li&gt;Evaluate TTL missing keys based on configuration.&lt;/li&gt;
&lt;li&gt;Notify teams about the TTL missing keys.&lt;/li&gt;
&lt;/ol&gt;
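The core of step 3 relies on Redis's TTL convention: the TTL command returns -1 for a key that exists but has no expiry set (and -2 for a missing key). A sketch of that check over a snapshot of keys and their TTLs; the function name is mine, not the service's:

```python
# Redis returns a TTL of -1 for keys that exist without an expiry
# (and -2 for keys that do not exist at all).
def keys_missing_ttl(key_to_ttl: dict) -> list:
    """Given a snapshot mapping key name to TTL seconds, list keys with no TTL."""
    return sorted(key for key, ttl in key_to_ttl.items() if ttl == -1)
```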
&lt;h2&gt;
  
  
  Run Using Docker
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;docker build --build-arg env_host=hostname --build-arg env_port=30363 -t redis-ttl-missing-alert-service .&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker run redis-ttl-missing-alert-service:latest&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Start Container Using Docker
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;docker build --build-arg env_host=hostname --build-arg env_port=30363 -t redis-ttl-missing-alert-service .&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker container create redis-ttl-missing-alert-service:latest&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker start container_name&lt;/code&gt; (replace &lt;code&gt;container_name&lt;/code&gt; with the container ID printed by the previous step)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://github.com/jango89/redis-ttl-missing-alert-service"&gt;Developed with :love&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Catch before it burst.</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Thu, 18 Feb 2021 16:51:57 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/catch-before-it-burst-b9d</link>
      <guid>https://dev.to/rohithmenon89/catch-before-it-burst-b9d</guid>
      <description>&lt;h3&gt;
  
  
  Redis TTL monitoring and alerting Service
&lt;/h3&gt;

&lt;p&gt;A service that monitors Redis keys created without any Time To Live (TTL) and alerts based on configurations provided by different teams. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;:&lt;br&gt;
&lt;strong&gt;NOT FOR PRODUCTION ENVIRONMENTS&lt;/strong&gt;. Preferred only on Testing/Staging environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Motivation
&lt;/h3&gt;

&lt;p&gt;To stop developers from persisting keys with no TTL to Redis, or at least to alert them soon after.&lt;/p&gt;

&lt;p&gt;Most of the time, the culprit is keys with no Time To Live (TTL). That means the data exists forever until we delete it explicitly.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Duplicate &lt;code&gt;env.sample&lt;/code&gt; file and fill with values.&lt;/li&gt;
&lt;li&gt;Move this file to &lt;code&gt;services&lt;/code&gt; folder.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;sh run_from_bash.sh&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;A cron job runs once a day; the schedule can be configured in the file called &lt;code&gt;crontab&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Each file in &lt;code&gt;services&lt;/code&gt; folder is picked.&lt;/li&gt;
&lt;li&gt;Evaluate TTL missing keys based on configuration.&lt;/li&gt;
&lt;li&gt;Notify teams about the TTL missing keys.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Run Using Docker
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;docker build --build-arg env_host=hostname --build-arg env_port=30363 -t redis-ttl-missing-alert-service .&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker run redis-ttl-missing-alert-service:latest&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Start Container Using Docker
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker build --build-arg env_host=hostname --build-arg env_port=30363 -t redis-ttl-missing-alert-service .&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker container create redis-ttl-missing-alert-service:latest&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker start container_name&lt;/code&gt; (replace &lt;code&gt;container_name&lt;/code&gt; with the container ID printed by the previous step)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Example Alert
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3x6XwKbK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4xdj1syo1wfybq33lu55.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3x6XwKbK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4xdj1syo1wfybq33lu55.png" alt="Screenshot 2021-02-18 at 17.50.44" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/jango89/redis-ttl-missing-alert-service"&gt;Developed with :love&lt;/a&gt;&lt;/p&gt;

</description>
      <category>redis</category>
      <category>monitoring</category>
      <category>alerting</category>
    </item>
    <item>
      <title>Database Query Statistics</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Tue, 26 Jan 2021 09:09:52 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/database-query-statistics-21db</link>
      <guid>https://dev.to/rohithmenon89/database-query-statistics-21db</guid>
      <description>&lt;p&gt;I am pretty sure most of us developers face some database performance problem every day. &lt;br&gt;
Ninety percent of the time this is due to missing indexes.&lt;/p&gt;

&lt;p&gt;Otherwise, the query does not use the newly created index and instead scans the whole table.&lt;/p&gt;

&lt;p&gt;Most of the time we will not notice this since the application works perfectly fine for the first few weeks/months and performance gets worse slowly every day as the table grows.&lt;/p&gt;

&lt;p&gt;I would prefer something automated that prompts or notifies us in case queries take longer than expected. This is when I thought of enabling the hibernate-statistics setting and turning on its log. But it does a lot of things in the background and holds on to a lot of non-weak references, which also take a fair amount of JVM memory.&lt;/p&gt;

&lt;p&gt;I thought of implementing a small library that lets us enable exactly what we need and store nothing else. &lt;/p&gt;

&lt;p&gt;My motivations were: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Easy customizations using system properties.&lt;/li&gt;
&lt;li&gt;Injectable even on non-hibernate or non-spring projects by little tweaks.&lt;/li&gt;
&lt;li&gt;Statistics reporting lets the data be sent to Prometheus or similar databases, with alerts to flag queries that need improvement.&lt;/li&gt;
&lt;li&gt;Developers need logs of SQL queries with the time taken to execute.&lt;/li&gt;
&lt;li&gt;Developers can create alerts in Splunk (logging system) or Prometheus when queries take more time, or fetch more rows, than expected.&lt;/li&gt;
&lt;/ol&gt;
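The kind of check described in points 4 and 5 above can be sketched as a small timing wrapper. This is my own illustration of the idea, not the library's actual API; the threshold and names are made up:

```python
import time

# Run a query function, measure its duration, and record a log line when it
# exceeds a threshold; an alerting system (e.g. Splunk or Prometheus via a
# log exporter) would then pick up the "SLOW QUERY" entries.
def timed_query(run_query, threshold_ms: float, log: list):
    start = time.perf_counter()
    result = run_query()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > threshold_ms:
        log.append(f"SLOW QUERY: {elapsed_ms:.0f} ms")
    return result
```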

&lt;p&gt;&lt;a href="https://github.com/jango89/hibernate-minimal-logger"&gt;Library link here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>database</category>
      <category>statistics</category>
      <category>java</category>
      <category>logging</category>
    </item>
    <item>
      <title>Are libraries/frameworks an overkill?</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Wed, 23 Sep 2020 11:42:02 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/are-libraries-frameworks-an-overkill-141j</link>
      <guid>https://dev.to/rohithmenon89/are-libraries-frameworks-an-overkill-141j</guid>
      <description>&lt;p&gt;I am a Java developer who has been working on different products in different industries for the last 9 to 10 years. One thing I found very common is the frameworks and libraries we as developers rely on during development.&lt;br&gt;
Most of the time the frameworks themselves create antipatterns as part of their minor or major releases, often as a hack to nullify the pain caused in earlier releases. Developers will have different opinions on this topic, which I am well aware of. However, to be honest, I found a huge percentage of like-minded people who agree with a lot of these pain points but set them aside in their daily jobs for various unglamorous reasons :).&lt;/p&gt;

&lt;h2&gt;
  
  
  Maintainability hardships.
&lt;/h2&gt;

&lt;p&gt;To be frank, most developers know the typical maintainability hardships, or have at least faced a couple of them during their daily coding. Libraries are developed and open-sourced mostly to help society avoid reinventing the wheel. But the problem is, some wheels are reinvented with only the near future in mind, which is good, but not great for the people opting to use the libraries, because these are the services that need to be maintained for the next 20 years. &lt;br&gt;
We use some libraries to decrease the effort of typing code and increase productivity by letting them generate the static code for us. A sigh of relief, right? &lt;br&gt;
Then tomorrow someone else in your team updates or includes a completely unrelated library, and this breaks the above library or causes a side effect. Now this is not fun anymore. Which brings me to my next sub-topic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migration hardships.
&lt;/h2&gt;

&lt;p&gt;There have been days where I start the morning with a super happy face because my task is to upgrade a dependent library, thinking I will release it in another 10 minutes. Instead, we end up pair programming to figure out which other dependencies now break my tests. At times, after spending a day on it, we revert the changes, decide not to update the library this sprint since it was not productive, and end up never picking the task up again. This adds to the technical debt and also makes some of your colleagues' lives hard in case we switch company or team without leaving proper comments or findings. &lt;/p&gt;

&lt;h2&gt;
  
  
  Dependency hell
&lt;/h2&gt;

&lt;p&gt;Absolute hell! Most of our projects will have tons of dependencies, even though what we needed was just a couple of classes from those libraries. Then each of these dependencies pulls in another hundred of its own. I agree the DRY principle is one of the important concepts everyone needs to keep in mind before coding. However, duplicating a few classes in each of our services instead of including the whole dependency is not a bad idea, is it? Microservices have already adopted duplicating data in their schemas, moving away from the old idea of having data in a single place (one big database). Packaging and build automation tools are used in most projects nowadays, and personally I have worked with Maven a lot. That is one of the reasons I love this plugin: &lt;a href="https://plugins.jetbrains.com/plugin/7179-maven-helper"&gt;https://plugins.jetbrains.com/plugin/7179-maven-helper&lt;/a&gt;, and I highly recommend it. At least it eases the pain to some extent, getting us a little closer to heaven from hell (a dream, though :)). &lt;/p&gt;

&lt;h2&gt;
  
  
  Cleaning up happens rarely
&lt;/h2&gt;

&lt;p&gt;Most of us join a team thinking we will make the code beautiful and make it perform better, using all the fancy words such as reliability, performance, availability, etc. But very few of us delete or clean up libraries that are no longer used. Some of us do not even care, or give them a simple read. However, I cannot complain, since we rarely get time to code every day: 90 percent of the day goes to meetings, and then we end up working overtime to release features on time. &lt;/p&gt;

&lt;h2&gt;
  
  
  Easily available
&lt;/h2&gt;

&lt;p&gt;Most of us rarely think of copying the needed logic from a library instead of pulling it in as a dependency. Even if someone does, pull requests become the real enemy here: there will be comments and declines from most of your teammates asking why you are reinventing the wheel when the library is already available. Most of the time we end up in long, unproductive discussions and finally add the library to the service anyway. At this point, what I used to do is create a single-purpose library including only the necessary code and use that in the service, rather than the entire library with 50+ files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does clean code matter?
&lt;/h2&gt;

&lt;p&gt;Say we follow all the clean code and clean architecture principles: can we really say our service follows them unless we know how the dependent libraries are developed? I, at least, do not think so. However, one could argue that only whatever comes under our responsibility, in plain words whatever we develop or maintain, matters. &lt;br&gt;
Some libraries generate getters and setters for classes via annotations, and we then spread them everywhere and overuse them. Most of us may not know that some of these generated methods have public visibility, forgetting that anyone can then mutate the state of these objects. &lt;/p&gt;

&lt;h2&gt;
  
  
  Who needs Immutability?
&lt;/h2&gt;

&lt;p&gt;I have been working in Java for a long time, and people complain about mutability. Really? Most of the people saying this also endorse a lot of libraries that modify the byte code generated at compile time :). I guess sometimes we need to broaden our mindset and accept the facts. I prefer immutability, and that is one of the reasons I dislike these libraries.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hard to change.
&lt;/h2&gt;

&lt;p&gt;It is quite hard to change or improve things at times. I remember an example: we used the Spring framework along with a dependency that helped us produce and consume events in RabbitMQ. There were scenarios where we needed to switch off our consumers but still produce messages. It was nearly impossible to control, since the consumer implementation had a parent class with a private variable where the threads were initialized. It took us one week to deprecate the whole library and write simple AMQP classes to produce/consume messages. &lt;/p&gt;

&lt;h2&gt;
  
  
  Easy configurations via annotations.
&lt;/h2&gt;

&lt;p&gt;One of the reasons we integrate a library is that it solves our current problem. However, what most of us do not research is how it is solved, and which hundred other problems we could end up with shortly. Some of the fancy libraries need just one line in the package management XML file, and then it is all about creating key-value pairs in the property files (developer-friendly). Easy, right? Believe me, changing those libraries to include your application-specific feature will be hard. Why? Either the library is so tightly coupled to its properties and specific logic, or it is so generic that you need to write code for your specific logic by implementing or extending the library classes. At times, in the end, you will not need the library at all, because you wrote almost all the classes, including most of the business logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our service is 90% framework code and 10% business logic.
&lt;/h2&gt;

&lt;p&gt;I was even thinking of leaving a blank line for this topic. It is understandable since we hear it from everyone during conferences and talks :). &lt;/p&gt;

&lt;h2&gt;
  
  
  Annotations and reflections.
&lt;/h2&gt;

&lt;p&gt;At this point, let me thank God for super-powerful cloud machines. We use profilers to measure the problems in our code and try to improve them. But can we do something about the 100 ms spent inside the framework filters and logic? No. Most frameworks use reflection and annotation processing to support generic implementations. &lt;br&gt;
This is why I always think twice before creating services that do something generic. Do we really need it for the future? The YAGNI principle is so true and admirable. &lt;/p&gt;

&lt;h2&gt;
  
  
  Facades, factories, and what else.
&lt;/h2&gt;

&lt;p&gt;Similarly, one thing I noticed in some libraries is development complexity. Simple business logic sits behind another 10 classes, and finding it is like hunting for treasure. I am a big fan of a simple, understandable codebase rather than all the fancy interfaces and abstract classes written for hypothetical future implementations to make something very generic. Most of the time, we, as developers or users of these libraries, are interested only in the file where the business logic lives and ignore all the other files in the stack hierarchy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Some do reinvent the wheel.
&lt;/h2&gt;

&lt;p&gt;I have some friends working at companies like Revolut, and they have told me about the development framework used within their company. The framework is very slim and helps developers produce more effective code (understanding the concepts before coding) rather than using all the freebie objects dangling around in those fat libraries. They do strict code checks and have detailed discussions before anyone commits new code into their existing frameworks or libraries, to prevent them from becoming another fat jar. Of course, this kind of development strategy means you will see a lot of duplicate code lying around in the services. &lt;/p&gt;

&lt;h2&gt;
  
  
  Libraries and frameworks via GRPC/microservice?
&lt;/h2&gt;

&lt;p&gt;When we talk about a library, I want to see something small, with a maximum of 5 files in it, doing just one thing and built to serve one single purpose. But most of them start by doing one thing and end up supporting a hundred other things, with a hundred classes and around 10k lines of code. Often I think about libraries and frameworks helping us via gRPC, or in extreme cases acting like a microservice. One fine example would be Zalando's Nakadi, which has a very similar implementation for event streaming. &lt;/p&gt;

&lt;h2&gt;
  
  
  High-Level Documentations.
&lt;/h2&gt;

&lt;p&gt;Extreme Programming, as a methodology, tells us to keep static documentation to a minimum and have detailed documentation generated dynamically on every release. Up to an extent, this avoids having outdated information about what the service does. I agree with this, and I like to see libraries with brief documentation on what the library does, what side effects it has, FAQs, etc. &lt;/p&gt;

&lt;h2&gt;
  
  
  Real Test cases.
&lt;/h2&gt;

&lt;p&gt;After documentation, I believe "real test cases" should be the next sub-topic. Having some test classes or files depicting working scenarios helps us understand what to expect as the outcome and what to provide as input. This way, "copying and pasting" (a developer's favourite) the sample code from the test cases makes things super easy and reduces the time spent on integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stop auto-configuring for us.
&lt;/h2&gt;

&lt;p&gt;Some libraries even go to the extent of making us developers sit idle and do nothing. I hate using these libraries. Once we integrate them into our service, they start creating and initializing hundreds of objects, of which only two would be relevant to us. Ninety percent of the time, developers do not care to start the application at TRACE or DEBUG level to see what kinds of objects are being created or lying around, and whether we need them. We only start to analyze them once we end up with an OOM error :) &lt;/p&gt;

&lt;h2&gt;
  
  
  Think twice before including something.
&lt;/h2&gt;

&lt;p&gt;Finally, I would like to emphasize a couple of things. Only include what is necessary, and do not accept the big package delivered to you for free. Integrating is easy, and we end the day thinking we were productive for finishing the task.&lt;br&gt;
But the reality is that you could soon be making your colleagues' lives harder. This write-up is not to say that all libraries are bad or a waste of time, nor to criticize someone's excellent work. I was spitting out my feelings and some hard realities I faced in my past nine years of development. Ending on a positive note: I do adore the libraries that do one thing, smartly and efficiently :). &lt;/p&gt;

</description>
      <category>java</category>
      <category>framework</category>
      <category>library</category>
      <category>learnings</category>
    </item>
    <item>
      <title>Postman-Newman, wherever you go.</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Thu, 17 Oct 2019 10:47:11 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/postman-newman-wherever-you-go-5h1b</link>
      <guid>https://dev.to/rohithmenon89/postman-newman-wherever-you-go-5h1b</guid>
      <description>&lt;p&gt;Postman-Newman&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/postmanlabs"&gt;
        postmanlabs
      &lt;/a&gt; / &lt;a href="https://github.com/postmanlabs/newman"&gt;
        newman
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Newman is a command-line collection runner for Postman
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;I have been writing Postman-Newman tests for a couple of years now.&lt;br&gt;
A new career move gives a software developer new challenges to solve. But most of these new challenges can be resolved using a solution/framework/process he/she is already familiar with. &lt;/p&gt;

&lt;p&gt;In my case, when I moved to &lt;code&gt;FREE NOW&lt;/code&gt; (earlier &lt;code&gt;mytaxi&lt;/code&gt;), there were projects being migrated to Spring Cloud Config. There was documentation on &lt;code&gt;Steps to migrate&lt;/code&gt;, with clearly mentioned do's and don'ts.&lt;/p&gt;

&lt;p&gt;There had been mistakes during past migrations, and projects were being misconfigured. However, software engineers learn from mistakes, so we thought of putting a process in place for identification, then notification, and finally rectification.&lt;/p&gt;

&lt;p&gt;This is when we came up with &lt;code&gt;The Hero&lt;/code&gt; (Postman-Newman).&lt;/p&gt;

&lt;p&gt;The collection includes a bunch of API tests that check whether the configurations are correct.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/jango89"&gt;
        jango89
      &lt;/a&gt; / &lt;a href="https://github.com/jango89/postman-test-validate-spring-cloud-configuration"&gt;
        postman-test-validate-spring-cloud-configuration
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Docker image for validating that the ConnectionFactory created is not overridden for Spring Cloud projects.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h2&gt;
What&lt;/h2&gt;
&lt;p&gt;Postman Tests for validating&lt;br&gt;
1. ConnectionFactory created for spring cloud projects.&lt;br&gt;
2. Test webhook is created in the configuration project.&lt;br&gt;
Base image is - newman-postman.&lt;/p&gt;
&lt;h2&gt;
Why&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;By default, spring cloud migrated projects should have the following property to notify the application about configuration changes:
&lt;code&gt;spring.rabbitmq.host=configbus.mgmt.mytaxi.com&lt;/code&gt;
This docker image has POSTMAN TESTS which run and validate that the prelive and live environments have not overridden this property.
Overriding this property will create issues with notifying config changes to the app.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Spring cloud also needs a webhook in the configuration repository to listen for config file changes.
The test also checks that at least one webhook is present in the configuration repository.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
How&lt;/h2&gt;
&lt;p&gt;Create a task and add it to the DefaultPlan in the bamboo spec if it is missing.&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;
&lt;pre class="notranslate"&gt;&lt;code&gt;private static DockerRunContainerTask getCloudConfigurationPostmanTest()
{
    return new DockerRunContainerTask()
        .description("Check spring.rabbitmq.host mapped to config server")
        .imageName("docker.intapps.it/configservertest:latest")
        .serviceURLPattern("http://localhost:${docker.port}")
        .containerCommand("run api.json --global-var \"servicename=bookingoptionsservice\"")
        .containerWorkingDirectory("/etc/newman")
        .clearVolumeMappings();
}
 private static Stage getDefaultStage()
{&lt;/code&gt;&lt;/pre&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/jango89/postman-test-validate-spring-cloud-configuration"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Why I am so in favor
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;You only need to know the request URI to test.&lt;/li&gt;
&lt;li&gt;Little prior knowledge is required to write assertions. Of course, assertions are written in JavaScript; however, with the effective documentation available, the learning curve is small.&lt;/li&gt;
&lt;li&gt;Easy to plug the tests into various environments (automated build platforms, docker, linux, mac).&lt;/li&gt;
&lt;/ol&gt;
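
&lt;p&gt;As a sketch of that portability, the same collection can run locally via npm or anywhere Docker runs. The collection file name and global variable below simply mirror the bamboo task shown earlier; treat them as placeholders:&lt;/p&gt;

```shell
# Run the collection locally (newman is Postman's command-line runner)
npm install -g newman
newman run api.json --global-var "servicename=bookingoptionsservice"

# Or run it inside the official postman/newman Docker image,
# mounting the collection into the image's /etc/newman working directory
docker run -v "$PWD:/etc/newman" postman/newman run api.json
```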

</description>
      <category>testing</category>
      <category>spring</category>
      <category>postman</category>
      <category>apitest</category>
    </item>
    <item>
      <title>Generate jooq classes using docker containers</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Fri, 16 Aug 2019 13:06:09 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/generate-jooq-classes-using-docker-containers-4g62</link>
      <guid>https://dev.to/rohithmenon89/generate-jooq-classes-using-docker-containers-4g62</guid>
      <description>&lt;p&gt;Tech stack - #java, #maven, #liquibase, #docker&lt;/p&gt;

&lt;h1&gt;
  
  
  Why?
&lt;/h1&gt;

&lt;p&gt;I will talk about what we did to achieve the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Generate jooq-classes from an in-memory or ad-hoc database instead of connecting to prelive/live environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply all the migrations using liquibase before generating jooq-classes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate jooq-classes using the Postgres driver. Jooq supports generating classes by connecting to h2 (an in-memory database), but we mostly use Postgres, and h2 does not support many of the features Postgres has.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Avoid using multiple maven plugins and 100 lines of configuration; instead, use one maven plugin.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  What we did
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Start "Test-containers" during maven pre-compile stage. (&lt;a href="https://www.testcontainers.org/#about"&gt;https://www.testcontainers.org/#about&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Apply liquibase migrations over the test-container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generate jooq-classes for the schema provided.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
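
&lt;p&gt;What the plugin automates can be sketched by hand with plain commands. The container name, credentials and database below are illustrative; &lt;code&gt;liquibase:update&lt;/code&gt; and &lt;code&gt;jooq-codegen:generate&lt;/code&gt; are the standard goals of the Liquibase and jOOQ maven plugins:&lt;/p&gt;

```shell
# 1. Boot a throwaway Postgres container
docker run -d --name jooq-pg -e POSTGRES_PASSWORD=test -p 5432:5432 postgres:11

# 2. Apply the liquibase migrations against it
mvn liquibase:update \
    -Dliquibase.url=jdbc:postgresql://localhost:5432/postgres \
    -Dliquibase.username=postgres -Dliquibase.password=test

# 3. Generate jooq classes from the migrated schema
mvn jooq-codegen:generate

# 4. Throw the container away again
docker rm -f jooq-pg
```

&lt;p&gt;The plugin described here collapses these steps into a single goal bound to the generate-sources phase.&lt;/p&gt;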

&lt;h1&gt;
  
  
  Where can I find it
&lt;/h1&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/jango89"&gt;
        jango89
      &lt;/a&gt; / &lt;a href="https://github.com/jango89/jooqgen-liquibase-postgres"&gt;
        jooqgen-liquibase-postgres
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Maven plugin with jooq, liquibase and postgres
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
What is this&lt;/h1&gt;
&lt;p&gt;Maven plugin which can be integrated into any maven project.
Sample:&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;&amp;lt;plugin&amp;gt;  
    &amp;lt;groupId&amp;gt;com.mytaxi&amp;lt;/groupId&amp;gt;  
     &amp;lt;artifactId&amp;gt;jooqgen-liquibase-postgres&amp;lt;/artifactId&amp;gt;
    &amp;lt;configuration&amp;gt;
        &amp;lt;schema&amp;gt;bookingoptionsservice&amp;lt;/schema&amp;gt; &amp;lt;!-- schema name --&amp;gt;
        &amp;lt;packageName&amp;gt;com.mytaxi.bookingoptionsservice&amp;lt;/packageName&amp;gt; &amp;lt;!-- package to be created for generated classes --&amp;gt;
        &amp;lt;liquibaseChangeLogFile&amp;gt;${liquibase.changeLogFile}&amp;lt;/liquibaseChangeLogFile&amp;gt; 
    &amp;lt;/configuration&amp;gt;
    &amp;lt;executions&amp;gt;
         &amp;lt;execution&amp;gt;
             &amp;lt;phase&amp;gt;generate-sources&amp;lt;/phase&amp;gt;
            &amp;lt;goals&amp;gt;
                 &amp;lt;goal&amp;gt;jooqOverPostgresContainer&amp;lt;/goal&amp;gt;
            &amp;lt;/goals&amp;gt;
        &amp;lt;/execution&amp;gt;
    &amp;lt;/executions&amp;gt;
 &amp;lt;/plugin&amp;gt;      
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h1&gt;
What it does&lt;/h1&gt;
&lt;ol&gt;
&lt;li&gt;Starts a postgres docker container.&lt;/li&gt;
&lt;li&gt;Applies liquibase changes over the container.&lt;/li&gt;
&lt;li&gt;Generates JOOQ classes for the source project connecting to postgres container.&lt;/li&gt;
&lt;/ol&gt;
&lt;h1&gt;
Problems and solutions&lt;/h1&gt;
&lt;p&gt;If generated classes fail to compile,&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;include &lt;code&gt; /target/generated-sources/jooq/&lt;/code&gt; folder to corresponding compiler plugin.&lt;/li&gt;
&lt;li&gt;If kotlin-maven-plugin compilation fails, add
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;&amp;lt;configuration&amp;gt;
    &amp;lt;sourceDirs&amp;gt;
        &amp;lt;source&amp;gt;src/main/java&amp;lt;/source&amp;gt;
        &amp;lt;source&amp;gt;target/generated-sources/jooq&amp;lt;/source&amp;gt;
    &amp;lt;/sourceDirs&amp;gt;
&amp;lt;/configuration&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;If a &lt;code&gt;NoClassDefFoundError&lt;/code&gt; happens, it means the generated class files are missing. Add the following plugin:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt; &amp;lt;plugin&amp;gt;
    &amp;lt;groupId&amp;gt;org.codehaus.mojo&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;build-helper-maven-plugin&amp;lt;/artifactId&amp;gt;
    &amp;lt;executions&amp;gt;
        &amp;lt;execution&amp;gt;
            &amp;lt;phase&amp;gt;generate-sources&amp;lt;/phase&amp;gt;
            &amp;lt;goals&amp;gt;
                &amp;lt;goal&amp;gt;add-source&amp;lt;/goal&amp;gt;
            &amp;lt;/goals&amp;gt;
            &amp;lt;configuration&amp;gt;
                &amp;lt;sources&amp;gt;
                    &amp;lt;source&amp;gt;${project.build.directory}/generated-sources/jooq&amp;lt;/source&amp;gt;
                &amp;lt;/sources&amp;gt;
            &amp;lt;/configuration&amp;gt;
        &amp;lt;/execution&amp;gt;
    &amp;lt;/executions&amp;gt;
 &amp;lt;/plugin&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/jango89/jooqgen-liquibase-postgres"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


</description>
      <category>java</category>
      <category>docker</category>
      <category>liquibase</category>
      <category>jooq</category>
    </item>
    <item>
      <title>Test Liquibase migration changes in the local environment using Docker</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Wed, 05 Jun 2019 09:46:02 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/test-liquibase-migration-changes-in-the-local-environment-using-docker-47jf</link>
      <guid>https://dev.to/rohithmenon89/test-liquibase-migration-changes-in-the-local-environment-using-docker-47jf</guid>
      <description>&lt;p&gt;I am publishing this article because recently found out we had problems in the pre-live environment related to some liquibase migration issue.&lt;/p&gt;

&lt;p&gt;Then I was thinking maybe this could be because of not testing properly in our local environment or not knowing how to do testing itself. &lt;/p&gt;

&lt;p&gt;Unfortunately, some commands use java-maven, but I am sure there are alternatives for these statements.&lt;/p&gt;

&lt;p&gt;Maybe the following could help someone :)&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites:
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Steps to do:
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Docker-compose file content can be copied from here - &lt;a href="https://hub.docker.com/_/postgres"&gt;https://hub.docker.com/_/postgres&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker-compose up&lt;/code&gt; - Runs a container with a Postgres DB inside&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker ps&lt;/code&gt; - Lists the container details&lt;/li&gt;
&lt;li&gt;Export the schema from the prelive or test environment.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker exec -it {replace_with_container_id} bash&lt;/code&gt; - Log in to the container and import the exported schema.&lt;/li&gt;
&lt;li&gt;From your local, open &lt;code&gt;changelog.xml&lt;/code&gt; and comment out changes other than yours.&lt;/li&gt;
&lt;li&gt;Change liquibase configuration inside pom.xml to local postgres configuration.&lt;/li&gt;
&lt;li&gt;Run the command &lt;code&gt;mvn liquibase:update&lt;/code&gt; to see your changes being applied.&lt;/li&gt;
&lt;/ul&gt;
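
&lt;p&gt;Put together, the steps above look roughly like this. Host names, user and database names are placeholders; &lt;code&gt;pg_dump -s&lt;/code&gt; is the standard schema-only export:&lt;/p&gt;

```shell
docker-compose up -d              # Postgres container from the compose file
docker ps                         # note the container id

# Export only the schema (-s) from the prelive database
pg_dump -h prelive-db-host -U myuser -s mydb -f schema.sql

# Copy the dump into the container and import it
docker cp schema.sql {replace_with_container_id}:/tmp/schema.sql
docker exec -it {replace_with_container_id} psql -U postgres -d mydb -f /tmp/schema.sql

# After pointing pom.xml's liquibase config at localhost, apply your changeset
mvn liquibase:update
```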

&lt;h1&gt;
  
  
  Tips for Postgres newbies:
&lt;/h1&gt;

&lt;p&gt;Once you log in to the docker container using the &lt;code&gt;docker exec&lt;/code&gt; step mentioned above,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;execute command &lt;code&gt;psql -U postgres&lt;/code&gt; - "postgres" is the username used in the docker-compose content.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;\c {db_name} - connect to a db&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;\dt - show tables in that database&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;\d {table_name} - describe table&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you for reading and I am happy if I could help someone.&lt;/p&gt;

</description>
      <category>liquibase</category>
      <category>docker</category>
      <category>migration</category>
      <category>postgresnewbies</category>
    </item>
    <item>
      <title>Elasticsearch(ES) and the hardships</title>
      <dc:creator>Rohith Kunnath</dc:creator>
      <pubDate>Fri, 20 Apr 2018 14:35:35 +0000</pubDate>
      <link>https://dev.to/rohithmenon89/elasticsearches-and-the-hardships-4hcc</link>
      <guid>https://dev.to/rohithmenon89/elasticsearches-and-the-hardships-4hcc</guid>
      <description>&lt;p&gt;I have been working as a Backend Developer for a CRM industry where it is all about searching :). Yep, you are correct, its a system with a lot of data table columns. So the backend framework chosen to support this highly customizable search was Java with spring integration. Yes, your thoughts are right, we chose Elasticsearch(ES) as our datastore. As always there was up votes and down votes, but it was the right decision in the end. It was four years before and the latest stable version was 1.7. Oops completely forgot, let me talk a bit about ES. &lt;/p&gt;

&lt;p&gt;ES is an open-source search engine based on Lucene. It is certainly not a primary data store, but it is efficient for systems with searches everywhere. ES is easily scalable. Elasticsearch stores metadata about an index (consider this like a table in SQL) and the data itself in files: &lt;a href="https://www.elastic.co/blog/found-dive-into-elasticsearch-storage"&gt;How is it stored in ES&lt;/a&gt;. Nodes are nothing but your servers, which together are called a cluster. Nodes help to keep replicas of your data. ES keeps data in the form of JSON documents. If you want to understand more about the basic ES terms, &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/6.2/_basic_concepts.html"&gt;visit here.&lt;/a&gt;&lt;/p&gt;
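
&lt;p&gt;For a feel of the JSON-in, JSON-out model, here is a sketch using the classic 1.x-era HTTP API (index, type and field names are made up for illustration):&lt;/p&gt;

```shell
# Store a JSON document in the "customers" index, then fetch it back
curl -XPUT 'http://localhost:9200/customers/customer/1' -d '
{ "name": "Jane Doe", "city": "Hamburg" }'

curl -XGET 'http://localhost:9200/customers/customer/1'
```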

&lt;p&gt;Let us get back to what I am actually supposed to talk about. Maintaining ES is really hard and costly. What I mean is, since most startups do not have a so-called DevOps mechanism with auto-scaling and other cool stuff, as the data grows we need to boot up machines or nodes ourselves to keep ES stable. Otherwise we will see a "Heap Space Error", since ES is built entirely in Java, and yeah, Java developers would love this error :). &lt;/p&gt;

&lt;h3&gt;Real Developer Hardships&lt;/h3&gt;


&lt;ol&gt;
&lt;li&gt;Welcome to the real world of indexing. In elasticsearch you will always hear about indexing. Indexing is nothing but writing your data to ES; it covers both create and update operations in the system. There are strategies for indexing, and they should be carefully configured in ES; read-heavy and write-heavy systems need different configurations. Indexing eats your system RAM like an Indian eats spicy food (I just love it). A full reindex is a heavy operation and should be carried out carefully, avoiding peak hours. &lt;/li&gt;

&lt;li&gt;No parent-child relationship in version 1.7. (Yes, in later versions you can maintain relations between indexes in ES.) How did we resolve it? We had to store the child JSON data inside the parent as well. This was a kind of duplication, because we had a separate index for the child data but still needed to keep it in the parent index. Why keep it in the parent index again? Because the system's searches happen on the parent page by filtering on child data. The alternative is doing separate queries on each index and then linking the results together, but then the pagination needs to be customized, and your &lt;a href="https://en.wikipedia.org/wiki/Minimum_viable_product"&gt;MVP&lt;/a&gt; strategy will not work (you cannot release soon because of issues all over). &lt;/li&gt;

&lt;li&gt;Awwww, then come the field analyzers and index analyzers. Whoever says ES is easy, screw them. No, I am just joking. But we had a hard time with the metadata configuration. In ES, data is stored either analyzed or not analyzed. For example, if you want a field to support wildcard searches, you need to store it as lowercase strings. If you want the search to work with spaces, you need other analyzers, and the list goes on. The worst part is that you cannot change an analyzer on an existing index directly; you can only add it to a new index. So we need to reindex the data.&lt;/li&gt;

&lt;li&gt;Say your client comes up with something new: they want the search to work with special characters. Now you need to create special analyzers and add them to your metadata. Yes, you can just close the index and add them. But if you want to assign the newly created analyzer to an existing field, you cannot do it. So again, back to square one: create a new index and reindex the data. &lt;/li&gt;

&lt;li&gt;Now, why not store every field with an analyzer in the first place? You should not do this, because searching and indexing take more time on analyzed fields. So always be careful when assigning analyzers to fields.&lt;/li&gt;

&lt;li&gt;Aggregations are the best part of ES; "group by" is the closest synonym I can find. People love to do aggregations in ES, but they can even break the system: aggregation is a very heavy operation that eats up a lot of RAM, so it should be carefully built and used. Most recommendation systems prefer to use it nowadays.&lt;/li&gt;

&lt;li&gt;
&lt;a href="https://www.npmjs.com/package/elasticdump"&gt;Elasticdump&lt;/a&gt; is the nicest library for smooth releases and reindexing. I remember using this at-least 3  times a month.&lt;/li&gt;

&lt;li&gt;Prefer query builders rather than filters. Filters are only applied after fetching data from the nodes, so yes, you are right, they are much more time-consuming.&lt;/li&gt;
&lt;/ol&gt;
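
&lt;p&gt;The analyzer and reindex pains above look roughly like this on the wire. Index, type, analyzer and field names are illustrative; &lt;code&gt;not_analyzed&lt;/code&gt; and custom analyzers are 1.x-era mapping settings, and the elasticdump flags are its standard ones:&lt;/p&gt;

```shell
# A new index with a lowercase keyword analyzer has to be created up front;
# an existing index's field analyzers cannot simply be swapped in place.
curl -XPUT 'http://localhost:9200/contacts_v2' -d '
{
  "settings": {
    "analysis": {
      "analyzer": {
        "lowercase_keyword": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "contact": {
      "properties": {
        "name":  { "type": "string", "analyzer": "lowercase_keyword" },
        "email": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'

# ...then the old data has to be copied over, e.g. with elasticdump
elasticdump --input=http://localhost:9200/contacts \
            --output=http://localhost:9200/contacts_v2 --type=data
```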


&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;Upgrade to the latest versions as soon as possible. The sooner you upgrade, the better and faster your release process will be. Even though all these issues occurred, I loved working with ES and still love working with it.&lt;/p&gt;

</description>
      <category>elasticsearch</category>
      <category>es</category>
      <category>oldversion</category>
      <category>java</category>
    </item>
  </channel>
</rss>
