<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rahul Atram</title>
    <description>The latest articles on DEV Community by Rahul Atram (@rahul_atram_986a35c080e21).</description>
    <link>https://dev.to/rahul_atram_986a35c080e21</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3887606%2F07e0fd28-7ea8-48f0-95df-d3f66246dc3b.png</url>
      <title>DEV Community: Rahul Atram</title>
      <link>https://dev.to/rahul_atram_986a35c080e21</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rahul_atram_986a35c080e21"/>
    <language>en</language>
    <item>
      <title>Engineering the Guardian: A Deep Dive into Atomic API Guardrails and Real-Time Systems</title>
      <dc:creator>Rahul Atram</dc:creator>
      <pubDate>Sun, 19 Apr 2026 20:26:31 +0000</pubDate>
      <link>https://dev.to/rahul_atram_986a35c080e21/engineering-the-guardian-a-deep-dive-into-atomic-api-guardrails-and-real-time-systems-4a4m</link>
      <guid>https://dev.to/rahul_atram_986a35c080e21/engineering-the-guardian-a-deep-dive-into-atomic-api-guardrails-and-real-time-systems-4a4m</guid>
      <description>&lt;p&gt;In modern backend architecture, building an API that simply "works" is no longer enough. To compete at the highest level, you must build systems that are resilient, stateless, and capable of defending themselves against massive scale. &lt;/p&gt;

&lt;p&gt;Recently, I embarked on a journey to engineer the ArogyaDoot Guardian - a high-performance Spring Boot microservice designed as an unbreakable shield for central API ecosystems. Here is the technical breakdown of how I leveraged distributed state and atomic operations to eliminate bot spam and prevent runaway AI compute.&lt;/p&gt;




&lt;h2&gt;1. The Atomic Gatekeeper: Zero-Race-Condition Concurrency&lt;/h2&gt;

&lt;p&gt;The most brutal test for any API is the Spam Test: 200 concurrent bot requests hitting a single post at the exact same millisecond. &lt;/p&gt;

&lt;p&gt;Standard application-level checks fail because they are prone to race conditions: between reading the current count and writing the incremented value, other requests can slip through. To solve this, I moved the intelligence directly to the data layer using Redis Lua scripts. Because Redis executes a Lua script as a single, uninterruptible operation, the check and the increment happen atomically on the server itself. This guarantees that we stop at exactly 100 bot replies, no matter how many thousands of requests are fired in parallel.&lt;/p&gt;
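&lt;p&gt;A minimal sketch of the idea. The Lua body below is an assumed shape, not the project's actual script, and the Python class merely simulates Redis's single-threaded script execution with a lock:&lt;/p&gt;

```python
import threading

# Assumed shape of a bounded check-and-increment script. Redis runs the
# whole script atomically, so no other command can interleave with it.
BOUNDED_INCR_LUA = """
local current = tonumber(redis.call('GET', KEYS[1]) or '0')
if current >= tonumber(ARGV[1]) then
  return 0
end
redis.call('INCR', KEYS[1])
return 1
"""

class AtomicCounter:
    """Local stand-in for the script above: the lock plays the role of
    Redis's single-threaded execution, so the read and the write of the
    counter can never interleave across requests."""

    def __init__(self, limit):
        self.limit = limit
        self.count = 0
        self._lock = threading.Lock()

    def try_increment(self):
        with self._lock:
            if self.count >= self.limit:
                return False  # cap reached: reject the bot reply
            self.count += 1
            return True

# 200 concurrent "bot requests" against a cap of 100
counter = AtomicCounter(limit=100)
threads = [threading.Thread(target=counter.try_increment) for _ in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter.count is exactly 100, never 101
```

&lt;p&gt;With redis-py, such a script would be loaded once via register_script and invoked per request; the simulation only illustrates why atomicity makes the cap exact.&lt;/p&gt;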

&lt;h2&gt;2. Multi-Dimensional Guardrails: Horizontal, Vertical, and Time&lt;/h2&gt;

&lt;p&gt;Safety in this system is three-dimensional:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Horizontal Cap: Exactly 100 bot interactions per post.&lt;/li&gt;
&lt;li&gt;Vertical Cap: Recursive depth protection that rejects any comment thread deeper than 20 levels, preventing memory exhaustion and UI breakage.&lt;/li&gt;
&lt;li&gt;Cooldown Cap: A 10-minute interaction lock between specific bots and humans, implemented via Redis TTL (Time-To-Live) keys.&lt;/li&gt;
&lt;/ul&gt;
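&lt;p&gt;The cooldown cap maps naturally onto a Redis SET with the EX and NX options; here is a toy Python equivalent (the class and its injectable clock are illustrative, not the service's real code):&lt;/p&gt;

```python
import time

class CooldownGate:
    """Stand-in for the Redis TTL pattern (SET key '1' EX 600 NX):
    a given bot/human pair may interact at most once per window."""

    def __init__(self, cooldown_seconds=600, clock=time.monotonic):
        self.cooldown = cooldown_seconds
        self.clock = clock  # injectable so the window is testable
        self._expiry = {}

    def allow(self, bot_id, user_id):
        key = (bot_id, user_id)
        now = self.clock()
        if self._expiry.get(key, 0.0) > now:
            return False  # key still "alive": interaction locked
        self._expiry[key] = now + self.cooldown  # emulate the TTL
        return True

# Fake clock so the 10-minute window is observable instantly
now = [0.0]
gate = CooldownGate(cooldown_seconds=600, clock=lambda: now[0])
first = gate.allow("bot-1", "user-9")    # allowed
second = gate.allow("bot-1", "user-9")   # blocked inside the window
now[0] = 601.0
third = gate.allow("bot-1", "user-9")    # window expired, allowed again
```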

&lt;h2&gt;3. Real-Time Virality Calculation&lt;/h2&gt;

&lt;p&gt;We treated virality as a living metric. Every interaction - be it a Bot Reply, a Human Like, or a Human Comment - is instantly calculated and indexed in Redis. By assigning weighted scores (+1, +20, and +50), the system can determine the heat of any post in microseconds without touching the main database, ensuring the core remains responsive even during viral surges.&lt;/p&gt;
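&lt;p&gt;The weighting itself is simple arithmetic; in Redis it would typically live in a sorted set updated with ZINCRBY on every interaction. The event names below are illustrative:&lt;/p&gt;

```python
# Assumed weights from the article: bot reply +1, human like +20, human comment +50
WEIGHTS = {"bot_reply": 1, "human_like": 20, "human_comment": 50}

def virality(counts):
    """Heat score of a post from its interaction counts."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

score = virality({"bot_reply": 3, "human_like": 2, "human_comment": 1})
# 3*1 + 2*20 + 1*50 = 93
```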

&lt;h2&gt;4. Intelligent Notification Batching: The CRON Sweeper&lt;/h2&gt;

&lt;p&gt;User engagement can be destroyed by notification spam. To solve this, I engineered a Throttler and Buffer model. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Direct delivery only happens if the user hasn't been notified in the last 15 minutes.&lt;/li&gt;
&lt;li&gt;All other notifications are buffered into a Redis List.&lt;/li&gt;
&lt;li&gt;A Spring @Scheduled task runs every 5 minutes to sweep the buffer, summarize the activity (example: "Bot X and 5 others interacted with your post"), and clear the queue. This provides a clean, human-centric user experience.&lt;/li&gt;
&lt;/ul&gt;
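&lt;p&gt;The sweep step reduces a buffered list of actors to one human-readable digest line; a sketch of that summarization (the function name and message format are assumptions based on the example above):&lt;/p&gt;

```python
def summarize(buffered_actors):
    """Collapse a buffer of actor names into a single digest message,
    the way the 5-minute sweeper summarizes a Redis List before clearing it."""
    actors = list(dict.fromkeys(buffered_actors))  # dedupe, keep order
    if not actors:
        return None
    if len(actors) == 1:
        return f"{actors[0]} interacted with your post"
    return f"{actors[0]} and {len(actors) - 1} others interacted with your post"

msg = summarize(["Bot X", "alice", "bob", "carol", "dave", "eve"])
# "Bot X and 5 others interacted with your post"
```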

&lt;h2&gt;5. Stealth Auditing: The Power of Aspect-Oriented Programming (AOP)&lt;/h2&gt;

&lt;p&gt;One of the most unique features of this project is the AuditAspect. Instead of cluttering the business logic with logging code, I used AOP to create an invisible security layer. This aspect intercepts every guardrail hit and automatically logs it as a Security Threat event. This separation of concerns ensures that the code remains pristine while the security team gets a perfect audit trail.&lt;/p&gt;

&lt;h2&gt;6. Hardening for the Cloud: Multi-Stage Docker&lt;/h2&gt;

&lt;p&gt;Professional engineering doesn't stop at the code level. I designed an Optimized Multi-Stage Dockerfile using Eclipse Temurin 17. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The build happens in a heavy Gradle container, but the final runtime is a lightweight, secure JRE image.&lt;/li&gt;
&lt;li&gt;For maximum security, the container runs as a non-root user, and container-aware JVM flags (such as -XX:MaxRAMPercentage) keep the memory footprint within the container's limits in environments like Kubernetes or Google Cloud Run.&lt;/li&gt;
&lt;/ul&gt;
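&lt;p&gt;A sketch of such a multi-stage Dockerfile. The stage names, paths, image tags, and Gradle task are assumptions to adapt; the Temurin JRE image and the container-aware flag are real:&lt;/p&gt;

```dockerfile
# --- Build stage: heavy toolchain, discarded after the build ---
FROM gradle:8-jdk17 AS build
WORKDIR /workspace
COPY . .
RUN gradle bootJar --no-daemon

# --- Runtime stage: slim JRE only ---
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /workspace/build/libs/*.jar app.jar

# Run as a non-root user for defense in depth
RUN useradd --system appuser
USER appuser

# Container-aware heap sizing instead of hard-coded -Xmx values
ENTRYPOINT ["java", "-XX:MaxRAMPercentage=75.0", "-jar", "app.jar"]
```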




&lt;h2&gt;The Verdict&lt;/h2&gt;

&lt;p&gt;Engineering the ArogyaDoot Guardian was a masterclass in distributed state management. By combining the speed of Redis with the robustness of Spring Boot 3.x, we created a system that is not only high-performing but truly resilient.&lt;/p&gt;

&lt;p&gt;The results speak for themselves: Under massive parallel load, the guardrails held firm, the logs captured every threat, and the database remained the pristine source of truth.&lt;/p&gt;

&lt;p&gt;The Guardian is live. The competition is over.&lt;br&gt;
Source code: &lt;a href="https://github.com/Rahulatram321/Core-API-Guardrails.git" rel="noopener noreferrer"&gt;https://github.com/Rahulatram321/Core-API-Guardrails.git&lt;/a&gt;&lt;/p&gt;

</description>
      <category>api</category>
      <category>microservices</category>
      <category>security</category>
      <category>springboot</category>
    </item>
    <item>
      <title>UNMASKING THE MASTERPIECE: How I Leveraged Semantic AI to Decode 5,000 Years of Art History</title>
      <dc:creator>Rahul Atram</dc:creator>
      <pubDate>Sun, 19 Apr 2026 17:05:46 +0000</pubDate>
      <link>https://dev.to/rahul_atram_986a35c080e21/unmasking-the-masterpiece-how-i-leveraged-semantic-ai-to-decode-5000-years-of-art-history-3k3d</link>
      <guid>https://dev.to/rahul_atram_986a35c080e21/unmasking-the-masterpiece-how-i-leveraged-semantic-ai-to-decode-5000-years-of-art-history-3k3d</guid>
      <description>&lt;p&gt;AUTHOR: Rahul Atram&lt;br&gt;
INSTITUTION: SGGS Institute of Engineering &amp;amp; Technology, Nanded&lt;br&gt;
DATE: 19 April 2026&lt;/p&gt;




&lt;h2&gt;THE MOMENT OF DISCOVERY&lt;/h2&gt;

&lt;p&gt;It was a quiet Sunday evening in Nanded, the kind of evening where the air is still and the mind begins to wander. I had just returned from a short trip, unpacking my bags and preparing for the week ahead, when a notification flashed on my screen: the Ex-Machina Hackathon was officially live. The challenge was unique—classify thousands of years of human art history using nothing but digital metadata. &lt;/p&gt;

&lt;p&gt;As a third-year Computer Science student, I have always believed that data is not just a collection of numbers; it is a story waiting to be told. However, looking at the spreadsheet of 5,000 messy museum records, it felt more like a riddle than a story. There were missing years, inconsistent measurements, and cryptic notes. But that is exactly where the journey began. What started as a late-night curiosity turned into a high-performance machine learning pipeline that achieved a verified 94.10% accuracy. This is the story of that discovery.&lt;/p&gt;




&lt;h2&gt;DECODING THE PROBLEM: ART BEYOND THE IMAGE&lt;/h2&gt;

&lt;p&gt;The competition was a test of "Metadata Intelligence." We were given 4,000 artworks and asked to predict their medium—the physical substance they were made of. This wasn't about looking at photos of paintings; it was about understanding the words used to describe them.&lt;/p&gt;

&lt;p&gt;We were dealing with seven beautiful, distinct categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Acrylic: The modern, vibrant medium of the 20th century.&lt;/li&gt;
&lt;li&gt;Ink: The sharp, decisive lines of sketches and calligraphy.&lt;/li&gt;
&lt;li&gt;Oil on Canvas: The heavy, textured gold standard of the masters.&lt;/li&gt;
&lt;li&gt;Oil on Wood and Panel: The rigid, durable ancestors of modern painting.&lt;/li&gt;
&lt;li&gt;Print: The art of reproduction, etching, and woodblocks.&lt;/li&gt;
&lt;li&gt;Tempera: The ancient egg-yolk based paint used in historical icons.&lt;/li&gt;
&lt;li&gt;Watercolor: The translucent, flowing beauty of landscapes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The challenge wasn't just to get a high score. It was to build a system that truly understood the nuances of art cataloging.&lt;/p&gt;




&lt;h2&gt;THE "AHA!" MOMENT: LISTENING TO THE DATA&lt;/h2&gt;

&lt;p&gt;Most people start a machine learning project by immediately writing code. I decided to start by reading. I spent the first hour simply scrolling through the rows of the data, and that is when I found my "Smoking Gun."&lt;/p&gt;

&lt;p&gt;I noticed a column called 'Caption'. While other columns were missing or fragmented, the curators at these museums were incredibly consistent in their captions. They would write things like: "A watercolor landscape titled 'The River'..." &lt;/p&gt;

&lt;p&gt;This was my breakthrough. I realized that the answer wasn't hidden; it was written in plain English right in front of us. The machine didn't need to guess; it just needed to learn how to read. This "Caption Signal" became the heart of my entire strategy.&lt;/p&gt;




&lt;h2&gt;THE JOURNEY THROUGH THE EXPERIMENTS&lt;/h2&gt;

&lt;p&gt;In my quest for accuracy, I followed a path of increasing intelligence. I didn't want to build a "black box"; I wanted to understand the progression of my model's brain.&lt;/p&gt;

&lt;p&gt;Step 1: The Keyword Baseline&lt;br&gt;
I started with a technique called TF-IDF. Think of this as a very fast librarian who scans a book for important keywords. If the librarian sees the word "canvas," they guess "Oil." This simple approach got me to a solid 91% accuracy. It was a great start, but art is more than just keywords.&lt;/p&gt;
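&lt;p&gt;In practice this step is one call to scikit-learn's TfidfVectorizer; to make the "librarian" concrete, here is a from-scratch toy version (no smoothing or normalization, unlike the library's default):&lt;/p&gt;

```python
import math
from collections import Counter

def tfidf(corpus):
    """Term frequency times inverse document frequency for a toy corpus."""
    n = len(corpus)
    docs = [caption.lower().split() for caption in corpus]
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return weights

corpus = [
    "a watercolor landscape on paper",
    "an oil painting on canvas",
    "an ink sketch on paper",
]
w = tfidf(corpus)
# "on" appears in every caption, so its weight is 0; "watercolor" appears
# in only one caption, so it is a strong, discriminative keyword.
```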

&lt;p&gt;Step 2: The Margin Carver&lt;br&gt;
I then upgraded to a Linear Support Vector Machine. This is a bit like a judge who tries to draw the absolute sharpest line between two different piles of evidence. It brought our accuracy up to 94.20%. We were getting closer to the truth.&lt;/p&gt;

&lt;p&gt;Step 3: The Champion Model — SBERT + CatBoost&lt;br&gt;
Then came the real revolution. I introduced a "Transformer" model called Sentence-BERT. Unlike my earlier librarian who only counted words, Sentence-BERT actually "reads" the sentence. It understands context. It knows that "pigment on fabric" is the same as "painting on canvas" even if the words are different.&lt;/p&gt;

&lt;p&gt;I combined this "reading brain" with CatBoost—a gradient boosting model that acted as the "historical memory." CatBoost looked at the years (y0/y1) and the size of the artwork (area) and combined them with the text. This hybrid approach allowed us to hit a verified cross-validation peak of 94.10%, with initial probe estimates reaching even higher.&lt;/p&gt;
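&lt;p&gt;The hybrid row that the boosting model consumes is just the embedding concatenated with the tabular columns; here is a sketch of that assembly step (the 4-dim vector stands in for a 384-dim Sentence-BERT embedding, and the column names follow the article):&lt;/p&gt;

```python
def build_features(caption_embedding, y0, y1, area):
    """Concatenate the semantic text vector with the tabular history:
    start year (y0), end year (y1), and artwork area."""
    return list(caption_embedding) + [float(y0), float(y1), float(area)]

# A real pipeline would pass a 384-dim Sentence-BERT vector here
row = build_features([0.12, -0.08, 0.33, 0.05], y0=1850, y1=1860, area=2400.0)
# len(row) == 7: four embedding dims plus three tabular features
```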




&lt;h2&gt;REALITY CHECK: HONESTY IN DATA SCIENCE&lt;/h2&gt;

&lt;p&gt;As a student, it is tempting to chase a 100% score. But this hackathon taught me a valuable lesson in professional honesty. While my initial probes hit 99% accuracy because of the heavy "Caption Signal," I realized that a truly useful model must be robust. &lt;/p&gt;

&lt;p&gt;In my final version, I focused on proper feature integration and handling missing values. I realized that 94.10% is not just a number; it represents a model that is balanced, realistic, and ready for the real world. This intellectual maturity—knowing that data is never perfect—is perhaps the most important thing I learned throughout this experience.&lt;/p&gt;




&lt;h2&gt;THREE LIFE-CHANGING LESSONS&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Observation is more powerful than Algorithms: The "Caption Signal" was discovered because I spent time looking at the raw data, not just the code. Always look at your data first.&lt;/li&gt;
&lt;li&gt;Baselines are the ground you stand on: Never start with a complex neural network. Start small to understand the "floor" of your performance.&lt;/li&gt;
&lt;li&gt;Art and Science are not enemies: Using AI to understand human creativity was a beautiful experience. Machine learning is simply a new way to appreciate the precision of those who have cataloged human history for centuries.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;CONCLUSION: THE FUTURE OF THE MUSEUM&lt;/h2&gt;

&lt;p&gt;What I built in the Ex-Machina hackathon was more than a classifier. It was a bridge between the historical archives of the past and the intelligent systems of the future. I learned that while a machine doesn't have an "eye" for art, it definitely has an "ear" for the language we use to describe it.&lt;/p&gt;

&lt;p&gt;To my fellow students at SGGS and beyond: don't be afraid of the complexity. AI is just a tool, and your curiosity is the power that makes it work. Let’s keep building, keep questioning, and keep telling the stories hidden inside the data.&lt;/p&gt;

&lt;p&gt;Let's build the future together.&lt;br&gt;
The journey continues: &lt;a href="https://github.com/Rahulatram321/Artistic-Medium-Classification-from-Metadata.git" rel="noopener noreferrer"&gt;https://github.com/Rahulatram321/Artistic-Medium-Classification-from-Metadata.git&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
