<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Machine coding Master</title>
    <description>The latest articles on DEV Community by Machine coding Master (@machinecodingmaster).</description>
    <link>https://dev.to/machinecodingmaster</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3894844%2F09f3cafa-c542-4beb-8efa-72045647d766.png</url>
      <title>DEV Community: Machine coding Master</title>
      <link>https://dev.to/machinecodingmaster</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/machinecodingmaster"/>
    <language>en</language>
    <item>
      <title>Stop Hand-Rolling AI Glue: Claude 4 MCP + Spring AI is the New Enterprise Standard</title>
      <dc:creator>Machine coding Master</dc:creator>
      <pubDate>Sun, 26 Apr 2026 05:26:11 +0000</pubDate>
      <link>https://dev.to/machinecodingmaster/stop-hand-rolling-ai-glue-claude-4-mcp-spring-ai-is-the-new-enterprise-standard-1koc</link>
      <guid>https://dev.to/machinecodingmaster/stop-hand-rolling-ai-glue-claude-4-mcp-spring-ai-is-the-new-enterprise-standard-1koc</guid>
      <description>&lt;h2&gt;
  
  
  Stop Hand-Rolling AI Glue: Claude 4 MCP + Spring AI is the New Enterprise Standard
&lt;/h2&gt;

&lt;p&gt;In 2026, if you are still building proprietary wrappers to connect LLMs to your microservices, you are creating technical debt that will haunt your team for years. The industry has consolidated around Claude 4 and the Model Context Protocol (MCP) because together they finally bring the type safety and standardized discovery we’ve lacked in the "prompt engineering" era.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Most Developers Get This Wrong
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Schema Drift:&lt;/strong&gt; Treating LLM tools as loosely typed REST calls, leading to Claude 4 hallucinating parameter types and crashing your downstream services.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Over-Orchestration:&lt;/strong&gt; Building complex "Agent Frameworks" from scratch instead of using Spring AI’s native MCP client to handle the JSON-RPC over SSE handshake.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Security Blindness:&lt;/strong&gt; Exposing raw database access to the model instead of defining granular &lt;code&gt;McpResource&lt;/code&gt; boundaries via the Java SDK.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Right Way
&lt;/h2&gt;

&lt;p&gt;The goal is to move from "LLM-as-a-chatbot" to "LLM-as-a-typed-runtime" by exposing your domain logic through a standardized MCP Server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Use the &lt;code&gt;spring-ai-mcp-boot-starter&lt;/code&gt; to automatically expose Spring beans as discoverable tools.&lt;/li&gt;
&lt;li&gt;  Enforce contract-first development by defining your tool inputs as Java &lt;code&gt;Records&lt;/code&gt;—the MCP SDK uses these to generate the JSON-RPC schemas Claude 4 requires.&lt;/li&gt;
&lt;li&gt;  Implement &lt;code&gt;McpResourceProvider&lt;/code&gt; for read-only data (like logs or docs) and reserve &lt;code&gt;McpTool&lt;/code&gt; strictly for side-effect-heavy operations.&lt;/li&gt;
&lt;li&gt;  Standardize on Server-Sent Events (SSE) for the transport layer to bypass the overhead of persistent WebSockets in serverless environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Show Me The Code
&lt;/h2&gt;

&lt;p&gt;This is how you define a type-safe bridge between Claude 4 and your enterprise inventory system using the 2026 Spring AI MCP SDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Configuration&lt;/span&gt;
&lt;span class="nd"&gt;@EnableMcpServer&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"Inventory-Service"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"2.4.0"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;McpEnterpriseConfig&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;McpTool&lt;/span&gt; &lt;span class="nf"&gt;stockChecker&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;InventoryService&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;McpTool&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;builder&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"query_erp_stock"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Get real-time SKU availability from SAP/Oracle"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;inputSchema&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;StockRequest&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// Auto-generates JSON-RPC schema&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;fetch&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;skuId&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;warehouseCode&lt;/span&gt;&lt;span class="o"&gt;()))&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Claude 4 now sees a strictly typed 'query_erp_stock' tool via SSE&lt;/span&gt;
&lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="nf"&gt;StockRequest&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;skuId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;warehouseCode&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Decoupling:&lt;/strong&gt; MCP is the "JDBC for LLMs"—it separates your model logic from your data infrastructure.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Type Safety:&lt;/strong&gt; Using Java Records with MCP ensures Claude 4 never sends a string where your service expects an integer.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Discovery:&lt;/strong&gt; Claude 4 agents can now "introspect" your entire service capability at runtime, eliminating the need for manual prompt updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Want to go deeper? &lt;a href="https://javalld.com" rel="noopener noreferrer"&gt;javalld.com&lt;/a&gt; — machine coding interview problems with working Java code and full execution traces.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>java</category>
      <category>ai</category>
      <category>llm</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Java LLD: Designing a High-Concurrency Twitter Feed</title>
      <dc:creator>Machine coding Master</dc:creator>
      <pubDate>Sat, 25 Apr 2026 05:00:32 +0000</pubDate>
      <link>https://dev.to/machinecodingmaster/java-lld-designing-a-high-concurrency-twitter-feed-17cg</link>
      <guid>https://dev.to/machinecodingmaster/java-lld-designing-a-high-concurrency-twitter-feed-17cg</guid>
      <description>&lt;h2&gt;
  
  
  Java LLD: Designing a High-Concurrency Twitter Feed
&lt;/h2&gt;

&lt;p&gt;Designing a Twitter-style news feed is a staple of FAANG machine coding interviews. It forces you to demonstrate a deep understanding of read-vs-write trade-offs and thread-safe data structures in a high-scale environment.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I built &lt;a href="https://javalld.com" rel="noopener noreferrer"&gt;javalld.com&lt;/a&gt; while prepping for senior roles — complete LLD problems with execution traces, not just theory.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Mistake Most Candidates Make
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;The Pull Model Trap:&lt;/strong&gt; Querying the database for all followed users' tweets and sorting them by timestamp on every single read request. This results in $O(N \log N)$ read latency, which kills performance as the social graph grows.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Ignoring Concurrency:&lt;/strong&gt; Using standard collections like &lt;code&gt;ArrayList&lt;/code&gt; or &lt;code&gt;HashMap&lt;/code&gt; for feeds, leading to &lt;code&gt;ConcurrentModificationException&lt;/code&gt; when one thread posts a tweet while another reads the feed.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Lack of Bound Checks:&lt;/strong&gt; Failing to implement a sliding window or capacity limit for in-memory feeds, eventually leading to &lt;code&gt;OutOfMemoryError&lt;/code&gt; as the system scales.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Right Approach
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Core Mental Model:&lt;/strong&gt; Use a "Push-based" (Fan-out on Write) architecture where a tweet is delivered to every follower's pre-computed timeline at the moment it is posted.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Key Entities:&lt;/strong&gt; &lt;code&gt;User&lt;/code&gt;, &lt;code&gt;Tweet&lt;/code&gt;, &lt;code&gt;FeedService&lt;/code&gt;, &lt;code&gt;SocialGraphService&lt;/code&gt;, and &lt;code&gt;Timeline&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why it beats the naive approach:&lt;/strong&gt; It optimizes for the 99% read-heavy nature of social media by making feed retrieval a simple $O(1)$ memory access, shifting complexity to the asynchronous write path.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Key Insight (Code)
&lt;/h2&gt;

&lt;p&gt;To maintain a sorted, thread-safe feed that automatically discards old tweets, use &lt;code&gt;ConcurrentSkipListMap&lt;/code&gt;. This allows concurrent access while keeping tweets ordered by timestamp.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FeedService&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Long&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;ConcurrentSkipListMap&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Long&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Tweet&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;userFeeds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ConcurrentHashMap&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="no"&gt;MAX_FEED_SIZE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;fanOut&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="n"&gt;authorId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Tweet&lt;/span&gt; &lt;span class="n"&gt;tweet&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;Set&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Long&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;followers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;socialGraph&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getFollowers&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;authorId&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Long&lt;/span&gt; &lt;span class="n"&gt;followerId&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;followers&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// Get or create the thread-safe sorted feed&lt;/span&gt;
            &lt;span class="nc"&gt;ConcurrentSkipListMap&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Long&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Tweet&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;feed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;userFeeds&lt;/span&gt;
                &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;computeIfAbsent&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;followerId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ConcurrentSkipListMap&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class="nc"&gt;Comparator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;reverseOrder&lt;/span&gt;&lt;span class="o"&gt;()));&lt;/span&gt;

            &lt;span class="n"&gt;feed&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tweet&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getTimestamp&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;tweet&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

            &lt;span class="c1"&gt;// Maintain sliding window&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;feed&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;MAX_FEED_SIZE&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;feed&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;pollLastEntry&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Fan-out on Write:&lt;/strong&gt; Always optimize for the most frequent operation; in Twitter's case, viewing a timeline happens orders of magnitude more often than posting.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;The Observer Pattern:&lt;/strong&gt; Decouple the &lt;code&gt;PostTweet&lt;/code&gt; logic from the &lt;code&gt;FanOut&lt;/code&gt; logic to ensure that a slow feed update doesn't block the user's primary action.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Thread-Safety without Locking:&lt;/strong&gt; Leverage &lt;code&gt;java.util.concurrent&lt;/code&gt; structures like &lt;code&gt;ConcurrentSkipListMap&lt;/code&gt; to handle high-frequency updates without the performance bottleneck of &lt;code&gt;synchronized&lt;/code&gt; blocks.&lt;/li&gt;
&lt;/ul&gt;
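&lt;p&gt;The decoupling in that second takeaway can be sketched with a plain executor (class and method names below are illustrative, not the article's implementation):&lt;/p&gt;

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch (not the article's implementation): posting a tweet
// only enqueues fan-out work; listeners run on a background pool so a slow
// feed update never blocks the user's primary action.
class TweetPublisher {
    interface TweetListener {
        void onTweet(long authorId, String text);
    }

    private final Queue<TweetListener> listeners = new ConcurrentLinkedQueue<>();
    private final ExecutorService fanOutPool = Executors.newFixedThreadPool(4);

    void register(TweetListener listener) {
        listeners.add(listener);
    }

    // Primary action: returns as soon as the work is submitted.
    void postTweet(long authorId, String text) {
        for (TweetListener listener : listeners) {
            fanOutPool.submit(() -> listener.onTweet(authorId, text));
        }
    }

    void shutdown() throws InterruptedException {
        fanOutPool.shutdown();
        fanOutPool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

&lt;p&gt;Registering the synchronous &lt;code&gt;fanOut&lt;/code&gt; logic as one such listener keeps &lt;code&gt;postTweet&lt;/code&gt; latency flat even when a follower list is huge.&lt;/p&gt;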

&lt;p&gt;Full working implementation with execution trace available at &lt;a href="https://javalld.com/problems/twitter-feed" rel="noopener noreferrer"&gt;https://javalld.com/problems/twitter-feed&lt;/a&gt;&lt;/p&gt;

</description>
      <category>java</category>
      <category>systemdesign</category>
      <category>interview</category>
      <category>oop</category>
    </item>
    <item>
      <title>Claude 3.7 + JEP 480: Stop Building Fragile AI Agents with CompletableFuture</title>
      <dc:creator>Machine coding Master</dc:creator>
      <pubDate>Fri, 24 Apr 2026 05:22:25 +0000</pubDate>
      <link>https://dev.to/machinecodingmaster/claude-37-jep-480-stop-building-fragile-ai-agents-with-completablefuture-4hdf</link>
      <guid>https://dev.to/machinecodingmaster/claude-37-jep-480-stop-building-fragile-ai-agents-with-completablefuture-4hdf</guid>
      <description>&lt;h2&gt;
  
  
  Claude 3.7 + JEP 480: Stop Building Fragile AI Agents with CompletableFuture
&lt;/h2&gt;

&lt;p&gt;Claude 3.7 just dropped with superior reasoning and "Computer Use" capabilities, but your agentic workflow is only as strong as its orchestration layer. If you're still using &lt;code&gt;CompletableFuture.allOf()&lt;/code&gt; to manage parallel tool calls, you're building a distributed system that fails silently and leaks resources.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Heads up:&lt;/strong&gt; if you want to see these patterns applied to real interview problems, &lt;a href="https://javalld.com" rel="noopener noreferrer"&gt;javalld.com&lt;/a&gt; has full machine coding solutions with traces.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why Most Developers Get This Wrong
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unbounded thread creation:&lt;/strong&gt; Wrapping blocking Claude API calls in cached thread pools (&lt;code&gt;Executors.newCachedThreadPool()&lt;/code&gt;) is a recipe for OOM errors when agentic loops spin out of control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orphaned tool executions:&lt;/strong&gt; When one tool call fails (e.g., a timeout on a web search), its siblings keep running, wasting tokens and compute because there's no parent-child relationship.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Swallowed exceptions:&lt;/strong&gt; Standard async patterns make it nearly impossible to propagate a specific &lt;code&gt;ToolExecutionException&lt;/code&gt; back to the Claude context without losing the stack trace.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Right Way
&lt;/h2&gt;

&lt;p&gt;Use &lt;code&gt;StructuredTaskScope&lt;/code&gt; (JEP 480) to treat a multi-tool execution block as a single unit of work that fails fast and cleans up after itself.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deterministic Shutdown:&lt;/strong&gt; Use &lt;code&gt;StructuredTaskScope.ShutdownOnFailure()&lt;/code&gt; to ensure that if one tool call fails, all other pending tasks in that reasoning step are immediately cancelled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Thread Integration:&lt;/strong&gt; Map each Claude tool request to a Virtual Thread (JEP 444) to handle high-concurrency I/O without the memory overhead of platform threads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context Propagation:&lt;/strong&gt; Ensure the &lt;code&gt;anthropic-beta: computer-use-2024-10-22&lt;/code&gt; headers and conversation state are scoped to the current task hierarchy.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Show Me The Code
&lt;/h2&gt;

&lt;p&gt;This is how you orchestrate parallel tool calls for a Claude 3.7 agent using structured concurrency.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ToolResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;executeParallelTools&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ToolCall&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;calls&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;throws&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StructuredTaskScope&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ShutdownOnFailure&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Subtask&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ToolResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;calls&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;map&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;call&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;fork&lt;/span&gt;&lt;span class="o"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;toolRegistry&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;execute&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="o"&gt;)))&lt;/span&gt;
            &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toList&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

        &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;join&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;           &lt;span class="c1"&gt;// Wait for all tool calls&lt;/span&gt;
        &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;throwIfFailed&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;  &lt;span class="c1"&gt;// Propagate the first error encountered&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;map&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;Subtask:&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;toList&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JEP 480 is the "Try-With-Resources" for threads:&lt;/strong&gt; It turns asynchronous tool calls into a synchronous-looking, deterministic control flow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude 3.7's reasoning is expensive:&lt;/strong&gt; Don't let a stray thread or a zombie API call burn your budget; use &lt;code&gt;ShutdownOnFailure&lt;/code&gt; to kill sub-tasks the moment one fails.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fail-fast is the only strategy:&lt;/strong&gt; For production agents, if one tool dies, the whole reasoning step should pivot or retry—never allow partial, inconsistent state.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>java</category>
      <category>concurrency</category>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Your Virtual Threads Are Leaking: Why ScopedValue is the Only Way Forward</title>
      <dc:creator>Machine coding Master</dc:creator>
      <pubDate>Thu, 23 Apr 2026 21:13:57 +0000</pubDate>
      <link>https://dev.to/machinecodingmaster/your-virtual-threads-are-leaking-why-scopedvalue-is-the-only-way-forward-in-2025-pie</link>
      <guid>https://dev.to/machinecodingmaster/your-virtual-threads-are-leaking-why-scopedvalue-is-the-only-way-forward-in-2025-pie</guid>
      <description>&lt;h2&gt;
  
  
  Your Virtual Threads Are Leaking: Why ScopedValue is the Only Way Forward
&lt;/h2&gt;

&lt;p&gt;If you're spinning up millions of Virtual Threads but still clinging to &lt;code&gt;ThreadLocal&lt;/code&gt;, you're building a memory bomb. Java 21 changed the game, and if you haven't migrated to &lt;code&gt;ScopedValue&lt;/code&gt; yet, you're missing the actual point of lightweight concurrency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Most Developers Get This Wrong
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Scalability Trap:&lt;/strong&gt; Treating Virtual Threads like Platform Threads. Thinking millions of &lt;code&gt;ThreadLocal&lt;/code&gt; maps won't wreck your heap is a rookie mistake; the per-thread overhead adds up fast when you scale to 100k+ concurrent tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Mutability Nightmare:&lt;/strong&gt; Using &lt;code&gt;ThreadLocal.set()&lt;/code&gt; creates unpredictable side effects in deep call stacks. In a world of massive concurrency, mutable global state is a debugging death sentence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual Cleanup Failures:&lt;/strong&gt; Relying on &lt;code&gt;try-finally&lt;/code&gt; to &lt;code&gt;.remove()&lt;/code&gt; locals. It inevitably fails during unhandled exceptions or complex async handoffs, leading to "ghost" data bleeding between requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Right Way
&lt;/h2&gt;

&lt;p&gt;Shift from long-lived, mutable thread-bound state to scoped, immutable context propagation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;ScopedValue.where(...)&lt;/code&gt; to define strict, readable boundaries for your data (like Tenant IDs or User principals).&lt;/li&gt;
&lt;li&gt;Embrace &lt;strong&gt;Structured Concurrency&lt;/strong&gt;: use &lt;code&gt;StructuredTaskScope&lt;/code&gt; to ensure context propagates automatically and safely to child threads.&lt;/li&gt;
&lt;li&gt;Treat context as strictly immutable; if you need to change a value, you re-bind it in a nested scope rather than mutating the current one.&lt;/li&gt;
&lt;li&gt;Optimize for memory: &lt;code&gt;ScopedValue&lt;/code&gt; is designed to be lightweight, often stored in a single internal array rather than a complex hash map.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Show Me The Code
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="nc"&gt;ScopedValue&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;TENANT_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ScopedValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;newInstance&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;serveRequest&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;tenant&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Runnable&lt;/span&gt; &lt;span class="n"&gt;logic&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Context is bound to this scope and its children only&lt;/span&gt;
    &lt;span class="nc"&gt;ScopedValue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;TENANT_ID&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tenant&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="o"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;performBusinessLogic&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;});&lt;/span&gt;
    &lt;span class="c1"&gt;// Outside this block, TENANT_ID is automatically cleared&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;performBusinessLogic&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// O(1) access, no risk of memory leaks, completely immutable&lt;/span&gt;
    &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;currentTenant&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;TENANT_ID&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt; 
    &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Working for: "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;currentTenant&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Memory Efficiency:&lt;/strong&gt; &lt;code&gt;ScopedValue&lt;/code&gt; eliminates the heavy &lt;code&gt;ThreadLocalMap&lt;/code&gt; overhead, making it the only viable choice for high-density Virtual Thread architectures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safety by Default:&lt;/strong&gt; Immutability isn't a limitation; it's a feature that prevents "spooky action at a distance" across your call stack.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured Inheritance:&lt;/strong&gt; Unlike &lt;code&gt;InheritableThreadLocal&lt;/code&gt;, which performs expensive data copying, &lt;code&gt;ScopedValue&lt;/code&gt; shares data efficiently with child threads within a &lt;code&gt;StructuredTaskScope&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Want to go deeper? &lt;a href="https://javalld.com" rel="noopener noreferrer"&gt;javalld.com&lt;/a&gt; — machine coding interview problems with working Java code and full execution traces.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>java</category>
      <category>concurrency</category>
      <category>systemdesign</category>
      <category>programming</category>
    </item>
    <item>
      <title>Java Machine Coding: Building a Dependency Resolver with Topological Sort</title>
      <dc:creator>Machine coding Master</dc:creator>
      <pubDate>Thu, 23 Apr 2026 20:59:09 +0000</pubDate>
      <link>https://dev.to/machinecodingmaster/java-machine-coding-building-a-dependency-resolver-with-topological-sort-3gpj</link>
      <guid>https://dev.to/machinecodingmaster/java-machine-coding-building-a-dependency-resolver-with-topological-sort-3gpj</guid>
      <description>&lt;h2&gt;
  
  
  Java Machine Coding: Building a Dependency Resolver with Topological Sort
&lt;/h2&gt;

&lt;p&gt;Dependency management is the backbone of build tools like Maven or Gradle and a high-signal problem in senior-level machine coding interviews. If you cannot correctly order task execution or detect circular dependencies, your system will fail at scale.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you're prepping for interviews, I've been building &lt;a href="https://javalld.com" rel="noopener noreferrer"&gt;javalld.com&lt;/a&gt; — real machine coding problems with full execution traces.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Mistake Most Candidates Make
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Naive Recursion:&lt;/strong&gt; Using simple DFS without state tracking, which leads to redundant computations or infinite loops in circular graphs.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Ignoring In-Degrees:&lt;/strong&gt; Attempting to "find" the next task by iterating over the entire list repeatedly ($O(N^2)$), rather than tracking which tasks are ready.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Coupling Logic:&lt;/strong&gt; Mixing the task execution logic directly into the graph traversal code, making the system hard to test and extend.&lt;/li&gt;
&lt;/ul&gt;
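
&lt;p&gt;For contrast with the first pitfall, the "state tracking" that naive DFS lacks is the classic three-color marking. Here is a minimal, illustrative sketch (names are mine, and this is not part of the Kahn's Algorithm solution this article builds): a node on the current recursion stack is "gray," and revisiting a gray node means a cycle.&lt;/p&gt;

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch of DFS cycle detection with explicit state tracking.
// State: 0 = white (unvisited), 1 = gray (on recursion stack), 2 = black (done).
class CycleDetector {
    static boolean hasCycle(Map<String, List<String>> adj) {
        Map<String, Integer> state = new HashMap<>();
        for (String node : adj.keySet()) {
            if (state.getOrDefault(node, 0) == 0 && dfs(node, adj, state)) return true;
        }
        return false;
    }

    private static boolean dfs(String node, Map<String, List<String>> adj,
                               Map<String, Integer> state) {
        state.put(node, 1); // gray: currently in progress
        for (String next : adj.getOrDefault(node, List.of())) {
            int s = state.getOrDefault(next, 0);
            if (s == 1) return true;                       // back-edge to a gray node = cycle
            if (s == 0 && dfs(next, adj, state)) return true;
        }
        state.put(node, 2); // black: fully explored, never re-entered
        return false;
    }
}
```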

&lt;h2&gt;
  
  
  The Right Approach
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Core Mental Model:&lt;/strong&gt; Represent the system as a Directed Acyclic Graph (DAG) and use Kahn’s Algorithm (BFS) to resolve nodes with zero incoming edges first.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Key Entities:&lt;/strong&gt; &lt;code&gt;Task&lt;/code&gt;, &lt;code&gt;DependencyGraph&lt;/code&gt;, &lt;code&gt;DependencyResolver&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why it beats the naive approach:&lt;/strong&gt; Kahn’s Algorithm provides $O(V + E)$ time complexity and offers a "free" way to detect cycles—if your output list is smaller than your input, you have a circular dependency.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Key Insight (Code)
&lt;/h2&gt;

&lt;p&gt;The heart of the resolver is maintaining the &lt;code&gt;inDegree&lt;/code&gt; map. When a task's in-degree hits zero, it is ready for execution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;adj&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Map&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Integer&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;inDegree&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;Queue&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;LinkedList&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt;
    &lt;span class="n"&gt;inDegree&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;forEach&lt;/span&gt;&lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;degree&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;degree&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt; &lt;span class="o"&gt;});&lt;/span&gt;

    &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;order&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ArrayList&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="o"&gt;(!&lt;/span&gt;&lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isEmpty&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;current&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;poll&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;current&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;neighbor&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;adj&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getOrDefault&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;current&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ArrayList&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;()))&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;inDegree&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;put&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;neighbor&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;inDegree&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;neighbor&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inDegree&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;neighbor&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;neighbor&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;inDegree&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CircularDependencyException&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;In-degree Tracking:&lt;/strong&gt; This is the most efficient way to identify "ready-to-run" tasks without re-scanning the graph.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cycle Detection:&lt;/strong&gt; Always validate that the final sorted list size equals the total number of nodes to catch hidden circularities.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Separation of Concerns:&lt;/strong&gt; Keep your graph representation (nodes/edges) distinct from the execution engine to allow for future features like parallel execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Full working implementation with execution trace available at &lt;a href="https://javalld.com/problems/dependency-builder" rel="noopener noreferrer"&gt;https://javalld.com/problems/dependency-builder&lt;/a&gt;&lt;/p&gt;

</description>
      <category>java</category>
      <category>algorithms</category>
      <category>interview</category>
      <category>programming</category>
    </item>
    <item>
      <title>Java LLD: Designing a Kafka-Like Message Queue for Machine Coding Interviews</title>
      <dc:creator>Machine coding Master</dc:creator>
      <pubDate>Thu, 23 Apr 2026 20:29:05 +0000</pubDate>
      <link>https://dev.to/machinecodingmaster/java-lld-designing-a-kafka-like-message-queue-for-machine-coding-interviews-411l</link>
      <guid>https://dev.to/machinecodingmaster/java-lld-designing-a-kafka-like-message-queue-for-machine-coding-interviews-411l</guid>
      <description>&lt;h1&gt;
  
  
  Java LLD: Designing a Kafka-Like Message Queue for Machine Coding Interviews
&lt;/h1&gt;

&lt;p&gt;Designing a high-performance message queue is a frequent requirement in senior-level machine coding rounds. It tests your ability to balance thread safety with decoupled architecture while managing stateful consumer progress.&lt;/p&gt;

&lt;h2&gt;
  
  
  The mistake most candidates make
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Using a standard &lt;code&gt;java.util.Queue&lt;/code&gt; that removes elements upon polling, which prevents multiple consumer groups from reading the same data.&lt;/li&gt;
&lt;li&gt;  Coupling the Producer directly to the Consumer logic, violating the Pub-Sub principle and making the system brittle to scale.&lt;/li&gt;
&lt;li&gt;  Failing to implement independent offset management, leading to data loss or duplicate processing when one consumer lags.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The right approach
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Core mental model&lt;/strong&gt;: An immutable, append-only log where messages are persisted per topic, allowing consumers to track their own progress independently.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Key entities&lt;/strong&gt;: &lt;code&gt;Topic&lt;/code&gt;, &lt;code&gt;Message&lt;/code&gt;, &lt;code&gt;Subscriber&lt;/code&gt;, &lt;code&gt;ConsumerGroup&lt;/code&gt;, &lt;code&gt;OffsetManager&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why it wins&lt;/strong&gt;: It enables "replayability" and allows multiple heterogeneous systems to consume the same stream at different speeds without interference.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementation Insight: The Thread-Safe Log
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Topic&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ArrayList&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;ReentrantLock&lt;/span&gt; &lt;span class="n"&gt;lock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ReentrantLock&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;addMessage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Message&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;lock&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;add&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;unlock&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getMessagesFrom&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="nc"&gt;List&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;of&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;subList&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Decouple producers and consumers by using an append-only log structure instead of a destructive queue.&lt;/li&gt;
&lt;li&gt;  Use &lt;code&gt;ReentrantLock&lt;/code&gt; to ensure atomic appends to the message log in a multi-threaded producer environment.&lt;/li&gt;
&lt;li&gt;  Manage consumer state via independent offsets, allowing each &lt;code&gt;ConsumerGroup&lt;/code&gt; to process messages at its own pace.&lt;/li&gt;
&lt;/ul&gt;
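
&lt;p&gt;The offset side of these takeaways can be sketched minimally. This is an illustrative &lt;code&gt;OffsetManager&lt;/code&gt; (the class is named in the entity list above, but the API here is assumed, not taken from the full solution): each group advances its own cursor, so a slow group never blocks a fast one.&lt;/p&gt;

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of per-consumer-group offset tracking; method names are illustrative.
class OffsetManager {
    private final Map<String, Integer> offsets = new ConcurrentHashMap<>();

    // Current read position for a group; a group that has never read starts at 0.
    int getOffset(String groupId) {
        return offsets.getOrDefault(groupId, 0);
    }

    // Advance the group's offset after it has successfully processed `count` messages.
    void commit(String groupId, int count) {
        offsets.merge(groupId, count, Integer::sum);
    }
}
```

&lt;p&gt;Pairing this with &lt;code&gt;Topic.getMessagesFrom(offset)&lt;/code&gt; gives each group an independent, replayable view of the same log.&lt;/p&gt;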

&lt;p&gt;Full working implementation with execution trace available at &lt;a href="https://javalld.com/problems/message-queue" rel="noopener noreferrer"&gt;https://javalld.com/problems/message-queue&lt;/a&gt;&lt;/p&gt;

</description>
      <category>java</category>
      <category>design</category>
      <category>concurrency</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Java LLD Interview Prep</title>
      <dc:creator>Machine coding Master</dc:creator>
      <pubDate>Thu, 23 Apr 2026 20:20:09 +0000</pubDate>
      <link>https://dev.to/machinecodingmaster/java-lld-interview-prep-3ec1</link>
      <guid>https://dev.to/machinecodingmaster/java-lld-interview-prep-3ec1</guid>
      <description>&lt;h1&gt;
  
  
  Java LLD: Designing a Scalable Job Scheduler Without Busy Polling
&lt;/h1&gt;

&lt;p&gt;In my time at companies like Apple and Amazon, the "Job Scheduler" was a go-to interview question for senior roles. It isn't just about whether you can write a loop; it’s a litmus test for your understanding of resource management, thread safety, and low-level synchronization primitives. If you can't explain how to wake up a thread at the exact millisecond a job is due, you aren't passing the bar.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Topic Matters
&lt;/h2&gt;

&lt;p&gt;Designing a job scheduler is a foundational Low-Level Design (LLD) problem because it mirrors real-world systems like Crontab, Quartz, or the internal task executors used in microservices. In an interview, you aren't being asked to use &lt;code&gt;ScheduledExecutorService&lt;/code&gt;; you are being asked to &lt;em&gt;build&lt;/em&gt; it. The interviewer is looking for precision, efficiency, and a deep grasp of Java’s &lt;code&gt;java.util.concurrent&lt;/code&gt; package.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Mistake Most Candidates Make
&lt;/h2&gt;

&lt;p&gt;The most common pitfall is "Busy Polling." I’ve seen countless candidates implement a &lt;code&gt;while(true)&lt;/code&gt; loop that calls &lt;code&gt;Thread.sleep(1000)&lt;/code&gt; and checks a queue for pending tasks. &lt;/p&gt;

&lt;p&gt;This approach is flawed for two reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Latency:&lt;/strong&gt; If a job is scheduled for &lt;code&gt;T+50ms&lt;/code&gt; and your thread just started a 1000ms sleep, the job will be 950ms late.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;CPU Waste:&lt;/strong&gt; If the queue is empty, the thread keeps waking up unnecessarily, consuming CPU cycles and draining battery/resources for no reason.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Experienced engineers know that the goal is to have the scheduler thread sleep for the &lt;em&gt;exact&lt;/em&gt; duration until the next task is due, or until a new, higher-priority task is added.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Right Approach: The Mental Model
&lt;/h2&gt;

&lt;p&gt;To build a production-grade scheduler, you need to combine the &lt;strong&gt;Command Pattern&lt;/strong&gt; with a &lt;strong&gt;Priority Queue&lt;/strong&gt; and fine-grained synchronization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Entities
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Job:&lt;/strong&gt; A wrapper around a &lt;code&gt;Runnable&lt;/code&gt; that includes an &lt;code&gt;executionTime&lt;/code&gt; (epoch timestamp).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Scheduler:&lt;/strong&gt; A Singleton that manages the lifecycle of the background worker.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;JobStore:&lt;/strong&gt; A &lt;code&gt;PriorityQueue&lt;/code&gt; (min-heap) that keeps the job with the earliest execution time at the head.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;WorkerThread:&lt;/strong&gt; The consumer thread that waits for jobs to become "due."&lt;/li&gt;
&lt;/ul&gt;
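
&lt;p&gt;A minimal sketch of the &lt;code&gt;Job&lt;/code&gt; entity under these assumptions (field and method names are illustrative): implementing &lt;code&gt;Runnable&lt;/code&gt; gives us the Command pattern, and &lt;code&gt;Comparable&lt;/code&gt; lets a plain &lt;code&gt;PriorityQueue&lt;/code&gt; serve as the min-heap JobStore.&lt;/p&gt;

```java
// Hedged sketch: a Job wraps the work (Command pattern) plus its due time,
// and orders itself so PriorityQueue keeps the earliest job at the head.
class Job implements Runnable, Comparable<Job> {
    private final Runnable task;
    private final long executionTime; // epoch millis when the job is due

    Job(Runnable task, long executionTime) {
        this.task = task;
        this.executionTime = executionTime;
    }

    long getExecutionTime() { return executionTime; }

    @Override
    public void run() { task.run(); }

    // Earliest execution time first: this is what makes the queue a min-heap.
    @Override
    public int compareTo(Job other) {
        return Long.compare(this.executionTime, other.executionTime);
    }
}
```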

&lt;h3&gt;
  
  
  Why This Wins
&lt;/h3&gt;

&lt;p&gt;Instead of generic sleeping, we use &lt;code&gt;ReentrantLock&lt;/code&gt; and its associated &lt;code&gt;Condition&lt;/code&gt; object. This allows us to implement a "wait-with-timeout" strategy. If a new job arrives that is scheduled earlier than the current head of the queue, we can signal the worker thread to wake up, recalculate its wait time, and go back to sleep—or execute the new job immediately.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Insight: Smart Waiting
&lt;/h2&gt;

&lt;p&gt;The magic happens in the worker's execution loop. We don't just wait; we wait &lt;em&gt;conditionally&lt;/em&gt;. We use &lt;code&gt;Condition.await(delay, TimeUnit.MILLISECONDS)&lt;/code&gt; to put the thread into a timed-waiting state. Unlike &lt;code&gt;Thread.sleep&lt;/code&gt;, this wait can be cut short by a &lt;code&gt;signal()&lt;/code&gt; call when a new job is pushed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;running&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;lock&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobQueue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;isEmpty&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;newJobCondition&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;await&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// Wait indefinitely for the first job&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;

            &lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="n"&gt;currentTime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;currentTimeMillis&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
            &lt;span class="nc"&gt;Job&lt;/span&gt; &lt;span class="n"&gt;nextJob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;jobQueue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;peek&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
            &lt;span class="kt"&gt;long&lt;/span&gt; &lt;span class="n"&gt;delay&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nextJob&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getExecutionTime&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;currentTime&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;delay&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;executorService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;submit&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobQueue&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;poll&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt; &lt;span class="c1"&gt;// Execute due job&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="c1"&gt;// Wait until the job is due OR a new job is added&lt;/span&gt;
                &lt;span class="n"&gt;newJobCondition&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;await&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;delay&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;TimeUnit&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;MILLISECONDS&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;InterruptedException&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;Thread&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;currentThread&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;interrupt&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;unlock&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Concurrency Angle: Thread Safety and Atomicity
&lt;/h2&gt;

&lt;p&gt;When designing this, you must account for the &lt;strong&gt;Producer-Consumer&lt;/strong&gt; race condition. While the worker thread is checking the &lt;code&gt;peek()&lt;/code&gt; of the queue, a producer thread might be calling &lt;code&gt;addJob()&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;If you use a standard &lt;code&gt;PriorityQueue&lt;/code&gt;, it is not thread-safe. Even if you use a &lt;code&gt;PriorityBlockingQueue&lt;/code&gt;, it doesn't solve the "wait-until-due" logic on its own. You need the &lt;code&gt;ReentrantLock&lt;/code&gt; to ensure that the process of checking the time and deciding to wait is &lt;strong&gt;atomic&lt;/strong&gt;. &lt;/p&gt;
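
&lt;p&gt;To illustrate, here is a hedged sketch of the producer side (class, record, and field names are assumed to mirror the worker loop above, not lifted from the full solution): &lt;code&gt;addJob&lt;/code&gt; mutates the queue and signals under the same lock, so the worker's check-and-wait never races with an insert.&lt;/p&gt;

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hedged sketch of the producer side; the worker loop is assumed to share
// these same lock, condition, and queue fields.
class SchedulerCore {
    record ScheduledJob(Runnable task, long executionTime) {}

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition newJobCondition = lock.newCondition();
    private final PriorityQueue<ScheduledJob> jobQueue =
            new PriorityQueue<>(Comparator.comparingLong(ScheduledJob::executionTime));

    public void addJob(ScheduledJob job) {
        lock.lock();
        try {
            jobQueue.offer(job);
            // Wake the worker so it recalculates its wait: the new job
            // may be due earlier than the current head of the queue.
            newJobCondition.signal();
        } finally {
            lock.unlock();
        }
    }

    ScheduledJob peekNext() {
        lock.lock();
        try {
            return jobQueue.peek();
        } finally {
            lock.unlock();
        }
    }
}
```

&lt;p&gt;Because both &lt;code&gt;addJob&lt;/code&gt; and the worker's peek-then-wait run under the same &lt;code&gt;ReentrantLock&lt;/code&gt;, a job added mid-wait can never be silently missed.&lt;/p&gt;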

</description>
      <category>java</category>
      <category>programming</category>
      <category>interview</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
