<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chaitanya Burgupalli</title>
    <description>The latest articles on DEV Community by Chaitanya Burgupalli (@chaitanya_burgupalli_9bb1).</description>
    <link>https://dev.to/chaitanya_burgupalli_9bb1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3895949%2F40529417-38e5-4b8b-bc07-dd563c020324.jpg</url>
      <title>DEV Community: Chaitanya Burgupalli</title>
      <link>https://dev.to/chaitanya_burgupalli_9bb1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chaitanya_burgupalli_9bb1"/>
    <language>en</language>
    <item>
      <title>When Manual Wins</title>
      <dc:creator>Chaitanya Burgupalli</dc:creator>
      <pubDate>Wed, 06 May 2026 11:24:16 +0000</pubDate>
      <link>https://dev.to/chaitanya_burgupalli_9bb1/when-manual-wins-36nf</link>
      <guid>https://dev.to/chaitanya_burgupalli_9bb1/when-manual-wins-36nf</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Lessons from replacing a brittle chat integration with a lean, SSE-based LangChain flow&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Introduction: A simple stack&lt;/h2&gt;

&lt;p&gt;Throughout my software development career, my focus has evolved from "architecture is most important" to "quality is everything" to "engineers are the backbone." While these views are different, they all share one common goal: building good software.&lt;/p&gt;

&lt;p&gt;But there has always been one aspect that adds the most cost to the product lifecycle: deployment and maintenance. A team can build one great product, but if the architecture does not account for deployment and maintenance, that team may never build a second product. It will just keep fighting fires.&lt;/p&gt;

&lt;p&gt;With that in mind, I started a new project to build an AI-integrated application that improves user workflows. The app must make decisions based on status, report information in a rich format, and provide a conversational interface.&lt;/p&gt;

&lt;p&gt;For this project, I set a strict constraint: minimize the surface area. I wanted a robust analysis pipeline without the deployment tax of extra services. The goal was a clean, four-element ecosystem:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The Frontend:&lt;/strong&gt; React (TypeScript), including a conversational interface&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Backend:&lt;/strong&gt; Node.js / Express&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Database:&lt;/strong&gt; Postgres (handling application data, vector embeddings, and job queuing via &lt;code&gt;pg-boss&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The AI:&lt;/strong&gt; A self-hosted LLM (&lt;code&gt;Ollama&lt;/code&gt; + &lt;code&gt;Qwen 2.5&lt;/code&gt;) for local processing, orchestrated with LangChain&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With a tight delivery timeline, Cursor was the right tool to accelerate implementation. It supported development well until it hit a wall on the conversational interface built with CopilotKit; there I had to step in with detailed design guidance to get the interface working.&lt;/p&gt;

&lt;h2&gt;What I tried&lt;/h2&gt;

&lt;p&gt;I researched components that would fit my requirements, and CopilotKit came up strong. I already had my LangChain endpoint built and verified, so I instructed Cursor to add a CopilotKit chat interface to the frontend and wire it to that endpoint. Things did not go well: every iteration surfaced new problems.&lt;/p&gt;

&lt;h2&gt;What kept breaking&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The primary challenge was that the runtime would not pick up my LLM settings from the environment; I had to hardcode the model into the runtime code.&lt;/li&gt;
&lt;li&gt;Requests would not reach my LangChain endpoint. Cursor recommended an HTTP adapter and implemented it.&lt;/li&gt;
&lt;li&gt;Once LangChain processed the input and responded, the client interface would not recognize the content.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since CopilotKit is evolving rapidly, using the latest version was my first mistake. From what I read, every version works well as long as the frontend is connected directly to the runtime; but once a new intermediate layer was introduced, the integration failed.&lt;/p&gt;

&lt;h2&gt;Why I switched to manual (guided) integration&lt;/h2&gt;

&lt;p&gt;I did not want to compromise on my requirement to reduce stack spread, so I reset to a previous stable point in my code and started building a custom integration with LangChain. This required three parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Chat interfaces&lt;/li&gt;
&lt;li&gt;LangChain interface&lt;/li&gt;
&lt;li&gt;A way to stream the response back to the client; Server-Sent Events (SSE) was the answer&lt;/li&gt;
&lt;/ol&gt;
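&lt;p&gt;The SSE wire format is simple enough to frame by hand: each event is a block of &lt;code&gt;event:&lt;/code&gt; and &lt;code&gt;data:&lt;/code&gt; lines terminated by a blank line. A minimal framing helper might look like this (the event name and payload shape are illustrative assumptions, not the original project's types):&lt;/p&gt;

```typescript
// Sketch: framing a payload as a Server-Sent Events message.
// Event names and payload shapes here are illustrative assumptions.
function formatSseEvent(eventName: string, payload: unknown): string {
  // An SSE frame is "event:" and "data:" lines ended by a blank line.
  // JSON.stringify keeps the payload on a single data line (no raw newlines).
  return "event: " + eventName + "\n" +
         "data: " + JSON.stringify(payload) + "\n\n";
}

// In an Express handler this string would be written to a response opened
// with "Content-Type: text/event-stream", e.g. res.write(formatSseEvent(...)).
const frame = formatSseEvent("chat", { text: "Hello from LangChain" });
console.log(frame);
```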

&lt;p&gt;As soon as I shifted to this design, the implementation got quicker and smoother. Cursor built a chat interface with the inputs and customizations I asked for, and the SSE layer kept server-to-client communication on a standard protocol. The format translation between the client input and the LangChain interface was trivial.&lt;/p&gt;
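&lt;p&gt;That translation step can be sketched as a pure mapping. LangChain's JS chat models accept &lt;code&gt;[role, content]&lt;/code&gt; tuples such as &lt;code&gt;["human", "..."]&lt;/code&gt;, so converting a client transcript is a one-liner (the &lt;code&gt;ChatTurn&lt;/code&gt; shape is an assumption about the client payload, not the original project's actual types):&lt;/p&gt;

```typescript
// Sketch: translating the client's chat transcript into the role/content
// pairs a LangChain JS chat model accepts. ChatTurn is an assumed shape.
type ChatTurn = { sender: "user" | "assistant"; text: string };

function toLangChainMessages(turns: ChatTurn[]): [string, string][] {
  // LangChain chat models accept [role, content] tuples like
  // ["human", "..."] and ["ai", "..."].
  return turns.map(function (turn): [string, string] {
    return [turn.sender === "user" ? "human" : "ai", turn.text];
  });
}

const history: ChatTurn[] = [
  { sender: "user", text: "Summarize today's jobs" },
  { sender: "assistant", text: "Three jobs completed, one queued." },
];
console.log(toLangChainMessages(history));
```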

&lt;p&gt;This ensured that chat and tool responses streamed to the client seamlessly. The remaining work was defining a small protocol so the client could choose between rendering a response as a chat message or handing it to a client-side component for rich content. I did hit a few more challenges after that, but they were primarily differences in how commercial LLMs (Vertex, OpenAI) handle tools compared with local LLMs. That is a topic for another time.&lt;/p&gt;
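&lt;p&gt;One way such a protocol could look, as a hedged sketch: a discriminated union whose tag tells the UI whether to render plain chat text or mount a rich component. All type and field names below are illustrative assumptions, not the original project's protocol:&lt;/p&gt;

```typescript
// Sketch of one possible client-side protocol: the "kind" discriminant
// decides between a chat bubble and a rich component. Names are assumed.
type ServerEvent =
  | { kind: "chat"; text: string }
  | { kind: "component"; component: string; props: { [key: string]: unknown } };

function describeEvent(event: ServerEvent): string {
  // A real renderer would switch on the discriminant to pick a React
  // component; here we just report the decision.
  switch (event.kind) {
    case "chat":
      return "render chat bubble: " + event.text;
    case "component":
      return "mount component: " + event.component;
  }
}

console.log(describeEvent({ kind: "chat", text: "Done." }));
console.log(describeEvent({ kind: "component", component: "JobStatusCard", props: { jobId: 7 } }));
```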

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Coding agents are amazing at speeding up development, improving test coverage, and even keeping interfaces documented. But some patterns are still too new for current agents. In this case, the limitations were easy to spot. In other cases, agents may produce code that works but introduce design choices that make long-term maintenance harder. Defining clear design criteria before implementation helps avoid those pitfalls.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>design</category>
    </item>
  </channel>
</rss>
