<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Snehitha Domakuntla</title>
    <description>The latest articles on DEV Community by Snehitha Domakuntla (@snehitha_domakuntla_86fa5).</description>
    <link>https://dev.to/snehitha_domakuntla_86fa5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3281733%2Fc50aa766-f298-4719-97c1-5cf5b2e2a081.jpeg</url>
      <title>DEV Community: Snehitha Domakuntla</title>
      <link>https://dev.to/snehitha_domakuntla_86fa5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/snehitha_domakuntla_86fa5"/>
    <language>en</language>
    <item>
      <title>Build Agent workflows using LangGraph and Trace using LangSmith</title>
      <dc:creator>Snehitha Domakuntla</dc:creator>
      <pubDate>Wed, 09 Jul 2025 08:10:58 +0000</pubDate>
      <link>https://dev.to/snehitha_domakuntla_86fa5/build-agent-workflows-using-langgraph-and-trace-using-langsmith-30bc</link>
      <guid>https://dev.to/snehitha_domakuntla_86fa5/build-agent-workflows-using-langgraph-and-trace-using-langsmith-30bc</guid>
      <description>&lt;p&gt;“Graphs beat prompt soup.”&lt;br&gt;
– Every tired engineer who’s ever debugged an infinite-loop LLM agent&lt;/p&gt;


&lt;h2&gt;
  
  
  1 | What even is an Agent?
&lt;/h2&gt;

&lt;p&gt;An agent is nothing more than three moving parts:&lt;br&gt;
    • LLM - Chooses what to do next&lt;br&gt;
    • Tools - Give the agent hands to touch the outside world&lt;br&gt;
    • Prompt - Tells the LLM how to think and act&lt;/p&gt;

&lt;p&gt;The LLM runs in a tight loop –&lt;br&gt;
    1.  Pick a tool &amp;amp; craft its input.&lt;br&gt;
    2.  Receive the result (observation).&lt;br&gt;
    3.  Decide what to do next.&lt;br&gt;
    4.  Repeat until a stopping condition fires.&lt;/p&gt;
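&lt;p&gt;The loop above can be sketched in a few lines of plain Python (a toy mock, not real LangGraph; the fake_llm and TOOLS names are made up for illustration):&lt;/p&gt;

```python
# A toy version of the agent loop: a mock "LLM" picks a tool, observes the
# result, and repeats until it decides to stop.

TOOLS = {
    "add": lambda a, b: a + b,
    "square": lambda a: a * a,
}

def fake_llm(history):
    # Decide the next action from what has happened so far.
    if not history:
        return ("add", (2, 3))          # step 1: pick a tool and craft its input
    if len(history) == 1:
        return ("square", (history[-1],))
    return ("STOP", history[-1])        # step 4: a stopping condition fires

def run_agent():
    history = []
    while True:
        action, payload = fake_llm(history)
        if action == "STOP":
            return payload
        observation = TOOLS[action](*payload)  # step 2: receive the observation
        history.append(observation)            # step 3: decide what to do next

print(run_agent())  # 2 + 3 = 5, then 5 * 5 = 25, so prints 25
```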

&lt;p&gt;Plenty of frameworks promise to wrangle that loop; my favourite these days is LangGraph (yes, it has two g’s).&lt;/p&gt;


&lt;h2&gt;
  
  
  2 | LangGraph in 60 Seconds
&lt;/h2&gt;

&lt;p&gt;To put it simply, LangGraph represents an agent’s workflow as — surprise — a graph. These graph structures have three major elements –&lt;br&gt;
    • State - A snapshot of everything the agent needs to remember (often a TypedDict or Pydantic model).&lt;br&gt;
    • Nodes - Regular Python functions that do work and return an updated State.&lt;br&gt;
    • Edges - Functions that decide which node to run next.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Nodes do the work; edges tell the graph what to do next.&lt;/em&gt;&lt;/p&gt;
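&lt;p&gt;That division of labour can be sketched with plain dicts and functions (a conceptual toy, not the actual LangGraph API):&lt;/p&gt;

```python
# Nodes are plain functions that take and return a state dict; edges are a
# routing table that names the next node to run.

def fetch(state):
    return {**state, "data": [1, 2, 3]}

def summarise(state):
    return {**state, "summary": sum(state["data"])}

NODES = {"fetch": fetch, "summarise": summarise}
EDGES = {"START": "fetch", "fetch": "summarise", "summarise": "END"}

def run_graph(state):
    node = EDGES["START"]
    while node != "END":
        state = NODES[node](state)   # nodes do the work
        node = EDGES[node]           # edges tell what to do next
    return state

print(run_graph({})["summary"])  # 1 + 2 + 3, so prints 6
```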

&lt;p&gt;&lt;strong&gt;Defining a Graph&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first thing to do when defining a graph is to define its State. The State consists of the graph’s schema and reducer functions that explain how to update the state. The State’s schema is the input schema for all of the Nodes and Edges, and Nodes use the reducer functions to update the State of the system.&lt;/p&gt;
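&lt;p&gt;Here is a rough stdlib-only sketch of that idea (the REDUCERS table and apply_update helper are ours, standing in for LangGraph’s reducer machinery):&lt;/p&gt;

```python
# A node returns a partial update, and the reducer for each state channel
# decides how to merge it in. Appending to "messages" rather than
# overwriting mirrors how an append-style message reducer behaves.

REDUCERS = {
    "messages": lambda old, new: old + new,   # append
    "attempts": lambda old, new: new,         # overwrite (the usual default)
}

def apply_update(state, update):
    for key, value in update.items():
        state[key] = REDUCERS[key](state[key], value)
    return state

state = {"messages": [], "attempts": 0}
state = apply_update(state, {"messages": ["hi"], "attempts": 1})
state = apply_update(state, {"messages": ["hello back"], "attempts": 2})
print(state)  # {'messages': ['hi', 'hello back'], 'attempts': 2}
```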

&lt;p&gt;Usually, all Nodes in a graph communicate with a single schema, i.e., they read and write to the same state channels. But we can also have nodes write to a private channel (a Private State) for internal communication.&lt;/p&gt;

&lt;p&gt;There are two special nodes - START and END.&lt;br&gt;
    • The START node represents the node that sends user input into the graph. Its objective is to determine which node to call first.&lt;br&gt;
    • The END node represents the terminal node and is used to denote edges that have no action after they are done.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
from langgraph.graph import START, END

graph.add_edge(START, "node_a")   # entry-point
graph.add_edge("node_a", END)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edges define how different nodes communicate with each other and are central to guiding how the agent works. There are four types of Edges –&lt;br&gt;
    1.  Normal Edges: Go directly from one node to the next.&lt;br&gt;
    2.  Conditional Edges: Call a function to determine which node(s) to go to next.&lt;br&gt;
    3.  Entry Point: Which node to call first when user input arrives.&lt;br&gt;
    4.  Conditional Entry Point: Call a function to determine which node(s) to call first when user input arrives.&lt;/p&gt;
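&lt;p&gt;A conditional edge, for example, is just a function from State to the name of the next node. A hypothetical router (all names ours, for illustration) might look like:&lt;/p&gt;

```python
# Route SQL-looking input to a SQL node; everything else goes straight
# to END. In real LangGraph this function would be registered with
# add_conditional_edges; here it is just called directly.

def route(state):
    text = state["input"].strip().lower()
    if text.startswith(("select", "insert", "update", "delete")):
        return "run_sql"
    return "END"

print(route({"input": "SELECT 1"}))   # run_sql
print(route({"input": "hi there"}))   # END
```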

&lt;p&gt;And a node can have multiple outgoing edges, which are run in parallel as part of the next superstep.&lt;/p&gt;
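&lt;p&gt;The superstep idea can be sketched like this (illustrative names; real LangGraph handles the fan-out and merge for you):&lt;/p&gt;

```python
# When a node has several outgoing edges, all target nodes run on the same
# snapshot of the state, and their partial updates are merged afterwards.

def sentiment(state):
    return {"sentiment": "positive" if "great" in state["text"] else "neutral"}

def word_count(state):
    return {"words": len(state["text"].split())}

def superstep(state, nodes):
    updates = [node(dict(state)) for node in nodes]  # fan out on one snapshot
    for update in updates:                           # merge results downstream
        state.update(update)
    return state

state = superstep({"text": "langgraph is great"}, [sentiment, word_count])
print(state["sentiment"], state["words"])  # positive 3
```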

&lt;p&gt;Now, let’s dive deeper by looking at a practical example of building a Text-to-SQL agent using LangGraph. We’ll walk through the key parts to get a sense of how it all comes together.&lt;/p&gt;


&lt;h2&gt;
  
  
  3 | A Practical Walkthrough — Text-to-SQL Agent
&lt;/h2&gt;

&lt;p&gt;We’ll unpack the script step by step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.1 Environment &amp;amp; LLM&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model_name = os.getenv("GPT_MODEL", "openai:gpt-4.1")
llm = init_chat_model(model=model_name, temperature=0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why a temperature of 0?&lt;br&gt;
For production-ish SQL it’s safer to be deterministic; one flaky token can break a query.&lt;/p&gt;

&lt;p&gt;Swap in your favourite model: point GPT_MODEL at an Ollama model, or leave it as-is to hit OpenAI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.2 Connecting to Postgres&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;connection_string = (
    f"postgresql://{username}:{password}@{host}:{port}/{database}"
)
db = SQLDatabase.from_uri(connection_string)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;SQLDatabase is LangChain’s lightweight wrapper around SQLAlchemy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.3 Tooling up with SQLDatabaseToolkit&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;toolkit = SQLDatabaseToolkit(db=db, llm=llm)
tools = toolkit.get_tools()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The toolkit auto-generates three tools:&lt;/p&gt;

&lt;p&gt;    • sql_db_list_tables - Lists every table the DB user can see&lt;br&gt;
    • sql_db_schema - Returns the CREATE TABLE DDL&lt;br&gt;
    • sql_db_query - Executes arbitrary SQL and returns rows&lt;/p&gt;

&lt;p&gt;Wrapping each of these in a ToolNode means they can plug straight into a graph node.&lt;/p&gt;
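&lt;p&gt;Conceptually, such a tool-executing node reads the requested tool call from the last message, runs the matching tool, and appends the result. The dict shapes below are simplified stand-ins for LangChain’s message classes, not the real API:&lt;/p&gt;

```python
# A minimal tool-node sketch: look up the tool named in the last message's
# tool_call, execute it, and append the observation as a new message.

TOOLS = {
    "sql_db_list_tables": lambda _: "employees, departments",
}

def tool_node(state):
    call = state["messages"][-1]["tool_call"]
    result = TOOLS[call["name"]](call.get("args"))
    return {"messages": state["messages"] + [{"role": "tool", "content": result}]}

state = {"messages": [{"role": "ai", "tool_call": {"name": "sql_db_list_tables"}}]}
print(tool_node(state)["messages"][-1]["content"])  # employees, departments
```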

&lt;p&gt;Next up, we define several Nodes. Each node handles a distinct task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.4 Defining the Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;    • list_tables - Fires sql_db_list_tables unconditionally so the agent always starts with a map of the territory.&lt;br&gt;
    • call_get_schema - Asks the LLM to decide which tables matter, then calls sql_db_schema.&lt;br&gt;
    • generate_query - Crafts the SQL based on the user question and schema, without forcing a tool call, which gives the LLM space.&lt;br&gt;
    • check_query - A mini-lint pass that looks for the usual blunders (NOT IN + NULL, UNION vs UNION ALL, etc.). If it rewrites, it preserves the original message ID so downstream edges still line up.&lt;br&gt;
    • run_query - Finally executes the SQL and streams back rows.&lt;/p&gt;

&lt;p&gt;All five return a dict shaped like {"messages": […]} because that’s the contract of MessagesState.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.5 Wiring the Edges&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Edges define the logic for transitioning from one node to another. In our Text-to-SQL agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.add_edge(START, "list_tables")
builder.add_edge("list_tables", "call_get_schema")
builder.add_edge("call_get_schema", "get_schema")
builder.add_edge("get_schema", "generate_query")
builder.add_conditional_edges("generate_query", should_continue)
builder.add_edge("check_query", "run_query")
builder.add_edge("run_query", "generate_query")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A quick translation to English (refer to the graph in section 3.7) –&lt;br&gt;
    • START → list_tables — always. We begin by initializing database awareness.&lt;br&gt;
    • list_tables → call_get_schema — always.&lt;br&gt;
    • call_get_schema → get_schema — always.&lt;br&gt;
    • get_schema → generate_query — always.&lt;/p&gt;

&lt;p&gt;The edge out of generate_query is conditional (should_continue) –&lt;br&gt;
    • generate_query → END if the LLM didn’t produce a tool call (e.g. a simple natural-language answer).&lt;br&gt;
    • generate_query → check_query if it did request a SQL execution.&lt;/p&gt;

&lt;p&gt;Once checked and corrected, the workflow continues to run_query and loops back to generate_query if more queries or refinements are needed –&lt;br&gt;
    • check_query → run_query → generate_query — a feedback loop that lets the agent react to query results until it’s happy.&lt;/p&gt;
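&lt;p&gt;A plausible shape for should_continue, sketched with plain dicts rather than the article’s exact code:&lt;/p&gt;

```python
# If the last AI message asked for a tool call, route to check_query;
# otherwise the agent has answered in plain language, so finish.

END = "__end__"

def should_continue(state):
    last = state["messages"][-1]
    if last.get("tool_calls"):
        return "check_query"
    return END

print(should_continue({"messages": [{"tool_calls": [{"name": "sql_db_query"}]}]}))  # check_query
print(should_continue({"messages": [{"content": "The average is $92k."}]}))          # __end__
```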

&lt;p&gt;Because edges are ordinary Python functions, adding more intelligence (rate limiting, logging, routing by cost, etc.) is a one-liner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.6 Kicking the tyres&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once our graph is set up, we simply query the agent with natural language. For instance, asking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query_agent(agent, "What is the average salary by department?")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;LangGraph’s stream() method yields chunks as they arrive, so in the terminal you’ll watch the thought-process unroll:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; list_tables
&amp;gt; get_schema("employees")
&amp;gt; run_query("SELECT department, AVG(salary)…")
Average salary by department:
• Sales – $92 k
• Engineering – $131 k
• …

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
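&lt;p&gt;Under stated assumptions (a fake generator standing in for LangGraph’s stream(), and simplified chunk shapes), query_agent might consume the stream like this:&lt;/p&gt;

```python
# Chunks arrive as nodes complete; we print each hop and keep the last
# message as the final answer.

def fake_stream(question):
    # question is unused by this mock; a real graph would thread it through.
    yield {"node": "list_tables", "messages": ["employees"]}
    yield {"node": "run_query", "messages": ["SELECT department, AVG(salary) FROM employees GROUP BY department"]}
    yield {"node": "generate_query", "messages": ["Average salary by department: Sales $92k, Engineering $131k"]}

def query_agent(question):
    last = None
    for chunk in fake_stream(question):
        print(chunk["node"], chunk["messages"][-1])
        last = chunk
    return last["messages"][-1]

answer = query_agent("What is the average salary by department?")
```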



&lt;p&gt;&lt;strong&gt;3.7 Visualising the Flow&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;png_bytes = agent.get_graph().draw_mermaid_png()
with open("graph.png", "wb") as f:
    f.write(png_bytes)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mermaid diagrams are my new favourite doc-as-code trick. The generated graph.png looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtj1dv1mhzoi9k49u927.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtj1dv1mhzoi9k49u927.png" alt="mermaid image" width="308" height="645"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(Handy for both README screenshots and onboarding the next engineer who asks “why another framework?”)&lt;/p&gt;




&lt;h2&gt;
  
  
  4 | Observability 101 — Why You Need It
&lt;/h2&gt;

&lt;p&gt;Shipping an agent to prod without observability is like flying a plane with the cockpit lights off.&lt;/p&gt;

&lt;p&gt;Unlike a deterministic API, an LLM-powered workflow can loop, branch, and call tools dozens of times per request — each hop incurring latency, tokens, and cost. To keep users happy (and cloud bills sane) you need to see:&lt;br&gt;
    • What the agent did (the exact prompts, queries, tool calls).&lt;br&gt;
    • When it did it (latency of every hop, overall wall-clock).&lt;br&gt;
    • How much it cost (tokens, image credits, embeddings, etc.).&lt;br&gt;
    • Why it failed (stack traces, model errors, tool exceptions).&lt;/p&gt;

&lt;p&gt;That trio — logs, metrics, and traces — is what the wider software world calls observability. For LLM apps, the king signal is the trace: a tree that starts at the user request and fans out into every child run, tool invocation, and model call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter LangSmith&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LangSmith is LangChain’s hosted (or self-hosted) observability and evaluation platform. Flip one environment variable and every LangChain/LangGraph call streams to a LangSmith backend, where every run can be explored in detail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wiring our Text-to-SQL Agent to LangSmith&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The good news: we already did. Notice the decorator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langsmith import traceable

@traceable(name="text2sql_agent_main")
def main():
    ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Any function wrapped in @traceable becomes a Run in LangSmith. Child runs (LLM calls, tool invocations) are auto-captured, so the whole graph — list_tables → get_schema → generate_query … — shows up as an explorable trace.&lt;/p&gt;

&lt;p&gt;Export two environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export LANGSMITH_TRACING="true"
export LANGSMITH_API_KEY="ls_live_xxx"   # grab from app.langchain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run your script again, open the LangSmith dashboard, and you’ll see something like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2h278bx2e4bz1w6u0v8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2h278bx2e4bz1w6u0v8.png" alt="langsmith traces" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  5 | Take-Home Checklist
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Turn on tracing in dev today — bugs are cheaper before launch.
• Tag your runs (tags=["prod", "user:42"]) so prod traffic doesn’t drown out staging.
• Watch the waterfall; anything &amp;gt;1 s is a conversion killer.
• Set a TTL once you’re happy with retention; traces pile up fast.
• Automate evals — they’re the CI tests of LLM engineering.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;h2&gt;
  
  
  6 | What’s Next?
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Swap Postgres for Snowflake, BigQuery, or PlanetScale — The only line you’ll touch is the connection string.
• Add guardrails — Insert a node after generate_query that runs pglast or sqlfluff to enforce style or limit cost.
• Size-up with parallel branches — LangGraph supports fan-out edges — run sentiment analysis and SQL queries at the same time, then merge the results downstream.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
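&lt;p&gt;As a taste of the guardrails idea, here is a deliberately simple rule-based check (our own, far weaker than pglast or sqlfluff) that blocks mutating statements and caps result size before run_query fires:&lt;/p&gt;

```python
# A minimal guardrail node body: reject statements that mutate data, and
# append a LIMIT if the query has none.

BANNED = ("drop", "delete", "truncate", "update", "insert")

def guard_sql(query, max_limit=1000):
    q = query.strip().lower()
    if q.split()[0] in BANNED:
        raise ValueError(f"blocked statement: {q.split()[0]}")
    if "limit" not in q:
        query = f"{query.rstrip(';')} LIMIT {max_limit}"
    return query

print(guard_sql("SELECT department, AVG(salary) FROM employees GROUP BY department"))
# SELECT department, AVG(salary) FROM employees GROUP BY department LIMIT 1000
```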

&lt;p&gt;Give it a spin, break it, and ping me on 🐦 X or email with what you build.&lt;/p&gt;

&lt;p&gt;Happy graphing!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Introducing myself to the world!</title>
      <dc:creator>Snehitha Domakuntla</dc:creator>
      <pubDate>Sat, 21 Jun 2025 07:25:39 +0000</pubDate>
      <link>https://dev.to/snehitha_domakuntla_86fa5/introducing-myself-to-the-world-1opj</link>
      <guid>https://dev.to/snehitha_domakuntla_86fa5/introducing-myself-to-the-world-1opj</guid>
      <description>&lt;p&gt;Hello World!&lt;/p&gt;

&lt;p&gt;I’m an engineer😋.&lt;/p&gt;

&lt;p&gt;I’ve been telling myself I’d start writing for over a year now, but couldn’t get myself to do it. So, I slapped it onto my 2025 New Year’s resolution (to create some kinda content) and here I am, finally working on it now with 194 days left in this year.&lt;/p&gt;

&lt;p&gt;So, Substack suggested a template for the first blog. It is to share my story - who I am, why I am blogging, why now, and details about my future blogs. Let’s go with that flow and lemme tell you guys about myself.&lt;/p&gt;




&lt;h2&gt;
  
  
  My story
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Who am I?
&lt;/h3&gt;

&lt;p&gt;As I said earlier — I’m an &lt;strong&gt;engineer&lt;/strong&gt;. Not the kind who fixes machines or builds bridges. I’m a &lt;strong&gt;Computer Science Engineer&lt;/strong&gt; who just wrapped up a master’s at the University of Houston. (You might’ve picked that up from the classic “Hello World” opening 😄).&lt;/p&gt;

&lt;p&gt;One of my courses—Digital Image Processing—introduced me to this thing called Gradient Descent. At the time, I had no clue about large language models or deep learning, but something about that algorithm sparked my curiosity.&lt;/p&gt;

&lt;p&gt;So I started digging.&lt;/p&gt;

&lt;p&gt;That rabbit hole led me to Andrew Ng’s Deep Learning specialization - built neural networks from scratch, learned about NN architecture and optimization algorithms, tuned hyperparameters, worked with PyTorch &amp;amp; TensorFlow.&lt;/p&gt;

&lt;p&gt;The idea that you could feed data into a system and have it learn to solve problems—sometimes better than humans—was mind-blowing. So I dove deeper into &lt;strong&gt;embeddings, transformers, NLP,&lt;/strong&gt; and all the wild stuff after that. Most of my projects and coursework since then have revolved around building AI applications.&lt;/p&gt;

&lt;p&gt;If I had to describe myself in two words: curious and thorough. I love going deep—really deep—into anything I learn.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My long-term dream?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To become a research scientist working on &lt;strong&gt;Conscious AI (AGI?)&lt;/strong&gt;. Cool, right?&lt;/p&gt;

&lt;p&gt;For now, I’m focused on stepping in that direction - by building AI systems that actually &lt;em&gt;solve real problems&lt;/em&gt;. I’m looking to work as an AI/ML/Foundational/Generative Engineer (yep, so many names 😅) at a company doing impactful work.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why am I blogging and why now?
&lt;/h3&gt;

&lt;p&gt;I realized I was spending a lot of time learning cool things, but none of it was documented. And if I didn’t start now, I never would. So here I am, excited to share them!&lt;/p&gt;

&lt;p&gt;And to be honest, this blog isn’t just for others. It’s also for &lt;em&gt;me&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I want a place to come back to a year from now and be like, “Ohh right, that’s how this thing worked.”&lt;/p&gt;

&lt;p&gt;That means my blogs will be simple, honest, and easy to follow. No over-engineering. Just real learnings, as they happen.&lt;/p&gt;




&lt;h2&gt;
  
  
  What to Expect Here:
&lt;/h2&gt;

&lt;p&gt;I’ll be posting once a week (that’s the goal!) about things I’m learning in:&lt;/p&gt;

&lt;p&gt;• AI / ML&lt;/p&gt;

&lt;p&gt;• System Design&lt;/p&gt;

&lt;p&gt;• Cool tech I stumble on&lt;/p&gt;

&lt;p&gt;• Maybe even some project breakdowns or dev rants&lt;/p&gt;

&lt;p&gt;If you’re learning this stuff too—or just curious about how engineers in the AI space think—this blog might be for you.&lt;/p&gt;

&lt;p&gt;Also, I’d love feedback. If you’ve got thoughts on what I should write about, or how I write it, let me know!&lt;/p&gt;




&lt;p&gt;That’s all for now.&lt;/p&gt;

&lt;p&gt;Took me 1.5 hours to write this (why does nobody talk about how hard writing is??), but I’m glad I did.&lt;/p&gt;

&lt;p&gt;Catch you in the next one —&lt;/p&gt;

&lt;p&gt;Snehitha :)&lt;/p&gt;

</description>
      <category>techjourney</category>
      <category>buildinpublic</category>
      <category>womenintech</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
