<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: joshinii</title>
    <description>The latest articles on DEV Community by joshinii (@joshinii).</description>
    <link>https://dev.to/joshinii</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1453283%2Fa89e1ab7-ac25-4fbc-9ea7-df8a49e17113.jpeg</url>
      <title>DEV Community: joshinii</title>
      <link>https://dev.to/joshinii</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/joshinii"/>
    <language>en</language>
    <item>
      <title>State, Memory, and Context: What AI Actually “Remembers”</title>
      <dc:creator>joshinii</dc:creator>
      <pubDate>Fri, 23 Jan 2026 06:44:04 +0000</pubDate>
      <link>https://dev.to/joshinii/state-memory-and-context-what-ai-actually-remembers-328n</link>
      <guid>https://dev.to/joshinii/state-memory-and-context-what-ai-actually-remembers-328n</guid>
      <description>&lt;p&gt;Have you noticed this before?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The model forgets something it clearly knew earlier
&lt;/li&gt;
&lt;li&gt;The same input gives a different answer later
&lt;/li&gt;
&lt;li&gt;Adding a bit of “context” suddenly fixes the behavior
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren’t model quirks.&lt;br&gt;&lt;br&gt;
They’re signals that &lt;strong&gt;state is unclear in the system design&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This post builds a clean mental model for how &lt;strong&gt;state, memory, and context&lt;/strong&gt; work in AI-enabled systems—and how to design without accidental coupling.&lt;/p&gt;




&lt;h2&gt;Traditional Systems: Where State Is Clear&lt;/h2&gt;

&lt;p&gt;State in traditional applications is usually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stored in databases&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Permanent or semi-permanent information that the system relies on.&lt;br&gt;&lt;br&gt;
&lt;em&gt;Example:&lt;/em&gt; Customer records, order history, or inventory levels in an Oracle or PostgreSQL database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cached intentionally&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Temporary copies of data to improve performance or reduce repeated computations.&lt;br&gt;&lt;br&gt;
&lt;em&gt;Example:&lt;/em&gt; Session data stored in Redis or frequently accessed product catalog data cached in memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Passed explicitly through APIs&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Data sent between services or components as part of a request.&lt;br&gt;&lt;br&gt;
&lt;em&gt;Example:&lt;/em&gt; A service call that includes a user ID and account type to fetch specific account information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auditable and recoverable&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Every state change can be tracked and traced for debugging or compliance.&lt;br&gt;&lt;br&gt;
&lt;em&gt;Example:&lt;/em&gt; Versioned financial transactions, order status changes, or audit logs that can reconstruct system behavior at any point.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most importantly, this state is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Explicit&lt;/strong&gt; – you know exactly where it lives
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Addressable&lt;/strong&gt; – you can query, update, or delete it
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deterministic&lt;/strong&gt; – the same inputs produce predictable results
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When something breaks, you can trace &lt;strong&gt;what changed&lt;/strong&gt; and &lt;strong&gt;where&lt;/strong&gt;, which makes debugging reliable.&lt;/p&gt;

&lt;p&gt;AI challenges this—not by removing state, but by &lt;strong&gt;hiding where developers expect it to live&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;What the Model Does &lt;em&gt;Not&lt;/em&gt; Have&lt;/h2&gt;

&lt;p&gt;Let’s start with a hard boundary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Models do not have memory.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does not remember previous requests&lt;/li&gt;
&lt;li&gt;Does not retain conversation history&lt;/li&gt;
&lt;li&gt;Does not accumulate state over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each call is independent.&lt;/p&gt;

&lt;p&gt;If something feels “remembered,” your system &lt;strong&gt;supplied it again&lt;/strong&gt;.&lt;/p&gt;
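&lt;p&gt;A minimal sketch of that boundary, in Python for brevity. &lt;code&gt;fake_model&lt;/code&gt; is a stand-in, not a real API: its only knowledge is the prompt it receives on each call.&lt;/p&gt;

```python
# Stand-in for a stateless model call: a pure function of its input.
def fake_model(prompt: str) -> str:
    if "name is Ada" in prompt:
        return "Your name is Ada."
    return "I don't know your name."

# Call 1: the fact is inside the prompt, so the model can use it.
first = fake_model("My name is Ada. What is my name?")

# Call 2: a fresh call with no history. Nothing was "remembered".
second = fake_model("What is my name?")

# Call 3: the system re-supplies the earlier turn, restoring the "memory".
history = "My name is Ada."
third = fake_model(history + " What is my name?")

print(first)   # Your name is Ada.
print(second)  # I don't know your name.
print(third)   # Your name is Ada.
```

&lt;p&gt;The third call only works because the application resent the history; the model itself carried nothing between calls.&lt;/p&gt;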




&lt;h2&gt;Three Concepts That Must Stay Separate&lt;/h2&gt;

&lt;p&gt;Many AI bugs come from mixing these up.&lt;/p&gt;

&lt;h3&gt;Context&lt;/h3&gt;

&lt;p&gt;Context is &lt;strong&gt;input&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt text&lt;/li&gt;
&lt;li&gt;Instructions&lt;/li&gt;
&lt;li&gt;Examples&lt;/li&gt;
&lt;li&gt;Retrieved documents&lt;/li&gt;
&lt;li&gt;Conversation history you resend&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exists only for one request&lt;/li&gt;
&lt;li&gt;Token-limited&lt;/li&gt;
&lt;li&gt;Consumed, not stored&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of context like &lt;strong&gt;function arguments&lt;/strong&gt;, not variables.&lt;/p&gt;




&lt;h3&gt;Memory&lt;/h3&gt;

&lt;p&gt;Memory is &lt;strong&gt;external state you manage&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stored chat history&lt;/li&gt;
&lt;li&gt;User preferences&lt;/li&gt;
&lt;li&gt;Retrieved embeddings&lt;/li&gt;
&lt;li&gt;Cached tool outputs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lives outside the model&lt;/li&gt;
&lt;li&gt;Must be fetched intentionally&lt;/li&gt;
&lt;li&gt;Must be injected back into context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If memory feels implicit, the design is already fragile.&lt;/p&gt;
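&lt;p&gt;The fetch-and-inject cycle can be sketched like this; &lt;code&gt;memory_store&lt;/code&gt; and &lt;code&gt;fake_model&lt;/code&gt; are illustrative stand-ins, not real services:&lt;/p&gt;

```python
# Memory lives outside the model and is injected per request.
memory_store = {"user-42": ["User prefers metric units."]}

def fake_model(prompt: str) -> str:
    # Stub standing in for an AI call; keys off the injected note.
    return "metric" if "metric" in prompt else "unspecified"

def answer(user_id: str, question: str) -> str:
    # 1. Fetch memory intentionally; it is never implicit.
    notes = memory_store.get(user_id, [])
    # 2. Inject it back into the context for this one request.
    prompt = "\n".join(notes) + "\n" + question
    return fake_model(prompt)

print(answer("user-42", "Which units should I use?"))  # metric
print(answer("user-99", "Which units should I use?"))  # unspecified
```

&lt;p&gt;Every piece of "memory" the second user lacks is simply data the application never stored or never injected.&lt;/p&gt;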




&lt;h3&gt;State&lt;/h3&gt;

&lt;p&gt;State is &lt;strong&gt;system-level truth&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Workflow progress&lt;/li&gt;
&lt;li&gt;Decisions made&lt;/li&gt;
&lt;li&gt;User-visible outcomes&lt;/li&gt;
&lt;li&gt;Audit logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;State must:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persist beyond a single request&lt;/li&gt;
&lt;li&gt;Be inspectable&lt;/li&gt;
&lt;li&gt;Be owned by the application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI can influence state, but the actual state must be stored and controlled outside the model.&lt;/p&gt;
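&lt;p&gt;A sketch of that ownership, with illustrative names throughout: the model output is only a suggestion, and the application writes the authoritative record.&lt;/p&gt;

```python
# The application owns state; the model only proposes.
workflow_state = {}  # plays the role of a database here

def fake_model_review(ticket_text: str) -> str:
    # Stand-in for an AI call that returns a suggestion, not a decision.
    return "approve" if "refund under policy" in ticket_text else "review"

def process(ticket_id: str, text: str) -> str:
    suggestion = fake_model_review(text)
    # The application decides the transition and records it durably,
    # keeping both the suggestion and the decision inspectable.
    decision = suggestion if suggestion == "approve" else "pending-human"
    workflow_state[ticket_id] = {"decision": decision, "suggested": suggestion}
    return decision

process("T-1", "refund under policy, low amount")
process("T-2", "unusual chargeback pattern")
print(workflow_state)
```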




&lt;h2&gt;The Most Common Design Failure&lt;/h2&gt;

&lt;p&gt;Many systems accidentally treat the model as stateful.&lt;/p&gt;

&lt;p&gt;This shows up as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Relying on conversation flow instead of stored data&lt;/li&gt;
&lt;li&gt;Assuming consistency without re-supplying context&lt;/li&gt;
&lt;li&gt;Letting decisions live only in generated text&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Non-reproducible behavior&lt;/li&gt;
&lt;li&gt;Debugging without ground truth&lt;/li&gt;
&lt;li&gt;Silent behavior drift&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If restarting your service changes outcomes, state is leaking.&lt;/p&gt;




&lt;h2&gt;A Cleaner Mental Model&lt;/h2&gt;

&lt;p&gt;A more stable framing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Models compute&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Systems remember&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Applications decide&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model is a stateless function.&lt;br&gt;&lt;br&gt;
Your system decides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What context to assemble&lt;/li&gt;
&lt;li&gt;What outputs to persist&lt;/li&gt;
&lt;li&gt;What becomes authoritative state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once this boundary is clear, complexity drops sharply.&lt;/p&gt;




&lt;h2&gt;A Simple Rule&lt;/h2&gt;

&lt;p&gt;If something matters tomorrow, it &lt;strong&gt;cannot live only in today’s prompt&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Store it.&lt;br&gt;&lt;br&gt;
Version it.&lt;br&gt;&lt;br&gt;
Own it.&lt;/p&gt;

&lt;p&gt;AI can assist—but it should never quietly carry state for you.&lt;/p&gt;




&lt;h2&gt;Where This Leads Next&lt;/h2&gt;

&lt;p&gt;That’s where the next post goes: &lt;strong&gt;Context and Data Flow: Feeding AI the Right Information&lt;/strong&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Who Owns the Decision When AI Is Involved?</title>
      <dc:creator>joshinii</dc:creator>
      <pubDate>Fri, 23 Jan 2026 06:19:14 +0000</pubDate>
      <link>https://dev.to/joshinii/who-owns-the-decision-when-ai-is-involved-3fd1</link>
      <guid>https://dev.to/joshinii/who-owns-the-decision-when-ai-is-involved-3fd1</guid>
      <description>&lt;p&gt;When AI becomes part of an application, a natural question comes up:&lt;/p&gt;

&lt;p&gt;If AI influenced the outcome, who is actually responsible for the decision?&lt;/p&gt;

&lt;p&gt;This question matters more than model choice or tooling, because it directly affects how systems are designed, tested, and trusted.&lt;/p&gt;




&lt;h2&gt;Decision Ownership Has Always Existed&lt;/h2&gt;

&lt;p&gt;In traditional systems, decision ownership is usually clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business rules decide eligibility
&lt;/li&gt;
&lt;li&gt;Services enforce constraints
&lt;/li&gt;
&lt;li&gt;Workflows control state transitions
&lt;/li&gt;
&lt;li&gt;Databases protect consistency
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even in distributed systems, the system itself owns decisions.&lt;br&gt;&lt;br&gt;
Code executes rules, and responsibility is traceable.&lt;/p&gt;

&lt;p&gt;AI changes &lt;strong&gt;how decisions are informed&lt;/strong&gt;, not &lt;strong&gt;who owns them&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;What AI Contributes — and What It Doesn’t&lt;/h2&gt;

&lt;p&gt;An AI component does not make decisions in the architectural sense.&lt;/p&gt;

&lt;p&gt;It provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Interpretations&lt;/li&gt;
&lt;li&gt;Classifications&lt;/li&gt;
&lt;li&gt;Recommendations&lt;/li&gt;
&lt;li&gt;Summaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These outputs are &lt;strong&gt;inputs to a decision&lt;/strong&gt;, not the decision itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
An AI labels a support ticket as “high priority.”&lt;br&gt;&lt;br&gt;
The system decides whether to escalate, notify, or auto-respond.&lt;/p&gt;

&lt;p&gt;AI informs.&lt;br&gt;&lt;br&gt;
The system acts.&lt;/p&gt;
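&lt;p&gt;Sketched in code, the label is just one input to deterministic routing; &lt;code&gt;classify_priority&lt;/code&gt; is a stub standing in for the AI call:&lt;/p&gt;

```python
# "AI informs, the system acts": the label is an input to routing
# logic, never the action itself.
def classify_priority(ticket_text: str) -> str:
    # Stand-in for an AI classification call.
    return "high" if "outage" in ticket_text.lower() else "normal"

def route(ticket_text: str, customer_tier: str) -> str:
    label = classify_priority(ticket_text)            # AI contribution
    if label == "high" and customer_tier == "enterprise":
        return "escalate"                             # system decision
    if label == "high":
        return "notify-oncall"
    return "auto-respond"

print(route("Production outage since 09:00", "enterprise"))  # escalate
print(route("How do I reset my password?", "free"))          # auto-respond
```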




&lt;h2&gt;Why This Boundary Matters&lt;/h2&gt;

&lt;p&gt;When AI is treated as the decision-maker:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Failures become hard to explain&lt;/li&gt;
&lt;li&gt;Responsibility becomes unclear&lt;/li&gt;
&lt;li&gt;Safety checks are easy to bypass&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From an architectural standpoint, “the AI decided” is not a useful explanation.&lt;/p&gt;

&lt;p&gt;Clear systems ensure that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decisions are enforced by deterministic logic
&lt;/li&gt;
&lt;li&gt;Constraints live outside the AI
&lt;/li&gt;
&lt;li&gt;Outcomes remain attributable to the system
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;A Common Architectural Pattern&lt;/h2&gt;

&lt;p&gt;In practice, many AI-enabled systems follow this flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI evaluates context and produces a recommendation
&lt;/li&gt;
&lt;li&gt;Deterministic logic validates constraints
&lt;/li&gt;
&lt;li&gt;Workflows decide the next action
&lt;/li&gt;
&lt;li&gt;Auditing records the outcome
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This keeps authority where guarantees still exist.&lt;/p&gt;

&lt;p&gt;AI contributes &lt;strong&gt;judgment&lt;/strong&gt;, not &lt;strong&gt;authority&lt;/strong&gt;.&lt;/p&gt;
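&lt;p&gt;The four steps above can be sketched as a single pipeline. Everything here is illustrative: &lt;code&gt;ai_recommend&lt;/code&gt; is a stub, and the constraint values are invented for the example.&lt;/p&gt;

```python
# 1. recommend -> 2. validate -> 3. decide -> 4. audit
audit_log = []

def ai_recommend(context: dict) -> dict:
    # 1. Stand-in for the AI evaluation step.
    return {"action": "refund", "confidence": 0.62}

def validate(rec: dict, context: dict) -> bool:
    # 2. Deterministic constraints live outside the AI.
    if context["amount"] > 100:        # invented business limit
        return False
    return rec["confidence"] >= 0.5

def decide(context: dict) -> str:
    rec = ai_recommend(context)
    # 3. The workflow, not the model, chooses the next action.
    outcome = rec["action"] if validate(rec, context) else "manual-review"
    # 4. The outcome is recorded so behavior stays attributable.
    audit_log.append({"context": context, "rec": rec, "outcome": outcome})
    return outcome

print(decide({"amount": 40}))   # refund
print(decide({"amount": 500}))  # manual-review
```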




&lt;h2&gt;AI Components vs Agentic Behavior&lt;/h2&gt;

&lt;p&gt;This distinction helps avoid confusion.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;AI component&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Produces an output when invoked
&lt;/li&gt;
&lt;li&gt;Has no control over next steps
&lt;/li&gt;
&lt;li&gt;Operates within strict boundaries
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An &lt;strong&gt;agentic system&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses AI output to drive actions
&lt;/li&gt;
&lt;li&gt;Orchestrates multiple steps
&lt;/li&gt;
&lt;li&gt;Is deliberately designed to act autonomously
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Agentic behavior is a system-level design choice.&lt;br&gt;&lt;br&gt;
Responsibility still belongs to the system — not the model.&lt;/p&gt;




&lt;h2&gt;Practical Implications for Developers&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Treat AI output as a recommendation, not a command
&lt;/li&gt;
&lt;li&gt;Keep final decisions deterministic
&lt;/li&gt;
&lt;li&gt;Add explicit checks for high-risk actions
&lt;/li&gt;
&lt;li&gt;Avoid explanations that stop at “AI made the call”
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These practices keep systems understandable even when reasoning is probabilistic.&lt;/p&gt;




&lt;h2&gt;A Simple Mental Model&lt;/h2&gt;

&lt;p&gt;Think of AI as a senior advisor.&lt;/p&gt;

&lt;p&gt;It can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Surface patterns you might miss
&lt;/li&gt;
&lt;li&gt;Provide strong suggestions
&lt;/li&gt;
&lt;li&gt;Add useful context
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But it does not approve decisions.&lt;/p&gt;

&lt;p&gt;That responsibility remains with the system you design.&lt;/p&gt;




&lt;h2&gt;Closing Thought&lt;/h2&gt;

&lt;p&gt;AI can influence decisions, but it should not own them.&lt;/p&gt;

&lt;p&gt;Next in this series: &lt;a href="https://dev.to/joshinii/state-memory-and-context-what-ai-actually-remembers-328n"&gt;State, Memory, and Context: What AI Actually “Remembers”&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>discuss</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>If AI outputs aren’t guaranteed, how do systems stay reliable?</title>
      <dc:creator>joshinii</dc:creator>
      <pubDate>Thu, 22 Jan 2026 23:33:27 +0000</pubDate>
      <link>https://dev.to/joshinii/if-ai-outputs-arent-guaranteed-how-do-systems-stay-reliable-3n9o</link>
      <guid>https://dev.to/joshinii/if-ai-outputs-arent-guaranteed-how-do-systems-stay-reliable-3n9o</guid>
      <description>&lt;p&gt;When AI becomes part of an application, the first thing that starts to feel less clear is the contract.&lt;/p&gt;

&lt;p&gt;What does correctness mean now?&lt;br&gt;
What can still be validated?&lt;br&gt;
Where do guarantees actually live?&lt;/p&gt;

&lt;p&gt;Contracts don’t disappear — they shift.&lt;/p&gt;




&lt;h2&gt;Traditional Contracts: The Baseline We’re Used To&lt;/h2&gt;

&lt;p&gt;In most Java-based or similar systems, contracts clearly define expectations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input&lt;/strong&gt; — what data is allowed and in what format
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Behavior&lt;/strong&gt; — what the system does with valid input
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output&lt;/strong&gt; — what is returned and how it is structured
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Errors&lt;/strong&gt; — how failures are reported
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stability&lt;/strong&gt; — what remains consistent over time
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Concrete example&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A REST API that accepts a &lt;code&gt;customerId&lt;/code&gt; guarantees:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only valid IDs are processed
&lt;/li&gt;
&lt;li&gt;The same request produces the same response
&lt;/li&gt;
&lt;li&gt;Failures surface as explicit errors
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These guarantees are why systems are predictable, testable, and easy to compose.&lt;/p&gt;




&lt;h2&gt;Where AI Changes the Shape of a Contract&lt;/h2&gt;

&lt;p&gt;AI components still participate in contracts — but the &lt;strong&gt;nature of the guarantees changes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AI does not execute fixed logic paths.&lt;br&gt;&lt;br&gt;
It evaluates information and produces an output that is &lt;em&gt;likely&lt;/em&gt; to be useful.&lt;/p&gt;

&lt;p&gt;That difference shows up at the boundary:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Traditional Component&lt;/th&gt;
&lt;th&gt;AI Component&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Input&lt;/td&gt;
&lt;td&gt;Strict schema&lt;/td&gt;
&lt;td&gt;Context-rich, sometimes incomplete&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Behavior&lt;/td&gt;
&lt;td&gt;Deterministic execution&lt;/td&gt;
&lt;td&gt;Inference-based reasoning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Output&lt;/td&gt;
&lt;td&gt;Exact and repeatable&lt;/td&gt;
&lt;td&gt;Reasonable, may vary slightly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Failure&lt;/td&gt;
&lt;td&gt;Errors / exceptions&lt;/td&gt;
&lt;td&gt;Low confidence, ambiguity&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A rule-based system classifies a support ticket using fixed conditions
&lt;/li&gt;
&lt;li&gt;An AI component reads the ticket text and infers urgency and intent
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI result is often correct — but it is not guaranteed in the same way.&lt;/p&gt;
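&lt;p&gt;A side-by-side sketch of the two approaches. The rule-based classifier gives a hard guarantee; &lt;code&gt;infer_urgency&lt;/code&gt; is an illustrative stub that mimics keying off meaning in free text rather than structured fields.&lt;/p&gt;

```python
def rule_based_urgency(ticket: dict) -> str:
    # Fixed conditions: same input, same output, guaranteed.
    if ticket["tier"] == "enterprise" and ticket["sla_breached"]:
        return "urgent"
    return "normal"

def infer_urgency(ticket_text: str) -> str:
    # Stand-in for an AI component reading unstructured text.
    signals = ("down", "outage", "cannot access")
    text = ticket_text.lower()
    return "urgent" if any(s in text for s in signals) else "normal"

print(rule_based_urgency({"tier": "enterprise", "sla_breached": True}))  # urgent
print(infer_urgency("The whole site is down for our team"))              # urgent
```

&lt;p&gt;Both return "urgent" here, but only the first result follows from an enforceable contract; the second is an inference that happens to be right.&lt;/p&gt;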




&lt;h2&gt;Context and Intent&lt;/h2&gt;

&lt;p&gt;Two ideas explain why AI contracts feel different: &lt;strong&gt;context&lt;/strong&gt; and &lt;strong&gt;intent&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;Context&lt;/h3&gt;

&lt;p&gt;Context is everything surrounding a request:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Previous interactions
&lt;/li&gt;
&lt;li&gt;Related records
&lt;/li&gt;
&lt;li&gt;Business constraints
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
User message: &lt;em&gt;“This hasn’t arrived yet.”&lt;/em&gt;&lt;br&gt;&lt;br&gt;
Context may include order history, shipping status, and prior messages.&lt;/p&gt;

&lt;h3&gt;Intent&lt;/h3&gt;

&lt;p&gt;Intent is what the system infers the user is trying to achieve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Checking status
&lt;/li&gt;
&lt;li&gt;Escalating an issue
&lt;/li&gt;
&lt;li&gt;Requesting a refund
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traditional systems encode intent explicitly in endpoints or request types.&lt;br&gt;&lt;br&gt;
AI components &lt;strong&gt;infer intent from context&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That inference is powerful — and inherently less certain.&lt;/p&gt;




&lt;h2&gt;AI Components vs Agentic Systems&lt;/h2&gt;

&lt;p&gt;This distinction is important architecturally.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;AI component&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Produces an output (classification, summary, suggestion)&lt;/li&gt;
&lt;li&gt;Has no authority to act&lt;/li&gt;
&lt;li&gt;Is invoked within an existing flow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Summarizing a document or extracting intent from a message.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;agentic system&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses AI output to decide next steps&lt;/li&gt;
&lt;li&gt;Orchestrates multiple actions&lt;/li&gt;
&lt;li&gt;May operate over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A system that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reads a support ticket
&lt;/li&gt;
&lt;li&gt;Decides to fetch account data
&lt;/li&gt;
&lt;li&gt;Generates a response
&lt;/li&gt;
&lt;li&gt;Updates a ticketing system
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Agentic behavior is a &lt;strong&gt;system design choice&lt;/strong&gt;, not something inherent to AI models.&lt;/p&gt;




&lt;h2&gt;How Contracts Are Enforced Around AI&lt;/h2&gt;

&lt;p&gt;Because AI behavior is inference-based, contracts are enforced &lt;strong&gt;around&lt;/strong&gt; the AI component — not inside it.&lt;/p&gt;

&lt;p&gt;Three familiar architectural ideas make this work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wrapping&lt;/strong&gt; — AI is accessed through a service layer that prepares inputs and validates outputs
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bounding&lt;/strong&gt; — AI is limited to specific responsibilities and controlled data access
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supervision&lt;/strong&gt; — AI outputs are monitored, filtered, or reviewed when needed
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Concrete example&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An AI suggests a reply to a customer email:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The system controls what data the AI can see
&lt;/li&gt;
&lt;li&gt;The output is checked before sending
&lt;/li&gt;
&lt;li&gt;Low-confidence responses trigger fallback logic
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI assists — it does not own the outcome.&lt;/p&gt;
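&lt;p&gt;Wrapping, bounding, and supervision can be sketched together in one small service layer. &lt;code&gt;suggest_reply&lt;/code&gt; is a stand-in that returns a reply and a confidence score, and the field names and threshold are invented for the example.&lt;/p&gt;

```python
# Bounding: the AI only ever sees an allow-listed slice of the data.
ALLOWED_FIELDS = {"order_status", "eta"}

def suggest_reply(context: dict) -> tuple[str, float]:
    # Stand-in for the AI suggestion call.
    confidence = 0.9 if "eta" in context else 0.3
    return ("Your order ships tomorrow.", confidence)

def handle_email(customer_context: dict) -> str:
    # Wrapping: the service layer prepares the input...
    visible = {k: v for k, v in customer_context.items() if k in ALLOWED_FIELDS}
    reply, confidence = suggest_reply(visible)
    # Supervision: the output is checked before anything is sent.
    if confidence >= 0.5:
        return reply
    return "FALLBACK: route to a human agent"

print(handle_email({"eta": "2026-01-25", "payment_card": "****1234"}))
print(handle_email({"unrelated": "data"}))
```

&lt;p&gt;Note that the card number never reaches the stub, and a low-confidence suggestion never reaches the customer.&lt;/p&gt;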




&lt;h2&gt;Practical Implications for Developers&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Contracts still matter — but guarantees shift from &lt;em&gt;correctness&lt;/em&gt; to &lt;em&gt;reasonableness&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;AI outputs should be treated as &lt;strong&gt;recommendations&lt;/strong&gt;, not facts&lt;/li&gt;
&lt;li&gt;Agentic behavior must be designed deliberately&lt;/li&gt;
&lt;li&gt;Deterministic systems remain responsible for safety, correctness, and control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Strong systems combine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traditional software for &lt;strong&gt;rules and guarantees&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;AI components for &lt;strong&gt;interpretation and judgment&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Where This Leads Next&lt;/h2&gt;

&lt;p&gt;Once AI becomes part of the system boundary, another question follows naturally:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/joshinii/who-owns-the-decision-when-ai-is-involved-3fd1"&gt;Who Owns the Decision When AI Is Involved?&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>softwareengineering</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>AI Is No Longer a Tool, It’s an Architectural Layer!</title>
      <dc:creator>joshinii</dc:creator>
      <pubDate>Thu, 22 Jan 2026 06:28:26 +0000</pubDate>
      <link>https://dev.to/joshinii/ai-is-no-longer-a-tool-its-an-architectural-layer-7g2</link>
      <guid>https://dev.to/joshinii/ai-is-no-longer-a-tool-its-an-architectural-layer-7g2</guid>
      <description>&lt;p&gt;I’m a software developer, and transitioning into AI-assisted development hasn’t felt natural.&lt;/p&gt;

&lt;p&gt;When AI is mostly presented as a tool that generates code from prompts, a question comes up quickly:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If AI can already write code, what’s left for experienced developers to do?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The goal of this series is not to use AI tools better, but to understand how AI fits into modern application architecture.&lt;/p&gt;




&lt;p&gt;For most Java and web developers, the systems we build follow a familiar pattern.&lt;/p&gt;

&lt;p&gt;A request comes in.&lt;br&gt;&lt;br&gt;
Code runs.&lt;br&gt;&lt;br&gt;
A response goes out.&lt;/p&gt;

&lt;p&gt;Even in larger enterprise setups — Spring services, Oracle databases, Kafka pipelines — the underlying assumption is usually the same:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Given the same input, the system behaves the same way.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This assumption is deeply embedded in how we design, test, and reason about software.&lt;/p&gt;

&lt;p&gt;AI-powered systems begin to stretch this assumption.&lt;/p&gt;

&lt;p&gt;Not because they are unreliable, but because they produce results in a &lt;strong&gt;fundamentally different way&lt;/strong&gt;. Recognizing this difference is the first step toward understanding where AI fits architecturally.&lt;/p&gt;

&lt;p&gt;This shift isn’t really about generating code faster.&lt;br&gt;&lt;br&gt;
It’s about treating AI as another system component — one with its own boundaries, failure modes, and responsibilities, much like APIs, databases, or message brokers.&lt;/p&gt;




&lt;h2&gt;Architectural Perspective: Traditional vs AI-Powered Systems&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Traditional Web Systems&lt;/th&gt;
&lt;th&gt;AI-Powered Systems&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Core behavior&lt;/td&gt;
&lt;td&gt;Deterministic execution&lt;/td&gt;
&lt;td&gt;Reasoning based on likelihood&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logic location&lt;/td&gt;
&lt;td&gt;Code (services, rules engines)&lt;/td&gt;
&lt;td&gt;Model + orchestration layer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Input handling&lt;/td&gt;
&lt;td&gt;Strictly validated&lt;/td&gt;
&lt;td&gt;Context-heavy, flexible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Output&lt;/td&gt;
&lt;td&gt;Predictable and repeatable&lt;/td&gt;
&lt;td&gt;Usually correct, not guaranteed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Failure mode&lt;/td&gt;
&lt;td&gt;Errors, exceptions&lt;/td&gt;
&lt;td&gt;Degraded or unclear responses&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Example for clarity:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traditional: A REST endpoint receives a user ID, queries the database, and returns exact user info or a 404 error.&lt;/li&gt;
&lt;li&gt;AI: A system receives a vague prompt, infers intent, consults multiple sources, and returns a reasonable response — which may vary each time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This combined view highlights the &lt;strong&gt;key shift in reasoning&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traditional systems answer: &lt;em&gt;“What should I do?”&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;AI systems answer: &lt;em&gt;“What makes sense here?”&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system is not random; it’s &lt;strong&gt;making judgments instead of executing fixed rules&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;For developers used to strict control flow, this can feel unfamiliar — not because it’s wrong, but because it solves a different class of problems.&lt;/p&gt;




&lt;h2&gt;Why This Isn’t a Step Backwards&lt;/h2&gt;

&lt;p&gt;At first glance, AI systems can feel harder to trust:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Outputs aren’t exact&lt;/li&gt;
&lt;li&gt;Testing isn’t always binary&lt;/li&gt;
&lt;li&gt;Behavior may vary slightly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the same time, they handle scenarios where traditional systems often struggle.&lt;/p&gt;

&lt;p&gt;AI systems work well for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Interpreting ambiguous or unstructured input&lt;/li&gt;
&lt;li&gt;Connecting information across many sources&lt;/li&gt;
&lt;li&gt;Supporting decisions when rules are incomplete&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They are generally &lt;strong&gt;not&lt;/strong&gt; suitable replacements for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Financial calculations&lt;/li&gt;
&lt;li&gt;Authorization logic&lt;/li&gt;
&lt;li&gt;Transactional consistency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That separation is an architectural choice, not a tooling limitation.&lt;/p&gt;

&lt;p&gt;In practice, many systems benefit from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deterministic software for &lt;strong&gt;control and correctness&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;AI systems for &lt;strong&gt;interpretation and decision support&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
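&lt;p&gt;That split can be sketched in a few lines. &lt;code&gt;interpret_request&lt;/code&gt; is a stub standing in for AI interpretation; the authorization rule and amounts are invented for the example.&lt;/p&gt;

```python
def interpret_request(text: str) -> str:
    # AI-style interpretation of ambiguous input (stubbed).
    return "refund" if "money back" in text.lower() else "status"

def authorize(user_role: str) -> bool:
    # Authorization stays in deterministic code, never in the model.
    return user_role in {"agent", "admin"}

def handle(text: str, user_role: str, amount: float) -> str:
    intent = interpret_request(text)      # AI: what makes sense here?
    if intent == "refund":
        if not authorize(user_role):      # code: what am I allowed to do?
            return "denied"
        return f"refund {amount:.2f}"     # exact arithmetic stays in code
    return "status-lookup"

print(handle("I want my money back", "agent", 19.99))  # refund 19.99
print(handle("I want my money back", "guest", 19.99))  # denied
```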




&lt;h2&gt;A Practical Mental Model&lt;/h2&gt;

&lt;p&gt;One simple way to frame the shift:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Traditional software executes instructions.&lt;br&gt;&lt;br&gt;
AI software evaluates possibilities.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Both approaches can coexist within the same application.&lt;/p&gt;

&lt;p&gt;AI doesn’t replace backend systems.&lt;br&gt;&lt;br&gt;
It changes &lt;strong&gt;where and how certain decisions are made&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Once this distinction is clear, many AI architecture discussions become easier to follow.&lt;/p&gt;




&lt;h2&gt;What This Series Is Really About&lt;/h2&gt;

&lt;p&gt;This series is &lt;strong&gt;not&lt;/strong&gt; focused on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt techniques&lt;/li&gt;
&lt;li&gt;Model comparisons&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It focuses on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mapping familiar concepts to newer ones&lt;/li&gt;
&lt;li&gt;Understanding how application architecture is evolving&lt;/li&gt;
&lt;li&gt;Exploring how existing backend skills still apply&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each part will relate new ideas back to systems many developers already know — REST APIs, databases, events, and observability.&lt;/p&gt;




&lt;h2&gt;What’s Next&lt;/h2&gt;

&lt;p&gt;When outputs are no longer exact, system boundaries become more important — not less.&lt;/p&gt;

&lt;p&gt;AI components need to be surrounded by well-defined interfaces that decide what is trusted, what is validated, and what happens when confidence is low.&lt;/p&gt;

&lt;p&gt;So, &lt;a href="https://dev.to/joshinii/if-ai-outputs-arent-guaranteed-how-do-systems-stay-reliable-3n9o"&gt;If AI outputs aren’t guaranteed, how do systems stay reliable?&lt;/a&gt;&lt;/p&gt;

</description>
      <category>java</category>
      <category>ai</category>
      <category>backend</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
