There was a time when writing software felt like giving instructions to a machine that never questioned you.
You told it what to do, and it obeyed.
If a user clicked a button, you knew exactly where they would land. If they asked for something, you knew exactly which API would be called. Every path was planned ahead of time. Every outcome was predictable, at least in theory.
That predictability gave us confidence. It made systems easier to reason about. It made debugging possible. It gave us the sense that if we just wrote enough conditions, covered enough scenarios, and handled enough edge cases, we could build something complete.
But the real world never behaved that neatly.
There was always one more scenario. One more edge case. One more situation we hadn’t anticipated.
And then came large language models.
Before we talk about agentic systems, it helps to step back and understand something more fundamental.
What exactly is a framework?
A framework is not your application. It doesn’t solve your problem for you. Instead, it gives you a structured foundation to build on. Think of it like scaffolding around a building.
The scaffolding doesn’t decide what the building looks like. It doesn’t design the rooms or choose the materials. But without it, constructing the building would be slow, chaotic, and inconsistent.
That’s what a framework does in software.
It provides:
- structure
- reusable components
- best practices
- a consistent way of building things
So instead of solving the same problems over and over again (routing, state management, error handling), you focus on what actually matters for your application.
In simple terms:
- A framework doesn’t build the system for you.
- It makes building the system faster, safer, and more consistent.
🤖 What is an Agent Framework?
Now, take that idea and extend it into the world of AI.
An agent framework is a specialized framework designed to build systems that don't just execute logic, but can reason, act, and adapt.
At its core, an agent framework connects a large language model to the real world.
Not just as a chatbot but as something that can:
- use tools
- remember context
- make decisions
- interact with other systems
- operate across multiple steps
Instead of writing every step manually, you provide the building blocks. The framework takes care of the plumbing.
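To make "providing building blocks" concrete, here is a minimal sketch in plain Python. The decorator-based registry below is illustrative only, not any specific framework's API: you declare capabilities, and the framework's plumbing handles lookup and invocation.

```python
# A tiny illustrative tool registry: you declare capabilities,
# and the "framework" handles how they are found and invoked.
TOOLS = {}

def tool(fn):
    """Register a function as a capability the agent may use."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub standing in for a real API call

@tool
def add(a: float, b: float) -> float:
    return a + b

def invoke(name: str, **kwargs):
    """The framework's plumbing: look up and call a tool by name."""
    if name not in TOOLS:
        raise KeyError(f"Unknown tool: {name}")
    return TOOLS[name](**kwargs)

print(invoke("add", a=2, b=3))            # 5
print(invoke("get_weather", city="Pune")) # Sunny in Pune
```

In a real agent framework, the model (not your code) would decide which registered tool to invoke and with what arguments; this sketch only shows the registration side of that contract.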
🧠 What Does That Plumbing Actually Mean?
Without an agent framework, we would have to build everything ourselves:
- How tools are registered and invoked
- How context is stored and retrieved
- How multi-step workflows are managed
- How failures are retried
- How state is persisted
- How humans intervene when needed
Each of these sounds manageable in isolation.
But together?
They quickly become complex, fragile, and hard to scale.
An agent framework abstracts all of that.
It gives you layers that handle this complexity so you can focus on behavior instead of infrastructure.
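As one concrete example of that plumbing, here is what hand-rolled retry logic looks like in plain Python, just one of the concerns in the list above, written by hand. A framework would give you this (plus state, context, and workflow handling) out of the box:

```python
import time

def with_retries(fn, attempts=3, delay=0.0):
    """Hand-rolled retry plumbing: call fn until it succeeds
    or the attempt budget is exhausted."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_error = exc
            time.sleep(delay)  # back off before the next attempt
    raise last_error

calls = {"n": 0}

def flaky():
    """Simulates a tool call that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))  # "ok", on the third attempt
```

Multiply this by tool registration, context storage, state persistence, and human-in-the-loop handling, and the case for a framework becomes clear.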
At first, LLMs looked like just another tool. Another API to integrate. Another service to wrap inside our existing architecture. We treated them the same way we treated everything else: call the model, get a response, move on.
But slowly, something started to feel different.
Because an LLM doesn’t behave like a function.
It doesn’t simply execute instructions. It interprets them. It considers context. It makes decisions that are not explicitly written anywhere in your code.
And once you recognize that, a deeper realization follows:
- You are no longer building systems that just follow instructions.
- You are building systems that can decide how to act.
That realization is where AI Agentic Frameworks come in.
The first time you build with an agentic framework, the experience changes in a way that’s hard to ignore.
You don’t begin by writing logic. You don’t start with if-else conditions or routing rules.
Instead, you start by thinking about capabilities.
- What should this system be able to do?
- What tools should it have access to?
- What information should it remember?
- What boundaries should it respect?
And then you assemble these pieces.
It feels less like programming and more like constructing something modular, almost like working with Lego blocks. Each piece has a purpose, but the final behavior emerges from how those pieces interact.
The difference is that this system doesn’t just sit there once you build it.
It observes. It adapts. It reasons.
One of the most common misunderstandings is trying to explain this shift using traditional machine learning concepts.
People ask whether this is supervised learning, semi-supervised learning, or something entirely new.
But that framing misses the point.
Nothing fundamental has changed about how the model is trained.
What has changed is how we use it at runtime.
Traditional software executes predefined logic. Every decision is encoded in advance.
Agentic systems operate differently. They take in context, interpret intent, decide what matters, choose an action, observe the outcome, and adjust accordingly.
This loop happens dynamically, not because you explicitly coded every branch, but because the system is capable of reasoning through the situation.
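That loop can be sketched schematically in plain Python. Here `decide` stands in for the LLM call (it is a trivial stub, not a real model), but the shape of the loop is the point: context in, decision out, act, observe, repeat.

```python
def decide(context):
    """Stub for the model's reasoning step: given context,
    choose the next action."""
    return "stop" if context["goal_done"] else "work"

def act(action, context):
    """Execute the chosen action and observe its effect on context."""
    if action == "work":
        context["steps"] += 1
        if context["steps"] >= 3:
            context["goal_done"] = True
    return context

def run_agent(context, max_steps=10):
    """The agent loop, with a safety bound on iterations."""
    for _ in range(max_steps):
        action = decide(context)
        if action == "stop":
            break
        context = act(action, context)
    return context

result = run_agent({"goal_done": False, "steps": 0})
print(result)  # {'goal_done': True, 'steps': 3}
```

Notice that no branch in this code says "take exactly three steps"; the loop terminates because the decision function observes the evolving context. With a real model in place of `decide`, the branches taken are not written anywhere in your code at all.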
And that changes your role as a developer.
In the past, your responsibility was to define every possible path.
Now, your responsibility is to define the environment in which decisions are made.
You are no longer writing every step.
You are shaping how the system behaves when it encounters something new.
That might sound subtle, but it’s a profound shift.
This is where modern agentic frameworks start to make sense.
Frameworks like LangGraph bring structure to this new world. They allow you to define workflows, maintain state, and introduce controlled transitions, while still leaving room for the agent to make decisions. It feels familiar, but it behaves differently.
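The core idea LangGraph builds on, explicit nodes operating on shared state with controlled transitions between them, can be sketched in plain Python. This is an illustration of the pattern only, not LangGraph's actual API:

```python
# Illustrative state-graph pattern: named nodes transform shared
# state, and each node's return value selects the next node.
def plan(state):
    state["plan"] = f"answer: {state['question']}"
    return "execute"  # controlled transition to the next node

def execute(state):
    state["answer"] = state["plan"].upper()
    return "end"      # terminal transition

GRAPH = {"plan": plan, "execute": execute}

def run(state, entry="plan"):
    node = entry
    while node != "end":
        node = GRAPH[node](state)
    return state

out = run({"question": "what is an agent?"})
print(out["answer"])  # ANSWER: WHAT IS AN AGENT?
```

In a real graph framework, a node's transition could be chosen by the agent itself, which is exactly the "room to make decisions" described above while the graph keeps the overall workflow bounded and inspectable.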
At the same time, platforms like Microsoft Agent Framework are extending this idea into enterprise environments. Here, agents are not isolated. They collaborate, follow policies, and operate within governed systems. Intelligence is not enough; control and accountability matter just as much.
On the cloud side, AWS Strands takes a production-first, model-driven approach. It asks the hard questions: How do these systems scale? How do they remain secure? What happens when they fail? Because reasoning is powerful, but production systems demand reliability.
Then there are frameworks like CrewAI, which introduce a different perspective altogether. Instead of relying on a single agent, they model systems as teams. One agent researches, another plans, another executes. The interaction between them creates a form of collective intelligence that feels closer to how humans actually solve problems.
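The team-of-agents idea can also be sketched in plain Python. The roles below mirror the research, plan, execute split described above; this is an illustration of the pattern, not CrewAI's actual API:

```python
# Illustrative multi-agent "crew": each role handles one stage,
# and the output of one agent becomes the input of the next.
def researcher(task):
    """Stub researcher: gathers findings for the task."""
    return {"task": task, "findings": ["fact A", "fact B"]}

def planner(work):
    """Stub planner: turns findings into a plan."""
    work["plan"] = [f"write about {f}" for f in work["findings"]]
    return work

def executor(work):
    """Stub executor: carries out the plan."""
    work["report"] = "; ".join(work["plan"])
    return work

def run_crew(task, crew=(researcher, planner, executor)):
    work = task
    for agent in crew:
        work = agent(work)
    return work

result = run_crew("market overview")
print(result["report"])  # write about fact A; write about fact B
```

In a real crew, each role would be backed by its own model call, its own tools, and its own instructions; the collective behavior emerges from the handoffs between them.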
If you step back from the tools and look at the bigger picture, the shift becomes clear.
For years, we tried to eliminate uncertainty by encoding every possible outcome.
Now, we are accepting uncertainty and designing systems that can operate within it.
We are no longer trying to predict every scenario.
We are building systems that can handle scenarios we didn’t predict.
The best way to describe this transformation is simple:
We are moving from writing logic to designing behavior.
When you write logic, you are responsible for every outcome.
When you design *behavior*, you are responsible for *how the system adapts to outcomes*.
That is a very different kind of engineering.
It requires thinking about constraints, not just conditions.
About capabilities, not just code paths.
About systems, not just functions.
This doesn’t mean control is gone.
It means control has moved.
We are no longer controlling every step.
We are controlling the environment in which steps are chosen.
And that shift from direct control to guided autonomy is what defines AI agentic systems.
This is not the end of software engineering.
If anything, it raises the bar.
Because now, we are not just building systems that execute instructions.
We are building systems that decide which instructions matter.
And that requires more thought, not less. More design, not less. More responsibility, not less.
The tools will evolve. The frameworks will mature. The patterns will stabilize.
But this shift, this move from deterministic logic to adaptive systems, is not going away.
Because once you’ve seen what it means to build something that can reason…
It’s very hard to go back to building something that only follows instructions.
Thanks
✍️ Sreeni Ramadorai




