Translation of the original Spanish essay “La Era Agentic: de la Inteligencia Artificial a la Infraestructura Cognitiva,” originally published on Dev.to by Pillippa Pérez Pons.
Summary
Agentic AI is the new technological wave, powered by agents capable of reasoning and autonomously executing tasks. Its current maturity is not a fad but the result of three converging factors: standardization (MCP, A2A), distributed infrastructure, and empirical validation through reproducible benchmarks.
This essay argues that the inflection point began with the mass adoption of language models in 2022 and solidified between 2024 and 2025, following the pattern of previous waves like email, the web, and mobility. From a frontend development perspective, it contends that agenticity will cease to be optional and become a structural layer on par with usability and accessibility.
This text outlines the timeline of that inflection, analyzes the enabling factors, and proposes a reading of how agenticity integrates into the cognitive infrastructure of contemporary software. It is also part of a series exploring the transition toward an infrastructure where humans and systems share agency. This first installment covers the conceptual, historical, and technical foundations of Agentic AI; future pieces will delve into interaction patterns, trust metrics, and applied agentic design.
Introduction
No technological revolution begins by declaring itself as such. First it appears as curiosity, then it becomes an experiment, and only later does it transform into structure. So it was with email in the seventies, with the web in the nineties, and with mobility after 2007. In each case, what began as a marginal idea ended up as an essential layer of the digital world, unseen but indispensable.
In 2025, Agentic AI is tracing the same path. What in 2023 looked like a set of fragile, scattered trials has consolidated into a cross-cutting layer of technological infrastructure. Agents, entities capable of reasoning, executing tasks, and collaborating with humans and systems, have moved from the lab to production environments and from prototype to standard. This essay maintains that their presence no longer belongs to speculation but to everyday practice, and that their integration will transform not only how we program but how we conceive the relationship between intelligence and software.
From a frontend development perspective, agenticity represents a new structural layer. Like usability or accessibility, it will cease to be optional and become an inherent principle of interface design and architecture. Its impact goes beyond interaction: it redefines the very notion of collaboration between humans and machines. Understanding this transition, from automation to autonomy and from code to behavior, is essential for anticipating how the next decade of software will take shape.
1. A Timeline of the Inflection
2002: Publication of the FIPA Agent Communication Language (ACL) and related interaction protocol specifications by the Foundation for Intelligent Physical Agents (later an IEEE Computer Society standards committee), establishing the basis for interoperability among agents.
November 30, 2022: Public launch of ChatGPT, democratizing access to language models and marking the start of mass adoption of generative AI.
February 2023: Publication of Toolformer, a Meta AI paper demonstrating that LLMs can learn to use external tools without direct supervision.
March 30, 2023: Emergence of Auto-GPT, the first popular autonomous agent based on GPT-4, introducing notions of self-orchestration and subtask execution.
October 10, 2023: Publication of SWE-Bench, a reproducible benchmark to measure whether agents can fix real bugs in software repositories.
November 25, 2024: Anthropic introduces the Model Context Protocol (MCP), an open standard enabling agents to connect with tools and data contexts.
February 25, 2025: Cloudflare launches the Agents SDK, enabling the building and deployment of agents at the edge via Workers and Workers AI.
April 2, 2025: Publication of the MCP Safety Audit paper, identifying security risks and vulnerabilities in MCP implementations.
April 9, 2025: Google announces the Agent2Agent Protocol (A2A), designed to enable direct communication between agents from different providers.
March 11, 2025: OpenAI launches the Responses API, successor to the Assistants API, focused on agent workflows and built-in tool use.
May 21, 2025: OpenAI adds support for remote MCP servers within the Responses API, consolidating interoperability.
June 3, 2025: Cloudflare publishes examples of human-in-the-loop patterns using the Agents SDK, showing how agents can incorporate real-time human decisions.
June 16, 2025: Publication of the study MCP at First Glance, analyzing the security and maintainability of MCP servers.
June 23, 2025: The Linux Foundation adopts the Agent2Agent (A2A) project as an open protocol under community governance.
July 2025: Google releases A2A SDK improvements with gRPC support and security extensions.
August 2025: Cloudflare documents Agents on the Edge capabilities, combining persistence, workflows, and real-time human control.
2. Why Now and Not Before?
Agents existed in potential since the first attention architectures, introduced in Attention Is All You Need (Vaswani et al., 2017), and the large autoregressive models that scaled them up in the early 2020s. Those architectures replaced recurrence with attention mechanisms capable of dynamically weighing which parts of a sequence are relevant given the context. For the first time, statistical computation had the ability to reason in a distributed way, retaining long dependencies and learning to assign focus, a principle closer to thought than to mere calculation. That was the origin of a new form of intelligence, one that not only processes information but decides what to process.
In that sense, agents already had a brain but lacked a body. Attention gave them memory, rudimentary reasoning, and planning, yet they remained confined to experimental labs. What was missing was the infrastructure, standards, and technical culture to embody them in production systems. Until 2023, agent execution remained costly, fragile, and fragmented: prototypes like Auto-GPT or BabyAGI relied on manual setups, isolated APIs, and prohibitive resources. By 2025, however, several curves crossed. The maturity of edge computing, the standardization of protocols, and the drop in inference costs turned fragility into structure. The agentic era ceased to be a promise and became a tangible layer of the present, where distributed reasoning finally found an operational body.
2.1. Standardization and Governance
With the arrival of the Model Context Protocol (MCP) (Anthropic, 2024) and the Agent2Agent Protocol (A2A) (Google, 2025), communication among agents stopped being a proprietary experiment and became a standard. These protocols address three historic problems: interoperability, authentication, and contextual transmission. For the first time, an OpenAI-based agent can invoke a tool hosted on Cloudflare, query data in Neon, and share state with another agent on Hugging Face without manual integrations or glue code. The openness of these standards not only enabled cooperation among distinct ecosystems but also introduced a principle of shared technical governance.
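To make that interoperability concrete, the sketch below shows the rough shape of an MCP tool invocation as a JSON-RPC 2.0 message, the framing the specification uses. The tool name, its arguments, and the server URL are hypothetical, and the transport is simplified to a plain HTTP POST; real MCP clients layer session handling and streaming on top.

```typescript
// Sketch of the JSON-RPC 2.0 framing MCP uses for tool invocation.
// The "tools/call" method follows the published MCP specification;
// the tool name, arguments, and endpoint below are hypothetical.

interface McpToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;                       // tool identifier exposed by the MCP server
    arguments: Record<string, unknown>; // JSON-serializable tool inputs
  };
}

// A hypothetical agent asking a remote MCP server to run a database query tool.
const request: McpToolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "query_database", // hypothetical tool name
    arguments: { sql: "SELECT count(*) FROM users" },
  },
};

// Simplified transport: POST the message to the server and read the response.
// (An actual MCP HTTP transport also negotiates sessions and streaming.)
async function callTool(serverUrl: string, req: McpToolCallRequest): Promise<unknown> {
  const res = await fetch(serverUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  return res.json(); // the result carries a `content` array with the tool's output
}
```

Because the message shape is standard, the same request works against any conforming MCP server, which is precisely what removes the need for per-provider glue code.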
2.2. Mature Infrastructure and Falling Costs
Hardware stopped being a barrier. Between 2023 and 2025, the rise of NVIDIA H100 GPUs and GH200 Grace Hopper Superchips, together with the consolidation of edge platforms such as Cloudflare Workers AI and AWS Local Zones, reduced latency and inference costs by more than 70%. Today, thanks to tools like the Agents SDK, an agent can run directly at the network edge, near the user, without relying on centralized servers. What once cost hundreds of dollars per day now costs cents per hour, and that economic leap changed the scale of experimentation.
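As an illustration of what running "at the network edge" looks like in practice, here is a minimal sketch of a Cloudflare Worker that calls Workers AI from the same location that served the request. It assumes a Workers AI binding named `AI` configured in the project's wrangler file, and the model identifier is only an example from the catalog; both are assumptions, not a prescription from the essay.

```typescript
// Minimal sketch: model inference at the edge inside a Cloudflare Worker.
// Assumes an AI binding named `AI` in the wrangler configuration; the model id
// is illustrative and depends on the current Workers AI catalog.

export interface Env {
  AI: { run(model: string, inputs: Record<string, unknown>): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { question } = (await request.json()) as { question: string };

    // Inference runs in the edge location that received the request,
    // so there is no round trip to a centralized inference server.
    const answer = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      messages: [
        { role: "system", content: "You are a concise assistant embedded in a web app." },
        { role: "user", content: question },
      ],
    });

    return Response.json({ answer });
  },
};
```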
2.3. Evaluation and Reproducibility
In parallel, the academic and engineering communities established reproducible evaluation mechanisms. Benchmarks like SWE-bench (Jimenez et al., 2023) and HELM (Stanford CRFM, 2022–2024) turned the measurement of agent performance into a verifiable, traceable discipline. Success is no longer measured by anecdotal demos but by consistent, comparable results. Just as ImageNet marked the leap in computer vision in 2012, SWE-bench and HELM define the quality standard for autonomous intelligence.
2.4. Cultural and Cognitive Curve
But the transition wasn't only technical; it was cultural. Mass exposure to ChatGPT, Copilot, and other conversational interfaces taught millions to dialogue with intelligent systems. We learned to iterate, correct, ask for explanations, and evaluate responses. The conversational interface replaced the button as the basic unit of interaction. For the first time, society learned the language of the models before the models learned the language of society.
2.5. Epistemological Transition
This change in scale and practice entailed an epistemological shift: computing ceased to be understood as executing instructions and became a process of negotiation. Humans no longer program tasks deterministically; they orchestrate intelligences that share agency and context. Programming stopped being synonymous with dictating orders and came to mean defining boundaries, intentions, and collaboration criteria.
2.6. Ecosystem and Adoption Pressure
Finally, the industrial ecosystem closed the loop. Cloudflare, Anthropic, Microsoft, and the Linux Foundation opened SDKs, APIs, and governance spaces where agents from different environments can coexist and cooperate. The cost of experimenting fell, interoperability rose, and the risk of staying on the sidelines became strategic. In 2020, adopting agents was a bet; in 2025, it is a requirement. What once was innovation is now infrastructure, and infrastructure, by definition, is no longer debated: it is simply used.
3. Historical Comparisons
Technology does not advance in jumps but in waves of integration: each revolution starts as curiosity, consolidates as habit, and ends up as infrastructure. Agentic AI follows the same evolutionary pattern. Like communication, the web, and mobility, it began as a demonstration of possibility and is on course to redefine digital life. Each earlier wave expanded the reach of human-machine interaction and left a new language behind. The fourth wave, agenticity, seeks not merely to extend artificial intelligence but to turn it into the new fabric of collaboration.
3.1. Email (1971): Communication as a Layer
In 1971, Ray Tomlinson sent the first electronic message over ARPANET, linking user and machine with the @ symbol. It was not a commercial product but a technical experiment, yet it defined a principle that would transform the world: asynchronous communication between distributed nodes. For the first time, distance ceased to be a temporal barrier to message transmission. Half a century later, email has become invisible; we no longer perceive it as technology, yet it underpins digital identity, authentication, and notifications in virtually all systems (Tomlinson, 1971). What began as a hack on a military network ended up as the basic grammar of human connectivity. The invisible, as always, is what endures.
3.2. The Web (1990): Hypertext as the World’s Structure
In 1990, Tim Berners-Lee proposed the World Wide Web as a tool to link scientific documents. Its hypertext system (HTTP, URLs, HTML) was rudimentary, but its simplicity made it universal. What started as an information network for researchers became a global fabric interconnecting culture, economy, and education (Berners-Lee, 1990). The web did not just organize knowledge; it defined how we inhabit it. Today it is not a collection of pages but the invisible base of all digital interaction. It did not displace software; it absorbed it, turning it into surface, medium, and context at the same time.
3.3. Mobility (2007): Ubiquity as a Condition
In 2007, Steve Jobs introduced the iPhone as “three devices in one”: a phone, an iPod, and a browser. At the time, it was a luxury object, not a standard, but it changed the human-technology relationship forever. In less than a decade, mobility became the lowest common denominator of every digital experience: personal, ubiquitous, and always available. Interfaces, content, and infrastructure reconfigured around the user on the move (Jobs, 2007). Mobility did not replace the web; it extended it into the pocket. Its true legacy was not the device but the expectation of immediacy and constant presence that still shapes contemporary design.
3.4. The Agentic Era (2025): Collaboration as Fabric
Agentic AI represents the fourth wave: agency understood as infrastructure. Just as email decentralized communication, the web decentralized information, and mobility decentralized access, agents are decentralizing action. They are no longer mere assistants or accessory tools, but interoperable actors capable of reasoning, executing, and collaborating with humans and systems. Their presence today is visible in dashboards, flows, and prototypes, although their destiny is to become invisible and form part of every application and digital environment. In a few years, the concept of an “agent” will be as everyday as “web” or “app”: a functional unit integrated into the contemporary software ecosystem.
3.5. Synthesis
The adoption of agents is not an emerging fad but a natural evolution of computing as social fabric. Each prior wave prepared the next: email taught us to wait, the web to search, mobility to be present, and Agentic AI will teach us to delegate. In all of them, the extraordinary became mundane and the visible disappeared. Agenticity will follow the same path. Its purpose is not to occupy center stage but to support the system’s operation from its base. When the presence of agents goes unnoticed, it will mean their integration is complete and the new cognitive infrastructure has matured.
4. The Parallel with Frontend
In frontend we learned that usability and accessibility were not embellishments but foundations. They are not tacked on at the end; they are conceived from the beginning. The same is true with Agentic AI. What today looks like an experiment, a talking interface, a proposing assistant, will soon become the base on which all digital experiences are built. This shift not only broadens the scope of design but redefines its purpose: it is no longer enough for an interface to be functional or aesthetic; it must also be explanatory, negotiable, and trustworthy.
4.1. From Interface to Dialogue
For years, frontend translated human intent into action via clicks, gestures, or animations. The agentic era alters that grammar: the minimal unit of interaction is no longer the click but the conversation. Designing for agents means designing relationships, not just screens. It involves thinking about how the system explains what it will do, how it seeks permission, and how it demonstrates it understood the user’s intent. In this new paradigm, the challenge is not moving pixels but building trust: deciding when the agent should propose, when it should wait, and when it should act on its own without breaking the sense of human control.
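One way to make "propose, wait, or act" tangible is to express it as an explicit policy in application code. The sketch below is purely illustrative; the types, thresholds, and field names are hypothetical and not drawn from any particular SDK.

```typescript
// Illustrative sketch of a "propose, wait, or act" gate for agent actions.
// All names and thresholds are hypothetical; the point is that autonomy is a
// policy decision expressed in code, not an intrinsic property of the model.

type ProposedAction = {
  description: string;  // what the agent intends to do, in user-facing language
  reversible: boolean;  // can the action be undone automatically?
  estimatedImpact: "low" | "medium" | "high";
};

type Decision =
  | { mode: "auto" }                    // act without asking
  | { mode: "confirm"; prompt: string } // show intent and wait for approval
  | { mode: "suggest-only" };           // never execute, only propose

function gate(action: ProposedAction): Decision {
  if (action.estimatedImpact === "low" && action.reversible) {
    return { mode: "auto" };
  }
  if (action.estimatedImpact === "high" && !action.reversible) {
    return { mode: "suggest-only" };
  }
  return {
    mode: "confirm",
    prompt: `The agent wants to: ${action.description}. Allow it?`,
  };
}

// Example: archiving notifications is reversible and low impact, so it runs
// unattended; sending an email on the user's behalf would fall through to "confirm".
const decision = gate({
  description: "archive 12 resolved notifications",
  reversible: true,
  estimatedImpact: "low",
});
console.log(decision); // { mode: "auto" }
```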
4.2. From Experience to Transparency
The quality of a digital experience is no longer measured only by its smoothness but by its ability to reveal the reasons behind each decision. An agent that acts without showing its reasoning breeds distrust; one that exposes its logic, sources, or motivations feels more legitimate. Transparency replaces efficiency as a marker of maturity. The contemporary user does not just want to achieve a goal but to understand why the system achieved it that way and how to undo it if desired. In that mutual understanding, trust is born, and with it, the new canon of agentic design.
4.3. From Standards to Fabric
Just as usability and accessibility became the silent language of frontend, unseen yet essential, agenticity will follow suit. Soon we will not debate whether a product “should have” an agent; it simply will, as products today have semantic HTML or ARIA labels. Agency will become part of software’s invisible fabric, expressed in measurable properties like decision traceability, guaranteed reversal, or accessible explanation. In that environment, frontend reclaims its deepest essence: being the meeting point between machine reason and human empathy, the space where intelligence takes shape, tone, and context.
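As a thought experiment on how such measurable properties might surface in a codebase, the following sketch describes an action record that a UI could require before rendering an agent's result. The interface and field names are hypothetical, not an existing standard.

```typescript
// Hypothetical sketch of "decision traceability, guaranteed reversal, and
// accessible explanation" expressed as a data contract in a frontend codebase.
// None of these names come from an existing standard; they illustrate that
// agentic qualities can become checkable properties, much like ARIA attributes.

interface AgentActionRecord {
  id: string;
  timestamp: string;           // ISO 8601, for ordering and audit
  intent: string;              // what the user asked for, in their own words
  action: string;              // what the agent actually did
  explanation: string;         // human-readable reason, surfaced in the UI
  sources: string[];           // URLs or document ids the decision relied on
  undo?: () => Promise<void>;  // present only when reversal is guaranteed
}

// A component can refuse to render an agent's result unless the record
// satisfies the traceability contract.
function isTraceable(record: AgentActionRecord): boolean {
  return record.explanation.length > 0 && record.sources.length > 0;
}
```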
4.4. What’s Next
The upcoming challenges of agentic design will center on defining new interaction patterns: how to request confirmation, how to communicate intent before acting, how to negotiate ambiguous actions or offer safe alternatives. These patterns, still forming, must be evaluated with trust, transparency, and perceived control metrics. Designing for autonomy does not mean abandoning supervision; it means integrating it into the experience. Future explorations should map these principles, name them, and standardize them, just as we once did with components and accessibility guidelines.
4.5. Synthesis
If the web taught us to communicate and mobility to accompany, Agentic AI will teach us to collaborate and delegate. Frontend will be the stage where this collaboration occurs, the space where humans and systems learn to understand each other, correct each other, and co-design the flow of action. What we once measured as “interface” we will soon call “relationship”. And within that relationship, agenticity will cease to be optional and become structure, as inevitable as accessibility, as invisible as the network that sustains it.
5. Impact on Software Work
The incorporation of agents does not eliminate the software engineer’s role; it reconfigures it. As with deployment automation or higher-level frameworks, the change does not imply the disappearance of work but its shift toward a more abstract, strategic level. Agents expand the scope of engineering; they do not replace it. Their deepest effect is not measured in lines of code written but in the decisions that define how, when, and for what purpose that code is generated.
5.1. From Execution to Supervision
Agents already automate a significant part of the development cycle: they generate documentation, propose unit tests, refactor functions, detect obsolete dependencies, and even draft explanatory commits. Tasks once manual and repetitive now become points of collaboration between humans and systems. Code stops being the final product and becomes the starting point of a conversation. The developer’s role expands from executing instructions to defining behaviors, boundaries, and policies for the agents that will do so. Responsibility is no longer purely technical; it is also ethical and contextual, because supervising intelligences requires understanding not only what they do but why they do it.
5.2. Emerging Roles
New professional functions arise from this transition:
- AI Product Engineer: integrates AI as part of the product design and development cycle, not as an external add-on.
- AI Orchestrator: designs multi-agent flows, defines collaboration protocols, and tunes performance and trust metrics.
- AI Interaction Designer: creates hybrid experiences where humans and agents share tasks and control.
These profiles do not replace existing roles; they complement and expand them. Just as DevOps culture did not eliminate backend engineers, agents will not eliminate frontend developers or architects, but they will redraw the boundary between design, development, and supervision.
5.3. Changes in the Software Economy
The economic impact of this transformation will be gradual and cumulative. Various studies (Acemoglu, 2023; Brookings Institution, 2024) indicate that the adoption of generative AI tends to redistribute tasks rather than eliminate jobs. Routine functions (testing, documentation, internal support) are the first to be automated, while review, integration, and systems supervision grow in demand. Consequently, AI does not replace technical work; it specializes it. Value shifts toward reliability, orchestration, and interpretation of results, dimensions where human judgment remains indispensable.
5.4. Supervising Intelligences, Not Just Systems
The deepest change is not occupational but cognitive. Engineering moves from supervising deterministic software to supervising systems that reason, fail, and correct themselves. This shift demands new competencies: critical thinking, understanding of biases, trust design, and reading explanatory metrics. Programming ceases to mean merely building machines that function and becomes the task of creating systems that understand their purpose and can explain their decisions. Engineering thus becomes a practice of interpretation, where the boundaries between writing code and educating models grow progressively blurrier.
5.5. A Necessary Close
Engineers do not disappear; they evolve. They move from authors of code to supervisors of behavior, and from system architects to orchestrators of intelligences. Software work changes shape but not purpose: it remains the art of building the tools that define how we think, work, and create. Agentic AI does not replace engineering; it amplifies it, reminding us that technology transforms not only what we do but also how we understand what it means to create.
6. Limitations and Next Steps
This essay focuses on the structural and cultural dimension of Agentic AI rather than its technical implementation. The choice is not an omission but a necessity: before describing how agentic systems are built, it was essential to understand why they arise and when they become inevitable. As with every technological wave, understanding precedes standardization. Before the web had browsers or mobility had sensors, there was a shared narrative that gave meaning to their existence. Agenticity is in that same foundational phase: it requires a common language, a conceptual grammar capable of articulating the relationship between autonomy, interoperability, and trust. This text sought to contribute precisely to that base, outlining the factors that transformed a technical curiosity into a structural layer of contemporary software.
In that pursuit, topics demanding a more applied treatment were left out: the interaction patterns that will define collaboration between humans and agents, the transparency and trust metrics needed to assess performance, and the ethical frameworks that will ensure responsible operation. Including them would have diverted from the main purpose, which was to trace the historical and conceptual trajectory of Agentic AI and show how it integrates into the natural evolution of the frontend. Nevertheless, these topics are the next logical step. Future explorations will focus on how to design communication with reasoning systems, how to measure the reliability of their decisions, and what governance principles should accompany their deployment. The challenge will not be merely technical but epistemological: learning to design relationships with intelligences that also design.
The agentic era is only beginning, but its direction is already in sight. It is not just another trend within the AI cycle but a deep reconfiguration of how knowledge is organized and circulates within digital systems. Agents do not just execute tasks; they interpret, argue, and co-construct meaning with us. Understanding this transition and actively participating in its design will be a shared responsibility among engineers, designers, and thinkers. Because beyond code, Agentic AI confronts us with an essential question: how do we want to think, decide, and create in a world where intelligence, human or artificial, ceases to be a tool and becomes infrastructure?
Conclusion
Agentic AI is not a future hypothesis but a reality taking shape since 2022. Like email, the web, and mobility, its adoption follows an inevitable curve, from curiosity to infrastructure. The difference is that this time the change occurs within the software development cycle itself.
A clear example can be seen in the frontend: today it is possible to deploy agents directly at the edge via Cloudflare Workers AI or the Agents SDK, capable of adapting the interface, analyzing user intent, and adjusting the experience in real time. These hybrid flows, where human and agent negotiate visual or content decisions, show how agenticity stops being an add-on and becomes a structural property of interaction.
Just as we learned that you cannot design without considering accessibility, soon you will not be able to design without considering agency. Products that ignore usability or accessibility may still get approved today, but the norm has already changed: both are part of the canon of modern frontend. The same will happen with Agentic AI. It will not be an add-on but the foundation on which we build software that learns, collaborates, and evolves with us.
The essential difference in this wave lies in its cognitive nature: it does not merely extend technical infrastructure but redefines how knowledge is organized and how decisions flow within systems. Agentic AI inaugurates the cognitive infrastructure of the 21st century, a network where every agent, human or artificial, acts as a thinking node within a shared fabric.
Understanding and designing for this new infrastructure is not just an engineering task but a cultural responsibility. It is at this convergence of code, decision, and awareness where the future of software will be decided, and with it, our way of thinking alongside machines.
References
Acemoglu, D. (2023). Artificial Intelligence and the Future of Work. MIT Press.
https://nap.nationalacademies.org/read/27644/chapter/1
Anthropic. (2024, November 25). Introducing the Model Context Protocol [Blog post]. Anthropic.
https://www.anthropic.com/news/model-context-protocol
Berners-Lee, T. (1990). Information Management: A Proposal. CERN.
https://www.w3.org/History/1989/proposal.html
Brookings Institution. (2024). Generative AI, the American worker, and the future of work. Brookings Institution Report.
https://www.brookings.edu/articles/generative-ai-the-american-worker-and-the-future-of-work/
Cloudflare. (2025, February 25). Introducing the Agents SDK. Cloudflare Developers Changelog.
https://developers.cloudflare.com/changelog/2025-02-25-agents-sdk/
Cloudflare. (2025, April). Piecing together the Agent puzzle: MCP, authentication & authorization, and Durable Objects free tier. Cloudflare Blog.
https://blog.cloudflare.com/building-ai-agents-with-mcp-authn-authz-and-durable-objects/
Cloudflare. (2025, June 3). Building an AI Agent that puts humans in the loop with Knock and Cloudflare’s Agents SDK. Cloudflare Blog.
https://blog.cloudflare.com/building-agents-at-knock-agents-sdk/
Google. (2025, April 9). Announcing the Agent2Agent Protocol (A2A). Google Developers Blog.
https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/
Google Cloud. (2025, July). Announcing a complete developer toolkit for scaling A2A agents on Google Cloud. Google Cloud Blog.
https://cloud.google.com/blog/products/ai-machine-learning/agent2agent-protocol-is-getting-an-upgrade
IEEE Computer Society. (2002). Standards and Interoperability – IEEE Power & Energy Society Multi-Agent Systems Working Group. IEEE Press.
https://site.ieee.org/pes-mas/agent-technology/standards-and-interoperability/
Jimenez, C., et al. (2023). SWE-bench: Can Language Models Resolve Real-World GitHub Issues? arXiv:2310.06770.
https://arxiv.org/abs/2310.06770
Linux Foundation. (2025, June 23). Linux Foundation Launches the Agent2Agent Protocol Project to Enable Secure, Intelligent Communication Between AI Agents. Linux Foundation.
https://www.linuxfoundation.org/press/linux-foundation-launches-the-agent2agent-protocol-project-to-enable-secure-intelligent-communication-between-ai-agents
NVIDIA Corporation. (2023). NVIDIA H100 Tensor Core GPU. NVIDIA Technical Whitepaper.
https://www.nvidia.com/en-us/data-center/h100/
NVIDIA Corporation. (2024). NVIDIA GH200 Grace Hopper Superchip Architecture (Whitepaper, v1.21).
https://resources.nvidia.com/en-us-grace-cpu/nvidia-grace-hopper
OpenAI. (2022, November 30). Introducing ChatGPT. OpenAI Blog.
https://openai.com/index/chatgpt/
OpenAI. (2025, March 11). New tools for building agents. OpenAI Blog.
https://openai.com/index/new-tools-for-building-agents/
OpenAI. (2025, May 21). New tools and features in the Responses API. OpenAI Blog.
https://openai.com/index/new-tools-and-features-in-the-responses-api/
Schick, T., Dwivedi-Yu, J., et al. (2023). Toolformer: Language Models Can Teach Themselves to Use Tools. arXiv:2302.04761.
https://arxiv.org/abs/2302.04761
Significant Gravitas. (2023, March 30). Auto-GPT Repository. GitHub.
https://github.com/Torantulino/Auto-GPT
Stanford Center for Research on Foundation Models (CRFM). (2022–2024). HELM: A holistic framework for evaluating foundation models. Stanford University.
https://crfm.stanford.edu/helm/classic/latest/
Tomlinson, R. (1971). 1971: First Ever Email.
https://www.guinnessworldrecords.com/news/2015/8/60/1971-first-ever-email-392973
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention Is All You Need. arXiv:1706.03762
https://arxiv.org/abs/1706.03762
arXiv. (2025, April 2). MCP Safety Audit: LLMs with the Model Context Protocol Allow Major Security Exploits. arXiv:2504.03767.
https://arxiv.org/abs/2504.03767
arXiv. (2025, June 16). Model Context Protocol (MCP) at First Glance: Studying the Security and Maintainability of MCP Servers. arXiv:2506.13538.
https://arxiv.org/abs/2506.13538