<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Manish Shivanandhan</title>
    <description>The latest articles on DEV Community by Manish Shivanandhan (@manishmshiva).</description>
    <link>https://dev.to/manishmshiva</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1245665%2Fa6a62417-7421-472f-9b2e-35981b70b85e.png</url>
      <title>DEV Community: Manish Shivanandhan</title>
      <link>https://dev.to/manishmshiva</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/manishmshiva"/>
    <language>en</language>
    <item>
      <title>Why Your “Simple Deploy” Turned Into a Week of Infrastructure Work</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Tue, 05 May 2026 04:40:19 +0000</pubDate>
      <link>https://dev.to/manishmshiva/why-your-simple-deploy-turned-into-a-week-of-infrastructure-work-3po3</link>
      <guid>https://dev.to/manishmshiva/why-your-simple-deploy-turned-into-a-week-of-infrastructure-work-3po3</guid>
      <description>&lt;p&gt;If you are running production workloads, this is for you.&lt;/p&gt;

&lt;p&gt;Not side projects. Not early-stage experiments. Not a single-service app with low traffic.&lt;/p&gt;

&lt;p&gt;This is for teams shipping real systems. Systems with users, uptime expectations, and release pressure.&lt;/p&gt;

&lt;p&gt;Because at that stage, your deploy process is no longer a convenience. It is part of your product.&lt;/p&gt;

&lt;p&gt;And right now, for most teams, it is the weakest part.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Promise You Were Sold
&lt;/h2&gt;

&lt;p&gt;Every modern stack makes the same promise.&lt;/p&gt;

&lt;p&gt;Shipping is easy. Deploying is automated. Infrastructure is abstracted away.&lt;/p&gt;

&lt;p&gt;Push your code. Watch it go live. That promise works, until it doesn’t.&lt;/p&gt;

&lt;p&gt;And when it breaks, it does not fail gracefully. It expands.&lt;/p&gt;

&lt;p&gt;A “simple deploy” turns into a multi-day investigation across systems you never intended to own.&lt;/p&gt;

&lt;p&gt;Not because your team is careless. Because the model itself assumes you will take on more responsibility than it admits.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Contract You Are Already Operating Under
&lt;/h2&gt;

&lt;p&gt;When you deploy today, you are not just shipping code.&lt;/p&gt;

&lt;p&gt;You are agreeing to run a distributed system of tools.&lt;/p&gt;

&lt;p&gt;You own the build pipeline. The container lifecycle. The runtime configuration. The network rules. The secrets layer. The scaling logic. The observability stack.&lt;/p&gt;

&lt;p&gt;Each of these is presented as a separate concern. In reality, they are tightly coupled.&lt;/p&gt;

&lt;p&gt;And you are the only layer holding them together. That is the hidden contract.&lt;/p&gt;

&lt;h2&gt;
  
  
  You Are Already Acting Like a Platform Team
&lt;/h2&gt;

&lt;p&gt;If your deploy process involves CI pipelines, container registries, cloud services, environment variables, and monitoring tools, you are not just an application team anymore. You are running a platform.&lt;/p&gt;

&lt;p&gt;You are defining how code moves from commit to production. You are deciding how failures are handled. You are shaping how services communicate.&lt;/p&gt;

&lt;p&gt;That is platform engineering work.&lt;/p&gt;

&lt;p&gt;The issue is not that this work exists. The issue is that most teams take it on unintentionally, without the structure, tooling, or dedicated ownership a real platform team would require.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cost Is Not Complexity. It Is Time
&lt;/h2&gt;

&lt;p&gt;It is easy to describe this problem as “complexity.”&lt;/p&gt;

&lt;p&gt;That undersells it.&lt;/p&gt;

&lt;p&gt;The real cost shows up in how your team spends its time.&lt;/p&gt;

&lt;p&gt;Deploys that should take minutes stretch into hours. Then days. &lt;/p&gt;

&lt;p&gt;Engineers context-switch from product work into debugging CI caches, fixing misconfigured secrets, or tracing network failures across services.&lt;/p&gt;

&lt;p&gt;Releases slow down. Not because your team cannot build features, but because shipping them becomes unpredictable.&lt;/p&gt;

&lt;p&gt;Onboarding gets harder. New engineers do not just learn the codebase. They have to learn your deployment system.&lt;/p&gt;

&lt;p&gt;None of this appears on a roadmap. But it directly impacts how fast you can move.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why “It Works on My Machine” Still Exists
&lt;/h2&gt;

&lt;p&gt;We were supposed to have solved this.&lt;/p&gt;

&lt;p&gt;Containers. Infrastructure as code. Reproducible builds.&lt;/p&gt;

&lt;p&gt;Yet the gap between local and production still shows up at the worst possible moment.&lt;/p&gt;

&lt;p&gt;Because the problem was never just environment parity.&lt;br&gt;
It is system parity.&lt;/p&gt;

&lt;p&gt;Your local setup does not include the same limits, permissions, network paths, or scaling behavior as production.&lt;/p&gt;

&lt;p&gt;Those differences only surface when everything is wired together.&lt;br&gt;
Which means they surface during deploys.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fragmentation Is the Root Problem
&lt;/h2&gt;

&lt;p&gt;Modern tooling did not remove infrastructure complexity.&lt;br&gt;
It redistributed it.&lt;/p&gt;

&lt;p&gt;Instead of managing servers, you manage integrations between services.&lt;/p&gt;

&lt;p&gt;Instead of a single failure domain, you have many.&lt;/p&gt;

&lt;p&gt;A deploy can fail because of a CI issue, a registry timeout, a secret misconfiguration, a networking rule, or a scaling limit.&lt;/p&gt;

&lt;p&gt;Each lives in a different system. Each requires different context.&lt;br&gt;
Individually, these tools are well-designed. Collectively, they form a system that is hard to reason about under pressure.&lt;/p&gt;

&lt;h2&gt;
  
  
  This Model Breaks as You Scale
&lt;/h2&gt;

&lt;p&gt;This model only works while your system is small.&lt;br&gt;
But production systems do not stay small.&lt;/p&gt;

&lt;p&gt;More services mean more pipelines. More configurations. More failure points.&lt;/p&gt;

&lt;p&gt;Over time, the effort required to maintain your deployment system grows faster than the product itself.&lt;br&gt;
That is the inflection point.&lt;/p&gt;

&lt;p&gt;Where engineering time shifts away from building features and toward maintaining the machinery that ships them.&lt;/p&gt;

&lt;p&gt;If you are already feeling that shift, it is not temporary. It is structural.&lt;/p&gt;

&lt;p&gt;At some point, there is a question that becomes hard to ignore: Why are you still managing this yourself?&lt;/p&gt;

&lt;p&gt;Not because you cannot. But because it is no longer clear that you should.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Shift Toward Platforms
&lt;/h2&gt;

&lt;p&gt;This is where Platform as a Service changes the model.&lt;/p&gt;

&lt;p&gt;Not by adding more tools. But by taking ownership of the system those tools create.&lt;/p&gt;

&lt;p&gt;A PaaS defines a path from code to production. That path is opinionated, constrained, and consistent.&lt;/p&gt;

&lt;p&gt;Those constraints are not limitations. They are what remove entire categories of failure.&lt;br&gt;
Instead of assembling a deployment pipeline, you adopt one.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Stop Paying For
&lt;/h2&gt;

&lt;p&gt;Moving to a PaaS is often framed as convenience. For production teams, it is closer to cost removal.&lt;/p&gt;

&lt;p&gt;You stop spending time deciding how builds run, how services are exposed, how scaling is configured, how logs are collected.&lt;/p&gt;

&lt;p&gt;You stop debugging the integration points between those decisions. You trade flexibility for predictability.&lt;/p&gt;

&lt;p&gt;And for most teams, predictability is the constraint that actually matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Infrastructure Work Back to Product Work
&lt;/h2&gt;

&lt;p&gt;The biggest change is not in your architecture.&lt;br&gt;
It is in your allocation of engineering effort.&lt;/p&gt;

&lt;p&gt;Time spent debugging deploys shifts back to building features.&lt;br&gt;
Time spent maintaining pipelines shifts to improving the product.&lt;br&gt;
Deploys become routine again.&lt;/p&gt;

&lt;p&gt;Not because they are simpler in theory, but because the system around them is controlled.&lt;/p&gt;

&lt;h2&gt;
  
  
  Collapsing the Stack
&lt;/h2&gt;

&lt;p&gt;The advantage of a PaaS is not abstraction. It is consolidation.&lt;/p&gt;

&lt;p&gt;Build, deploy, runtime, and observability are integrated into a single system.&lt;/p&gt;

&lt;p&gt;There are fewer layers to coordinate. Fewer places to look when something fails. And fewer decisions to get wrong.&lt;/p&gt;

&lt;p&gt;Platforms like &lt;a href="https://sevalla.com/" rel="noopener noreferrer"&gt;Sevalla&lt;/a&gt;, Railway, and Render are pushing this further by tightening the loop between code and production, reducing both the number of systems involved and the surface area developers need to understand.&lt;/p&gt;

&lt;p&gt;The goal is operational clarity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trade-Off You Are Actually Making
&lt;/h2&gt;

&lt;p&gt;The common objection is control. And it is valid.&lt;/p&gt;

&lt;p&gt;You give up the ability to customize every layer of your infrastructure.&lt;/p&gt;

&lt;p&gt;But in practice, most teams are not using that control to create differentiation. They are using it to keep a fragile system running, which keeps them stuck maintaining systems they should not own.&lt;/p&gt;

&lt;p&gt;Every custom configuration adds another failure point. Another dependency. Another thing to maintain under pressure.&lt;br&gt;
The trade-off is not control versus convenience.&lt;/p&gt;

&lt;p&gt;It is control versus reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  When This Becomes Urgent
&lt;/h2&gt;

&lt;p&gt;You do not need a major outage to justify a change.&lt;br&gt;
The signals show up earlier.&lt;/p&gt;

&lt;p&gt;Deploys feel unpredictable. Releases slow down. Engineers spend more time on pipelines than product logic. Onboarding takes longer than it should.&lt;/p&gt;

&lt;p&gt;These are not isolated issues.&lt;/p&gt;

&lt;p&gt;They are indicators that your current model is not scaling with your system.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a “Simple Deploy” Actually Means
&lt;/h2&gt;

&lt;p&gt;A simple deploy is not one that feels easy when everything works. It is one that continues to work as your system grows.&lt;/p&gt;

&lt;p&gt;It is predictable. Failures are rare. When they happen, they are easy to diagnose.&lt;/p&gt;

&lt;p&gt;And most importantly, it does not require your engineers to think about infrastructure to ship code.&lt;/p&gt;

&lt;p&gt;That outcome is not achieved by adding more tools. It is achieved by reducing the system you have to manage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Thought
&lt;/h2&gt;

&lt;p&gt;Your deploy did not turn into a week of infrastructure work because you missed something. It turned into that because you are operating a model that expects you to.&lt;/p&gt;

&lt;p&gt;You can continue investing in that model. Or you can adopt one where deploying is a solved problem.&lt;/p&gt;

&lt;p&gt;For production teams, that is no longer a philosophical choice. It is an operational one.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>devops</category>
      <category>architecture</category>
    </item>
    <item>
      <title>The Hidden Tax of Infrastructure: Why Your Team Shouldn’t Be Running It Anymore</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Wed, 22 Apr 2026 06:19:29 +0000</pubDate>
      <link>https://dev.to/manishmshiva/the-hidden-tax-of-infrastructure-why-your-team-shouldnt-be-running-it-anymore-5647</link>
      <guid>https://dev.to/manishmshiva/the-hidden-tax-of-infrastructure-why-your-team-shouldnt-be-running-it-anymore-5647</guid>
      <description>&lt;p&gt;Most engineering teams do not set out to manage infrastructure. They start with a product idea, a customer need, or a business problem.&lt;/p&gt;

&lt;p&gt;Infrastructure enters the picture as a means to an end. Servers need to be provisioned. Databases need to be configured. Networks need to be secured. At first, this work feels necessary and even empowering. It gives teams control.&lt;/p&gt;

&lt;p&gt;But over time, that control turns into a burden.&lt;/p&gt;

&lt;p&gt;What begins as a few &lt;a href="https://www.freecodecamp.org/news/how-to-get-started-with-terraform/" rel="noopener noreferrer"&gt;Terraform scripts&lt;/a&gt; or cloud console clicks evolves into a growing layer of responsibility.&lt;/p&gt;

&lt;p&gt;Teams find themselves maintaining deployment pipelines, debugging networking issues, rotating credentials, patching systems, and responding to incidents unrelated to their product logic.&lt;/p&gt;

&lt;p&gt;This is the hidden tax of infrastructure. It is not a line item in your budget, but it is paid every day in engineering time, cognitive load, and lost focus.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure is not a one-time cost
&lt;/h2&gt;

&lt;p&gt;A common mistake teams make is treating infrastructure as a setup task. Something you “get right” once and move on from.&lt;/p&gt;

&lt;p&gt;In reality, infrastructure is a continuous system. It changes with scale, traffic patterns, security threats, and team structure.&lt;/p&gt;

&lt;p&gt;Every component you introduce adds a long tail of operational work. A load balancer is not just a load balancer. It requires configuration tuning, monitoring, failover planning, and periodic upgrades. A database is not just storage. It brings backup strategies, replication concerns, indexing decisions, and performance tuning.&lt;/p&gt;

&lt;p&gt;Even with &lt;a href="https://www.freecodecamp.org/news/iac-with-apis-how-to-automate-cloud-resources/" rel="noopener noreferrer"&gt;infrastructure-as-code tools&lt;/a&gt;, the maintenance burden does not disappear. It becomes codified, but it still exists. Engineers must review changes, manage state, handle drift, and respond when things break.&lt;/p&gt;

&lt;p&gt;The cost compounds quietly. It shows up in slower delivery cycles, longer onboarding times for new engineers, and increased risk during deployments. It is not visible in sprint planning, but it is always there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The cognitive load problem
&lt;/h2&gt;

&lt;p&gt;One of the most underestimated aspects of infrastructure management is cognitive load.&lt;/p&gt;

&lt;p&gt;Modern systems are complex. Distributed architectures, microservices, container orchestration, and multi-region deployments all introduce layers of abstraction that engineers must understand.&lt;/p&gt;

&lt;p&gt;When a team owns its infrastructure, every engineer becomes partially responsible for this complexity. Even if you have dedicated platform engineers, application developers still need to understand enough to debug issues and deploy changes safely.&lt;/p&gt;

&lt;p&gt;This context switching has a real cost. An engineer working on a feature must also think about container resource limits, networking rules, observability gaps, and failure modes. Instead of focusing on business logic, they are juggling operational concerns.&lt;/p&gt;

&lt;p&gt;Cognitive load slows teams down. It increases the chance of mistakes. It makes systems harder to reason about. And it reduces the time engineers spend on the work that actually differentiates your product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reliability is harder than it looks
&lt;/h2&gt;

&lt;p&gt;Running infrastructure in production means owning reliability. This includes uptime, latency, data integrity, and incident response. Many teams underestimate how difficult this is to do well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.ibm.com/think/topics/high-availability" rel="noopener noreferrer"&gt;High availability&lt;/a&gt; is not just about redundancy. It requires careful design, testing, and ongoing validation. Failover mechanisms must be exercised. Monitoring systems must be tuned to detect real issues without creating noise. Incident response processes must be defined and practised.&lt;/p&gt;

&lt;p&gt;When something goes wrong, the cost is immediate and visible. Engineers are pulled into debugging sessions. Customers are affected. Business metrics drop. Postmortems are written. Action items are created, which often add more infrastructure complexity.&lt;/p&gt;

&lt;p&gt;Over time, teams build layers of safeguards and tooling to improve reliability. But each layer adds more to manage. The system becomes harder to change. The risk of unintended consequences increases.&lt;/p&gt;

&lt;p&gt;This is the paradox of self-managed infrastructure. The more you invest in reliability, the more complex your system becomes, and the more effort it takes to maintain that reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security and compliance never stand still
&lt;/h2&gt;

&lt;p&gt;Security is another dimension where the hidden tax becomes clear. Threats evolve constantly. Best practices change. Compliance requirements grow more stringent.&lt;/p&gt;

&lt;p&gt;When you run your own infrastructure, you are responsible for staying ahead of these changes. This includes patching systems, managing access controls, encrypting data, auditing logs, and responding to vulnerabilities.&lt;/p&gt;

&lt;p&gt;Even small gaps can have serious consequences. A misconfigured permission, an outdated dependency, or an exposed endpoint can lead to breaches. The cost of prevention is an ongoing effort. The cost of failure can be catastrophic.&lt;/p&gt;

&lt;p&gt;Compliance adds another layer. For teams in regulated industries, infrastructure must meet specific standards. This often requires documentation, audits, and controls that go beyond basic security practices.&lt;/p&gt;

&lt;p&gt;All of this work is necessary, but it does not directly contribute to your product’s value. It is part of the hidden tax you pay for owning infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The illusion of control
&lt;/h2&gt;

&lt;p&gt;One of the main reasons teams continue to manage their own infrastructure is the belief that it gives them control. They can customise everything. They can optimise for their specific needs. They are not dependent on external platforms.&lt;/p&gt;

&lt;p&gt;While this is true in theory, in practice, the level of control is often overstated. Most teams do not need deep customisation at the infrastructure level. They need reliability, scalability, and predictable behaviour.&lt;/p&gt;

&lt;p&gt;The control you gain comes at the cost of responsibility. Every customisation must be maintained. Every optimisation must be monitored. Every deviation from standard patterns increases the risk of issues.&lt;/p&gt;

&lt;p&gt;In many cases, teams end up recreating capabilities that are already available in managed platforms. They build internal tooling for deployment, scaling, and monitoring, only to maintain it indefinitely.&lt;/p&gt;

&lt;p&gt;The question is not whether you can manage your own infrastructure. It is whether you should. Most small to mid-sized teams should not be managing infrastructure at all. If it is not your competitive advantage, it is a distraction.&lt;/p&gt;

&lt;h2&gt;
  
  
  The rise of PaaS as an alternative
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-paas" rel="noopener noreferrer"&gt;Platform-as-a-Service&lt;/a&gt;, or PaaS, changes the equation. Instead of managing infrastructure directly, teams deploy applications to a platform that handles the underlying complexity.&lt;/p&gt;

&lt;p&gt;With PaaS, concerns like provisioning, scaling, load balancing, and patching are abstracted away. Engineers focus on code and configuration, not on servers and networks.&lt;/p&gt;

&lt;p&gt;This does not eliminate all operational work, but it shifts the responsibility. The platform provider handles the heavy lifting. Your team benefits from standardised, battle-tested infrastructure without having to build and maintain it.&lt;/p&gt;

&lt;p&gt;PaaS also reduces cognitive load. Developers interact with a simpler interface. Deployments become more predictable. Observability is often built in. This allows teams to move faster and with greater confidence.&lt;/p&gt;

&lt;p&gt;Importantly, PaaS aligns infrastructure with application needs. Instead of designing infrastructure first and fitting applications into it, teams define what their application requires, and the platform provides it.&lt;/p&gt;

&lt;p&gt;Heroku was the first to bring PaaS mainstream. After Heroku discontinued its free plans, I moved to &lt;a href="https://sevalla.com/" rel="noopener noreferrer"&gt;Sevalla&lt;/a&gt; for its simplicity and the speed at which new features, especially agentic tools, are introduced. Here is a &lt;a href="https://www.freecodecamp.org/news/top-heroku-alternatives-for-deployment/" rel="noopener noreferrer"&gt;list of alternatives&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Speed is a competitive advantage
&lt;/h2&gt;

&lt;p&gt;In most markets, speed matters. The ability to ship features quickly, respond to feedback, and iterate on ideas is a key competitive advantage.&lt;/p&gt;

&lt;p&gt;Infrastructure management can slow this down. Changes require coordination. Deployments carry risk. Debugging issues takes time away from development.&lt;/p&gt;

&lt;p&gt;By reducing the infrastructure burden, PaaS enables faster delivery. Teams can deploy changes more frequently. They can experiment with new ideas without worrying about underlying systems. They can recover from failures more quickly.&lt;/p&gt;

&lt;p&gt;This is not just about engineering efficiency. It has a direct impact on business outcomes. Faster delivery leads to better products, happier customers, and a stronger market position.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost is more than the cloud bills
&lt;/h2&gt;

&lt;p&gt;When teams evaluate infrastructure strategies, they often focus on direct costs. Cloud bills, reserved instances, and resource utilisation are measured and optimised.&lt;/p&gt;

&lt;p&gt;But the hidden tax of infrastructure is mostly indirect. It includes engineering time spent on maintenance, the opportunity cost of delayed features, and the risk of outages and security incidents.&lt;/p&gt;

&lt;p&gt;These costs are harder to quantify, but they are often larger than the direct costs. A single incident can consume days of engineering time. A delayed feature can impact revenue. A security breach can damage a reputation.&lt;/p&gt;

&lt;p&gt;PaaS may appear more expensive on paper, but it often reduces total cost when you account for these hidden factors. It shifts spending from operational overhead to product development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rethinking ownership
&lt;/h2&gt;

&lt;p&gt;The core question is not about tools or technologies. It is about ownership. What should your team own, and what should it delegate?&lt;/p&gt;

&lt;p&gt;Your product is your core asset. It is what differentiates you in the market. Infrastructure, while critical, is a means to support that product.&lt;/p&gt;

&lt;p&gt;By continuing to manage infrastructure, teams take on responsibilities that do not directly contribute to their goals. They pay the hidden tax in time, focus, and risk.&lt;/p&gt;

&lt;p&gt;PaaS offers a way to rebalance this. It allows teams to delegate infrastructure concerns and focus on building value.&lt;/p&gt;

&lt;p&gt;The shift is not always easy. It requires changes in mindset, tooling, and processes. But for many teams, it is a necessary step.&lt;/p&gt;

&lt;p&gt;Because the real cost of infrastructure is not what you pay your cloud provider. It is what you give up to run it yourself.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>devops</category>
      <category>architecture</category>
    </item>
    <item>
      <title>From Metrics to Meaning: How PaaS Helps Developers Understand Production</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Mon, 20 Apr 2026 08:25:47 +0000</pubDate>
      <link>https://dev.to/manishmshiva/from-metrics-to-meaning-how-paas-helps-developers-understand-production-2hal</link>
      <guid>https://dev.to/manishmshiva/from-metrics-to-meaning-how-paas-helps-developers-understand-production-2hal</guid>
      <description>&lt;p&gt;Modern production systems generate more data than most developers can realistically process.&lt;/p&gt;

&lt;p&gt;Every request emits logs. Every service exports metrics. Every dependency introduces another layer of signals.&lt;/p&gt;

&lt;p&gt;In theory, this should make systems easier to understand. In practice, it does the opposite.&lt;/p&gt;

&lt;p&gt;Dashboards become dense, alerts become noisy, and when something breaks, the same questions still come up. What is actually wrong? Who is affected? Where do you even start?&lt;/p&gt;

&lt;p&gt;The problem is not observability. It is interpretation.&lt;/p&gt;

&lt;p&gt;Most teams are not short on metrics. They are short on meaning.&lt;/p&gt;

&lt;p&gt;And that gap exists because developers are often forced to reason about infrastructure when they should be focused on application behaviour.&lt;/p&gt;

&lt;p&gt;Metrics exist to describe systems, but without the right level of abstraction, they become another layer of complexity.&lt;/p&gt;

&lt;p&gt;This is where modern PaaS platforms change the equation.&lt;/p&gt;

&lt;p&gt;They do not remove metrics. They turn them into signals that developers can actually use.&lt;/p&gt;

&lt;p&gt;This article breaks down five metrics that consistently matter in production systems. More importantly, it shows how a PaaS helps translate these metrics into something actionable, without requiring developers to act as infrastructure operators.&lt;/p&gt;

&lt;p&gt;I’ll be using the &lt;a href="https://sevalla.com/" rel="noopener noreferrer"&gt;Sevalla&lt;/a&gt; dashboard to explain these metrics, but other platforms like Railway and Render expose similar ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Latency Becomes a Clear Performance Signal
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjalkgsdcdm9e25vjfax.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjalkgsdcdm9e25vjfax.webp" alt=" " width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Latency is the most direct representation of user experience. It tells you how long your system takes to respond.&lt;/p&gt;

&lt;p&gt;When latency increases, users feel it immediately. Pages slow down. APIs become unreliable. Even small delays impact engagement.&lt;/p&gt;

&lt;p&gt;Most developers know to look at percentiles like p95 or p99 instead of averages. The slowest requests are what define perceived performance.&lt;/p&gt;
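
&lt;p&gt;As a rough illustration of why percentiles beat averages, here is a minimal sketch using made-up latency samples (the sample values and the nearest-rank method are illustrative, not from any specific platform):&lt;/p&gt;

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample with pct% of samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

# Hypothetical response times in milliseconds; one request hit a cold start.
latencies = [42, 45, 44, 48, 51, 47, 43, 950, 46, 49]

print("mean:", sum(latencies) / len(latencies))   # 136.5 — skewed by the outlier
print("p50: ", percentile(latencies, 50))         # 46 — the typical request
print("p99: ", percentile(latencies, 99))         # 950 — the tail users feel
```

&lt;p&gt;One slow request barely moves the p50 but dominates both the mean and the p99, which is why tail percentiles track perceived performance.&lt;/p&gt;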

&lt;p&gt;But in many environments, understanding latency is not straightforward.&lt;/p&gt;

&lt;p&gt;A spike could come from inefficient code. Or from cold starts. Or from scaling delays. Or from network routing issues. Developers are forced to investigate layers they did not build.&lt;/p&gt;

&lt;p&gt;This is where a PaaS changes the role of latency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzds4xoeftfim0i85n0v.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzds4xoeftfim0i85n0v.webp" alt=" " width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instead of being a starting point for infrastructure debugging, latency becomes a clean signal of application performance. Scaling, routing, and resource allocation are handled by the platform. What remains is a clearer relationship between code and outcome.&lt;/p&gt;

&lt;p&gt;When latency increases, developers can focus on what they actually control. Queries, logic, dependencies.&lt;/p&gt;

&lt;p&gt;The metric stays the same. The meaning becomes clearer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Error Rate Becomes a Reliable Indicator of Failure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqigv61z6qawcacqll3p.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqigv61z6qawcacqll3p.webp" alt=" " width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Error rate answers a simple question. Is the system working or not?&lt;/p&gt;

&lt;p&gt;It is usually measured as the percentage of requests that fail due to server-side issues. These are failures users cannot recover from. A broken checkout flow or a failed API call directly impacts trust.&lt;/p&gt;
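
&lt;p&gt;As a sketch, counting only the 5xx family against total requests (the status codes here are made-up sample data; 4xx client errors are deliberately excluded):&lt;/p&gt;

```python
# Hypothetical response status codes from an access log.
statuses = [200, 200, 500, 201, 404, 503, 200, 200, 200, 200]

# Server-side failures are the 5xx family; s // 100 == 5 picks them out,
# so the 404 client error does not count toward the error rate.
server_errors = sum(1 for s in statuses if s // 100 == 5)
error_rate = server_errors / len(statuses) * 100

print(f"error rate: {error_rate:.1f}%")  # 2 of 10 requests failed server-side
```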

&lt;p&gt;In theory, error rate should be one of the easiest metrics to act on.&lt;/p&gt;

&lt;p&gt;In practice, it rarely is.&lt;/p&gt;

&lt;p&gt;Errors can come from application bugs, but also from timeouts, resource limits, failed deployments, or unstable instances. Developers end up correlating errors with infrastructure events just to understand what happened.&lt;/p&gt;

&lt;p&gt;This slows everything down.&lt;/p&gt;

&lt;p&gt;A PaaS reduces this ambiguity.&lt;/p&gt;

&lt;p&gt;Failures caused by scaling, instance crashes, or transient infrastructure issues are handled at the platform level. Retries, isolation, and recovery mechanisms are built in.&lt;/p&gt;

&lt;p&gt;What remains is a tighter link between error rate and application correctness.&lt;/p&gt;

&lt;p&gt;When the error rate increases, it is far more likely to be something in the code or a dependency, not an invisible infrastructure issue.&lt;/p&gt;

&lt;p&gt;This shifts the error rate from a noisy metric into a reliable signal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Throughput Becomes Context Instead of a Problem
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5q7mfa0r14htshel2jo.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5q7mfa0r14htshel2jo.webp" alt=" " width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Throughput measures how many requests your system handles over time.&lt;/p&gt;

&lt;p&gt;It provides context for everything else. Latency and error rate only make sense when you know how much traffic the system is handling.&lt;/p&gt;

&lt;p&gt;A spike in latency during high traffic is expected. The same spike during low traffic is a warning sign.&lt;/p&gt;

&lt;p&gt;But in many systems, throughput introduces operational complexity.&lt;/p&gt;

&lt;p&gt;Traffic changes require scaling decisions. Teams define autoscaling rules, tune thresholds, and try to predict demand. When things go wrong, they revisit those decisions.&lt;/p&gt;

&lt;p&gt;Developers end up thinking about capacity instead of behaviour.&lt;/p&gt;

&lt;p&gt;A PaaS shifts this responsibility.&lt;/p&gt;

&lt;p&gt;Scaling is automatic. Traffic spikes are absorbed by the platform. Developers do not need to decide how many instances should be running or when to scale.&lt;/p&gt;

&lt;p&gt;Throughput becomes what it should be. Context.&lt;/p&gt;

&lt;p&gt;It helps explain what is happening, without forcing developers to manage how the system adapts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resource Utilisation Moves Out of the Critical Path
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjl1jfac1hyhe2w0126j6.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjl1jfac1hyhe2w0126j6.webp" alt=" " width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Resource utilisation measures how much CPU, memory, and I/O your system consumes.&lt;/p&gt;

&lt;p&gt;Traditionally, these metrics have been central to operating production systems. High CPU or memory usage signals potential issues. Teams monitor these metrics to avoid failures and plan scaling.&lt;/p&gt;

&lt;p&gt;But for most developers, resource utilisation is not where value is created.&lt;/p&gt;

&lt;p&gt;Yet in many environments, developers are still responsible for interpreting these signals. They tune memory limits, investigate CPU spikes, and try to optimise resource usage to keep systems stable.&lt;/p&gt;

&lt;p&gt;This is operational work.&lt;/p&gt;

&lt;p&gt;A PaaS changes the role of these metrics.&lt;/p&gt;

&lt;p&gt;Resource management is handled by the platform. Allocation, scaling, and isolation happen automatically. Developers do not need to constantly watch CPU graphs or memory charts to keep the system running.&lt;/p&gt;

&lt;p&gt;These metrics still exist, but they move into the background.&lt;/p&gt;

&lt;p&gt;They become diagnostic tools rather than primary signals.&lt;/p&gt;

&lt;p&gt;Developers can focus on performance at the application level, instead of managing how infrastructure behaves under load.&lt;/p&gt;

&lt;h2&gt;
  
  
  Instance Health Becomes Invisible by Design
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetx4sz2vtu9thhq74sko.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetx4sz2vtu9thhq74sko.webp" alt=" " width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instance health tracks restarts, crashes, and lifecycle events.&lt;/p&gt;

&lt;p&gt;In many systems, this is a critical metric. Frequent restarts indicate instability. Memory leaks, crashes, or resource exhaustion often show up here first.&lt;/p&gt;

&lt;p&gt;Teams monitor instance health to catch issues early and prevent cascading failures.&lt;/p&gt;

&lt;p&gt;But this also reveals something important.&lt;/p&gt;

&lt;p&gt;Developers are aware of, and responsible for, the lifecycle of infrastructure.&lt;/p&gt;

&lt;p&gt;They track restarts, investigate crashes, and try to stabilise the system manually.&lt;/p&gt;

&lt;p&gt;A PaaS removes this responsibility.&lt;/p&gt;

&lt;p&gt;Unhealthy instances are restarted automatically. Load is redistributed. Capacity is maintained without manual intervention.&lt;/p&gt;

&lt;p&gt;Instance health does not disappear, but it no longer requires constant attention.&lt;/p&gt;

&lt;p&gt;It becomes part of the platform’s internal behaviour, not something developers need to actively manage.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Metrics to Meaning
&lt;/h2&gt;

&lt;p&gt;These five metrics have not changed.&lt;/p&gt;

&lt;p&gt;Latency still reflects performance. Error rate still reflects correctness. Throughput still reflects demand. Resource utilisation still reflects efficiency. Instance health still reflects stability.&lt;/p&gt;

&lt;p&gt;What changes is how much work it takes to interpret them.&lt;/p&gt;

&lt;p&gt;In lower-level environments, developers have to connect these signals themselves. A latency spike leads to checking throughput, then resource usage, then instance behaviour. Each step requires context, assumptions, and time.&lt;/p&gt;

&lt;p&gt;This is where complexity accumulates.&lt;/p&gt;

&lt;p&gt;A PaaS reduces that gap.&lt;/p&gt;

&lt;p&gt;It handles scaling, recovery, and resource management so that metrics map more directly to application behaviour. The signals become easier to interpret because fewer variables are exposed.&lt;/p&gt;

&lt;p&gt;Instead of asking multiple questions across layers, developers can move more directly from symptom to cause.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters for Developers
&lt;/h2&gt;

&lt;p&gt;Most developers do not want to manage infrastructure.&lt;/p&gt;

&lt;p&gt;They want to build features, ship improvements, and respond to user needs.&lt;/p&gt;

&lt;p&gt;But as systems grow, operational responsibility expands. Monitoring becomes more complex. Debugging requires more context. A significant portion of time shifts from building to maintaining.&lt;/p&gt;

&lt;p&gt;Metrics are part of this shift.&lt;/p&gt;

&lt;p&gt;They are necessary, but they also reflect how much of the system you are responsible for understanding.&lt;/p&gt;

&lt;p&gt;A PaaS does not eliminate metrics. It reduces the effort required to make sense of them.&lt;/p&gt;

&lt;p&gt;It ensures that when something changes in production, the signals developers see are closer to the reality they care about.&lt;/p&gt;

&lt;p&gt;Application behaviour. User experience. System correctness.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Advantage Is Clarity
&lt;/h2&gt;

&lt;p&gt;The goal is not to have fewer metrics.&lt;/p&gt;

&lt;p&gt;It is to have metrics that mean something without requiring deep infrastructure reasoning.&lt;/p&gt;

&lt;p&gt;These five metrics form a complete picture of system health. But their real value depends on how directly they map to what developers control.&lt;/p&gt;

&lt;p&gt;The more layers you have to think about, the harder mapping becomes.&lt;/p&gt;

&lt;p&gt;A good PaaS removes those layers.&lt;/p&gt;

&lt;p&gt;It turns metrics from raw data into usable signals.&lt;/p&gt;

&lt;p&gt;And that shift from metrics to meaning is what allows developers to understand production systems without being buried under them.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>architecture</category>
    </item>
    <item>
      <title>From Prompt Engineer to Agent Engineer: The 7 Skills You Need to Build AI Agents</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Wed, 15 Apr 2026 20:12:20 +0000</pubDate>
      <link>https://dev.to/manishmshiva/from-prompt-engineer-to-agent-engineer-the-7-skills-you-need-to-build-ai-agents-33o7</link>
      <guid>https://dev.to/manishmshiva/from-prompt-engineer-to-agent-engineer-the-7-skills-you-need-to-build-ai-agents-33o7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Discover the key skills you need to build AI agents that thrive in real-world environments, moving beyond crafting prompts to engineering robust systems.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The world of artificial intelligence is rapidly evolving. Just a few years ago, being a “prompt engineer” was about crafting clever instructions for a language model.&lt;/p&gt;

&lt;p&gt;But times have changed. Today, building AI agents that function in the real world requires much more.&lt;/p&gt;

&lt;p&gt;The role is far broader and demands a diverse set of skills. This transition from a focus on crafting prompts to engineering sophisticated systems is like moving from following a recipe to becoming a chef.&lt;/p&gt;

&lt;p&gt;As we delve into these seven essential skills, you’ll see exactly where to focus your efforts to become a successful “agent engineer.”&lt;/p&gt;

&lt;h2&gt;
  
  
  The Changing Landscape of AI Engineering
&lt;/h2&gt;

&lt;p&gt;There’s an identity shift happening in technology today. What once was the realm of prompt engineers is now evolving into something much broader—agent engineering.&lt;/p&gt;

&lt;p&gt;In the past, crafting well-designed prompts was enough when working with general-purpose AI models like GPT. However, today’s AI agents are not just responding to questions; they’re performing actions, making decisions, interacting with databases, and much more. This means the skills required have expanded significantly.&lt;/p&gt;

&lt;p&gt;When building AI systems that perform real functions, like booking flights or processing refunds, writing effective prompts is just a starting point. The real challenge lies in engineering systems that can function seamlessly and handle unexpected situations.&lt;/p&gt;

&lt;p&gt;It’s like moving from being a cook who follows recipes to becoming a chef who understands all aspects of culinary creation. A chef knows about ingredients, techniques, and workflows, and this is the mindset you need to become an agent engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  System Design: The Foundation of AI Agents
&lt;/h2&gt;

&lt;p&gt;Effective &lt;a href="https://www.freecodecamp.org/news/learn-system-design-principles/" rel="noopener noreferrer"&gt;system design&lt;/a&gt; is the cornerstone of building reliable AI agents.&lt;/p&gt;

&lt;p&gt;When constructing an agent, you’re creating a complex system with multiple components that must work together harmoniously. This involves an architecture in which data flows smoothly, and every component understands its role. You might have a language model making decisions, tools executing actions, and databases storing states. Like an orchestra, these elements must harmonise without stepping on each other’s toes.&lt;/p&gt;

&lt;p&gt;Thinking of it like designing a complex software backend can be helpful. You’ll deal with situations where one component may fail and must handle requests that require coordination between several parts. If you have experience with system design, this might sound familiar. If not, it is crucial to start learning, as software systems, like AI agents, require solid structure and thoughtful orchestration.&lt;/p&gt;

&lt;p&gt;Here is a wonderful resource on system design: &lt;a href="https://github.com/karanpratapsingh/system-design" rel="noopener noreferrer"&gt;https://github.com/karanpratapsingh/system-design&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tool and Contract Design: Establishing Clear Communication
&lt;/h2&gt;

&lt;p&gt;Agents interact with the world through tools, and each tool operates on a contract. A contract is a set of clear expectations about inputs and outputs.&lt;/p&gt;

&lt;p&gt;The importance of precise tool design cannot be overstated. Vague contracts lead agents to make assumptions, which can be catastrophic, especially in critical tasks like financial transactions.&lt;/p&gt;

&lt;p&gt;For example, if a tool’s input schema says “user ID is a string,” the agent might interpret it in various unintended ways. But by specifying a pattern that must be matched, you guide the agent toward consistent, error-free operation.&lt;/p&gt;

&lt;p&gt;Clear tool contracts are like the terms of a handshake agreement. When both sides know exactly what’s expected, operations run smoothly, reducing room for ambiguity and errors. This precision in design ensures that your agents function effectively without resorting to guesswork or imagination, qualities that are less than ideal in automated systems.&lt;/p&gt;
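&lt;p&gt;As a sketch, here is what a stricter contract might look like in code. The tool name, fields, and ID pattern are hypothetical; the point is that validation rejects anything the schema does not pin down.&lt;/p&gt;

```python
import re

# A hypothetical refund tool contract. Instead of "user ID is a string",
# the schema pins down the exact shape the agent must produce.
REFUND_TOOL_CONTRACT = {
    "name": "issue_refund",
    "input": {
        "user_id": {"type": str, "pattern": r"^usr_[a-z0-9]{8}$"},
        "amount_cents": {"type": int, "minimum": 1},
    },
}

def validate_input(contract, payload):
    # Reject any payload that does not match the contract exactly.
    errors = []
    for field, rules in contract["input"].items():
        value = payload.get(field)
        if not isinstance(value, rules["type"]):
            errors.append(f"{field}: wrong or missing type")
        elif "pattern" in rules and not re.fullmatch(rules["pattern"], value):
            errors.append(f"{field}: does not match required pattern")
        elif "minimum" in rules and value < rules["minimum"]:
            errors.append(f"{field}: below minimum value")
    return errors

print(validate_input(REFUND_TOOL_CONTRACT,
                     {"user_id": "usr_ab12cd34", "amount_cents": 499}))  # []
```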

&lt;h2&gt;
  
  
  Mastering Retrieval Engineering
&lt;/h2&gt;

&lt;p&gt;Retrieval Engineering, specifically &lt;a href="https://manishmshiva.substack.com/p/how-to-chat-with-your-pdf-using-retrieval" rel="noopener noreferrer"&gt;Retrieval Augmented Generation (RAG)&lt;/a&gt;, is a critical component in enhancing an agent’s performance. Instead of relying solely on pre-trained knowledge, RAG involves fetching relevant documents to enrich the model’s context. The quality of these retrieved documents directly affects the agent’s output, making this a complex yet essential skill.&lt;/p&gt;

&lt;p&gt;Achieving optimal retrieval involves several factors. Documents must be split into appropriately sized chunks: large enough to maintain context, but small enough that important details are not buried.&lt;/p&gt;

&lt;p&gt;Additionally, the embeddings used to measure similarity must capture the meaning of both queries and documents, or retrieval will surface loosely related context. Finally, re-ranking mechanisms ensure the most relevant documents are prioritised. Retrieval is a deep discipline in its own right, but understanding these basics can significantly enhance your agent’s performance.&lt;/p&gt;
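&lt;p&gt;The chunking step can be sketched in a few lines. A fixed-size chunker with overlap is the simplest possible approach; real systems often split on sentence or section boundaries instead.&lt;/p&gt;

```python
def chunk_text(text, chunk_size=500, overlap=100):
    # Split text into fixed-size chunks that overlap, so a sentence that
    # straddles a boundary stays visible in both neighbouring chunks.
    # Sizes are illustrative; real systems tune them per corpus.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

print(chunk_text("abcdefghij", chunk_size=4, overlap=2))
# ['abcd', 'cdef', 'efgh', 'ghij']
```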

&lt;h2&gt;
  
  
  Reliability Engineering: Ensuring Consistent Agent Performance
&lt;/h2&gt;

&lt;p&gt;Reliability is a non-negotiable aspect of agent engineering.&lt;/p&gt;

&lt;p&gt;APIs can fail, networks can time out, and external services may go down unexpectedly. These situations can leave your agent ineffective, or stuck trying to execute an unachievable task. Therefore, reliability engineering principles like implementing retry logic with backoff, setting timeouts to prevent indefinite hangs, and creating fallback paths are critical.&lt;/p&gt;

&lt;p&gt;Think of these techniques as proactive measures to protect your system from cascading failures and ensure your agent can maintain a high level of performance, even under less-than-ideal conditions. While these concepts may be familiar to those with a background in backend development, they are crucial for any aspiring agent engineer who wishes to build robust and resilient systems.&lt;/p&gt;
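&lt;p&gt;The retry-with-backoff pattern can be sketched as follows. This is a generic illustration, not tied to any particular agent framework.&lt;/p&gt;

```python
import random
import time

def call_with_retries(fn, max_attempts=4, base_delay=0.5):
    # Retry a flaky call with exponential backoff plus jitter.
    # A generic sketch; real agents also add per-call timeouts and a
    # fallback path for when every attempt fails.
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: let the caller's fallback run
            # base, 2x base, 4x base... plus jitter to avoid thundering herds
            delay = base_delay * (2 ** attempt)
            time.sleep(delay + random.uniform(0, base_delay / 10))

# A stand-in API that fails twice, then succeeds
attempts = {"count": 0}
def flaky_api():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("upstream timed out")
    return "ok"

result = call_with_retries(flaky_api, base_delay=0.01)
print(result)  # ok
```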

&lt;h2&gt;
  
  
  Security and Safety: Protecting Your AI Systems
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.freecodecamp.org/news/how-to-use-strix-the-open-source-ai-agent-for-security-testing/" rel="noopener noreferrer"&gt;Security&lt;/a&gt; is a crucial concern in agent engineering.&lt;/p&gt;

&lt;p&gt;Agents can be targets for attacks, such as prompt injections, where malicious instructions are embedded in user input to mislead the system. Without proper defences, an agent might inadvertently comply with harmful requests. Thus, it’s essential to apply security engineering principles to a new kind of system.&lt;/p&gt;

&lt;p&gt;This involves implementing input validation to filter malicious requests, output filters to ensure responses adhere to policy, and permission boundaries to limit the agent’s actions. These measures protect your system from unauthorised manipulation and ensure the agent functions within safe and compliant parameters.&lt;/p&gt;

&lt;p&gt;In this sense, security engineering is about anticipating potential vulnerabilities and reinforcing your system to prevent misuse.&lt;/p&gt;
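&lt;p&gt;Here is a deliberately simplified sketch of two of these layers: a deny-list input check and a permission boundary. The patterns and action names are illustrative; real defences layer many more techniques.&lt;/p&gt;

```python
import re

# Illustrative deny-list patterns only; real systems combine input
# validation, output filtering, permission scoping, and more.
INJECTION_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"disregard your system prompt",
]

ALLOWED_ACTIONS = {"lookup_order", "check_status"}  # permission boundary

def screen_request(user_input, requested_action):
    # Reject inputs that look like prompt injection, or requests for
    # actions outside the agent's permission boundary.
    lowered = user_input.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            return (False, "possible prompt injection")
    if requested_action not in ALLOWED_ACTIONS:
        return (False, "action outside permission boundary")
    return (True, "ok")

print(screen_request("Where is my order #123?", "lookup_order"))  # (True, 'ok')
```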

&lt;h2&gt;
  
  
  Evaluation, Observability, and Product Thinking
&lt;/h2&gt;

&lt;p&gt;An agent’s effectiveness can only be improved if its performance is well-evaluated. &lt;a href="https://www.ibm.com/think/topics/ai-agent-evaluation" rel="noopener noreferrer"&gt;Techniques for evaluation&lt;/a&gt;, along with observability tools, allow you to track your agent’s actions, understand why decisions were made, and identify areas for improvement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.langchain.com/langsmith/observability" rel="noopener noreferrer"&gt;Tracing&lt;/a&gt; every decision, logging each tool interaction, and keeping a comprehensive timeline are essential practices for effective debugging and enhancing performance.&lt;/p&gt;

&lt;p&gt;Beyond technical prowess, product thinking emphasises the human aspect of agent engineering. Agents should align with user expectations, offer clear feedback whether confident or uncertain, and handle errors gracefully. Product thinking involves designing user-friendly systems that build trust and encourage use, even when unpredictable AI behaviour is involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Transitioning from a prompt engineer to a full-fledged agent engineer involves mastering a diverse skill set, much like a chef mastering the culinary arts.&lt;/p&gt;

&lt;p&gt;By understanding system architecture, designing precise tool contracts, optimising information retrieval, ensuring reliable operations, fortifying security, and integrating evaluation and product thinking, you’re well on your way to building AI agents that perform seamlessly in the real world.&lt;/p&gt;

&lt;p&gt;These seven skills are your recipe for success, paving the way for creating robust, reliable, and human-friendly AI systems. As the expectations for AI systems evolve, so too must our skills. The future belongs to those who adapt and grow with it.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>career</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>Getting Started with Terraform: From Zero to Production</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Mon, 13 Apr 2026 09:28:54 +0000</pubDate>
      <link>https://dev.to/manishmshiva/getting-started-with-terraform-from-zero-to-production-13m</link>
      <guid>https://dev.to/manishmshiva/getting-started-with-terraform-from-zero-to-production-13m</guid>
      <description>&lt;p&gt;Infrastructure has undergone a fundamental shift over the past decade.&lt;/p&gt;

&lt;p&gt;What was once configured manually through dashboards and shell access is now defined declaratively in code. This shift is not just about convenience. It is about repeatability, auditability, and control.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.hashicorp.com/terraform" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; sits at the center of this transformation. It allows you to define infrastructure using configuration files, apply those configurations consistently across environments, and evolve systems safely over time.&lt;/p&gt;

&lt;p&gt;For teams building modern applications, especially on platform abstractions, Terraform becomes the control plane for everything from application deployment to databases and networking. &lt;/p&gt;

&lt;p&gt;The Terraform provider from &lt;a href="https://sevalla.com/" rel="noopener noreferrer"&gt;Sevalla&lt;/a&gt; extends this model by allowing teams to manage the entire application platform as code, not just underlying infrastructure. It enables you to define applications, databases, networking, storage, and deployment workflows in a single, unified configuration. &lt;/p&gt;

&lt;p&gt;Instead of stitching together multiple tools or relying on manual setup, everything from code deployment to traffic routing and environment configuration can be expressed declaratively. This creates a consistent, repeatable system where environments can be replicated easily, changes are version-controlled, and production setups can evolve safely over time.&lt;/p&gt;

&lt;p&gt;This article walks through how to go from zero to a production-ready setup using Terraform and the &lt;a href="https://github.com/sevalla-hosting/terraform-provider-sevalla/" rel="noopener noreferrer"&gt;Sevalla Terraform Provider&lt;/a&gt;, focusing on practical concepts rather than theory.&lt;/p&gt;
&lt;h2&gt;
  
  
  What Terraform Actually Does
&lt;/h2&gt;

&lt;p&gt;Terraform is an infrastructure-as-code tool that translates configuration files into real infrastructure. You describe the desired state of your system, and Terraform figures out how to achieve it.&lt;/p&gt;

&lt;p&gt;At a high level, Terraform operates in three phases.&lt;/p&gt;

&lt;p&gt;First, it initializes the working directory and downloads required providers. Providers are plugins that allow Terraform to interact with specific platforms.&lt;/p&gt;

&lt;p&gt;Next, it creates an execution plan. This plan shows what resources will be created, modified, or destroyed to match your configuration.&lt;/p&gt;

&lt;p&gt;Finally, it applies the plan, making the necessary API calls to bring your infrastructure into the desired state.&lt;/p&gt;

&lt;p&gt;The key idea is that Terraform is declarative. You define what you want, not how to do it. Terraform handles the orchestration.&lt;/p&gt;

&lt;p&gt;This abstraction becomes extremely powerful as systems grow more complex.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setting Up Terraform for the First Time
&lt;/h2&gt;

&lt;p&gt;Getting started with Terraform requires very little setup. You install the CLI, create a working directory, and define a basic configuration.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://developer.hashicorp.com/terraform/language/syntax/configuration" rel="noopener noreferrer"&gt;Terraform configuration&lt;/a&gt; is written in HCL, a domain-specific language designed to be human-readable. Even a simple configuration establishes the core concepts.&lt;/p&gt;

&lt;p&gt;You define the required provider, configure authentication, and declare resources.&lt;/p&gt;

&lt;p&gt;Here is a minimal example that provisions an application using a managed platform provider.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;sevalla&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"sevalla-hosting/sevalla"&lt;/span&gt;
     &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 1.0"&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"sevalla"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"sevalla_clusters"&lt;/span&gt; &lt;span class="s2"&gt;"all"&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"sevalla_application"&lt;/span&gt; &lt;span class="s2"&gt;"web"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;display_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-web-app"&lt;/span&gt;
 &lt;span class="nx"&gt;cluster_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sevalla_clusters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;all&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;clusters&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
 &lt;span class="nx"&gt;source&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"publicGit"&lt;/span&gt;
 &lt;span class="nx"&gt;repo_url&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/example/app"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration does several things.&lt;/p&gt;

&lt;p&gt;It declares the provider, which tells Terraform how to communicate with the platform. It fetches available clusters using a data source. It defines an application resource that points to a Git repository.&lt;/p&gt;

&lt;p&gt;Even at this stage, you are already defining infrastructure in a reproducible way.&lt;/p&gt;

&lt;p&gt;To execute this configuration, you run three commands.&lt;/p&gt;

&lt;p&gt;You initialize the project, generate a plan, and apply it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SEVALLA_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-api-key"&lt;/span&gt;
terraform init
terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After applying, your application is deployed without manual steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Providers, Resources, and Data Sources
&lt;/h2&gt;

&lt;p&gt;Terraform revolves around three core constructs.&lt;/p&gt;

&lt;p&gt;Providers act as the bridge between Terraform and external systems. They expose APIs in a structured way that Terraform can use.&lt;/p&gt;

&lt;p&gt;Resources represent the infrastructure you want to create. These are the building blocks of your system. Applications, databases, load balancers, and storage buckets are all modeled as resources.&lt;/p&gt;

&lt;p&gt;Data sources allow you to query existing infrastructure. Instead of creating something new, you retrieve information that can be used elsewhere in your configuration.&lt;/p&gt;

&lt;p&gt;The combination of these constructs allows you to build flexible and composable systems.&lt;/p&gt;

&lt;p&gt;For example, you can fetch a list of available clusters using a data source and then dynamically assign your application to one of them. This reduces hardcoding and improves portability.&lt;/p&gt;

&lt;p&gt;As your configuration grows, these abstractions help you maintain clarity and structure.&lt;/p&gt;
&lt;h2&gt;
  
  
  Building a Real Application Stack
&lt;/h2&gt;

&lt;p&gt;A production system is rarely just a single application. It typically includes multiple components that need to work together.&lt;/p&gt;

&lt;p&gt;With Terraform, you can define the entire stack in one place.&lt;/p&gt;

&lt;p&gt;You might start with an application, then add a managed database, connect them internally, and expose the application through a load balancer.&lt;/p&gt;

&lt;p&gt;A simplified flow looks like this. You define the application resource that pulls code from a repository. You provision a database resource, such as PostgreSQL or Redis. You establish an internal connection between the application and the database. You configure environment variables for credentials. You optionally add a custom domain or routing layer.&lt;/p&gt;

&lt;p&gt;Each of these components is a resource, and Terraform ensures they are created in the correct order.&lt;/p&gt;

&lt;p&gt;This approach eliminates configuration drift. Instead of manually setting up each component, everything is defined in code and version-controlled.&lt;/p&gt;

&lt;p&gt;It also makes environments consistent. Your staging and production setups can be identical except for a few variables.&lt;/p&gt;
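&lt;p&gt;As a sketch of what such a stack could look like: the application resource mirrors the earlier example, but the database and connection resources below are hypothetical and will differ by provider.&lt;/p&gt;

```terraform
resource "sevalla_application" "web" {
  display_name = "my-web-app"
  cluster_id   = data.sevalla_clusters.all.clusters[0].id
  source       = "publicGit"
  repo_url     = "https://github.com/example/app"
}

# Hypothetical managed database resource
resource "sevalla_database" "main" {
  display_name = "my-web-db"
  type         = "postgresql"
}

# Hypothetical internal connection exposing credentials to the app
resource "sevalla_connection" "web_to_db" {
  app_id      = sevalla_application.web.id
  database_id = sevalla_database.main.id
}
```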
&lt;h2&gt;
  
  
  Managing Configuration and Secrets
&lt;/h2&gt;

&lt;p&gt;Production systems require configuration. This includes environment variables, API keys, and connection strings.&lt;/p&gt;

&lt;p&gt;Terraform provides multiple ways to handle this.&lt;/p&gt;

&lt;p&gt;You can define variables in your configuration and pass values at runtime. Sensitive values, such as API keys, are typically injected via environment variables.&lt;/p&gt;

&lt;p&gt;For example, authentication is handled through an API key that can be set as an environment variable.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SEVALLA_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-api-key"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This avoids hardcoding credentials in configuration files.&lt;/p&gt;

&lt;p&gt;You can also define environment variables as part of your infrastructure. This allows you to configure applications consistently across environments.&lt;/p&gt;

&lt;p&gt;The important principle is separation of concerns. Infrastructure definitions should remain clean, while sensitive data is managed securely.&lt;/p&gt;
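&lt;p&gt;For values you do declare in configuration, Terraform's built-in variable handling helps. Marking a variable as sensitive keeps it out of plan output, and the TF_VAR_ prefix lets you supply it from the environment.&lt;/p&gt;

```terraform
# Declare a sensitive variable; Terraform redacts it in plan output.
variable "database_password" {
  type      = string
  sensitive = true
}

# Supply the value at runtime instead of hardcoding it:
#   export TF_VAR_database_password="..."
#   terraform apply
```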

&lt;h2&gt;
  
  
  Scaling and Process Configuration
&lt;/h2&gt;

&lt;p&gt;Modern applications often consist of multiple processes. A web server handles incoming requests, background workers process jobs, and scheduled tasks run periodically.&lt;/p&gt;

&lt;p&gt;Terraform allows you to define these processes explicitly.&lt;/p&gt;

&lt;p&gt;You can configure different process types, allocate resources, and scale them independently. This is particularly useful for handling variable workloads.&lt;/p&gt;

&lt;p&gt;For example, you might scale web processes based on incoming traffic while keeping background workers at a steady level.&lt;/p&gt;

&lt;p&gt;By defining this in code, scaling becomes predictable and repeatable. You avoid manual intervention and ensure that your system behaves consistently under load.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Networking and Traffic Management
&lt;/h2&gt;

&lt;p&gt;As systems grow, managing traffic becomes more important.&lt;/p&gt;

&lt;p&gt;Terraform enables you to define networking components such as load balancers and routing rules. You can map domains to applications, distribute traffic across multiple services, and control access.&lt;/p&gt;

&lt;p&gt;This is essential for production readiness.&lt;/p&gt;

&lt;p&gt;A load balancer can improve availability by distributing traffic across instances. Domain configuration ensures that users can access your application through a stable endpoint.&lt;/p&gt;

&lt;p&gt;You can also define restrictions, such as IP allowlists, to enhance security.&lt;/p&gt;

&lt;p&gt;All of this is managed declaratively, which reduces the risk of misconfiguration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipelines and Continuous Deployment
&lt;/h2&gt;

&lt;p&gt;Production systems require reliable deployment workflows.&lt;/p&gt;

&lt;p&gt;Terraform can be used to define deployment pipelines and stages. This allows you to model how code moves from development to production.&lt;/p&gt;

&lt;p&gt;You can define multiple stages, associate applications with each stage, and control how deployments are triggered.&lt;/p&gt;

&lt;p&gt;This brings infrastructure and deployment logic into a single system. Instead of relying on external scripts or manual processes, everything is defined in a structured and version-controlled way.&lt;/p&gt;

&lt;p&gt;It also improves traceability. You can see exactly how a system is configured and how changes are applied over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Configuration to Production
&lt;/h2&gt;

&lt;p&gt;Moving from a simple setup to production involves more than just adding resources. It requires discipline in how you manage infrastructure.&lt;/p&gt;

&lt;p&gt;Version control becomes critical. Every change to your infrastructure should go through code review. This reduces the risk of introducing breaking changes.&lt;/p&gt;

&lt;p&gt;State management is another key aspect. Terraform keeps track of the current state of your infrastructure. This state must be stored securely and consistently, especially in team environments.&lt;/p&gt;

&lt;p&gt;You also need to think about environment separation. Development, staging, and production should be isolated but defined using similar configurations.&lt;/p&gt;

&lt;p&gt;Finally, observability should be integrated from the start. While Terraform provisions infrastructure, you need monitoring and logging to understand how it behaves in production.&lt;/p&gt;
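&lt;p&gt;Remote state is typically configured through a backend block. The S3 backend below is one common option; the bucket and table names are placeholders.&lt;/p&gt;

```terraform
# Store state remotely so the whole team shares one source of truth.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # state locking for teams
    encrypt        = true
  }
}
```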

&lt;h2&gt;
  
  
  Why Terraform Scales with You
&lt;/h2&gt;

&lt;p&gt;Terraform works well for small projects, but its real value becomes apparent as systems grow.&lt;/p&gt;

&lt;p&gt;As you add more services, environments, and dependencies, manual management becomes unsustainable. Terraform provides a structured way to manage this complexity.&lt;/p&gt;

&lt;p&gt;It enforces consistency. It enables automation. It creates a single source of truth for your infrastructure.&lt;/p&gt;

&lt;p&gt;Most importantly, it allows teams to move faster without sacrificing reliability.&lt;/p&gt;

&lt;p&gt;By defining infrastructure as code, you reduce ambiguity. You make systems easier to understand, easier to debug, and easier to evolve.&lt;/p&gt;

&lt;p&gt;That is what takes you from zero to production in a way that actually scales.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building AI Agents That Can Control Cloud Infrastructure</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Thu, 26 Mar 2026 05:36:48 +0000</pubDate>
      <link>https://dev.to/manishmshiva/building-ai-agents-that-can-control-cloud-infrastructure-2852</link>
      <guid>https://dev.to/manishmshiva/building-ai-agents-that-can-control-cloud-infrastructure-2852</guid>
      <description>&lt;p&gt;Cloud infrastructure has become deeply programmable over the past decade.&lt;/p&gt;

&lt;p&gt;Nearly every platform exposes APIs that allow developers to create applications, provision databases, configure networking, and retrieve metrics.&lt;/p&gt;

&lt;p&gt;This shift enabled automation via Infrastructure as Code and CI/CD pipelines, allowing teams to manage systems through scripts rather than dashboards.&lt;/p&gt;

&lt;p&gt;Now another layer of automation is emerging. AI agents are starting to participate directly in development workflows. These agents can read codebases, generate implementations, run terminal commands, and help debug systems. The next logical step is to allow them to interact with the infrastructure itself.&lt;/p&gt;

&lt;p&gt;Instead of manually inspecting dashboards or remembering complex command-line syntax, developers can ask an AI agent to check system state, deploy services, or retrieve metrics. The agent performs these tasks by interacting with cloud APIs on behalf of the user.&lt;/p&gt;

&lt;p&gt;This capability opens the door to a new type of workflow where infrastructure becomes conversational, programmable, and deeply integrated into development environments.&lt;/p&gt;

&lt;p&gt;In this article, we will explore how AI agents can interact with cloud infrastructure through APIs, the challenges of exposing large APIs to AI systems, and how architectures like MCP make it possible for agents to discover and execute infrastructure operations safely. &lt;/p&gt;

&lt;p&gt;We will also look at a practical example of connecting an AI agent to a cloud platform like Sevalla using the search-and-execute pattern.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;AI Agents Are Becoming Part of the Development Environment&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Modern developer tools increasingly embed AI assistants directly inside coding environments. Editors such as Cursor, Windsurf, and Claude Code allow developers to ask questions about their projects, generate new code, and execute commands without leaving the editor.&lt;/p&gt;

&lt;p&gt;Instead of manually navigating documentation or writing boilerplate code, developers can simply describe what they want. The AI interprets the request and produces the necessary actions.&lt;/p&gt;

&lt;p&gt;This approach is already common for tasks like writing functions, refactoring code, or debugging errors. However, infrastructure management is still largely handled through dashboards, terminal commands, or external tooling.&lt;/p&gt;

&lt;p&gt;If AI agents are going to assist developers effectively, they need access to the same systems developers interact with every day. That means accessing APIs that manage applications, databases, deployments, and other infrastructure resources.&lt;/p&gt;

&lt;p&gt;The challenge is providing that access in a structured and scalable way.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Connecting AI Agents to External Systems&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI agents do not inherently know how to interact with external services. They need a framework that allows them to call tools and access data safely.&lt;/p&gt;

&lt;p&gt;Model Context Protocol, or MCP, provides one such framework. MCP is designed to let AI assistants connect to external tools in a standardized way.&lt;/p&gt;

&lt;p&gt;An MCP server exposes tools that an AI agent can call when it needs information or wants to act. These tools might retrieve data from a database, query logs, interact with APIs, or execute commands on a remote system.&lt;/p&gt;

&lt;p&gt;When the AI agent receives a request from the user, it determines which tool to call and executes that tool through the MCP server. The results are returned to the agent, which can then continue reasoning about the problem.&lt;/p&gt;

&lt;p&gt;This architecture allows AI assistants to interact with complex systems while maintaining a clear boundary between the agent and the external environment.&lt;/p&gt;
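&lt;p&gt;The flow can be sketched without the protocol details: a registry of named tools, plus a dispatcher the agent calls through. This illustrates the control flow only, not the MCP wire format, and the tool names are invented for the example.&lt;/p&gt;

```javascript
// Minimal sketch of the tool-call loop an MCP server mediates: the agent
// picks a tool by name, the server executes it, and the result flows back.
const tools = {
  // Each tool is a named function the agent is allowed to call.
  get_logs: async ({ service }) => `logs for ${service}: ...`,
  list_apps: async () => ["api", "worker"],
};

async function handleToolCall(name, args) {
  const tool = tools[name];
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool(args); // result returned to the agent for further reasoning
}

// Example: the agent decides it needs the list of applications.
handleToolCall("list_apps", {}).then((apps) => console.log(apps));
```

&lt;p&gt;A real MCP server would also advertise each tool's input schema so the agent knows how to construct arguments.&lt;/p&gt;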

&lt;h2&gt;
  
  
  &lt;strong&gt;The Challenge of Large Cloud APIs&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While MCP enables connecting AI agents to infrastructure systems, cloud platforms introduce an additional challenge.&lt;/p&gt;

&lt;p&gt;Most cloud platforms expose large APIs with many endpoints. A typical platform might include endpoints for managing applications, databases, storage, networking, domains, metrics, logs, and deployment pipelines.&lt;/p&gt;

&lt;p&gt;If an MCP server exposes each endpoint as a separate tool, the number of tools can quickly grow into the hundreds.&lt;/p&gt;

&lt;p&gt;This creates several problems. First, the AI agent must understand the purpose and parameters of every available tool before deciding which one to use. This increases the amount of context required for the agent to operate effectively.&lt;/p&gt;

&lt;p&gt;Second, maintaining hundreds of tools becomes difficult for developers who build and maintain the MCP server.&lt;/p&gt;

&lt;p&gt;Third, the system becomes rigid. Every time a new API endpoint is added, a new tool must also be created and documented.&lt;/p&gt;

&lt;p&gt;For large APIs, this approach quickly becomes impractical.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A Simpler Pattern for API Access&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A different architecture solves this problem by dramatically reducing the number of tools exposed to the AI.&lt;/p&gt;

&lt;p&gt;Instead of providing a separate tool for every API endpoint, the MCP server exposes only two capabilities.&lt;/p&gt;

&lt;p&gt;The first capability allows the agent to search the API specification. This lets the agent discover available endpoints, understand parameters, and inspect request or response schemas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22cii6v118q2iq2cx5fl.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22cii6v118q2iq2cx5fl.webp" alt=" " width="480" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second capability allows the agent to execute code that calls the API.&lt;/p&gt;

&lt;p&gt;In this model, the AI agent dynamically generates the code required to call the API. Because the agent can search the specification and write its own API calls, the MCP server does not need to define individual tools for every endpoint.&lt;/p&gt;

&lt;p&gt;This pattern drastically reduces the complexity of the integration while still giving the agent full access to the underlying platform.&lt;/p&gt;
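&lt;p&gt;The search half of the pattern can be sketched as a keyword lookup over an OpenAPI-style specification. The spec and scoring below are simplified stand-ins; a real server would index the full document and rank matches.&lt;/p&gt;

```javascript
// A tiny stand-in for a searchable API specification.
const spec = [
  { method: "GET", path: "/applications", summary: "List all applications" },
  { method: "POST", path: "/applications", summary: "Create an application" },
  { method: "GET", path: "/databases", summary: "List all databases" },
];

function search(query) {
  const words = query.toLowerCase().split(/\s+/);
  // Return endpoints whose summary mentions every query word.
  return spec.filter((e) =>
    words.every((w) => e.summary.toLowerCase().includes(w))
  );
}

console.log(search("list applications"));
// The agent reads the returned path/method and generates the API call itself.
```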

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Sandboxed Code Execution Is Important&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Allowing AI agents to generate and execute code raises important security considerations.&lt;/p&gt;

&lt;p&gt;If the generated code runs unrestricted, it could potentially access sensitive parts of the system or perform unintended operations. To prevent this, the execution environment must be carefully controlled.&lt;/p&gt;

&lt;p&gt;A common solution is running the generated code inside a sandboxed environment. In this setup, the code runs in an isolated runtime with limited permissions. The environment exposes only specific functions that allow interaction with the platform’s API.&lt;/p&gt;

&lt;p&gt;Because the code cannot access the host system directly, the risk of unintended behavior is greatly reduced. At the same time, the AI agent retains the flexibility to generate custom API calls as needed.&lt;/p&gt;

&lt;p&gt;This combination of dynamic code generation and sandboxed execution makes it possible for AI agents to interact with complex APIs safely.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Practical Example with Sevalla&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A practical implementation of this architecture can be seen in the Sevalla MCP server, which exposes a cloud platform’s API to AI agents through the search-and-execute pattern.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sevalla.com" rel="noopener noreferrer"&gt;Sevalla &lt;/a&gt;is a PaaS provider designed for developers shipping production applications. It offers app hosting, database, object storage, and static site hosting for your projects. We also have other options, such as AWS and Azure, that come with their own MCP tools.&lt;/p&gt;

&lt;p&gt;Instead of registering hundreds of tools for every API endpoint, the server provides only two tools that allow the AI agent to explore and interact with the entire platform. Find the full documentation for &lt;a href="https://github.com/sevalla-hosting/mcp" rel="noopener noreferrer"&gt;Sevalla’s MCP server&lt;/a&gt; here.&lt;/p&gt;

&lt;p&gt;The first tool, search, allows the agent to query the platform’s OpenAPI specification. Through this interface the agent can discover available endpoints, understand parameters, and inspect response schemas.&lt;/p&gt;

&lt;p&gt;Because the API specification is searchable, the agent does not need to know the structure of the platform’s API in advance. It can explore the API dynamically based on the task it needs to perform.&lt;/p&gt;

&lt;p&gt;For example, if the user asks the agent to list all applications running in their account, the agent can begin by searching the API specification.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;const endpoints = await sevalla.search("list all applications")&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The result returns the relevant API definitions, including the correct path and parameters required for the request. Once the agent understands which endpoint to use, it can generate the necessary API call.&lt;/p&gt;

&lt;p&gt;The second tool, execute, runs JavaScript inside a sandboxed V8 environment. Within this environment the agent can call the API using a helper function provided by the platform.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;const apps = await sevalla.request({&lt;br&gt;
  method: "GET",&lt;br&gt;
  path: "/applications"&lt;br&gt;
})&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Because the code runs inside an isolated V8 sandbox, the generated script cannot access the host system. The only permitted interaction is through the API helper function. This ensures that the AI agent can perform infrastructure operations safely while still retaining the flexibility to generate dynamic API calls.&lt;/p&gt;

&lt;p&gt;This approach allows an agent to discover and interact with many parts of the platform without requiring predefined tools for each capability. After discovering endpoints through the API specification, the agent can retrieve application data, inspect deployments, query metrics, or manage infrastructure resources through generated API calls.&lt;/p&gt;

&lt;p&gt;The design also significantly reduces context usage. Traditional MCP integrations might require hundreds of tools to represent every endpoint of a large API. In contrast, the search-and-execute pattern allows the entire API surface to be accessed through just two tools.&lt;/p&gt;

&lt;p&gt;For developers connecting AI assistants to infrastructure platforms, this architecture provides a practical way to expose large APIs while keeping the integration simple and efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What This Means for Developers&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Allowing AI agents to interact with infrastructure APIs changes how developers manage systems.&lt;/p&gt;

&lt;p&gt;Instead of manually navigating dashboards or writing long sequences of commands, developers can describe what they want in natural language. The AI agent can interpret the request, discover the relevant API endpoints, and execute the required operations.&lt;/p&gt;

&lt;p&gt;This approach also improves observability and debugging. When something goes wrong, the agent can query logs, inspect metrics, and retrieve system state without requiring the developer to manually gather information.&lt;/p&gt;

&lt;p&gt;Over time, this type of integration could significantly reduce the friction involved in managing complex cloud systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Next Evolution of Infrastructure Automation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Infrastructure automation has evolved through several stages. Early cloud systems relied heavily on manual configuration through web interfaces. Infrastructure as Code later allowed teams to define infrastructure using scripts and configuration files.&lt;/p&gt;

&lt;p&gt;CI/CD pipelines then automated the process of deploying and updating systems.&lt;/p&gt;

&lt;p&gt;AI agents represent the next step in this progression. By combining APIs, MCP integrations, and sandboxed execution environments, developers can allow intelligent systems to reason about infrastructure and interact with it safely.&lt;/p&gt;

&lt;p&gt;Instead of static integrations, agents can dynamically discover and call APIs as needed. This makes infrastructure management more flexible and accessible while maintaining the reliability of programmable systems.&lt;/p&gt;

&lt;p&gt;As AI tools become more deeply embedded in development environments, the ability for agents to understand and control infrastructure will likely become a standard capability for modern platforms.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Infrastructure as Code with APIs: Automating Cloud Resources the Developer Way</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Fri, 20 Mar 2026 04:09:01 +0000</pubDate>
      <link>https://dev.to/manishmshiva/infrastructure-as-code-with-apis-automating-cloud-resources-the-developer-way-3gop</link>
      <guid>https://dev.to/manishmshiva/infrastructure-as-code-with-apis-automating-cloud-resources-the-developer-way-3gop</guid>
      <description>&lt;p&gt;Modern software development moves fast. Teams deploy code many times a day. New environments appear and disappear constantly. In this world, manual infrastructure setup simply does not scale.&lt;/p&gt;

&lt;p&gt;For years, developers logged into dashboards, clicked through forms, and configured servers by hand. This worked for small projects, but it quickly became fragile. Every manual step increased the chance of mistakes. Environments drifted apart. Reproducing the same setup became difficult.&lt;/p&gt;

&lt;p&gt;Infrastructure as Code (IaC) solves this problem. Instead of clicking through interfaces, developers define infrastructure using code. This approach makes infrastructure predictable, repeatable, and easy to automate.&lt;/p&gt;

&lt;p&gt;In recent years, another approach has become popular alongside traditional IaC tools: using cloud APIs directly to create and manage infrastructure. This gives developers full control over how resources are provisioned and integrated into workflows.&lt;/p&gt;

&lt;p&gt;This article explains what Infrastructure as Code means, why APIs are a powerful way to implement it, and how developers can automate cloud resources using simple scripts.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Infrastructure as Code?
&lt;/h2&gt;

&lt;p&gt;Infrastructure as Code means managing infrastructure using code instead of manual processes.&lt;/p&gt;

&lt;p&gt;Instead of setting up servers, databases, and networks by hand, you define them in scripts or configuration files. These files describe the desired state of your infrastructure. A tool or script then creates and maintains that state automatically.&lt;/p&gt;

&lt;p&gt;For example, instead of manually creating a database, you might define it in code like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;database:&lt;br&gt;
  name: app_db&lt;br&gt;
  engine: postgres&lt;br&gt;
  version: 16&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Once the code runs, the database is created automatically.&lt;/p&gt;

&lt;p&gt;This approach provides several key benefits.&lt;/p&gt;

&lt;p&gt;First, it improves consistency. Every environment is created from the same definition. Development, staging, and production environments stay aligned.&lt;/p&gt;

&lt;p&gt;Second, it improves repeatability. If infrastructure fails, it can be recreated from code in minutes.&lt;/p&gt;

&lt;p&gt;Third, it improves version control. Infrastructure definitions live in the same repositories as application code. Teams can review, track, and roll back changes.&lt;/p&gt;

&lt;p&gt;Finally, it enables automation. Infrastructure can be created during deployments, tests, or CI/CD pipelines.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Limits of Manual Infrastructure
&lt;/h2&gt;

&lt;p&gt;Before IaC became common, infrastructure management relied heavily on dashboards and manual configuration.&lt;/p&gt;

&lt;p&gt;A developer would open a cloud console and perform steps like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a server&lt;/li&gt;
&lt;li&gt;Attach storage&lt;/li&gt;
&lt;li&gt;Configure environment variables&lt;/li&gt;
&lt;li&gt;Connect a database&lt;/li&gt;
&lt;li&gt;Add a domain&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These steps worked, but they introduced problems.&lt;/p&gt;

&lt;p&gt;Manual configuration is hard to document. Even if teams write guides, small details are often missed. Over time, environments drift apart.&lt;/p&gt;

&lt;p&gt;Manual processes also slow down development. Spinning up a new environment may take hours instead of seconds.&lt;/p&gt;

&lt;p&gt;Even worse, manual infrastructure cannot easily be tested. If something breaks, reproducing the same conditions becomes difficult.&lt;/p&gt;

&lt;p&gt;Infrastructure as Code removes these problems by turning infrastructure into something that can be scripted, tested, and automated.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why APIs Are a Powerful IaC Tool
&lt;/h2&gt;

&lt;p&gt;Many people associate Infrastructure as Code with tools like Terraform or CloudFormation. These tools are powerful, but they are not the only option.&lt;/p&gt;

&lt;p&gt;Every modern cloud platform exposes an API. That API allows developers to create resources programmatically.&lt;/p&gt;

&lt;p&gt;This means infrastructure can be controlled directly from code using HTTP requests or command-line interfaces.&lt;/p&gt;

&lt;p&gt;Using APIs for IaC has several advantages.&lt;/p&gt;

&lt;p&gt;First, it offers maximum flexibility. Developers can integrate infrastructure creation directly into applications, deployment scripts, or internal tools.&lt;/p&gt;

&lt;p&gt;Second, it reduces tooling complexity. Instead of learning a specialized IaC language, teams can use languages they already know, such as Python, JavaScript, or Bash.&lt;/p&gt;

&lt;p&gt;Third, it enables dynamic infrastructure. Scripts can create resources only when needed, scale them automatically, and remove them when work is complete.&lt;/p&gt;

&lt;p&gt;For example, a test suite could automatically create a database, run tests, and delete the database afterwards. This keeps environments clean and reduces costs.&lt;/p&gt;

&lt;p&gt;APIs essentially turn the cloud into a programmable platform.&lt;/p&gt;
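&lt;p&gt;The create-test-delete lifecycle can be sketched in a few lines of shell. The API calls are mocked as functions so the script runs anywhere; in practice they would be curl requests or CLI commands against your provider.&lt;/p&gt;

```shell
# Sketch: ephemeral test database driven by a cloud API (calls mocked).
create_db() { echo "test-db-1"; }      # mock: returns the new database id
delete_db() { echo "deleted $1"; }     # mock: tears the database down

DB_ID=$(create_db)
echo "running tests against $DB_ID"
# ... run the test suite here ...
delete_db "$DB_ID"
```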
&lt;h2&gt;
  
  
  Automating Infrastructure with Scripts
&lt;/h2&gt;

&lt;p&gt;Using APIs for infrastructure automation usually follows a simple workflow.&lt;/p&gt;

&lt;p&gt;First, a script authenticates with the cloud platform using an API token or credentials.&lt;/p&gt;

&lt;p&gt;Second, the script sends requests to create or modify resources such as applications, databases, or storage.&lt;/p&gt;

&lt;p&gt;Third, the script captures identifiers or configuration values from the response.&lt;/p&gt;

&lt;p&gt;Finally, those values are used in later steps, such as deployments or integrations.&lt;/p&gt;

&lt;p&gt;Because these steps run in code, they can easily be included in CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;A typical pipeline might do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create infrastructure&lt;/li&gt;
&lt;li&gt;Deploy the application&lt;/li&gt;
&lt;li&gt;Run tests&lt;/li&gt;
&lt;li&gt;Collect metrics&lt;/li&gt;
&lt;li&gt;Destroy temporary environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach ensures every deployment follows the same process.&lt;/p&gt;
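&lt;p&gt;The four steps map to a short script. The platform API is mocked with a local function here so the sketch is runnable; a real pipeline would replace it with an HTTP request or CLI call.&lt;/p&gt;

```shell
export SEVALLA_API_TOKEN="your-api-token"        # step 1: authenticate via token

api_create_database() {                          # mock standing in for the real API call
  echo '{"id":"db-42","connection_url":"postgres://..."}'
}

RESPONSE=$(api_create_database)                  # step 2: create the resource
DB_ID=$(echo "$RESPONSE" | python3 -c 'import sys,json; print(json.load(sys.stdin)["id"])')  # step 3: capture the id
echo "using database $DB_ID in later steps"      # step 4: reuse it downstream
```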
&lt;h2&gt;
  
  
  Practical example with Sevalla
&lt;/h2&gt;

&lt;p&gt;A practical way to apply Infrastructure as Code through APIs is to use a command-line interface that directly interacts with a cloud platform’s API. This allows developers to automate infrastructure creation using scripts rather than dashboards.&lt;/p&gt;

&lt;p&gt;One example is the Sevalla CLI, which exposes infrastructure operations as terminal commands that can be executed manually or inside automation pipelines.&lt;/p&gt;

&lt;p&gt;Sevalla is a developer-centric PaaS designed to simplify your workflow. It provides high-performance application hosting, managed databases, object storage, and static site hosting in one unified platform. Alternatives such as AWS and Azure offer similar automation but involve more complex CLI tools and heavier DevOps overhead.&lt;/p&gt;

&lt;p&gt;You can install the CLI using the following shell command.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -fsSL https://raw.githubusercontent.com/sevalla-hosting/cli/main/install.sh | bash&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Once installed, you can view the list of all available commands using the help command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoanp7r87jj1eyp3ioei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoanp7r87jj1eyp3ioei.png" alt=" " width="800" height="584"&gt;&lt;/a&gt;&lt;br&gt;
The first step is authentication. Make sure you have an account on Sevalla before using the CLI.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sevalla login&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;For automated environments such as CI/CD pipelines, authentication can be done with an API token. The token is stored in an environment variable so scripts can run without user interaction.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export SEVALLA_API_TOKEN="your-api-token"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Once authenticated, you can quickly view a list of your apps using &lt;code&gt;sevalla apps list&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6y2iam0yxz1sbhc0namz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6y2iam0yxz1sbhc0namz.png" alt=" " width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your infrastructure can now be created directly from the command line. For example, a developer might start by creating an application service that will run the backend code.&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sevalla apps create --name myapp --source privateGit --cluster &amp;lt;id&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;This command provisions a new application resource on the platform. Instead of navigating through a web interface and filling out forms, the entire setup is performed through a single command.&lt;/p&gt;

&lt;p&gt;Because the command can be stored in scripts or configuration files, it becomes part of the project’s infrastructure definition.&lt;/p&gt;

&lt;p&gt;After creating the application, developers often need a database. That can also be provisioned programmatically.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sevalla databases create \&lt;br&gt;
  --name mydb \&lt;br&gt;
  --type postgresql \&lt;br&gt;
  --db-version 16 \&lt;br&gt;
  --cluster &amp;lt;id&amp;gt; \&lt;br&gt;
  --resource-type &amp;lt;id&amp;gt; \&lt;br&gt;
  --db-name mydb \&lt;br&gt;
  --db-password secret&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This creates a PostgreSQL database with a defined version and credentials. In an automated workflow, the database creation step could run during environment setup for staging or testing.&lt;/p&gt;

&lt;p&gt;Once the application and database exist, the next step might be configuring environment variables so the application can connect to the database.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sevalla apps env-vars create &amp;lt;app-id&amp;gt; --key DATABASE_URL --value "postgres://..."&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
These configuration values can be injected during deployments, ensuring the application always receives the correct settings.&lt;/p&gt;

&lt;p&gt;Deployment automation is another key part of Infrastructure as Code. Instead of manually triggering deployments, a script can deploy new code whenever a repository is updated.&lt;/p&gt;

&lt;p&gt;sevalla apps deployments trigger  --branch main&lt;br&gt;
This allows CI/CD systems to deploy new versions of the application automatically after tests pass.&lt;/p&gt;

&lt;p&gt;Infrastructure automation also includes scaling and monitoring. For example, if an application needs more instances to handle traffic, the number of running processes can be updated programmatically.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sevalla apps processes update &amp;lt;process-id&amp;gt; --app-id &amp;lt;app-id&amp;gt; --instances 3&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Metrics can also be retrieved through the CLI. This allows monitoring tools or scripts to analyze system performance.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sevalla apps processes metrics cpu-usage &amp;lt;app-id&amp;gt; &amp;lt;process-id&amp;gt;&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Similarly, application metrics such as response time or request rates can be queried to detect performance issues.&lt;/p&gt;

&lt;p&gt;Another common step in infrastructure automation is configuring domains. Instead of manually linking domains to applications, a script can add them during environment setup.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sevalla apps domains add &amp;lt;app-id&amp;gt; --name example.com&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
With these commands combined in scripts or pipelines, developers can fully automate the lifecycle of their infrastructure. A CI pipeline could create an application, provision a database, configure environment variables, deploy code, attach a domain, and monitor performance — all without human intervention.&lt;/p&gt;

&lt;p&gt;Because every command supports JSON output, scripts can also capture values returned by the platform and reuse them in later steps. For example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;APP_ID=$(sevalla apps list --json | jq -r '.[0].id')&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
This ability to chain commands together makes it easy to build powerful automation workflows.&lt;/p&gt;

&lt;p&gt;In practice, teams often place these commands inside deployment scripts or pipeline steps. Whenever code is pushed to a repository, the pipeline automatically provisions or updates the infrastructure needed to run the application.&lt;/p&gt;

&lt;p&gt;This approach demonstrates how APIs and automation tools can turn infrastructure into something developers manage the same way they manage application code, through scripts, version control, and automated workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure as Code Improves Developer Productivity
&lt;/h2&gt;

&lt;p&gt;One of the biggest benefits of Infrastructure as Code is developer productivity.&lt;/p&gt;

&lt;p&gt;Developers no longer need to wait for infrastructure changes or manually configure environments.&lt;/p&gt;

&lt;p&gt;Instead, infrastructure becomes part of the development workflow.&lt;/p&gt;

&lt;p&gt;When a new feature requires a service, the developer simply adds the infrastructure definition to the repository. The pipeline then creates it automatically.&lt;/p&gt;

&lt;p&gt;This reduces delays and keeps development moving quickly.&lt;/p&gt;

&lt;p&gt;It also makes onboarding easier. New team members can spin up a full environment with a single command.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of Infrastructure
&lt;/h2&gt;

&lt;p&gt;Cloud infrastructure continues to evolve toward automation and programmability.&lt;/p&gt;

&lt;p&gt;Platforms increasingly expose APIs that allow every resource to be created, configured, and monitored through code.&lt;/p&gt;

&lt;p&gt;This trend aligns naturally with the way developers already work.&lt;/p&gt;

&lt;p&gt;Applications are built with code. Deployments are automated with code. It makes sense that infrastructure should also be defined with code.&lt;/p&gt;

&lt;p&gt;Infrastructure as Code with APIs takes this idea even further. It allows infrastructure to be embedded directly into development workflows, pipelines, and internal tools.&lt;/p&gt;

&lt;p&gt;The result is faster development, fewer configuration errors, and more reliable systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Infrastructure as Code has transformed how teams manage cloud environments.&lt;/p&gt;

&lt;p&gt;By replacing manual configuration with code, organizations gain consistency, automation, and repeatability.&lt;/p&gt;

&lt;p&gt;Using APIs to control infrastructure adds another level of flexibility. Developers can integrate infrastructure directly into scripts, pipelines, and applications.&lt;/p&gt;

&lt;p&gt;This approach turns the cloud into a programmable platform.&lt;/p&gt;

&lt;p&gt;As systems grow more complex and deployment cycles accelerate, the ability to automate infrastructure will only become more important.&lt;/p&gt;

&lt;p&gt;For modern development teams, treating infrastructure as code is no longer optional. It is the foundation of reliable and scalable software delivery.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Deploy Your Own Agent using OpenClaw</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Mon, 09 Mar 2026 04:27:19 +0000</pubDate>
      <link>https://dev.to/manishmshiva/how-to-deploy-your-own-agent-using-openclaw-1e0c</link>
      <guid>https://dev.to/manishmshiva/how-to-deploy-your-own-agent-using-openclaw-1e0c</guid>
      <description>&lt;p&gt;&lt;strong&gt;OpenClaw lets you run a powerful AI assistant on your own infrastructure, and this guide walks you through deploying it reliably from setup to production.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://openclaw.ai/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; is a self-hosted AI assistant designed to run under your control instead of inside a hosted SaaS platform.&lt;/p&gt;

&lt;p&gt;It can connect to messaging interfaces, local tools, and model providers while keeping execution and data closer to your own infrastructure.&lt;/p&gt;

&lt;p&gt;The project is actively developed, and the current ecosystem revolves around a CLI-driven setup flow, onboarding wizard, and multiple deployment paths ranging from local installs to containerised or cloud-hosted setups.&lt;/p&gt;

&lt;p&gt;This article explains how to deploy your own instance of OpenClaw from a practical systems perspective. We will look at how to deploy it on your local machine as well as on a PaaS provider like Sevalla.&lt;/p&gt;

&lt;p&gt;The goal is not just to “make it run,” but to understand deployment choices, architecture implications, and operational tradeoffs so you can run a stable instance long term.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: It is dangerous to give an AI system full control of your machine. Make sure you &lt;a href="https://www.kaspersky.co.in/blog/moltbot-enterprise-risk-management/30218/" rel="noopener noreferrer"&gt;understand the risks&lt;/a&gt; before running it locally.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Understanding What You Are Deploying
&lt;/h2&gt;

&lt;p&gt;Before touching installation commands, it helps to understand the runtime model.&lt;/p&gt;

&lt;p&gt;OpenClaw is essentially a local-first AI assistant that runs as a service and exposes interaction through chat interfaces and a &lt;a href="https://docs.openclaw.ai/concepts/architecture" rel="noopener noreferrer"&gt;gateway architecture.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The gateway acts as the operational core, handling communication between messaging platforms, models, and local capabilities.&lt;/p&gt;

&lt;p&gt;In practical terms, deploying OpenClaw means deploying three layers.&lt;/p&gt;

&lt;p&gt;The first layer is the CLI and runtime, which launches and manages the assistant.&lt;/p&gt;

&lt;p&gt;The second layer is configuration and onboarding, where you select model providers and integrations.&lt;/p&gt;

&lt;p&gt;The third layer is persistence and execution context, which determines whether OpenClaw runs on your laptop, a VPS, or inside a container.&lt;/p&gt;

&lt;p&gt;Because OpenClaw runs with access to local resources, deployment decisions are not only about convenience but also about security boundaries. Treat it as an administrative system, not just a chatbot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying on a Local Machine
&lt;/h2&gt;

&lt;p&gt;OpenClaw supports multiple deployment approaches, and the right one depends on your goals.&lt;/p&gt;

&lt;p&gt;The simplest route is to install it directly on a local machine. This is ideal for experimentation, private workflows, or development because onboarding is fast and maintenance is minimal.&lt;/p&gt;

&lt;p&gt;The installer script handles environment detection, dependency setup, and launching the onboarding wizard.&lt;/p&gt;

&lt;p&gt;The fastest way to install OpenClaw is via the official installer script. The installer downloads the CLI, installs it globally through npm, and launches onboarding automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://openclaw.ai/install.cmd -o install.cmd &amp;amp;&amp;amp; install.cmd &amp;amp;&amp;amp; del install.cmd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This method abstracts away most environmental complexity and is recommended for first-time deployments.&lt;/p&gt;

&lt;p&gt;If you already maintain a Node environment, you can install it directly using npm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i -g openclaw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CLI is then used to run onboarding and optionally install a daemon for persistent background execution. This approach gives you more control over versioning and update cadence.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openclaw onboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Regardless of installation path, verify that the CLI is discoverable in your shell. Environment path issues are common when global npm packages are installed under custom Node managers.&lt;/p&gt;
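&lt;p&gt;A quick way to diagnose this (a sketch assuming a POSIX shell; the &lt;code&gt;openclaw&lt;/code&gt; binary name comes from the npm package above):&lt;/p&gt;

```shell
# Print the global npm prefix and confirm the CLI resolves on PATH.
# If "openclaw not found" prints, add "$(npm prefix -g)/bin" to PATH.
npm prefix -g 2>/dev/null || true
command -v openclaw || echo "openclaw not found on PATH"
```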

&lt;h2&gt;
  
  
  The Onboarding Process
&lt;/h2&gt;

&lt;p&gt;Once installed, OpenClaw relies heavily on onboarding to bootstrap configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ogrdj5lnk190gx4fkzc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ogrdj5lnk190gx4fkzc.webp" alt=" " width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During onboarding you will select an AI provider, configure authentication, and choose how you want to interact with the assistant. This process establishes the core runtime state and generates local configuration files used by the gateway.&lt;/p&gt;

&lt;p&gt;Onboarding also allows you to connect messaging channels such as Telegram or Discord. These integrations transform OpenClaw from a local CLI tool into an always-accessible assistant.&lt;/p&gt;

&lt;p&gt;From a deployment perspective, this is the moment where availability requirements change. If you connect external chat platforms, your instance must remain online consistently.&lt;/p&gt;

&lt;p&gt;You can skip certain onboarding steps and configure integrations later, but for production deployments it is better to complete the initial configuration so you can validate end-to-end functionality immediately.&lt;/p&gt;

&lt;p&gt;Once you add an OpenAI API key or Claude key, you can choose to open the web UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct04q66qzispis25a4h8.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct04q66qzispis25a4h8.webp" alt=" " width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;code&gt;localhost:18789&lt;/code&gt; to interact with OpenClaw.&lt;/p&gt;
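&lt;p&gt;You can confirm the gateway is actually listening before opening a browser. A minimal check, assuming the default port shown during onboarding:&lt;/p&gt;

```shell
# Exit quietly if the local web UI responds; print a warning otherwise.
curl -fsS -o /dev/null http://localhost:18789 || echo "gateway not reachable"
```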

&lt;h2&gt;
  
  
  Deploying on the Cloud using Sevalla
&lt;/h2&gt;

&lt;p&gt;A second approach is to deploy to a VPS or cloud instance. This model gives you always-on availability and makes it possible to interact with OpenClaw from anywhere.&lt;/p&gt;

&lt;p&gt;A third approach is containerised deployment using Docker or similar tooling. This provides reproducibility and cleaner dependency isolation.&lt;/p&gt;

&lt;p&gt;Docker setups are particularly useful if you want predictable upgrades or easy migration between machines. OpenClaw’s repository includes scripts and compose configurations that support container execution workflows.&lt;/p&gt;

&lt;p&gt;I have set up a custom &lt;a href="https://hub.docker.com/r/manishmshiva/openclaw" rel="noopener noreferrer"&gt;Docker image&lt;/a&gt; to load OpenClaw into a PaaS platform like Sevalla.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sevalla.com/" rel="noopener noreferrer"&gt;Sevalla&lt;/a&gt; is a developer-friendly PaaS provider. It offers application hosting, database, object storage, and static site hosting for your projects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.sevalla.com/" rel="noopener noreferrer"&gt;Log in&lt;/a&gt; to Sevalla and click “Create application”. Choose “Docker image” as the application source instead of a GitHub repository. Use &lt;code&gt;manishmshiva/openclaw&lt;/code&gt; as the Docker image, and it will be pulled automatically from &lt;a href="https://dockerhub.com/" rel="noopener noreferrer"&gt;DockerHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9ifsvvc485gy0ogg9pw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9ifsvvc485gy0ogg9pw.webp" alt=" " width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click “Create application” and go to the environment variables. Add an environment variable &lt;code&gt;ANTHROPIC_API_KEY&lt;/code&gt;. Then go to “Deployments” and click “Deploy now”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x0d4ralb53d96njhprc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x0d4ralb53d96njhprc.webp" alt=" " width="800" height="117"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the deployment is successful, click “Visit app” to interact with the UI at the Sevalla-provided URL.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F743enysjaipec93mmyuu.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F743enysjaipec93mmyuu.webp" alt=" " width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Interacting with the Agent
&lt;/h2&gt;

&lt;p&gt;There are many ways to interact with the agent once you set up OpenClaw. You can configure a &lt;a href="https://medium.com/chatfuel-blog/how-to-create-your-own-telegram-bot-who-answer-its-users-without-coding-996de337f019" rel="noopener noreferrer"&gt;Telegram bot&lt;/a&gt; to talk to your agent. The agent will (try to) carry out tasks much like a human assistant would, and its capabilities depend on how much access you give it.&lt;/p&gt;

&lt;p&gt;You can ask it to clean your inbox, watch a website for new articles, and perform many other tasks. Please note that providing OpenClaw access to your critical apps or files is not ideal or secure. This is still a system in its early stages, and the risk of it making a mistake or exposing your private information is high.&lt;/p&gt;

&lt;p&gt;Here are some of the ways &lt;a href="https://openclaw.ai/showcase" rel="noopener noreferrer"&gt;people are using OpenClaw.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Security and Operational Considerations
&lt;/h2&gt;

&lt;p&gt;Because OpenClaw can execute tasks and access system resources, deployment security is not optional. The safest baseline is to bind services to localhost and access them through secure tunnels when remote control is required. This significantly reduces exposure risk.&lt;/p&gt;

&lt;p&gt;When deploying on a VPS, harden the host like any administrative service. Use non-root users, keep packages updated, restrict inbound ports, and monitor logs. If you are integrating messaging channels, treat tokens and API keys as sensitive secrets and avoid storing them in plaintext configuration where possible.&lt;/p&gt;

&lt;p&gt;Containerization helps isolate dependencies but does not eliminate risk. The container still executes code on your host, so network and volume permissions should be carefully scoped.&lt;/p&gt;
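&lt;p&gt;One way to follow the localhost-only baseline is an SSH tunnel: keep the gateway bound to localhost on the server and forward its port to your laptop on demand. A sketch, where the user and hostname are placeholders and the port is the default from onboarding:&lt;/p&gt;

```shell
# Forward the remote gateway port to your laptop over SSH instead of
# exposing it publicly; then browse http://localhost:18789 locally.
# "admin" and "server.example.com" are placeholder values.
ssh -N -L 18789:localhost:18789 admin@server.example.com
```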

&lt;h2&gt;
  
  
  Updating and Maintaining Your Instance
&lt;/h2&gt;

&lt;p&gt;OpenClaw evolves quickly, with frequent releases and feature changes. Keeping your instance updated is important not only for features but also for stability and compatibility with integrations.&lt;/p&gt;

&lt;p&gt;For npm-based installations, updates are straightforward, but you should test upgrades in a staging environment if your assistant handles important workflows. For source-based deployments, pull changes and rebuild consistently rather than mixing old build artifacts with new code.&lt;/p&gt;
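&lt;p&gt;For an npm-based install, an upgrade is a single command (the &lt;code&gt;--version&lt;/code&gt; flag here is an assumption for verifying the result):&lt;/p&gt;

```shell
# Upgrade the globally installed CLI to the latest published version,
# then confirm which version is now on PATH.
npm install -g openclaw@latest
openclaw --version
```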

&lt;p&gt;Monitoring is another overlooked aspect. Even simple log inspection can reveal integration failures early. If your deployment is mission-critical, consider external uptime checks or process supervisors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Deploying your own OpenClaw agent is ultimately about taking control of how your AI assistant works, where it runs, and how it fits into your daily workflows. While the setup process is straightforward, the real value comes from understanding the choices you make along the way, whether you run it locally for privacy, host it in the cloud for constant availability, or use containers for consistency and portability.&lt;/p&gt;

&lt;p&gt;As the ecosystem around self-hosted AI continues to evolve, tools like OpenClaw make it possible to move beyond relying entirely on third-party platforms. Running your own agent gives you flexibility, ownership, and the freedom to shape the experience around your needs.&lt;/p&gt;

&lt;p&gt;Start small, experiment safely, and gradually build confidence in how your assistant operates. Over time, what begins as a simple deployment can become a dependable, personalized system that works the way you want, under your control.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A Vibe Coder’s Guide to Deployment using a PaaS</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Wed, 04 Mar 2026 12:47:50 +0000</pubDate>
      <link>https://dev.to/manishmshiva/a-vibe-coders-guide-to-deployment-using-a-paas-2hd9</link>
      <guid>https://dev.to/manishmshiva/a-vibe-coders-guide-to-deployment-using-a-paas-2hd9</guid>
      <description>&lt;p&gt;&lt;strong&gt;A practical, no-nonsense guide to getting your vibe-coded app live with a PaaS, without falling into DevOps rabbit holes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vibe coding is about momentum.&lt;/p&gt;

&lt;p&gt;You open your editor, prompt an AI, stitch pieces together, and suddenly you have something that works.&lt;/p&gt;

&lt;p&gt;Maybe it’s messy. Maybe the architecture is not perfect. But it’s alive, and that’s the point.&lt;/p&gt;

&lt;p&gt;Then comes deployment. This is where the vibe usually dies.&lt;/p&gt;

&lt;p&gt;Suddenly, you’re reading about containers, load balancers, CI/CD pipelines, infrastructure diagrams, and networking concepts you never asked for. You wanted to ship a thing. Instead, you’re learning accidental DevOps.&lt;/p&gt;

&lt;p&gt;The truth is simple. Most vibe-coded apps don’t need complex infrastructure. They just need a clean path from code → live URL.&lt;/p&gt;

&lt;p&gt;That’s where a Platform-as-a-Service fits in. It removes the infrastructure ceremony and lets deployment feel like a natural extension of building.&lt;/p&gt;

&lt;p&gt;This guide is not about perfect production architecture. It’s about shipping fast without losing momentum.&lt;/p&gt;

&lt;p&gt;In this article, we will look at how to deploy a simple vibe-coded app using &lt;a href="https://sevalla.com/" rel="noopener noreferrer"&gt;Sevalla&lt;/a&gt;. There are other options like Railway, Render, and more with similar features, and you can &lt;a href="https://www.freecodecamp.org/news/top-heroku-alternatives-for-deployment/" rel="noopener noreferrer"&gt;pick one from this list.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What “Vibe Deployment” Actually Means
&lt;/h2&gt;

&lt;p&gt;Traditional deployment advice assumes you’re building a long-term, heavily engineered system.&lt;/p&gt;

&lt;p&gt;Vibe coders operate differently. The goal is speed, feedback, and iteration.&lt;/p&gt;

&lt;p&gt;A vibe-friendly deployment workflow has a few core characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Minimal configuration:&lt;/strong&gt; You shouldn’t spend hours setting up environments before seeing your app live.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast feedback loops:&lt;/strong&gt; Every push should quickly show you the result.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safe defaults:&lt;/strong&gt; You shouldn’t need deep infra knowledge to avoid obvious mistakes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, deployment shouldn’t be a “phase.” It should be part of the normal development loop.&lt;/p&gt;

&lt;p&gt;You build. You push. It updates. You keep going.&lt;/p&gt;
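&lt;p&gt;With automatic deployments enabled, that loop is literally just your normal Git workflow (the branch name and commit message below are placeholders):&lt;/p&gt;

```shell
# Commit and push; the platform detects the push, rebuilds, and redeploys.
git add -A
git commit -m "tweak landing copy"
git push origin main
```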

&lt;h2&gt;
  
  
  The Typical Vibe-Coded App
&lt;/h2&gt;

&lt;p&gt;Most vibe-coded projects look similar under the hood.&lt;/p&gt;

&lt;p&gt;There’s usually a frontend generated or accelerated by AI using React, Next.js, Vue, or something equally modern. The backend might be a small API, sometimes written quickly without strict structure.&lt;/p&gt;

&lt;p&gt;Data lives in a managed database. Authentication might be glued together from a few libraries.&lt;/p&gt;

&lt;p&gt;The code evolves rapidly. Patterns change weekly. Files get renamed, rewritten, or deleted without ceremony.&lt;/p&gt;

&lt;p&gt;And that’s fine.&lt;/p&gt;

&lt;p&gt;The problem is that traditional deployment workflows assume stability and planning. They expect clean separation between environments, carefully defined build pipelines, and long-term operational thinking.&lt;/p&gt;

&lt;p&gt;Vibe-coded apps need the opposite: something that tolerates change and rewards experimentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The PaaS Mental Model
&lt;/h2&gt;

&lt;p&gt;The biggest shift with a PaaS is how you think about deployment.&lt;/p&gt;

&lt;p&gt;Instead of asking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which server should I use?&lt;/li&gt;
&lt;li&gt;How do I configure networking?&lt;/li&gt;
&lt;li&gt;What container setup do I need?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You think in terms of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect your repository.&lt;/li&gt;
&lt;li&gt;Configure the app once.&lt;/li&gt;
&lt;li&gt;Deploy automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A PaaS treats your project as a service that can be built and run. You don’t manage infrastructure; you define the minimum information needed to run your code.&lt;/p&gt;

&lt;p&gt;There are only a few concepts you really need to understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Services:&lt;/strong&gt; Each deployable unit of your app. A frontend or backend typically becomes a service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment variables:&lt;/strong&gt; Secrets and configuration that differ between local and production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto builds:&lt;/strong&gt; Every code push triggers a build and deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s it. The system handles the rest.&lt;/p&gt;

&lt;p&gt;The result is important: deployment stops being a separate discipline and becomes just another part of coding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shipping Your First App on Sevalla
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://sevalla.com/" rel="noopener noreferrer"&gt;Sevalla&lt;/a&gt; is a developer-friendly PaaS provider. It offers application hosting, database, object storage, and static site hosting for your projects.&lt;/p&gt;

&lt;p&gt;Let’s walk through what deployment actually looks like in practice. I have already written a few tutorials on both &lt;a href="https://www.freecodecamp.org/news/how-to-build-and-deploy-a-loganalyzer-agent-using-langchain/" rel="noopener noreferrer"&gt;Python&lt;/a&gt; and &lt;a href="https://www.freecodecamp.org/news/build-and-deploy-an-image-hosting-service-on-sevalla/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt; projects, building an app from scratch and deploying it on Sevalla.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Connect Your Repository
&lt;/h3&gt;

&lt;p&gt;The starting point is your Git repository. &lt;a href="https://app.sevalla.com/login" rel="noopener noreferrer"&gt;Log in&lt;/a&gt; to Sevalla using your GitHub account, or you can connect it after logging in with your email.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6h3ronlautdw96lhoy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6h3ronlautdw96lhoy1.png" alt=" " width="800" height="191"&gt;&lt;/a&gt;&lt;br&gt;
You connect your project to Sevalla and select the branch you want to deploy. This creates a direct link between your code and the live app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yto15lssr1ld54zinq5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yto15lssr1ld54zinq5.webp" alt=" " width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also enable “Automatic deployments”. Once you create the app, deployment becomes automatic: you push code, and Sevalla takes care of building and publishing.&lt;/p&gt;

&lt;p&gt;No manual uploads. No SSH sessions. No server setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Configure the Runtime
&lt;/h3&gt;

&lt;p&gt;Next, you define how your app runs.&lt;/p&gt;

&lt;p&gt;Most modern frameworks are detected automatically. If you’ve built something common, you usually won’t need to tweak much.&lt;/p&gt;

&lt;p&gt;This is where you add environment variables. API keys, database URLs, authentication secrets, and anything that shouldn’t live inside your codebase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatvalqrucmzptlbdkhoi.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatvalqrucmzptlbdkhoi.webp" alt=" " width="800" height="192"&gt;&lt;/a&gt;&lt;br&gt;
A simple rule for vibe coders: If it changes between local and production, make it an environment variable.&lt;/p&gt;
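&lt;p&gt;In practice that rule looks like this: the app reads configuration from the environment, you export the value locally, and the platform injects the real one in production (&lt;code&gt;DATABASE_URL&lt;/code&gt; is just an example name):&lt;/p&gt;

```shell
# Locally you export the variable yourself; in production the PaaS injects it.
export DATABASE_URL="postgres://localhost:5432/dev"
# Apps should read it from the environment, with a loud fallback when missing.
echo "${DATABASE_URL:-missing}"
```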

&lt;p&gt;Once set, you rarely need to touch this again.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Deploy
&lt;/h3&gt;

&lt;p&gt;Now you deploy.&lt;/p&gt;

&lt;p&gt;Sevalla builds the application, installs dependencies, and launches it. After a short wait, you get a live URL.&lt;/p&gt;

&lt;p&gt;This is the moment that matters. Your app is no longer a local experiment; it’s something real people can use.&lt;/p&gt;

&lt;p&gt;And importantly, you didn’t need to make infrastructure decisions to get there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Iterate Like a Vibe Coder
&lt;/h3&gt;

&lt;p&gt;Now your workflow shines!&lt;/p&gt;

&lt;p&gt;You make a change locally. Commit. Push.&lt;/p&gt;

&lt;p&gt;Sevalla rebuilds and redeploys automatically.&lt;/p&gt;

&lt;p&gt;Your deployment process becomes invisible, just part of your normal coding rhythm.&lt;/p&gt;

&lt;p&gt;This matters more than most people realise. When deployment is effortless, you ship more often. When you ship more often, you learn faster.&lt;/p&gt;

&lt;p&gt;And fast learning is the real advantage of vibe coding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things Vibe Coders Usually Break (and How PaaS Helps)
&lt;/h2&gt;

&lt;p&gt;Even simple deployment workflows can go wrong. Some patterns show up repeatedly.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Missing environment variables:&lt;/strong&gt; The app works locally but crashes in production. A PaaS surfaces configuration clearly, making it easier to spot.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Localhost assumptions:&lt;/strong&gt; Hardcoded URLs or local file paths break once deployed. Using environment configuration fixes this early.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File storage confusion:&lt;/strong&gt; Local files disappear between deployments. Treat storage as external from day one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ignoring logs:&lt;/strong&gt; Many developers only look at logs after panic sets in. Sevalla’s centralised logs make debugging faster when something inevitably fails.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rqhcrv2iirrjaoqyniq.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rqhcrv2iirrjaoqyniq.webp" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;br&gt;
The important point: these aren’t advanced problems. They’re beginner deployment mistakes, and the platform’s defaults help you avoid most of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Minimal Production Checklist
&lt;/h2&gt;

&lt;p&gt;Before you call something “live,” run through a quick checklist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Environment variables are set correctly.&lt;/li&gt;
&lt;li&gt;Database is external, not local.&lt;/li&gt;
&lt;li&gt;Logs are enabled and readable.&lt;/li&gt;
&lt;li&gt;Custom domain is connected if needed.&lt;/li&gt;
&lt;li&gt;You know how to roll back to a previous version.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s enough for most early-stage projects.&lt;/p&gt;

&lt;p&gt;You don’t need complex monitoring stacks or multi-region infrastructure to start learning from real users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Workflow Works for Vibe Builders
&lt;/h2&gt;

&lt;p&gt;Indie builders and vibe coders succeed by maintaining velocity. The highest hidden cost in software isn’t infrastructure, it’s context switching.&lt;/p&gt;

&lt;p&gt;Every time you stop building to become a part-time DevOps engineer, momentum drops.&lt;/p&gt;

&lt;p&gt;A PaaS system’s biggest advantage isn’t technical sophistication. It’s psychological. You stay in the builder mindset.&lt;/p&gt;

&lt;p&gt;You focus on product decisions instead of infrastructure decisions.&lt;/p&gt;

&lt;p&gt;And because deployment feels safe, you ship more frequently. Small releases reduce risk, reduce anxiety, and make experimentation normal.&lt;/p&gt;

&lt;p&gt;This is exactly the environment where small projects grow into real products.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The best deployment system is one you barely think about.&lt;/p&gt;

&lt;p&gt;For vibe coders, deployment shouldn’t be a scary milestone or a weekend project. It should feel like pressing save, just another step in the creative loop.&lt;/p&gt;

&lt;p&gt;Build something. Push it live. Learn from users. Repeat.&lt;/p&gt;

&lt;p&gt;That’s the real goal.&lt;/p&gt;

&lt;p&gt;And when deployment stops being a bottleneck, the vibe stays alive.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cloud</category>
      <category>tutorial</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>Top 5 Heroku Alternatives for Deployment in 2026</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Thu, 12 Feb 2026 04:46:32 +0000</pubDate>
      <link>https://dev.to/manishmshiva/top-5-heroku-alternatives-for-deployment-in-2026-6pe</link>
      <guid>https://dev.to/manishmshiva/top-5-heroku-alternatives-for-deployment-in-2026-6pe</guid>
      <description>&lt;p&gt;For more than a decade, &lt;a href="https://www.heroku.com/" rel="noopener noreferrer"&gt;Heroku&lt;/a&gt; defined what “developer-friendly deployment” meant. Push code, forget servers, and focus on shipping features.&lt;/p&gt;

&lt;p&gt;That promise shaped an entire generation of platform-as-a-service products. In 2026, that landscape is changing.&lt;/p&gt;

&lt;p&gt;Heroku has &lt;a href="https://www.heroku.com/blog/an-update-on-heroku/" rel="noopener noreferrer"&gt;clearly stated&lt;/a&gt; that it is moving into a sustaining engineering model. The platform remains stable, secure, and supported, but active innovation is no longer the focus.&lt;/p&gt;

&lt;p&gt;For many teams, this is acceptable. For others, especially startups and product teams planning three to five years ahead, it raises an important question: where should new applications live?&lt;/p&gt;

&lt;p&gt;In this article, we will look at five strong Heroku alternatives that are well-positioned for 2026. Each platform approaches deployment differently, but all aim to preserve what developers loved about Heroku while improving on cost, flexibility, or modern workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Will Cover
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Why Teams Are Looking Beyond Heroku&lt;/li&gt;
&lt;li&gt;Sevalla: The Closest Successor to Classic Heroku&lt;/li&gt;
&lt;li&gt;Render: A Broad Platform for Growing Teams&lt;/li&gt;
&lt;li&gt;Fly.io: Global-First Deployment for Latency-Sensitive Apps&lt;/li&gt;
&lt;li&gt;Upsun: Enterprise-Grade Control Without Losing Structure&lt;/li&gt;
&lt;li&gt;Vercel: The Frontend-Native Deployment Platform&lt;/li&gt;
&lt;li&gt;Choosing the Right Heroku Alternative in 2026&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Teams Are Looking Beyond Heroku
&lt;/h2&gt;

&lt;p&gt;Heroku’s shift toward maintenance over expansion signals maturity, not failure. However, modern teams expect faster iteration, deeper infrastructure control, and tighter integration with cloud-native tooling.&lt;/p&gt;

&lt;p&gt;AI workloads, edge computing, and global latency expectations are also reshaping deployment needs.&lt;/p&gt;

&lt;p&gt;As a result, teams want platforms that feel simple on day one but do not become limiting as scale and complexity grow.&lt;/p&gt;

&lt;p&gt;The alternatives discussed here are not identical replacements. Each represents a different philosophy about how applications should be built and operated in 2026.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sevalla: The Closest Successor to Classic Heroku
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhtmp08zljje1szr6npf.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhtmp08zljje1szr6npf.jpeg" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sevalla.com/heroku-alternative/" rel="noopener noreferrer"&gt;Sevalla&lt;/a&gt; has quietly positioned itself as one of the most Heroku-like platforms available today. The core idea is familiar. You deploy applications without managing servers, environments are predictable, and the platform stays out of your way.&lt;/p&gt;

&lt;p&gt;What makes Sevalla compelling in 2026 is its balance between simplicity and control. It keeps the developer experience tight while avoiding the opaque pricing and rigid abstractions that frustrated many Heroku users over time. Deployments are fast, logs are easy to access, and scaling feels intuitive rather than magical.&lt;/p&gt;

&lt;p&gt;Sevalla is particularly attractive for mid-sized teams and enterprises that want a clean path from prototype to production. It supports modern application stacks without forcing you into complex infrastructure decisions too early. For teams migrating directly from Heroku, Sevalla often feels like the least disruptive transition.&lt;/p&gt;

&lt;p&gt;The platform’s biggest strength is restraint. It does not try to be everything at once. Instead, it focuses on being a reliable home for long-running services, APIs, and background workers. In 2026, that clarity is refreshing.&lt;/p&gt;

&lt;p&gt;Built for the enterprise, Sevalla meets the highest standards of security. It is fully compliant with SOC 2, ISO 27017, and ISO 27001:2022, ensuring your data stays protected and your compliance requirements are met.&lt;/p&gt;

&lt;h2&gt;
  
  
  Render: A Broad Platform for Growing Teams
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F047se9z3k3qc4vk6s81c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F047se9z3k3qc4vk6s81c.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://render.com/docs/render-vs-heroku-comparison" rel="noopener noreferrer"&gt;Render&lt;/a&gt; takes a more expansive approach. While it is often compared to Heroku, Render aims to cover a wider range of use cases, from simple web services to complex microservice architectures.&lt;/p&gt;

&lt;p&gt;Render stands out because it blends platform simplicity with infrastructure transparency. You still get managed databases, background jobs, and zero-downtime deploys, but you also gain more visibility into how resources are allocated. This makes it easier to reason about cost and performance as systems grow.&lt;/p&gt;

&lt;p&gt;For teams that expect to scale steadily, Render offers a comfortable middle ground. It removes much of the operational burden while allowing deeper configuration when needed. Many engineering teams appreciate that Render feels less restrictive than Heroku without pushing them into full DevOps territory.&lt;/p&gt;

&lt;p&gt;In 2026, Render is especially popular with SaaS companies that have outgrown entry-level platforms but are not ready to manage Kubernetes clusters themselves. It supports modern CI/CD workflows and integrates well with common developer tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fly.io: Global-First Deployment for Latency-Sensitive Apps
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftho8lgafujs1ake4498n.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftho8lgafujs1ake4498n.jpeg" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://getdeploying.com/flyio-vs-heroku" rel="noopener noreferrer"&gt;Fly.io&lt;/a&gt; represents a different philosophy entirely. Instead of abstracting infrastructure away, Fly.io embraces it, but makes it programmable and developer-friendly.&lt;/p&gt;

&lt;p&gt;Fly.io allows applications to run close to users by deploying workloads across multiple regions by default. This makes it ideal for applications where latency matters, such as real-time collaboration tools, gaming backends, or global APIs.&lt;/p&gt;

&lt;p&gt;Unlike Heroku, Fly.io expects developers to understand a bit more about how their application runs. You interact with virtual machines rather than dynos, and configuration is more explicit. However, this added complexity comes with real power.&lt;/p&gt;

&lt;p&gt;In 2026, Fly.io appeals strongly to experienced teams that want performance and control without adopting heavy orchestration systems. It is not always the easiest option, but it is one of the most flexible. For teams willing to invest in understanding the platform, Fly.io can outperform traditional PaaS solutions in both speed and cost efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Upsun: Enterprise-Grade Control Without Losing Structure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsoz7tcfq5t7i5rowkvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsoz7tcfq5t7i5rowkvj.png" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://upsun.com/blog/a-heroku-alternative/" rel="noopener noreferrer"&gt;Upsun&lt;/a&gt;, previously known as Platform.sh, brings a more opinionated, enterprise-oriented model to application deployment. It is designed for teams that care deeply about environment parity, reproducibility, and long-term maintainability.&lt;/p&gt;

&lt;p&gt;Upsun treats infrastructure as part of the application. Environments are versioned alongside code, and deployments are deterministic. This approach reduces surprises and makes complex systems easier to reason about over time.&lt;/p&gt;

&lt;p&gt;For organizations with compliance requirements or multi-environment workflows, Upsun offers a level of rigor that Heroku never aimed to provide. At the same time, it abstracts away much of the operational burden that typically comes with such control.&lt;/p&gt;

&lt;p&gt;In 2026, Upsun is particularly well-suited for regulated industries, large content platforms, and teams with multiple long-lived environments. It is less about rapid experimentation and more about predictable, repeatable delivery at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vercel: The Frontend-Native Deployment Platform
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmn3pyiweic3hd373078e.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmn3pyiweic3hd373078e.webp" alt=" " width="672" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://vercel.com/" rel="noopener noreferrer"&gt;Vercel&lt;/a&gt; is often discussed in a different category, but it deserves inclusion in any modern deployment conversation. Vercel is optimized for frontend applications, serverless functions, and edge workloads.&lt;/p&gt;

&lt;p&gt;If Heroku excelled at hosting monolithic web apps, Vercel excels at composable, frontend-driven architectures. It integrates deeply with modern frameworks and makes global deployment nearly effortless.&lt;/p&gt;

&lt;p&gt;In 2026, many applications are frontend-heavy, with APIs split into smaller services or serverless functions. For these use cases, Vercel offers a developer experience that feels faster and more modern than traditional PaaS platforms.&lt;/p&gt;

&lt;p&gt;However, Vercel is not a full replacement for Heroku in every scenario. Long-running background jobs and stateful services often live elsewhere. Still, for teams building modern web products, Vercel frequently becomes the centerpiece of their deployment strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the Right Heroku Alternative in 2026
&lt;/h2&gt;

&lt;p&gt;There is no single “best” Heroku replacement. The right choice depends on how your application behaves, how your team works, and how much control you want over infrastructure.&lt;/p&gt;

&lt;p&gt;Sevalla is ideal for teams that want familiarity and minimal friction. Render suits growing teams that need flexibility without chaos. Fly.io is powerful for global, performance-sensitive systems. Upsun excels in structured, enterprise environments. Vercel dominates frontend-centric architectures.&lt;/p&gt;

&lt;p&gt;The common thread is that deployment in 2026 is no longer one-size-fits-all. Heroku set the standard, but the ecosystem has evolved. Today’s platforms offer sharper trade-offs, clearer philosophies, and better alignment with modern development patterns.&lt;/p&gt;

&lt;p&gt;For teams starting new projects, the opportunity is clear. You can choose a platform that matches your future, not just your present.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Heroku is not disappearing, and for many existing workloads, it will continue to run reliably for years. However, its shift toward a sustaining engineering model makes one thing clear: teams building new products in 2026 should think carefully about where they place their long-term bets.&lt;/p&gt;

&lt;p&gt;Deployment platforms are no longer just hosting choices. They shape how fast teams move, how systems scale, and how painful future migrations become.&lt;/p&gt;

&lt;p&gt;In 2026, the strongest deployment strategy is intentional, not inherited. Heroku showed the industry what was possible. Its successors are now defining what comes next.&lt;/p&gt;

&lt;p&gt;Hope you enjoyed this article. Learn more about me by &lt;a href="https://manishshivanandhan.com/" rel="noopener noreferrer"&gt;visiting my website.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Build Your First AI Agent and Deploy it to Sevalla</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Wed, 07 Jan 2026 11:02:56 +0000</pubDate>
      <link>https://dev.to/manishmshiva/how-to-build-your-first-ai-agent-deploy-it-to-sevalla-2hm6</link>
      <guid>https://dev.to/manishmshiva/how-to-build-your-first-ai-agent-deploy-it-to-sevalla-2hm6</guid>
      <description>&lt;p&gt;Artificial intelligence is changing how we build software.&lt;/p&gt;

&lt;p&gt;Just a few years ago, writing code that could talk, decide, or use external data felt hard.&lt;/p&gt;

&lt;p&gt;Today, thanks to new tools, developers can build smart agents that read messages, reason about them, and call functions on their own.&lt;/p&gt;

&lt;p&gt;One such platform that makes this easy is &lt;a href="https://github.com/langchain-ai/langchain" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;. With LangChain, you can link language models, tools, and apps together. You can also wrap your agent inside a FastAPI server, then push it to a cloud platform for deployment.&lt;/p&gt;

&lt;p&gt;This article will walk you through building your first AI agent. You will learn what LangChain is, how to build an agent, how to serve it through FastAPI, and how to deploy it on Sevalla.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is LangChain&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;LangChain is a framework for working with large language models. It helps you build apps that think, reason, and act.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdr81cgnp5g6rfuqcp92d.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdr81cgnp5g6rfuqcp92d.jpg" alt=" " width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A model on its own only gives text replies, but LangChain lets it do more. It lets a model call functions, use tools, connect with databases, and follow workflows.&lt;/p&gt;

&lt;p&gt;Think of LangChain as a bridge. On one side is the language model. On the other side are your tools, data sources, and business logic. LangChain tells the model what tools exist, when to use them, and how to reply. This makes it ideal for building agents that answer questions, automate tasks, or handle complex flows.&lt;/p&gt;

&lt;p&gt;Many developers use LangChain because it is flexible. It supports many AI models and fits well with Python.&lt;/p&gt;

&lt;p&gt;LangChain also makes it easier to move from prototype to production. Once you learn how to create an agent, you can reuse the pattern for more advanced use cases.&lt;/p&gt;

&lt;p&gt;I recently published a detailed &lt;a href="https://www.turingtalks.ai/p/langchain-tutorial" rel="noopener noreferrer"&gt;LangChain tutorial&lt;/a&gt; here.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Building Your First Agent with LangChain&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let us make our first agent. It will respond to user questions and &lt;a href="https://www.freecodecamp.org/news/how-to-build-your-first-mcp-server-using-fastmcp/" rel="noopener noreferrer"&gt;call a tool&lt;/a&gt; when needed.&lt;/p&gt;

&lt;p&gt;We will give it a simple weather tool, then ask it about the weather in a city. Before this, create a file called .env and add your OpenAI API key. LangChain will automatically use it when making requests to OpenAI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENAI_API_KEY=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
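&lt;p&gt;If you are curious what that step does under the hood, here is a rough, standard-library-only sketch of what python-dotenv’s load_dotenv performs: it reads KEY=VALUE lines from .env into the process environment. The helper name load_env_file is illustrative and not part of this tutorial’s code; in the project itself, keep using load_dotenv.&lt;/p&gt;

```python
# Rough sketch (assumption) of what python-dotenv's load_dotenv does:
# read KEY=VALUE lines from a .env file into os.environ so that the
# OpenAI client can later find OPENAI_API_KEY.
import os

def load_env_file(path=".env"):
    """Naive .env loader: skips comments and blank lines, keeps existing values."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # setdefault so real environment variables are not overwritten
                os.environ.setdefault(key.strip(), value.strip())
```

&lt;p&gt;The real load_dotenv handles quoting and interpolation too, but the idea is the same: once the key is in the environment, every library in the process can read it.&lt;/p&gt;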



&lt;p&gt;Here is the code for our agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.agents import create_agent
from dotenv import load_dotenv

# load environment variables (including OPENAI_API_KEY)
load_dotenv()

# define the tool that the LLM can call
def get_weather(city: str) -&amp;gt; str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

# create the agent with a model, a list of tools, and a system prompt
agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

# run the agent and print its final reply
result = agent.invoke({"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}]})
print(result["messages"][-1].content)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This small program shows the power of LangChain agents.&lt;/p&gt;

&lt;p&gt;First, we import create_agent, which helps us build the agent. Then we write a function called get_weather. It takes a city name and returns a friendly sentence.&lt;/p&gt;

&lt;p&gt;The function acts as our tool. A tool is something the agent can use. In real projects, tools might fetch prices, store notes, or call APIs.&lt;/p&gt;

&lt;p&gt;Next, we call create_agent. We give it three things. We pass the model we want to use. We list the tools we want it to call. And we give a system prompt. The system prompt tells the agent who it is and how it should behave.&lt;/p&gt;

&lt;p&gt;Finally, we run the agent. We call invoke with a message.&lt;/p&gt;

&lt;p&gt;The user asks for the weather in San Francisco. The agent reads this message. It sees that the question needs the weather function. So it calls our tool get_weather, passes the city, and returns an answer.&lt;/p&gt;

&lt;p&gt;Even though this example is tiny, it captures the main idea. The agent reads natural language, figures out what tool to use, and sends a reply.&lt;/p&gt;

&lt;p&gt;Later, you can add more tools or replace the weather function with one that connects to a real API. But this is enough for us to wrap and deploy.&lt;/p&gt;
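&lt;p&gt;As a sketch of that extension, a second tool can sit right next to get_weather, since any plain Python function works as a tool. The get_local_time helper below is illustrative: its UTC shortcut stands in for a real time-zone lookup, so treat it as an assumption rather than a working API.&lt;/p&gt;

```python
# Two tools the agent could choose between. get_local_time is a
# hypothetical example; it reports UTC instead of doing a real
# time-zone lookup for the city.
import datetime

def get_weather(city: str):
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

def get_local_time(city: str):
    """Return the current UTC time as a stand-in for a real time-zone lookup."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return f"The time in {city} is {now.strftime('%H:%M')} UTC."

# Pass both functions to create_agent, for example:
# agent = create_agent(model="gpt-4o", tools=[get_weather, get_local_time],
#                      system_prompt="You are a helpful assistant")
```

&lt;p&gt;With both functions in the tools list, the agent decides which one a question needs based on the docstrings and the user’s message.&lt;/p&gt;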

&lt;h2&gt;
  
  
  &lt;strong&gt;Wrapping Your Agent with FastAPI&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The next step is to serve our agent. &lt;a href="https://fastapi.tiangolo.com/" rel="noopener noreferrer"&gt;FastAPI&lt;/a&gt; helps us expose our agent through an HTTP endpoint. That way, users and systems can call it through a URL, send messages, and get replies.&lt;/p&gt;

&lt;p&gt;To begin, you install FastAPI and write a simple file like main.py. Inside it, you import FastAPI, load the agent, and write a route.&lt;/p&gt;

&lt;p&gt;When someone posts a question, the API forwards it to the agent and returns the answer. The flow is simple.&lt;/p&gt;

&lt;p&gt;The user talks to FastAPI. FastAPI talks to your agent. The agent thinks and replies.&lt;/p&gt;

&lt;p&gt;Here is the FastAPI wrapper for your agent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn
from langchain.agents import create_agent
from dotenv import load_dotenv
import os

load_dotenv()

# defining the tool that LLM can call
def get_weather(city: str) -&amp;gt; str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

# Creating an agent
agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.get("/")
def root():
    return {"message": "Welcome to your first agent"}

@app.post("/chat")
def chat(request: ChatRequest):
    result = agent.invoke({"messages":[{"role":"user","content":request.message}]})
    return {"reply": result["messages"][-1].content}

def main():
    port = int(os.getenv("PORT", 8000))
    uvicorn.run(app, host="0.0.0.0", port=port)

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, FastAPI defines a /chat endpoint. When someone sends a message, the server calls our agent. The agent processes it as before. Then FastAPI returns a clean JSON reply. The API layer hides the complexity inside a simple interface.&lt;/p&gt;

&lt;p&gt;At this point, you have a working agent server. You can run it on your machine, call it with Postman or cURL, and check responses. When this works, you are ready to deploy.&lt;/p&gt;
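&lt;p&gt;For a quick check without Postman, a small standard-library client can post to the endpoint. The localhost URL and the "reply" field match the main.py above; the helper names here are illustrative and assume the server is running locally.&lt;/p&gt;

```python
# Minimal client sketch for the /chat endpoint using only the standard library.
# base_url assumes the FastAPI server from main.py is running on port 8000.
import json
import urllib.request

def build_chat_request(message, base_url="http://localhost:8000"):
    """Build a POST request matching the ChatRequest model (a 'message' field)."""
    payload = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask_agent(message):
    """Send a question to the running agent server and return its reply text."""
    with urllib.request.urlopen(build_chat_request(message)) as resp:
        return json.loads(resp.read())["reply"]

# Example (requires the server to be running):
# print(ask_agent("What is the weather in Paris?"))
```

&lt;p&gt;Start the server with python main.py in one terminal, then call ask_agent from another; the same request shape works from Postman or cURL.&lt;/p&gt;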

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66zujrvw6iu0u95qdrif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66zujrvw6iu0u95qdrif.png" alt=" " width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Deployment to Sevalla&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can choose any cloud provider, such as AWS or DigitalOcean, to host your agent. I will be using Sevalla for this example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://sevalla.com/" rel="noopener noreferrer"&gt;Sevalla&lt;/a&gt; is a developer-friendly PaaS provider. It offers application hosting, database, object storage, and static site hosting for your projects.&lt;/p&gt;

&lt;p&gt;Every platform will charge you for creating a cloud resource. Sevalla comes with a $50 credit for us to use, so we won’t incur any costs for this example.&lt;/p&gt;

&lt;p&gt;Let’s push this project to GitHub so that we can connect our repository to Sevalla. We can also enable auto-deployments so that any new change to the repository is automatically deployed.&lt;/p&gt;

&lt;p&gt;You can also &lt;a href="https://github.com/manishmshiva/first-agent-with-fastapi" rel="noopener noreferrer"&gt;fork my repository&lt;/a&gt; from here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.sevalla.com/login" rel="noopener noreferrer"&gt;Log in&lt;/a&gt; to Sevalla and click on Applications -&amp;gt; Create new application. You can see the option to link your GitHub repository to create a new application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdqgulzvwzccp36c201g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdqgulzvwzccp36c201g.png" alt=" " width="800" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the default settings and click “Create application”. Now we have to add our OpenAI API key to the environment variables. Once the application is created, open the “Environment variables” section and save the OPENAI_API_KEY value as an environment variable.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrgv82cchr7a9omut81f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrgv82cchr7a9omut81f.png" alt=" " width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are ready to deploy our application. Click on “Deployments” and click “Deploy now”. It will take 2–3 minutes for the deployment to complete.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauvz7b31q155bdrfflp9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauvz7b31q155bdrfflp9.png" alt=" " width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once done, click on “Visit app”. You will see the application served via a URL ending with sevalla.app. This is your new root URL. You can replace localhost:8000 with this URL and test in Postman.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yrtzrtaziglb6qzv498.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yrtzrtaziglb6qzv498.png" alt=" " width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congrats! Your first AI agent with tool calling is now live. You can extend it by adding more tools and capabilities; push your code to GitHub and Sevalla will automatically deploy your application to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Building AI agents is no longer a task for experts. With LangChain, you can write a few lines of code and create agents that reason, respond to users, and call functions on their own.&lt;/p&gt;

&lt;p&gt;By wrapping the agent with FastAPI, you give it a doorway that apps and users can access. Finally, Sevalla makes it easy to push your agent live, monitor it, and run it in production.&lt;/p&gt;

&lt;p&gt;This journey from agent idea to deployed service shows what modern AI development looks like. You start small. You explore tools. You wrap them and deploy them.&lt;/p&gt;

&lt;p&gt;Then you iterate, add more capability, improve logic, and plug in real tools. Before long, you have a smart, living agent online. That is the power of this new wave of technology.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to Build AI Workflows Without Code Using Activepieces and Sevalla</title>
      <dc:creator>Manish Shivanandhan</dc:creator>
      <pubDate>Thu, 04 Dec 2025 11:31:44 +0000</pubDate>
      <link>https://dev.to/manishmshiva/how-to-build-ai-workflows-without-code-using-activepieces-and-sevalla-4jaa</link>
      <guid>https://dev.to/manishmshiva/how-to-build-ai-workflows-without-code-using-activepieces-and-sevalla-4jaa</guid>
      <description>&lt;p&gt;&lt;strong&gt;Let’s learn how to use Activepieces to build smart AI workflows with a simple visual builder. No coding or complex setup needed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Artificial intelligence is now part of daily work for many teams. People use it to write content, analyse data, answer support requests, and guide business decisions.&lt;/p&gt;

&lt;p&gt;But building AI workflows is still hard for many users. Most tools need code, a complex setup, or long training.&lt;/p&gt;

&lt;p&gt;Activepieces makes this much easier. It's an open source tool that lets anyone create smart workflows with a simple visual builder.&lt;/p&gt;

&lt;p&gt;You can mix AI models, data sources, and systems without writing code. This makes automation more open to teams that want to work faster and cut manual effort.&lt;/p&gt;

&lt;p&gt;In this guide, we will learn what Activepieces is, how to work with it, and how to deploy our own version to the cloud using Sevalla.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Activepieces?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/activepieces/activepieces" rel="noopener noreferrer"&gt;Activepieces&lt;/a&gt; is an open-source automation platform that focuses on ease of use.&lt;/p&gt;

&lt;p&gt;You can host it on your own server or use it in the cloud. The platform uses a clean flow builder where each block represents a step. These blocks are called pieces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff666l5nno5hpnv9tyxb6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff666l5nno5hpnv9tyxb6.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;br&gt;
A piece may call an API, connect to a tool like Google Sheets, run an AI model, or wait for human input. By linking pieces together, you can build workflows that act like agents.&lt;/p&gt;

&lt;p&gt;They can listen to events, run analysis, create content, evaluate data, or push results into other tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Activepieces ecosystem
&lt;/h2&gt;

&lt;p&gt;The main goal of Activepieces is to let both technical and non-technical users build workflows that include AI. It gives a simple visual interface but also has a strong developer layer under the hood.&lt;/p&gt;

&lt;p&gt;Developers can build new pieces in TypeScript. These custom pieces then appear in the visual builder for anyone to use. This keeps advanced logic invisible behind a friendly interface.&lt;/p&gt;

&lt;p&gt;The platform has a growing library of over two hundred pieces. Many come from the community.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxdusirab077nc3nqvvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxdusirab077nc3nqvvw.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;br&gt;
They include common tools like email, Slack, Google Workspace, OpenAI, and Notion. There are also pieces for reading links, parsing text, calling webhooks, or waiting for timed events.&lt;/p&gt;

&lt;p&gt;The library grows fast because anyone can contribute new pieces. Each piece is an npm package, so it fits well into the wider JavaScript ecosystem.&lt;/p&gt;

&lt;p&gt;Activepieces also supports human input. For example, a workflow can pause and wait for someone to review a message before sending it. It can also collect answers from a form.&lt;/p&gt;

&lt;p&gt;These options make it possible to build flows that mix automation with human judgment. This is useful in tasks where risk or correctness matters, such as compliance checks or approval flows.&lt;/p&gt;

&lt;p&gt;A major part of the platform is its AI-first design. It includes native support for popular AI providers. You can build agents that analyse text, rewrite messages, classify content, extract fields, or make decisions.&lt;/p&gt;

&lt;p&gt;You can even ask the AI to clean data inside a flow, without needing code. This makes it easy to use AI to speed up work and remove repetitive steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a workflow in Activepieces
&lt;/h2&gt;

&lt;p&gt;Every workflow begins with a trigger. A trigger is an action that starts the flow.&lt;/p&gt;

&lt;p&gt;It may be a new message, a new file, a web request, or a timed schedule. After the trigger fires, the flow runs step by step. Each step is a piece you choose from the library.&lt;/p&gt;

&lt;p&gt;The builder shows the flow in a simple vertical layout. You can add branches, loops, retries, and data mapping.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuq0eksnk8z5qc8juir4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuq0eksnk8z5qc8juir4b.png" alt=" " width="800" height="352"&gt;&lt;/a&gt;&lt;br&gt;
Data mapping is the process of telling the flow how to pass information from one step to another. It uses a simple interface where you pick fields from earlier steps and connect them to new ones.&lt;/p&gt;

&lt;p&gt;When AI pieces are added, the workflow becomes more powerful. For example, you can pass text from a form to an AI model and get a summary.&lt;/p&gt;

&lt;p&gt;You can pass a document link and extract the main points. You can ask the AI to answer a question or decide if a message fits a category. These results then move to the next step, where they can be stored or sent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying Activepieces on the Cloud using Sevalla
&lt;/h2&gt;

&lt;p&gt;To use Activepieces, you can either install it on your computer (not recommended due to the complex setup), &lt;a href="https://www.activepieces.com/" rel="noopener noreferrer"&gt;buy a cloud subscription&lt;/a&gt;, or self-host it.&lt;/p&gt;

&lt;p&gt;If you prefer to install it on your computer, &lt;a href="https://www.activepieces.com/docs/install/options/docker" rel="noopener noreferrer"&gt;here are the instructions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Self-hosting gives you full control and is usually preferred by technical teams who want to keep sensitive data in-house.&lt;/p&gt;

&lt;p&gt;You can choose any cloud provider, such as AWS or DigitalOcean, to set up Activepieces, but I will be using Sevalla.&lt;/p&gt;

&lt;p&gt;Sevalla is a PaaS provider designed for developers and dev teams that ship features and updates constantly. It offers application hosting, databases, object storage, and static site hosting for your projects.&lt;/p&gt;

&lt;p&gt;I am using Sevalla for two reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every platform will charge you for creating a cloud resource. Sevalla comes with a $50 credit for us to use, so we won’t incur any costs for this example.&lt;/li&gt;
&lt;li&gt;Sevalla has a &lt;a href="https://docs.sevalla.com/templates/overview" rel="noopener noreferrer"&gt;template for Activepieces&lt;/a&gt;, which simplifies the manual installation and setup of each resource you will need.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://app.sevalla.com/login" rel="noopener noreferrer"&gt;Log in&lt;/a&gt; to Sevalla and click on Templates. You can see ActivePieces as one of the templates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcai48f9pjrccdwgjcv4f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcai48f9pjrccdwgjcv4f.png" alt=" " width="800" height="278"&gt;&lt;/a&gt;&lt;br&gt;
Click on the “ActivePieces” template. You will see the resources needed to provision the application. Click on “Deploy Template”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feiamh4ji6ggjomn5b2bu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feiamh4ji6ggjomn5b2bu.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;br&gt;
You can see the resources being provisioned. Once the deployment is complete, go to the ActivePieces application and click on “Visit app”. Enter your name, email, and password, and you will be taken to the dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre788uxi69wsrlqbrinp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre788uxi69wsrlqbrinp.png" alt=" " width="800" height="392"&gt;&lt;/a&gt;&lt;br&gt;
Click on “New Flow”. You can either create a flow from scratch or choose one of the many templates ActivePieces offers.&lt;/p&gt;

&lt;p&gt;Let's pick the “LinkedIn content idea generator” template.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcg16ppbytxc7xjf9xz0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmcg16ppbytxc7xjf9xz0.png" alt=" " width="800" height="552"&gt;&lt;/a&gt;&lt;br&gt;
Click on “Use template”. You will see the workflow generated for you. You can also add/remove components based on your requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs4yasv8f1kser0c2qi6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs4yasv8f1kser0c2qi6.png" alt=" " width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;
You will see the option to update each block of the workflow. You can create connections to your email, Google Sheets, etc., to integrate them into the blocks.&lt;/p&gt;

&lt;p&gt;In the “rank news” block, you will be asked to choose a model and add your API key. For example, you can find your &lt;a href="https://platform.openai.com/settings/organization/api-keys" rel="noopener noreferrer"&gt;OpenAI API key here&lt;/a&gt;. You will also see a pre-built prompt template ready to use with your workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqj2e7x1x324h769uczd9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqj2e7x1x324h769uczd9.png" alt=" " width="754" height="746"&gt;&lt;/a&gt;&lt;br&gt;
Great! You now have a production-grade ActivePieces server running on the cloud. You can use this to set up all your workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-world examples
&lt;/h2&gt;

&lt;p&gt;A sales team can automate lead enrichment by passing new leads through an AI model. The AI extracts company size, industry, and intent. The results go to a CRM. The team saves hours of manual research.&lt;/p&gt;
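&lt;p&gt;The shape of such an enrichment step, expressed in plain code rather than the visual builder, might look like this sketch. The AI call is stubbed out, and the field names are invented for illustration:&lt;/p&gt;

```python
import json

# Illustration only: parse a (stubbed) AI enrichment response into
# CRM-ready fields. In a real flow the JSON would come from a model,
# and the field names would match your CRM's schema.
def enrich_lead(ai_response: str) -> dict:
    data = json.loads(ai_response)
    return {
        "company_size": data.get("company_size", "unknown"),
        "industry": data.get("industry", "unknown"),
        "intent": data.get("intent", "unknown"),
    }

stub = '{"company_size": "51-200", "industry": "fintech", "intent": "high"}'
lead = enrich_lead(stub)
print(lead)
```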

&lt;p&gt;A content team can create a writing assistant. It gathers ideas from a form, generates outlines using an AI model, and stores drafts in Google Docs. Editors then refine the text.&lt;/p&gt;

&lt;p&gt;A compliance team can process long documents. They upload a file, an AI model extracts key rules, and the workflow sends a summary to reviewers. This makes it easier to track changes in regulations.&lt;/p&gt;

&lt;p&gt;An operations team can watch for new tickets in a helpdesk system. AI summarises the ticket. The workflow checks severity and sends it to the right team. This speeds up response times.&lt;/p&gt;
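&lt;p&gt;The routing logic in that last example is essentially a severity check on the AI summary. A hypothetical version in code, with keywords and team names invented for this sketch:&lt;/p&gt;

```python
# Illustration only: route a ticket summary to a team by severity.
# The keywords and team names are invented for this sketch.
def route_ticket(summary: str) -> str:
    text = summary.lower()
    if any(word in text for word in ("outage", "data loss", "security")):
        return "on-call"
    if "billing" in text:
        return "finance"
    return "support"

print(route_ticket("Customer reports a full outage in EU region"))
```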

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The idea behind Activepieces is simple. Automate work that slows you down. Mix AI with your tools. Build flows visually. Let both technical and non-technical users create automation. This helps teams move faster, reduce errors, and stay focused on meaningful work.&lt;/p&gt;

&lt;p&gt;The rise of AI means teams will use more specialised models. They will also need smooth ways to link these models with their daily tools.&lt;/p&gt;

&lt;p&gt;No-code platforms like Activepieces give teams control and speed without asking them to learn programming. The platform keeps improving with new pieces and stronger AI features. As the community grows, the number of available integrations will rise.&lt;/p&gt;

&lt;p&gt;Hope you enjoyed this article. Find me on &lt;a href="https://linkedin.com/in/manishmshiva" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; or &lt;a href="https://manishshivanandhan.com/" rel="noopener noreferrer"&gt;visit my website&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>architecture</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
