Salesforce Agent Script: Your First Steps
If you've been building Agentforce agents using nothing but natural language prompts, you already know the frustration. The agent works perfectly in your demo, then does something completely unexpected when a real customer throws a curveball. That's exactly the problem Agent Script was built to solve - and honestly, it's one of the most exciting things Salesforce has shipped in a while.
Agent Script is a new scripting language that shipped with the Spring '26 release as part of Agentforce Builder. It lets you blend conversational AI with hard-coded business logic, so your agents can still chat naturally while following strict rules when it matters. Think of it as giving your AI agent a personality AND a rulebook.
Why Prompts Alone Weren't Cutting It
Here's something I've noticed working with Agentforce since it launched: natural language instructions are great for the "soft" parts of a conversation. Greeting a customer, understanding their intent, showing empathy - LLMs handle all of that beautifully. But when you need your agent to always verify identity before accessing an account, or never offer a discount above 15%, prompts get shaky.
You'd write something like "Always verify the customer's identity before proceeding" and most of the time it worked. But "most of the time" isn't good enough when you're handling real customer data or financial transactions.
Agent Script fixes this by introducing what Salesforce calls "hybrid reasoning." You keep the natural language pieces for conversational flexibility, but wrap them in deterministic logic - if/else conditions, required action sequences, and controlled topic transitions that execute exactly as written, every single time.
If you're new to some of this terminology, salesforcedictionary.com is a solid reference for looking up Salesforce-specific terms like topics, actions, and agents in the Agentforce context.
How Agent Script Actually Works
The language itself is surprisingly approachable. If you've ever written YAML or Python, you'll feel right at home. It uses indentation-based structure (three spaces per level is the standard), key-value pairs, and a handful of special operators that are easy to pick up.
Here are the ones you'll use constantly:
The pipe operator (|) marks text that goes to the LLM as part of the prompt. This is where your conversational instructions live. Everything outside the pipes is deterministic logic that runs exactly as specified.
The arrow operator (->) switches a block from declarative to procedural mode. When you need step-by-step logic with conditionals and variable assignments, this is your friend.
The @ symbol references other resources - variables, actions, topics, and outputs. So @actions.get_order calls an action you've defined, and @topic.verify_identity delegates to another topic.
Template expressions ({! }) inject variable values at runtime. Write {!customer_name} and it resolves to whatever that variable holds during the conversation.
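Put together, those operators might look something like this. This is an illustrative sketch assembled from the constructs described above, not an excerpt from the official docs - variable and topic names are placeholders, and exact indentation rules may differ:

```agentscript
# Text behind the pipe goes to the LLM as prompt instructions
instructions: |
   Greet {!customer_name} and ask which order they need help with.

# The arrow switches into procedural, step-by-step logic
->
   if @variables.order_count == 0:
      @topic.no_orders_found
```

The pipe block stays flexible and conversational, while everything under the arrow executes deterministically.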
The structure follows a logical flow: you define your agent config, set up variables, write system-level instructions, create a start_agent entry point, then build out your topics with their actions and reasoning blocks.
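Following that flow, the overall shape of a script might be sketched like this - section names here mirror the flow just described, but treat them as placeholders and check the official Agent Script Recipes for the exact keywords:

```agentscript
# Illustrative skeleton only - names are placeholders
config:
   agent_name: order_support_agent

variables:
   customer_name: ""
   identity_verified: false

system:
   instructions: |
      You are a friendly, concise customer support agent.

start_agent:
   ->
      @topic.order_status

topic order_status:
   # actions and reasoning blocks go here
```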
Building Your First Agent Script: A Practical Walkthrough
Let's say you're building a customer service agent that handles order inquiries. In the old prompt-only world, you'd write a big block of instructions and hope for the best. With Agent Script, you can be precise.
Your start_agent block fires with every user message and routes to the right topic. Each topic - like "check order status" or "process return" - has its own actions, variables, and reasoning logic.
Inside a topic, you define actions in two ways. Deterministic actions use the run keyword and execute immediately when conditions are met. You specify inputs with with and capture outputs with set. No LLM judgment involved - it just runs.
LLM-driven actions go in your reasoning.actions block. Here, you expose tools to the LLM and let it decide when to call them based on conversation context. The slot-filling operator (...) tells the LLM to extract values from what the user said. So order_id: ... means "figure out the order ID from the conversation."
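Here's a hedged sketch showing both styles side by side. The action names (`verify_identity`, `get_order_status`) and variable names are hypothetical, and the exact keyword placement may vary from the shipping syntax:

```agentscript
topic check_order:
   # Deterministic: runs immediately, no LLM judgment involved
   run @actions.verify_identity
      with email: {!customer_email}
      set identity_verified

   reasoning:
      instructions: |
         Help the customer check on their order. Be friendly
         and concise.
      actions:
         # LLM-driven: the model decides when to call this,
         # and ... tells it to extract the order ID from
         # the conversation
         get_order_status: @actions.get_order_status
            order_id: ...
```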
The real power shows up when you combine both. Your deterministic logic handles the non-negotiable stuff - identity verification, data validation, compliance checks. The LLM handles the conversational flow, empathy, and edge cases that would be impossible to hard-code.
One thing worth noting: else if isn't supported yet, so you'll need separate if statements for multiple conditions. It's a minor quirk, but it catches people off guard.
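In practice, that means writing a chain of independent `if` blocks where you'd normally reach for `else if` - something like this sketch (topic names are placeholders):

```agentscript
# No `else if` support yet, so chain separate if statements
->
   if @variables.order_status == "shipped":
      @topic.tracking_info
   if @variables.order_status == "delayed":
      @topic.delay_update
   if @variables.order_status == "cancelled":
      @topic.cancellation_help
```

Just make sure the conditions are mutually exclusive, since each `if` is evaluated independently.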
Three Authoring Methods (Pick Your Comfort Level)
Salesforce clearly thought about different skill levels when designing this. You've got three ways to create Agent Script:
Conversational authoring is the lowest barrier. You literally describe what you want your agent to do in plain English, and Agentforce Builder converts it into proper script with topics, actions, and expressions. It's not perfect, but it's a fantastic starting point.
The visual canvas gives you a block-based editor where script appears as summarized, expandable blocks. Type / for common expression patterns and @ to reference resources. It's the sweet spot for admins who want more control without writing raw script.
Pro-code with Agentforce DX is for developers who want to work in VS Code with the Salesforce CLI. The Agentforce DX VS Code Extension supports the full Agent Script language with autocomplete, syntax highlighting, and all the standard code editing features you'd expect.

Real-World Use Cases That Make Sense
Where Agent Script really shines is in processes that need both consistency and flexibility. I've been thinking about this a lot, and these are the scenarios where it makes the biggest difference:
Identity verification flows. The agent needs to collect an email, send a verification code, and validate the response. These steps must happen in order, every time, with no shortcuts. But the conversational wrapper around those steps can be natural and friendly.
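A verification flow like that might be sketched as follows - again, action names are hypothetical, and this is an interpretation of the pattern rather than official sample code:

```agentscript
topic verify_identity:
   # These steps always run in this order - no shortcuts
   run @actions.send_verification_code
      with email: {!customer_email}

   reasoning:
      instructions: |
         Let the customer know a code was just sent to their
         email and ask them to read it back.
      actions:
         # ... extracts the code the customer provides
         validate_code: @actions.validate_code
            code: ...
```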
Field service intake. An energy company could use Agent Script to ensure their agent always confirms the asset and location, captures the issue type and urgency, runs through diagnostic questions, creates the work order with correct priority, and escalates based on safety criteria. The sequence is locked down, but the agent can still ask clarifying questions naturally.
Financial services compliance. When agents handle banking inquiries, certain disclosures and verification steps are legally required. Agent Script's after_reasoning block is perfect here - it runs deterministically after the LLM finishes, acting as a guardrail for mandatory compliance steps.
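A compliance guardrail in that style might look roughly like this - the block layout follows the article's description of `after_reasoning`, but the variable and action names are invented for illustration:

```agentscript
topic banking_inquiry:
   reasoning:
      instructions: |
         Answer the customer's account questions helpfully
         and include the required regulatory disclosure.

   # Runs deterministically after the LLM finishes, every time
   after_reasoning:
      ->
         if @variables.disclosure_given == false:
            run @actions.append_required_disclosure
```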
Sales qualification. Your agent can follow a strict BANT or MEDDIC qualification framework while still having a natural conversation. The deterministic logic ensures no qualifying question gets skipped, while the LLM keeps things from feeling like an interrogation.
Getting Started Without Losing Your Mind
If you're itching to try Agent Script, here's my honest advice: start small. Don't try to rebuild your entire service agent from scratch. Pick one topic - maybe a simple FAQ handler or an order status checker - and build that out with Agent Script.
Use the conversational authoring first to generate a baseline, then switch to the canvas or script view to refine the logic. Pay close attention to your available when conditions on topics, because that's what controls when the agent can transition between conversation areas.
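A topic gated this way might be sketched like so - the placement of the condition and the variable name are assumptions based on the `available when` concept, not verified syntax:

```agentscript
# The agent can only transition into this topic once the
# identity_verified variable (a placeholder name) is true
topic process_return:
   available when @variables.identity_verified == true
   reasoning:
      instructions: |
         Walk the customer through starting a return.
```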
Test aggressively. The enhanced preview in Agentforce Builder now shows detailed tracing and reasoning summaries for every message, so you can see exactly why your agent made each decision. That alone is worth the upgrade from the old prompt-only approach.
And treat your agents as part of your operating model, not as experiments. Define your required inputs and decision paths before you start building. The more upfront planning you do, the cleaner your script will be.
What This Means for the Salesforce Ecosystem
Agent Script represents a real shift in how we think about AI agents in Salesforce. It's not just another feature - it's an acknowledgment that production AI needs structure. The days of crossing your fingers and hoping your prompt covers every edge case are ending.
For admins, this is an opportunity to own more of the agent development process. You don't need to be a developer to use conversational authoring or the visual canvas. For developers, Agent Script gives you the precision and version control you've been asking for since Agentforce launched.
If you're studying for Salesforce certifications or trying to stay current with the platform, Agent Script is definitely something to add to your learning list. Resources like salesforcedictionary.com and the official Agent Script Recipes on the Salesforce Developers site are great places to start building your knowledge.
I'd love to hear what you're building with Agent Script. Drop a comment below if you've tried it out or if you have questions about getting started - always happy to talk shop.