The Rahsi Framework™
Read Complete Article | https://www.aakashrahsi.online/post/from-chatbot-to-workflow-engine
From Chatbot to Workflow Engine.
Quietly, Microsoft has already given us everything we need to turn Copilot + Claude into governed workflow engines – not just chat UIs.
This article is my attempt to lay that out, calmly and precisely:
From Chatbot to Workflow Engine | Safely Orchestrating Copilot & Claude with Power Automate, Logic Apps, and Durable Functions | The Rahsi Framework™
The goal is simple:
- Treat the whole stack as one execution context inside a clear trust boundary
- Align with Microsoft’s designed behavior, not fight it
- Stay consistent with how Copilot honors labels in practice when it calls functions, touches data, and triggers actions in your estate
1. The core move: AI as signal router, not logic host
The Rahsi framing starts with a hard line:
Copilot / Claude are signal routers that choose tools.
Your workflow logic, state, and guarantees live in Azure.
In practice, that means:
- Azure OpenAI / Azure AI Foundry function calling
  - Functions describe the allowed actions and data surfaces
  - The model chooses which function to invoke, with what arguments
  - You treat this as a typed intent coming from an LLM, not as a place to bury business rules
- Assistant functions (classic Foundry assistants)
  - Multi-step tools, tool calls, and responses are treated as a stream of structured signals
  - You design this as a governance surface, not a playground
The execution context lives behind the trust boundary:
- Durable Functions carry state, timers, and compensations
- Logic Apps wire in systems-of-record and external SaaS
- Power Automate governs approvals and human decisions
- Connectors and Copilot Studio define what the AI is eligible to touch
Copilot and Claude become the front-door narrators of a workflow engine that is already Azure-native.
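The "typed intent, not business logic" idea above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the allow-list contents and the `ToolIntent` shape are assumptions for the sketch, using tool names from later in this article.

```python
from dataclasses import dataclass

# Hypothetical palette of allowed tools and their required argument names.
ALLOWED_TOOLS = {
    "createApprovalWorkflow": {"scope", "approvers", "sla", "reason"},
    "submitBusinessApproval": {"requestId", "decision", "justification"},
}

@dataclass
class ToolIntent:
    """A model's tool call, treated as typed intent rather than trusted logic."""
    name: str
    arguments: dict

def validate_intent(intent: ToolIntent) -> ToolIntent:
    """Reject anything outside the declared palette before it crosses the trust boundary."""
    if intent.name not in ALLOWED_TOOLS:
        raise ValueError(f"Tool '{intent.name}' is not in the allowed palette")
    missing = ALLOWED_TOOLS[intent.name] - intent.arguments.keys()
    if missing:
        raise ValueError(f"Missing required arguments: {sorted(missing)}")
    return intent
```

The point of the gate is that the model only ever *proposes* an action; your code, inside the boundary, decides whether that proposal is even well-formed.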
2. Designed behavior as one execution context
The article treats this stack as a single, coherent execution context:
- Trust boundary → Azure subscription / resource group / VNet / hybrid boundaries where your durable logic lives
- Execution context → Durable orchestrations, Logic Apps flows, Power Automate flows, and connectors that implement the actual behavior
- AI front-door → Copilot, Claude, and Azure OpenAI function calling translating natural language into structured, governed actions
When you work this way, you are not “teaching the model business logic.”
You are:
- Declaring a palette of safe, typed actions
- Allowing the LLM to route between them
- Keeping state, commitments, and guarantees in Azure services designed for that purpose
This aligns tightly with Microsoft’s philosophy:
AI should call the platform; it should not be the platform.
3. The moving parts (and how they fit)
Here is the core assembly, as used in the Rahsi Framework™.
3.1 Azure OpenAI / Foundry function calling
Function calling and assistant functions are where intent turns into structured input:
- System and tool definitions describe what is possible
- The model chooses a function and arguments
- You verify, log, and hand off to downstream services
Instead of an ad-hoc prompt like “create an approval workflow for X”, you move to:
- `createApprovalWorkflow(scope, approvers, sla, reason)`
- `enqueueDurableInstance(instanceId, workflowType, payload)`
- `submitBusinessApproval(requestId, decision, justification)`
The model’s “job” becomes:
- Interpret user text
- Map to the right tool with the right parameters
- Respect the trust boundary and labels you’ve defined
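As a sketch of what "system and tool definitions describe what is possible" looks like, here is `createApprovalWorkflow` declared in the chat-completions `tools` format. The outer shape (`type`/`function`/`parameters` as JSON Schema) follows the documented format; the descriptions and field choices are illustrative assumptions.

```python
# Sketch: declaring one tool from the palette above in the `tools` format
# used by Azure OpenAI chat completions. Descriptions are illustrative.
tools = [
    {
        "type": "function",
        "function": {
            "name": "createApprovalWorkflow",
            "description": "Start a governed approval workflow inside the trust boundary.",
            "parameters": {
                "type": "object",
                "properties": {
                    "scope": {"type": "string", "description": "System or resource the approval covers"},
                    "approvers": {"type": "array", "items": {"type": "string"}},
                    "sla": {"type": "string", "description": "e.g. '24h'"},
                    "reason": {"type": "string"},
                },
                "required": ["scope", "approvers", "sla", "reason"],
            },
        },
    }
]
```

Everything the model can do is enumerated here; anything not declared is simply not reachable through this surface.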
3.2 Durable Functions: the workflow engine behind the scenes
Durable Functions are where long-running, CVE-tempo-aware workflows live:
- Orchestrator functions
  - Define the workflow steps
  - Coordinate fan-out/fan-in
  - Call activities, wait for events, and schedule durable timers
- Activity functions
  - Perform idempotent units of work
  - Talk to your systems-of-record, queues, APIs, or Logic Apps
- Patterns that matter here
  - Fan-out/fan-in for parallel work
  - Human interaction patterns (wait for external event)
  - Durable timers for SLAs and reminders
  - Compensation patterns when downstream actions need to be rolled back or adjusted
The orchestrator becomes the execution context for any AI-triggered workflow:
- Copilot or Claude calls a function
- The function enqueues a Durable Functions instance
- The orchestrator owns the lifecycle of that request under CVE tempo
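To make the human-interaction pattern concrete, here is a deliberately tiny in-memory illustration of the control shape an orchestrator has: do an activity, then suspend until an external event (the human decision) arrives. This is emphatically *not* the Durable Functions SDK — just the generator-style flow it gives you for real.

```python
# Toy illustration of the orchestrator control shape (NOT the Azure SDK):
# yield describes what the engine should do; the engine resumes the generator
# with the result once it is available.
def approval_orchestrator(payload):
    # Step 1: an idempotent activity, e.g. creating the approval request.
    yield ("call_activity", "create_approval", payload)
    # Step 2: suspend until a human decision arrives as an external event.
    decision = yield ("wait_for_event", "ApprovalDecision")
    # Step 3: branch; compensation logic would live in the reject arm.
    return "approved" if decision == "approve" else "compensate"

def run(orchestrator, payload, events):
    """Drive the generator, answering wait_for_event steps from `events`."""
    gen = orchestrator(payload)
    try:
        step = next(gen)
        while True:
            if step[0] == "call_activity":
                step = gen.send(None)  # activity result unused in this toy
            elif step[0] == "wait_for_event":
                step = gen.send(events[step[1]])
    except StopIteration as done:
        return done.value
```

In the real service the engine also checkpoints state after each yield, which is exactly what makes these workflows survive restarts and long waits.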
3.3 Logic Apps: integration fabric and connectors
Logic Apps sit beside Durable Functions as the integration backbone:
- Managed connectors to SaaS, on-prem, and B2B protocols
- B2B scenarios (EDI, AS2), enterprise messaging, and APIs
- Built-in triggers and actions for events and schedules
You can:
- Call Logic Apps from Durable Functions (for integration-heavy segments)
- Expose Logic Apps as APIs that the orchestrator can invoke
The Logic App layer respects the same trust boundary and execution context, but is optimized for:
- Configurable connectors
- Visual integration logic
- Line-of-business integration patterns
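A sketch of the handoff from a Durable Functions activity to a Logic App request trigger: shape an HTTP POST, carry the orchestration instance ID for correlation, and plan retries for transient failures. The correlation header name and field layout are assumptions for this sketch, not a platform contract.

```python
import json

def build_logic_app_request(trigger_url: str, instance_id: str, payload: dict) -> dict:
    """Shape an HTTP call to a Logic App request trigger.
    The x-correlation-id header name is an illustrative assumption."""
    return {
        "url": trigger_url,
        "method": "POST",
        "headers": {
            "Content-Type": "application/json",
            "x-correlation-id": instance_id,  # ties the Logic App run to the durable instance
        },
        "body": json.dumps(payload),
    }

def retry_delays(attempts: int, base: float = 2.0) -> list:
    """Exponential backoff schedule (seconds) for transient integration failures."""
    return [base ** n for n in range(attempts)]
```

The important design choice is that the correlation ID crosses the boundary with the call, so the Logic App run history can later be joined back to the durable instance history.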
3.4 Power Automate: approvals and business-facing workflows
Power Automate modern approvals give you an enterprise-grade approvals engine:
- Sequential approvals
- Parallel approvals
- Re-assignments and escalation patterns
- Business approvals templates and setup guidance
You plug Copilot / Claude into this world by:
- Letting the model draft and contextualize approval requests
- Routing those requests into modern approvals
- Feeding back decision outcomes into Durable Functions or Logic Apps
AI becomes:
- A narrator and drafter of approvals
- A consumer and explainer of approval state
The actual decision stays with humans, managed by Power Automate.
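The "AI drafts, humans decide" split can be sketched as two small functions: one that shapes a model-drafted summary into an approvals payload, and one that maps the human outcome back into a durable external event. Field names here are illustrative assumptions, not the Power Automate connector contract.

```python
def draft_approval_request(model_summary: str, requester: str, approvers: list) -> dict:
    """The model drafts and contextualizes; this shapes it for an approvals flow.
    Field names are illustrative, not a connector schema."""
    return {
        "title": f"Approval requested by {requester}",
        "details": model_summary,
        "assignedTo": approvers,
    }

def outcome_to_event(outcome: str) -> tuple:
    """Map a recorded human decision back into a durable external event (name, payload)."""
    if outcome not in {"Approve", "Reject"}:
        raise ValueError(f"Unexpected outcome: {outcome}")
    return ("ApprovalDecision", outcome)
```

Note that nothing in this path lets the model approve anything: it can only propose text, and only the recorded human outcome flows back into the orchestrator.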
3.5 Connectors and Copilot Studio: “what Copilot can touch”
The last piece is eligibility:
- Standard connectors (Graph, SharePoint, Dataverse, Dynamics, etc.)
- Custom connectors for internal APIs
- Copilot Studio advanced, knowledge, and custom connectors
Together, they define:
- Which systems are reachable
- Under which identity
- With what shape of input and output
This is where how Copilot honors labels in practice becomes concrete:
- Labels and permissions constrain what is eligible for grounding and actions
- Connectors and Copilot Studio configurations turn those constraints into explicit design, not assumptions
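An "allowed to touch" inventory can be made executable. This is a minimal deny-by-default sketch under assumed names: the label ranking, identities, and connector sets are all illustrative, not a Microsoft API.

```python
# Hypothetical eligibility inventory: which connectors each identity may touch,
# and the highest sensitivity label that surface may carry and still be eligible.
LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

ELIGIBLE = {
    "copilot-hr-agent": {"connectors": {"SharePoint", "Dataverse"}, "max_label": "Confidential"},
}

def is_eligible(identity: str, connector: str, label: str) -> bool:
    """Explicit design, not assumption: deny unless inventoried as eligible."""
    entry = ELIGIBLE.get(identity)
    if entry is None or connector not in entry["connectors"]:
        return False
    return LABEL_RANK[label] <= LABEL_RANK[entry["max_label"]]
```

The value of writing the inventory down like this is that "what Copilot can touch" becomes a reviewable artifact instead of tribal knowledge.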
4. Mapping the stack as a governance surface (table)
Here is a compact view of how the pieces align, in one table.
| Layer | Role in Execution Context | Key Microsoft Services | Governance Focus |
|---|---|---|---|
| AI Front-Door | Interprets intent, selects tools, narrates outcomes | Azure OpenAI function calling, Assistant functions, Copilot, Claude | System prompts, tool schemas, label-aware behavior, logging of tool calls |
| Durable Workflow Engine | Owns long-running workflows and CVE-tempo windows | Azure Functions (Durable Functions: orchestrator + activities) | State, timers, compensation, SLAs, human interaction, idempotency |
| Integration Fabric | Connects systems-of-record and SaaS | Azure Logic Apps, API Management | Connectors, retry policies, error handling semantics, boundaries around external systems |
| Approvals and Human Decisions | Governs business approvals and decision points | Power Automate modern approvals, business approvals templates | Approval chains, escalation, timeouts, audit trails |
| Connectors and Data Access | Defines what AI is eligible to touch | Power Platform connectors, Logic Apps connectors, Copilot Studio advanced and knowledge connectors | Data boundaries, identities, scopes, “allowed to touch” inventory |
| Observability and Evidence | Makes the whole execution context narratable under CVE tempo | Azure Monitor, Application Insights, Log Analytics, Azure Storage, diagnostic settings | Traces, logs, metrics, correlation IDs, durable instance histories, evidence windows |
| Governance Narrative (Rahsi Framework™) | Describes the whole thing as one trust boundary and execution context, in human language | Architecture diagrams, design docs, runbooks, evidence packs | Designed behavior, stable language, consistency with how Copilot honors labels in practice across collaboration |
5. CVE-tempo windows and “evidence-ready” workflows
The Rahsi perspective borrows from governance work:
- A CVE-tempo window is a period where the pressure on your estate increases (new CVE, new regulation, high-attention event).
- The move is not to panic; the move is to make behavior more deterministic and evidence-ready.
In this world, a “good” window looks like this:
- Intent arrives via AI front-door
  - Copilot or Claude receives a request
  - Function calling selects the appropriate tool
- Durable Functions orchestrator takes over
  - Starts or resumes a workflow instance
  - Fans out work, waits for humans, or sleeps via durable timers
- Logic Apps and connectors perform integration work
  - Systems-of-record remain the source of truth
  - Actions are logged and observable
- Power Automate governs approvals
  - Business decisions are recorded with clear context
  - Outcomes are fed back into the orchestrator
- Observability builds an evidence window
  - Traces, logs, approval records, and state histories are correlated
  - You can reconstruct “what happened” and “who decided” for that window
- Narrative remains aligned with Microsoft’s designed behavior
  - You are using the platform the way it is meant to be used
  - The story you tell in an architecture review matches what the platform is actually doing
This is what it looks like when designed behavior is not just a slogan, but a shape you can draw.
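The evidence-window step above can be sketched as a small reconstruction function: gather every record that shares a correlation ID, order it in time, and surface who decided. The record fields here are illustrative assumptions, not a Log Analytics schema.

```python
def build_evidence_window(records: list, correlation_id: str) -> dict:
    """Collect every trace sharing one correlation ID into an evidence record
    that answers 'what happened' and 'who decided'. Fields are illustrative."""
    window = [r for r in records if r.get("correlation_id") == correlation_id]
    window.sort(key=lambda r: r["timestamp"])  # chronological narrative
    return {
        "correlation_id": correlation_id,
        "events": [r["event"] for r in window],
        "deciders": sorted({r["actor"] for r in window if r.get("kind") == "decision"}),
    }
```

In practice the records would come from Application Insights, approval histories, and durable instance histories; the joining key is the correlation ID you propagated at every hop.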
6. A simple mental model: three rings
When explaining this in a review, I like to use three quiet rings:
- Inner ring – Trust boundary
  - Durable Functions, Logic Apps, Power Automate, connectors, data stores
  - Where state, commitments, and guarantees live
- Middle ring – Execution context
  - How these services choreograph: workflows, integrations, approvals, observability
  - How you define patterns that respond to CVE-tempo pressure without rewriting everything
- Outer ring – AI narrators
  - Copilot, Claude, ChatGPT, and front-ends
  - They speak for the inner rings, but they do not replace them
If the outer ring disappeared tomorrow, your execution context would still exist.
AI is an accelerator and interpreter for a workflow engine you already own.
7. A small, concrete pattern
To make this less abstract, here is a minimal pattern you can build today.
- Define tools for Copilot / Claude
  - `create_security_approval(request_payload)`
  - `check_approval_status(request_id)`
- Back those tools with Durable Functions + Power Automate
  - `create_security_approval` starts a Durable Functions orchestrator instance
  - The orchestrator calls a Logic App or Power Automate flow to create a modern approval
  - When the approval is complete, Power Automate posts back to the orchestrator via a durable external event
- Expose a read-only status tool
  - `check_approval_status` reads orchestrator state or a queryable store
  - Copilot / Claude narrates the status back to the user
- Add observability
  - Correlate: tool call ID, orchestrator instance ID, approval ID, and logs
  - You now have an evidence-ready trace of the entire approval lifecycle
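The two tool handlers can be sketched against an in-memory store to show their contract. In the real pattern the state lives in Durable Functions storage and the decision arrives as an external event from Power Automate; the in-memory dict and the `receive_external_decision` helper here are stand-ins for that.

```python
import uuid

# In-memory stand-in for durable orchestration state. In the real pattern,
# state lives in Durable Functions storage and status reads hit the instance APIs.
_instances = {}

def create_security_approval(request_payload: dict) -> str:
    """Tool handler: start a workflow instance, return its ID for later polling."""
    request_id = str(uuid.uuid4())
    _instances[request_id] = {"status": "Pending", "payload": request_payload}
    return request_id

def check_approval_status(request_id: str) -> str:
    """Read-only tool handler: narrate current state, never mutate it."""
    instance = _instances.get(request_id)
    return instance["status"] if instance else "Unknown"

def receive_external_decision(request_id: str, decision: str) -> None:
    """Stand-in for Power Automate posting the decision back as a durable external event."""
    _instances[request_id]["status"] = decision
```

Notice the asymmetry: the write path goes through the workflow engine and a human decision, while the model-facing status tool is strictly read-only.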
The result:
- Users feel like they are “just chatting” with Copilot or Claude.
- Architects and security leaders see a governed workflow engine running inside Azure services they recognize and trust.
8. Why this matters for Azure leaders
If you already run:
- Azure OpenAI / Azure AI Foundry function calling
- Durable Functions workloads
- Logic Apps or Power Automate flows
- Copilot Studio or M365 Copilot
…then you already have all the ingredients for this pattern.
This article is not asking you to adopt a new product.
It is offering a way to:
- See Copilot and Claude as narrators and orchestrators of your existing execution context
- Keep AI inside a clearly drawn trust boundary
- Ensure your stories about designed behavior match what the platform is actually doing
- Stay consistent with how Copilot honors labels in practice when data and actions are involved
Quiet, humble, technically sharp.
That is the energy I wanted The Rahsi Framework™ to carry into this space.
If you build or adapt this pattern, I would love to see how you shape it for your own estate: different CVE-tempo windows, connector sets, or industries.
Comments, architectures, or diagrams are very welcome.
I’m especially interested in how you keep your execution context narratable when the tempo rises.