<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Marc Gille-Sepehri</title>
    <description>The latest articles on DEV Community by Marc Gille-Sepehri (@marcgillesepehri).</description>
    <link>https://dev.to/marcgillesepehri</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3873378%2F6d642848-2404-400f-894f-61c3a7f393e4.png</url>
      <title>DEV Community: Marc Gille-Sepehri</title>
      <link>https://dev.to/marcgillesepehri</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/marcgillesepehri"/>
    <language>en</language>
    <item>
      <title>Why We Keep Process Data Outside the Engine — and Why It Changes Everything for Agentic BPM</title>
      <dc:creator>Marc Gille-Sepehri</dc:creator>
      <pubDate>Sat, 11 Apr 2026 11:19:37 +0000</pubDate>
      <link>https://dev.to/marcgillesepehri/why-we-keep-process-data-outside-the-engine-and-why-it-changes-everything-for-agentic-bpm-27p4</link>
      <guid>https://dev.to/marcgillesepehri/why-we-keep-process-data-outside-the-engine-and-why-it-changes-everything-for-agentic-bpm-27p4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8bkkfmys0gtxlpdxoy6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8bkkfmys0gtxlpdxoy6.png" alt=" " width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is a design decision at the heart of &lt;a href="https://github.com/The-Real-Insight/in-concert" rel="noopener noreferrer"&gt;in-concert&lt;/a&gt; that surprises people when they first encounter it: &lt;strong&gt;the engine knows nothing about your data&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;No domain objects stored inside the engine. No variable interpolation in BPMN expressions. No built-in scripting that reaches into your database. When a process instance is running, the engine holds exactly one thing that belongs to you: an &lt;code&gt;instanceId&lt;/code&gt;. Everything else — documents, application state, business context, AI responses — lives in your systems, bound to that id.&lt;/p&gt;

&lt;p&gt;This is not an oversight. It is the central architectural choice, and it shapes everything else about how in-concert works. It is also where #agenticbpm begins.&lt;/p&gt;

&lt;h2&gt;The Problem with Data-Coupled Engines&lt;/h2&gt;

&lt;p&gt;Traditional BPM engines — Camunda, Flowable, Activiti — manage process variables alongside process state. You deploy a BPMN model, pass in variables, and the engine stores them, interpolates them into conditions, and threads them through the execution. It is convenient at first. Then reality arrives.&lt;/p&gt;

&lt;p&gt;Your process needs to evaluate a condition against data that lives in your ERP. Or the "variable" is actually a 40-page document. Or the service task needs to call an LLM with context assembled from five different sources. Or your security model requires that PII never leaves your own database.&lt;/p&gt;

&lt;p&gt;Suddenly the engine is not a neutral orchestrator. It has become a data store you did not ask for, a security boundary you have to manage, and an integration point that does not understand the shape of your actual domain.&lt;/p&gt;

&lt;p&gt;We built in-concert after running into exactly these problems. The solution was radical simplicity: &lt;strong&gt;the engine does not store your data, period&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;The Three Touch Points — and Why They Are All Fuzzy&lt;/h2&gt;

&lt;p&gt;In any BPMN process, there are three places where execution intersects with the outside world. In a traditional engine, these are handled by scripting, expression languages, and built-in connectors. In in-concert, they are handled by your code — deliberately, explicitly, and with full access to everything you know.&lt;/p&gt;

&lt;h3&gt;1. Service Tasks&lt;/h3&gt;

&lt;p&gt;A service task means "call something external and continue." In a classic engine, you write a connector or a script that runs inside the engine's JVM or Node.js process. The engine manages the call, handles the result, and stores output variables.&lt;/p&gt;

&lt;p&gt;In in-concert, the engine calls your handler and waits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;onServiceCall&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;instanceId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Full control. Assemble context from anywhere.&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;myDataStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getContextFor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;instanceId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;myLLM&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;extensions&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;toolId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;completeExternalTask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;instanceId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;workItemId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The handler receives the &lt;code&gt;instanceId&lt;/code&gt; and whatever metadata you put on the BPMN node (&lt;code&gt;tri:toolId&lt;/code&gt;, &lt;code&gt;tri:toolType&lt;/code&gt;, custom extensions). It completes when it is done — whether that is 50ms or 50 minutes later, whether via a direct response, a message queue reply, or a webhook. The engine waits. It does not care how long.&lt;/p&gt;
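&lt;p&gt;To make the binding concrete, here is a minimal sketch of the application-side store this pattern implies. The &lt;code&gt;ProcessContext&lt;/code&gt; shape and the function names are illustrative assumptions, not part of the in-concert API — the point is only that domain data is keyed by &lt;code&gt;instanceId&lt;/code&gt; in &lt;em&gt;your&lt;/em&gt; storage:&lt;/p&gt;

```typescript
// Illustrative sketch: domain data lives in YOUR store, keyed by instanceId.
// The engine never sees these objects; it only carries the id.
type ProcessContext = { customerId: string; amount: number; documents: string[] };

const contextByInstance = new Map();

function bindContext(instanceId: string, ctx: ProcessContext): void {
  contextByInstance.set(instanceId, ctx);
}

function getContextFor(instanceId: string): ProcessContext {
  const ctx = contextByInstance.get(instanceId);
  if (ctx === undefined) {
    throw new Error('No context bound for instance ' + instanceId);
  }
  return ctx;
}
```

&lt;p&gt;In production this would be a table or collection with an &lt;code&gt;instanceId&lt;/code&gt; column rather than an in-memory map, but the contract is the same: the id is the only thing the engine and your data have in common.&lt;/p&gt;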

&lt;p&gt;But "call an LLM" understates what the handler can actually do. Consider what happens in a real agentic workflow:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The LLM as context mapper.&lt;/strong&gt; Your process node carries a &lt;code&gt;tri:toolId&lt;/code&gt; that identifies an MCP tool — say, &lt;code&gt;search-crm&lt;/code&gt; or &lt;code&gt;generate-proposal&lt;/code&gt;. The tool has a defined input schema. Your application data has its own shape. The LLM's job here is not to answer a question — it is to map your domain objects into the tool's input format, invoke the tool, and map the structured output back into whatever your process needs next.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;onServiceCall&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;instanceId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;toolId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;extensions&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;toolId&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;       &lt;span class="c1"&gt;// e.g. "search-crm"&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;mcpClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getTool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;toolId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;           &lt;span class="c1"&gt;// get tool + schema&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;myDataStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getContextFor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;instanceId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// LLM maps context → tool input schema&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;toolInput&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapToToolInput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;inputSchema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Invoke the MCP tool&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;toolOutput&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;mcpClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;toolId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;toolInput&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// LLM maps tool output → domain result&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mapFromToolOutput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;outputSchema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;toolOutput&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;completeExternalTask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;instanceId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;workItemId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern — &lt;strong&gt;LLM as the mapping and reasoning layer around structured tool calls&lt;/strong&gt; — is where BPMN and agentic AI (i.e. #agenticbpm) meet most naturally. The process definition models the flow and the intent. The BPMN node identifies which tool to use. The LLM handles the fuzzy work of bridging between your data model and the tool's contract. And because all of this happens in your code, you can swap models, adjust prompts, and iterate on the mapping logic without touching the process definition at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The handler is also where long-running integration lives.&lt;/strong&gt; Publish a job to a queue, return immediately, and complete the task when the consumer acknowledges. Poll an external system until it is ready. Wait for a webhook. None of this requires anything special from the engine — it simply waits until your handler calls &lt;code&gt;completeExternalTask&lt;/code&gt;.&lt;/p&gt;
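&lt;p&gt;The queue-and-callback pattern can be sketched as follows. &lt;code&gt;completeExternalTask&lt;/code&gt; is the SDK call shown earlier; the pending-task map, the &lt;code&gt;jobId&lt;/code&gt; field, and the callback entry point are illustrative assumptions:&lt;/p&gt;

```typescript
// Illustrative sketch: defer completion until an external callback arrives.
// The engine keeps waiting; nothing special is required on its side.
const pendingByJob = new Map();

function rememberPendingTask(instanceId: string, workItemId: string, jobId: string): void {
  // Called from onServiceCall after publishing the job to a queue.
  pendingByJob.set(jobId, { instanceId, workItemId });
}

async function onJobFinished(jobId: string, result: unknown, client: any) {
  // Called from the queue consumer or webhook handler when the work is done.
  const task = pendingByJob.get(jobId);
  if (task === undefined) return false;
  pendingByJob.delete(jobId);
  await client.completeExternalTask(task.instanceId, task.workItemId, { result });
  return true;
}
```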

&lt;h3&gt;2. Transition Conditions — the XOR Gateway&lt;/h3&gt;

&lt;p&gt;This is where the fuzziness gets interesting. In a BPMN XOR gateway, one of several outgoing flows is selected based on a condition. In a traditional engine, you write an expression: &lt;code&gt;${amount &amp;gt; 1000}&lt;/code&gt;, or a FEEL expression, or a Groovy script. The engine evaluates it against stored variables.&lt;/p&gt;

&lt;p&gt;But what if the condition is not a clean boolean expression? What if it is "does this application look fraudulent?" or "is this document complete enough to proceed?" or "based on the conversation so far, which department should handle this?"&lt;/p&gt;

&lt;p&gt;These are not expressions. They are judgements — and judgements require context, and often require an LLM.&lt;/p&gt;

&lt;p&gt;In in-concert, gateway decisions are routed to your handler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;onDecision&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;instanceId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// payload.transitions is the list of outgoing flows with names and conditions&lt;/span&gt;
    &lt;span class="c1"&gt;// You evaluate — using your data, your rules, your LLM&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;myDataStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getContextFor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;instanceId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;selected&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;myRouter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;evaluate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;transitions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;submitDecision&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;instanceId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;decisionId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;selectedFlowIds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;selected&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;flowId&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The engine gives you the transition options. You choose. The evaluation logic — however simple or sophisticated — belongs to you. You can use a simple &lt;code&gt;if/else&lt;/code&gt;. You can call an LLM with the full application context. You can run a rules engine. The engine does not prescribe how you decide; it only records that you did.&lt;/p&gt;
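&lt;p&gt;Even the trivial case makes the division of labour clear. Here is a sketch of an &lt;code&gt;if/else&lt;/code&gt; evaluator over the transitions the engine hands you — the &lt;code&gt;name&lt;/code&gt; matching and the &lt;code&gt;amount&lt;/code&gt; field are illustrative assumptions:&lt;/p&gt;

```typescript
// Illustrative sketch: the simplest possible gateway evaluator.
// `transitions` is the list the engine delivers; `context` is your own data.
type Transition = { flowId: string; name: string };

function pickFlow(transitions: Transition[], context: { amount: number }): string {
  // A plain if/else is a perfectly valid evaluator — so is an LLM call.
  const wanted = context.amount > 1000 ? 'high-value' : 'standard';
  const match = transitions.find(t => t.name === wanted);
  if (match === undefined) {
    throw new Error('No outgoing flow named ' + wanted);
  }
  return match.flowId;
}
```

&lt;p&gt;Swapping this function for an LLM-backed or rules-engine-backed evaluator changes nothing about the process definition — only your handler.&lt;/p&gt;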

&lt;h3&gt;3. Human Tasks — the Worklist&lt;/h3&gt;

&lt;p&gt;User interaction is the most obviously fuzzy of the three. A human task is not deterministic. The user brings judgement, context, domain knowledge, and occasionally the wrong answer. The task might be "review this contract," "approve this expense," or "assess whether this customer qualifies."&lt;/p&gt;

&lt;p&gt;In in-concert, human tasks are projected to a queryable worklist. Your UI queries it, filtered by role, by claimed status, by instance. The user sees the task, opens your application where the full document and context live, makes a decision, and your code calls &lt;code&gt;completeUserTask()&lt;/code&gt; with the result.&lt;/p&gt;

&lt;p&gt;The engine never renders a form. It never stores the contract. It never knows what the user saw. It only knows that a human task at a given node in a given process instance was completed with a given result — and it advances accordingly.&lt;/p&gt;

&lt;p&gt;This lets you build any interaction model: cherry-picking worklists, supervisor assignment, AI-assisted pre-screening before human review. The engine is the backbone, not the bottleneck.&lt;/p&gt;
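&lt;p&gt;A sketch of what a UI backend might do with the worklist. &lt;code&gt;completeUserTask&lt;/code&gt; is named above; the worklist query shape, the &lt;code&gt;taskId&lt;/code&gt; field, and the &lt;code&gt;decide&lt;/code&gt; callback are illustrative assumptions:&lt;/p&gt;

```typescript
// Illustrative sketch: a query-review-complete cycle against the worklist.
async function reviewNextTask(client: any, role: string, decide: Function) {
  // The query/filter shape here is an assumption for illustration.
  const tasks = await client.queryWorklist({ role, claimed: false });
  if (tasks.length === 0) return null;

  const task = tasks[0];
  // Your app loads the document and context for task.instanceId from YOUR
  // store, renders them, and collects the human decision.
  const result = await decide(task);

  await client.completeUserTask(task.instanceId, task.taskId, result);
  return task.taskId;
}
```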

&lt;h2&gt;What This Unlocks for Agentic BPM&lt;/h2&gt;

&lt;p&gt;Here is the part that we find genuinely exciting.&lt;/p&gt;

&lt;p&gt;AI agents need orchestration. A single LLM call is not a workflow — it is a function. Useful, but limited. Real agentic systems involve sequences of steps, parallel branches, human checkpoints, error handling, retries, long-running waits. They need state across time. They need the ability to hand off between AI and human. They need audit trails.&lt;/p&gt;

&lt;p&gt;BPMN is a remarkably good fit for this. It has been modelling complex, long-running processes for decades. It handles parallelism, subprocesses, boundary events, timers, and message correlation out of the box. And it is visual — a BPMN diagram is something a business analyst and a developer can read together.&lt;/p&gt;

&lt;p&gt;in-concert brings BPMN to agentic systems with a clean separation: &lt;strong&gt;the engine handles the orchestration; your code handles the intelligence&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Service tasks become LLM invocations. Gateway decisions become LLM evaluations against your domain context. Human tasks become the checkpoints where a person reviews or overrides what the AI decided. And because all data and logic live outside the engine, you can iterate on your prompts, your models, and your routing logic without touching the process definition.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;instanceId&lt;/code&gt; is the thread that holds it together. Every LLM call, every database query, every human task can be correlated to a specific process instance. You know exactly where in the process you are, what decisions were made, and what the audit trail looks like — because in-concert records all of that.&lt;/p&gt;

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;in-concert is open source, MIT-licensed (with attribution), and published on npm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @the-real-insight/in-concert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The SDK works in two modes. For microservice deployments, run the engine as a standalone service and connect via REST and WebSocket. For embedded or test use, run it directly in-process against MongoDB — same API, no server needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;BpmnEngineClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@the-real-insight/in-concert/sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// REST mode&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BpmnEngineClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;rest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;baseUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http://localhost:3000&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Local / embedded mode&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BpmnEngineClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;local&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full quick start, API reference, and BPMN conformance matrix are in the &lt;a href="https://github.com/The-Real-Insight/in-concert" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;. The README documents every SDK method with accurate type signatures pulled directly from the package.&lt;/p&gt;

&lt;h2&gt;Come Build With Us&lt;/h2&gt;

&lt;p&gt;in-concert is early. The BPMN subset is intentionally focused — we implement what production workflows actually need, and we fail loudly on anything we do not support yet. There is meaningful work to be done on the conformance surface, the developer experience, and the agentic integration patterns.&lt;/p&gt;

&lt;p&gt;If this resonates with you — if you have built on BPM engines and felt the friction of data-coupled orchestration, or if you are thinking about how to bring structure to agentic AI workflows — we would love to have you involved.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Star the repo.&lt;/strong&gt; Try it on a real process. Open an issue. Submit a PR. The contribution guide is in &lt;code&gt;docs/contributing.md&lt;/code&gt; and there are &lt;code&gt;good first issue&lt;/code&gt; labels for anyone who wants to start small.&lt;/p&gt;

&lt;p&gt;We are The Real Insight GmbH, and we are building the engine layer for #agenticbpm. This is just the beginning.&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://github.com/The-Real-Insight/in-concert" rel="noopener noreferrer"&gt;github.com/The-Real-Insight/in-concert&lt;/a&gt;&lt;br&gt;
→ &lt;a href="https://www.npmjs.com/package/@the-real-insight/in-concert" rel="noopener noreferrer"&gt;npmjs.com/package/@the-real-insight/in-concert&lt;/a&gt;&lt;br&gt;
→ &lt;a href="https://the-real-insight.com" rel="noopener noreferrer"&gt;the-real-insight.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Powered by The Real Insight GmbH BPMN Engine — &lt;a href="https://the-real-insight.com" rel="noopener noreferrer"&gt;the-real-insight.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>bpmn</category>
      <category>aiagents</category>
      <category>node</category>
      <category>agenticbpm</category>
    </item>
  </channel>
</rss>
