DEV Community

Thiago Paz
Stop putting business logic in your integration layer — here's the pattern we built

TL;DR — We had business logic scattered across an integration service. Debugging was painful, onboarding was slow, and every new flow was a gamble. We fixed it by moving all orchestration into our platform via a standardized job pattern. This post walks through the architecture, the full end-to-end flow, and a copy-paste recipe for creating new jobs.

The problem nobody wants to admit

You know the feeling. You open a service called ExternalApi expecting to find... API calls. Instead you find business rules, conditional flows, client-specific logic, and three years of "temporary" fixes.

That was us.

Every integration with our external system was its own snowflake. Different conventions, different error handling, different logging strategies — or no logging at all. Onboarding a new developer meant a two-hour walkthrough just to explain why a single job worked the way it did.

We needed to fix this. Not with a rewrite — with a pattern.


The principle: your platform owns the rules

The core decision was simple but had big consequences:

The integration service handles transport. Our platform owns the business logic.

This means:

  • The external service calls an endpoint and gets back a result
  • All orchestration, validation, rule application, and logging lives in our platform
  • The integration job becomes a dumb but reliable pipe

With this shift, every new integration follows the same skeleton. The domain knowledge stays where it belongs — inside the product.


Architecture overview

Here's how the layers map out:

Scheduler / Orchestrator
        │
        ▼
JobsController              ← route + auth (query key)
        │
        ▼
ClientUpdateApp             ← orchestration + business rules
        │
        ├── JobConfiguracao      ← job config per client (JSON)
        │
        ├── ExternalSystemApi    ← external data fetch + ack
        │
        └── Domain (Workflow, Scheduling, Notifications)

Each layer has a single responsibility. The controller authenticates and delegates. The app orchestrates and applies rules. The infra layer talks to the outside world.


The full flow: JobSyncCompletedRecords

Let's walk through a real job. This one syncs completed records from the external system back into our platform.

Endpoint

GET /api/Jobs/JobSyncCompletedRecords?key=<job-key>
Response                     When
200 OK                       Job completed successfully
401 Unauthorized             Invalid or missing key
500 Internal Server Error    Unhandled exception (logged automatically)

Step by step

1. Scheduler fires the route

An external scheduler (cron, Azure Timer, whatever fits your stack) calls the endpoint. No payload needed — configuration lives in the database.

2. Controller validates and delegates

[HttpGet]
[Route("JobSyncCompletedRecords")]
public async Task<HttpResponseMessage> JobSyncCompletedRecords(string key)
{
    return await ExecuteJobWithKeyAsync(
        key,
        () => _clientUpdateApp.JobSyncCompletedRecordsAsync(),
        "Records synced successfully"
    );
}

ExecuteJobWithKeyAsync is a shared helper that handles key validation, wraps execution in try/catch, and returns the appropriate HTTP response. Every job uses it — zero boilerplate per route.
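If you're building something similar, the helper can be as small as this. A sketch under assumptions: `_configuredJobKey` and the log call details are illustrative, not our exact implementation — only the shape matters.

```csharp
// Sketch of the shared helper (ASP.NET Web API 2 style).
// _configuredJobKey is a hypothetical name for wherever you store the secret.
private async Task<HttpResponseMessage> ExecuteJobWithKeyAsync(
    string key, Func<Task> jobAction, string successMessage)
{
    // Key validation: every job route goes through this single gate
    if (string.IsNullOrEmpty(key) || key != _configuredJobKey)
        return Request.CreateResponse(HttpStatusCode.Unauthorized, "Invalid or missing key");

    try
    {
        await jobAction();
        return Request.CreateResponse(HttpStatusCode.OK, successMessage);
    }
    catch (Exception ex)
    {
        // Unhandled exceptions are logged automatically, as promised by the 500 row above
        _appLogService.LogException(ex, new { Job = successMessage });
        return Request.CreateResponse(HttpStatusCode.InternalServerError, "Job failed — see logs");
    }
}
```

The payoff is that auth, error handling, and response mapping are decided once, in one place, instead of re-decided per route.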

3. App loads configuration

public async Task JobSyncCompletedRecordsAsync()
{
    await ExecuteJobPerClientAsync(
        JobTypeEnum.SyncCompletedRecords,
        ProcessCompletedRecordsAsync
    );
}

ExecuteJobPerClientAsync fetches the job config from APP_JOB_CONFIG by job type, deserializes the client list from the Configuration JSON field, and calls the action for each client.
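The dispatcher can be sketched like this — repository name, DTO shape, and the use of Newtonsoft.Json are assumptions for illustration, not the exact code:

```csharp
// Sketch of the per-client dispatcher. _jobConfigRepository is a hypothetical
// data-access abstraction over the APP_JOB_CONFIG table.
private async Task ExecuteJobPerClientAsync(
    JobTypeEnum jobType, Func<ClientJobConfigDTO, Task> action)
{
    var config = await _jobConfigRepository.GetByJobTypeAsync(jobType);
    if (config == null)
        return; // missing config must not crash execution — it just means "no clients enabled"

    // Configuration holds a JSON array of per-client settings
    var clients = JsonConvert.DeserializeObject<List<ClientJobConfigDTO>>(config.Configuration)
                  ?? new List<ClientJobConfigDTO>();

    foreach (var client in clients)
        await action(client); // per-client errors are handled inside the action itself
}
```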

4. Per-client processing (parallel, bounded)

private async Task ProcessCompletedRecordsAsync(ClientJobConfigDTO item)
{
    try
    {
        // 1. Fetch completed records from external system
        var records = await _externalApi.GetCompletedRecordsAsync(
            item.ClientId, item.GroupId, item.Route, item.Script
        );

        // 2. Process in parallel (max 6 concurrent)
        await Parallel.ForEachAsync(records, new ParallelOptions { MaxDegreeOfParallelism = 6 },
            async (record, _) =>
            {
                try
                {
                    await UpdateRecordStatusAsync(record);
                    await RegisterWorkflowTriggerAsync(record, "COMPLETED");
                    await ClearPendingItemsAsync(record);
                }
                catch (Exception ex)
                {
                    // Record-level error: log and continue — never abort the batch
                    _appLogService.LogException(ex, new { item.ClientId, item.GroupId });
                }
            });

        // 3. Acknowledge back to external system
        await _externalApi.AcknowledgeRecordsAsync(records, item);

        // 4. Persist execution log
        await _requestLogApp.SaveAsync(item, records.Count);
    }
    catch (Exception ex)
    {
        // Client-level error: log and continue with the next client
        _appLogService.LogException(ex, new { item.ClientId, item.GroupId });
    }
}

Two things worth highlighting here:

  • Bounded parallelism: MaxDegreeOfParallelism = 6 prevents thread explosion on large client batches
  • Item-level resilience: the inner try/catch means a failure on one record is logged but never stops the rest of the batch — without it, a single record's exception would escape Parallel.ForEachAsync and skip the acknowledgment and log steps for the entire client

The configuration contract

Every job reads its client list from a single table:

Table: APP_JOB_CONFIG

Column            Description
JobType           Enum value identifying the job
Route             Endpoint path for external system calls
Configuration     JSON array of client configs
CreatedByUserId   Technical user for execution context

Configuration JSON structure:

[
  {
    "ClientId": 1,
    "GroupId": 10,
    "Route": "/api/records",
    "Script": "GET_COMPLETED",
    "Days": 7,
    "Year": 2025,
    "SendFrequency": 30
  }
]

This means enabling a new client is a database insert — no code change, no deploy.


The recipe: creating a new job in 8 steps

Follow this and you'll have a production-ready job in under two hours.

Step 1 — Add the enum value

public enum JobTypeEnum
{
    SyncCompletedRecords = 1,
    YourNewJob = 2  // 👈 add here
}

Step 2 — Expose the route

[HttpGet]
[Route("YourNewJob")]
public async Task<HttpResponseMessage> YourNewJob(string key)
{
    return await ExecuteJobWithKeyAsync(
        key,
        () => _clientUpdateApp.YourNewJobAsync(),
        "Job executed successfully"
    );
}

Step 3 — Create the entry method

public async Task YourNewJobAsync()
{
    await ExecuteJobPerClientAsync(
        JobTypeEnum.YourNewJob,
        ProcessYourNewJobAsync
    );
}

Step 4 — Implement the per-client action

private async Task ProcessYourNewJobAsync(ClientJobConfigDTO item)
{
    try
    {
        // 1. Fetch data from external system
        var data = await _externalApi.GetYourDataAsync(item);

        // 2. Apply business rules (item-level errors never abort the batch)
        foreach (var record in data)
        {
            try
            {
                await ApplyBusinessRuleAsync(record);
            }
            catch (Exception ex)
            {
                _appLogService.LogException(ex, new { item.ClientId, item.GroupId });
            }
        }

        // 3. Acknowledge back to external system
        await _externalApi.AcknowledgeAsync(data, item);

        // 4. Log the execution
        await _requestLogApp.SaveAsync(item, data.Count);
    }
    catch (Exception ex)
    {
        // Client-level error: log and continue with the next client
        _appLogService.LogException(ex, new { item.ClientId, item.GroupId });
    }
}

Step 5 — Add external system integration methods

In ExternalSystemApi.cs, add the two methods: one for fetching (Get...) and one for acknowledging (Acknowledge...).
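A hypothetical shape for that pair — the HttpClient wiring, RecordDTO type, and ack route here are assumptions to show the pattern, not the real integration code:

```csharp
// Fetch: GET against the client's configured route with its script parameters.
public async Task<List<RecordDTO>> GetYourDataAsync(ClientJobConfigDTO item)
{
    var response = await _httpClient.GetAsync(
        $"{item.Route}?script={item.Script}&groupId={item.GroupId}");
    response.EnsureSuccessStatusCode();

    var json = await response.Content.ReadAsStringAsync();
    return JsonConvert.DeserializeObject<List<RecordDTO>>(json) ?? new List<RecordDTO>();
}

// Acknowledge: POST the processed records back so the external system
// stops returning them on the next cycle. "/ack" is an illustrative path.
public async Task AcknowledgeAsync(List<RecordDTO> data, ClientJobConfigDTO item)
{
    var payload = new StringContent(
        JsonConvert.SerializeObject(data), Encoding.UTF8, "application/json");

    var response = await _httpClient.PostAsync($"{item.Route}/ack", payload);
    response.EnsureSuccessStatusCode();
}
```

Keeping both methods side by side in the same class makes the fetch/ack contract obvious to the next developer.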

Step 6 — Insert the DB config

INSERT INTO APP_JOB_CONFIG (JobType, Route, Configuration, CreatedByUserId)
VALUES (
    2,
    '/api/Jobs/YourNewJob',
    '[{"ClientId": 1, "GroupId": 10}]',
    999
);

Step 7 — Observability is not optional

Your job must emit at minimum:

  • Entry log: job started, which type, timestamp
  • Per-item error log: client id, group id, item id, exception
  • Completion log: total received, total processed, total errors, duration

Without this, you're flying blind in production.
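Concretely, that minimum surface can look like this. A sketch only — the logger method names and the `received`/`processed`/`errors` counters are placeholders for whatever your job already tracks:

```csharp
// Entry log: which job, when it started
var startedAt = DateTime.UtcNow;
_appLogService.LogInfo("Job started",
    new { JobType = JobTypeEnum.YourNewJob, StartedAt = startedAt });

// Per-item error log happens inside the per-client action, e.g.:
// _appLogService.LogException(ex, new { item.ClientId, item.GroupId });

// Completion log: totals and duration, emitted once per run
_appLogService.LogInfo("Job completed", new
{
    JobType = JobTypeEnum.YourNewJob,
    TotalReceived = received,
    TotalProcessed = processed,
    TotalErrors = errors,
    DurationMs = (DateTime.UtcNow - startedAt).TotalMilliseconds
});
```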

Step 8 — Rollout by config

Enable one pilot client. Monitor two full cycles. Validate with the functional team. Then expand. Never enable all clients at once on a first deploy.


The observability checklist

Metric                             Why it matters
Records received per job/client    Detects external system issues early
Records processed successfully     Your primary success signal
Records with errors                Triggers alert if above threshold
Total job duration                 Catches performance regressions
Average time per item              Diagnoses bottlenecks at scale

Alert rules we recommend:

  • 🔴 3+ consecutive failures for the same client → page the team
  • 🟡 Processed/received ratio below 95% → investigate before next cycle

The checklist before you ship

Technical

  • [ ] JobTypeEnum updated
  • [ ] Route created in JobsController
  • [ ] Interface + implementation in ClientUpdateApp
  • [ ] Per-item exception handling in place
  • [ ] External system fetch + ack methods added
  • [ ] APP_JOB_CONFIG record inserted
  • [ ] Request log and exception log covering success and error paths
  • [ ] Idempotency verified — no reprocessing on retry

Tests

  • [ ] Invalid key returns 401
  • [ ] Missing config doesn't crash execution
  • [ ] Empty record list completes with proper log
  • [ ] Single item error doesn't abort the batch
  • [ ] External system ack only fires for successfully processed items
  • [ ] Workflow trigger fires as expected

What we gained

After rolling this out across our first jobs:

  • A new developer can understand any job end-to-end in under 15 minutes
  • Adding a new client to an existing job is a single database insert
  • Debugging a production issue means checking one log table — not tracing across two services
  • Every new job starts from the same skeleton — zero architectural decisions at job creation time

That last one is underrated. Decision fatigue is real. When the pattern is clear, developers ship faster and make fewer mistakes.


What's next

We're looking at:

  • Moving the job key from a query param to a proper secret vault
  • Adding token-based service auth for the medium term
  • Building a lightweight dashboard to visualize job health across all clients in real time

If you've dealt with similar integration patterns — in health tech or elsewhere — I'd love to hear how you approached it. Drop a comment below.
