
Posted on • Originally published at beefed.ai

Integration Testing Checklist for Salesforce Integrations

  • How pre-test validation and contract testing prevent integration regressions
  • API and middleware test scenarios that catch silent failures
  • Data mapping, transformation, and reconciliation checks that protect your records
  • Designing error handling, retries, and performance tests that mirror production
  • Operational Runbook: step‑by‑step checklist and executable test cases
  • Sources

Most integration incidents are predictable: mismatched contracts, undocumented mapping rules, and untested error paths. You stop 70–80% of production breakages by codifying contracts, validating transformations, and treating integrations like testable products rather than one-off scripts.

Integration symptoms are rarely obvious: nightly upserts silently drop rows, duplicate accounts multiply because an external system sent two retries, or an OAuth refresh flow fails after a certificate rotation and your middleware queues pile up. You see business symptoms — missed renewals, wrong revenue numbers, angry support queues — while the root causes hide in schemas, transforms, token lifecycles, or throttling behavior.

How pre-test validation and contract testing prevent integration regressions

Start by shifting left: validate the API contract before any end-to-end wiring. Use a dual approach — schema validation (OpenAPI/WSDL) plus consumer-driven contract tests (contracts-by-example) — so that both the interface definition and the actual consumer expectations are executable artifacts. Pact-style consumer-driven contracts create a small, deterministic specification that the provider must satisfy; the consumer writes the interactions and publishes the contract for provider verification. This prevents interface-level regressions long before integration environments are required.

What that looks like in practice:

  • Capture an authoritative contract: OpenAPI for REST, WSDL for SOAP, or a Pact JSON for consumer examples.
  • Add a dry-run contract verification step in CI that rejects PRs which change request/response shapes relied on by consumers.
  • Version contracts with semantic rules (major = breaking, minor = additive); require a compatibility run for every major bump.

Practical contract example (Pact-style interaction snippet):

{
  "consumer": { "name": "BillingService" },
  "provider": { "name": "SalesforceAPI" },
  "interactions": [
    {
      "description": "create a contact for billing",
      "request": { "method": "POST", "path": "/contacts", "body": { "email": "user@example.com" } },
      "response": { "status": 201, "body": { "id": "003xx000..." } }
    }
  ]
}

Run that contract in CI as unit tests for the consumer and as provider verification on the provider side to catch changes that would otherwise surface only during integration windows.

Important: Contracts are not a substitute for end-to-end tests. They isolate interface assumptions and reduce blast radius, but they won't catch data-quality problems that only appear when full-business-context flows run.

Key references and why they matter:

  • Use consumer-driven contracts to avoid version hell and test only the interactions actually used by consumers.
  • Validate API quotas, Limits headers, and limit-check mechanisms before load or production tests to avoid surprise throttling.

API and middleware test scenarios that catch silent failures

Build test scenarios that emulate real-world misbehavior, not just the happy path. Cover these families of tests and make each executable:

  1. Authentication and authorization flows

    • Validate OAuth 2.0 token refresh paths, certificate rotations, and expired token re-acquisition. Test what happens when refresh_token is revoked mid-flight.
    • Confirm least-privilege scopes do not break required operations.
  2. Connectivity, transient faults, and timeouts

    • Simulate network partitions, DNS failures, sluggish endpoints, and truncated responses.
    • Assert middleware handles partial responses and doesn't create half-objects.
  3. Rate limits and quota behavior

    • Hit the API with burst traffic to observe REQUEST_LIMIT_EXCEEDED / HTTP 403 semantics and how your middleware degrades gracefully. Use the REST limits resource to surface current consumption.
  4. Partial success and multi-status handling

    • For composite/batch endpoints, verify how mixed success/failure returns are surfaced and how rollback/compensation should run.
  5. Idempotency and duplicate handling

    • Re-run the same request (or replay a webhook) and assert no duplicate side effects; implement and test idempotency tokens where supported.
  6. Message ordering and concurrency

    • For asynchronous flows (Platform Events, bulk loads), test out‑of‑order delivery and concurrent writes to the same business key.
  7. Middleware-specific scenarios

    • Validate transformation rules (JSON→CSV→DTO), header propagation (traceparent, X-Correlation-ID), and error-code mapping (map third-party 422 → Salesforce-friendly 400).
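
Error-code mapping like the 422 → 400 example above is easy to unit test when it lives in one small function. A minimal sketch, assuming a hypothetical middleware helper (`mapProviderError` and `STATUS_MAP` are illustrative names, not part of any library):

```javascript
// Hypothetical middleware helper: translate a third-party provider's status
// codes into the codes and messages our Salesforce-facing API returns.
const STATUS_MAP = {
  422: { status: 400, code: "INVALID_PAYLOAD" },      // upstream validation error → bad request
  429: { status: 429, code: "RATE_LIMITED" },         // pass throttling through unchanged
  502: { status: 503, code: "UPSTREAM_UNAVAILABLE" }, // bad gateway → service unavailable
};

function mapProviderError(providerStatus, providerBody) {
  // Anything unmapped surfaces as a generic 500 so it is noticed, not swallowed.
  const mapped = STATUS_MAP[providerStatus] ?? { status: 500, code: "UNMAPPED_UPSTREAM_ERROR" };
  return {
    status: mapped.status,
    body: {
      code: mapped.code,
      upstreamStatus: providerStatus,
      detail: providerBody?.message ?? null,
    },
  };
}
```

Negative tests then assert both directions: known upstream codes map to the documented downstream codes, and unknown codes fall through to the generic error rather than leaking raw provider semantics.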

Example Postman / Newman test snippet for validating a POST response:

pm.test("created contact", function () {
  pm.response.to.have.status(201);
  const body = pm.response.json();
  pm.expect(body).to.have.property("id");
  pm.expect(body.email).to.eql(pm.variables.get("email"));
});

Automate these suites in CI and run them on environment promotion gates. Postman’s guidance on environment parity and automation is a practical place to start for structuring these tests.

Data mapping, transformation, and reconciliation checks that protect your records

Mapping breaks are the most dangerous failure mode because they silently poison production data. Treat mapping as code: document it, test it, and assert it with reconciliation.

Core elements of a mapping validation strategy:

  • A single source-of-truth mapping table (CSV or a Confluence page is fine early on) that lists: external field, source type, transformation rule, target sObject.field, data quality rules, business-key, and owner.
  • Unit tests for transformation logic (e.g., timezone normalization, currency conversion, rounding/truncation). Validate edge-cases like empty strings vs null, zero-values, and default dates.
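
As one sketch of the edge-case testing above, consider a date-normalization transform (the `normalizeDateUtc` name and behavior are assumptions for illustration): it must distinguish "no value" from a real date instead of silently defaulting, and reject unparseable input loudly.

```javascript
// Hypothetical transform: normalize an external date string to UTC ISO-8601.
// Empty string and null mean "no value" — never default them to the epoch.
function normalizeDateUtc(value) {
  if (value === null || value === undefined || value === "") return null;
  const d = new Date(value);
  if (Number.isNaN(d.getTime())) throw new Error(`unparseable date: ${value}`);
  return d.toISOString(); // always UTC, always the same shape
}
```

Unit tests for this function should include a DST boundary timestamp with an explicit offset (for example, an early-morning US date in March) and assert the exact UTC result, plus the null/empty/garbage cases.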

Reconciliation tactics you can automate:

  • Count-based reconciliation: compare the source row count to Salesforce row count for the same time-window and business key scope.
  • Checksum validation: compute a deterministic hash (MD5 or SHA256) of normalized business fields on the source and the Salesforce record; compare mismatches.
  • Field-level sampling: nightly run that compares a sample of rows for critical fields and flags differences.

Example SOQL reconciliation query (compare count of new Opportunities in last 24 hours):

SELECT COUNT() FROM Opportunity WHERE CreatedDate = LAST_N_DAYS:1 AND Integration_Source__c = 'ERP'

Automate a reconciliation job that runs after every bulk ingest, or nightly on a schedule; alert when counts diverge beyond a small threshold (for example, >0.1% or 10 records, whichever is larger). Use business keys (external IDs); never reconcile on Salesforce IDs alone.
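
The "0.1% or 10 records, whichever is larger" rule is simple enough to encode directly in the reconciliation job (the function name here is illustrative):

```javascript
// Divergence rule sketch: alert when source and Salesforce counts differ by
// more than max(0.1% of the source count, 10 records).
function countsDiverge(sourceCount, sfCount, pct = 0.001, minRecords = 10) {
  const tolerance = Math.max(sourceCount * pct, minRecords);
  return Math.abs(sourceCount - sfCount) > tolerance;
}
```

The absolute floor matters for small windows: without it, a 1,000-row nightly load would alert on a single dropped record, burying real signals in noise.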

Table: common mapping problems and test coverage

| Mapping issue | Symptom | Test / Automation |
| --- | --- | --- |
| Missing lookup resolution | Orphaned child records | Unit test: lookup resolves for sample payloads; nightly recon on orphan count |
| Timezone or DST shifts | Dates off by hours, leading to wrong SLAs | Transformation unit tests with DST boundary dates |
| Currency rounding | Billing totals mismatch | Reconcile aggregated sums and compare with source totals |
| Truncation of long strings | Corrupted descriptions | Boundary tests on max field lengths and error capture |

When working with large volumes, prefer Bulk API 2.0 for ingest operations and design reconciliation to run incrementally for performance and lower API consumption. Bulk API 2.0 is the right fit for >2,000 records and uses asynchronous jobs; it changes processing guarantees (parallel batches, no strict ordering) so your reconciliation must tolerate eventual consistency.

Important: Reconcile on business keys and business totals, not on system-generated IDs.

Designing error handling, retries, and performance tests that mirror production

Resilience tests need two orthogonal approaches: correctness (is retry/idempotency logic safe?) and capacity (do you respect API limits and performance SLAs?).

Retry and backoff

  • Implement retries with exponential backoff and jitter to avoid synchronized retry storms; full-jitter is a pragmatic default. The AWS Architecture team documents patterns and trade-offs for full/equal/decorrelated jitter that reduce contention and server load.
  • For non-idempotent endpoints, prefer compensating transactions or queue-based durable processing instead of blind retries.

Example JavaScript retry with full jitter:

async function retryWithFullJitter(fn, maxAttempts = 5, base = 100) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try { return await fn(); }
    catch (err) {
      if (attempt === maxAttempts) throw err; // out of attempts: surface the error
      // Full jitter: sleep a random duration in [0, min(base * 2^attempt, 10s)).
      const cap = Math.min(base * 2 ** attempt, 10000);
      const wait = Math.random() * cap;
      await new Promise(r => setTimeout(r, wait));
    }
  }
}

Idempotency

  • Where feasible, create idempotency keys for create/upsert operations and enforce server-side idempotent behavior. Test by replaying requests and asserting single side-effects.
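
A minimal sketch of the server-side guard such a test exercises, assuming an in-memory key store for illustration (production systems would back this with a database or cache keyed on the idempotency token):

```javascript
// Hypothetical idempotency guard: remember the result of each idempotency key
// so a replayed create returns the original outcome instead of producing a
// second side effect.
const seenKeys = new Map();

function createOnce(idempotencyKey, createFn) {
  if (seenKeys.has(idempotencyKey)) {
    return { replayed: true, result: seenKeys.get(idempotencyKey) };
  }
  const result = createFn();              // the real side effect runs exactly once
  seenKeys.set(idempotencyKey, result);
  return { replayed: false, result };
}
```

The replay test then submits the same key twice and asserts one side effect, identical results, and an explicit "replayed" marker on the second response.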

Performance testing

  • Design load profiles that reflect production: realistic concurrency, data-size distribution, and business-hour vs off-hour patterns. Simulate long-running composite calls and background bulk ingestion.
  • Respect org API limits: check Limits responses and use a dedicated integration user or token pool if needed to avoid exhausting a single user's API cursor limits.
  • Measure p50, p95, and p99 latencies and track error budgets. Execute load tests in a sandbox that closely mirrors production data volumes when possible; otherwise run smaller tests and extrapolate with caution.
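
Latency percentiles for the assertions above can be computed from raw samples with the nearest-rank method; a small sketch (the helper name is illustrative):

```javascript
// Compute a latency percentile from collected samples using the nearest-rank
// method, so a load-test run can be asserted against its SLO.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b); // ascending, copy to avoid mutation
  const rank = Math.ceil((p / 100) * sorted.length); // nearest-rank: 1-based index
  return sorted[Math.max(rank - 1, 0)];
}
```

A promotion gate can then assert, for example, `percentile(latenciesMs, 95) <= 800` and fail the pipeline when the SLO is breached.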

Observability and correlation

  • Propagate trace headers (traceparent, tracestate) and/or X-Correlation-ID across HTTP and message boundaries; correlate logs, traces, and metrics to debug cross-system incidents. Adopting W3C Trace Context/OpenTelemetry for propagation makes cross-tool correlation reliable.
  • Ensure sufficient logging and sampling policy so you can debug sporadic failures without leaking PII.

Security and API hygiene

  • Test for API security weaknesses against the OWASP API Top 10: BOLA (Broken Object Level Authorization), broken auth, misconfigurations, and unsafe consumption of third-party APIs. Use these findings to design negative test cases and hardened validation in middleware.

Operational Runbook: step‑by‑step checklist and executable test cases

Below is an operational runbook you can copy into a CI job, runbook, or UAT package. Keep these checks short, automatable, and gated.

Pre-deployment validation (run in PR/CI)

  1. Contract validation: run consumer contracts and provider verification.
  2. Schema lint: validate OpenAPI/WSDL against expected shapes.
  3. Authentication smoke: request token, refresh token, validate scopes.
  4. Limits probe: query REST limits resource and assert expected quota visibility.

API & middleware automated test suite (CI)

  • Auth and token expiry tests (positive, negative).
  • Retry behavior tests with injected 5xx and network timeouts.
  • Idempotency test: replay request → assert one side-effect entry.
  • Transformation unit tests: feed edge-case payloads → assert normalized output.

Data reconciliation tasks (nightly)

  • Count reconciliation for critical objects (accounts, opportunities, invoices).
  • Checksum mismatches: surface rows with differing field-hash values.
  • Aggregated totals verification (revenue, quantity) with tolerance threshold alert.

Performance and capacity (pre-release / staging)

  • Run a scaled load that simulates typical peak concurrency for 30–60 minutes.
  • Validate Bulk API jobs: submit a parallel ingestion of representative payloads and validate job success, failure rates, and retries.
  • Evaluate p95/p99 latencies and error rates; ensure they meet SLO.

Incident drill (run quarterly)

  • Inject a token revocation and confirm recovery path.
  • Fail a downstream provider for 5 minutes and validate circuit breaker behavior and alerting.

Executable test case template (example)

| Test | Preconditions | Steps | Expected |
| --- | --- | --- | --- |
| Create contact end-to-end | Sandbox contains no Contact with the external ID | 1. POST sample payload; 2. Poll until the Salesforce record exists; 3. Verify field mappings; 4. Run reconciliation | Contact created once, fields match mapping, no partial writes |

CI command examples

  • Run Newman (Postman) collection:
newman run collections/salesforce-integration.postman_collection.json -e env/staging.postman_environment.json --reporters cli,junit
  • Run Pact provider verification:
pact-verifier --provider-base-url=http://localhost:8080 --broker-base-url=https://pact-broker.example

Checklist table: test type, purpose, preferred tooling

| Test Type | Purpose | Tooling |
| --- | --- | --- |
| Contract tests | Prevent interface breakage | Pact + broker |
| API functional | Validate endpoints and positive/negative flows | Postman / Newman |
| Transformation unit tests | Verify field-level transforms | Unit test framework (Jest, pytest) |
| Bulk ingest validation | Check large-volume behavior | Bulk API 2.0 + custom verification scripts |
| Reconciliation | Ensure data integrity | SOQL + ETL scripts + monitoring alerts |
| Observability checks | Correlate failures across systems | OpenTelemetry / APM / Log aggregation |

Operational rule: treat test results as first-class telemetry—store outcomes, timestamps, and run IDs so you can trend flaky endpoints and failing mappings over time.

Sources

Pact Documentation — Consumer and Provider Testing - Explains consumer-driven contract testing workflow, contract generation, and provider verification; used to justify contract-by-example and CI verification steps.

API Limits and Monitoring Your API Usage — Salesforce Developers Blog - Details Daily API Request Limits, Limits headers, and how to monitor API consumption; used to prescribe limit checks and quota-aware testing.

Integration Patterns — Salesforce Architects (Bulk API 2.0 guidance) - Describes integration patterns, when to use Bulk API 2.0, behavior of asynchronous bulk jobs, and idempotent design considerations; cited for Bulk API recommendations and reconciliation guidance.

Exponential Backoff And Jitter — AWS Architecture Blog - Defines jittered backoff strategies (Full/Equal/Decorrelated) and reasoning; used to recommend retry/backoff algorithms.

OWASP API Security Top 10 — 2023 edition - Catalog of API security risks (BOLA, Broken Auth, etc.); used to build negative test cases and security-focused integration checks.

Postman — What is API Testing? A Guide to Testing APIs - Practical guidance for API testing best practices, automation, and environment parity; referenced for structuring API/middleware test suites.

An Architect’s Guide to Event Monitoring — Salesforce Blog - Explains Event Log File, Event Log Objects, and real-time event monitoring; used to recommend observability and audit log sources for reconciliation and incident response.

W3C Trace Context / Distributed Tracing guidance (OpenTelemetry & standards) - Standards for propagating traceparent and tracestate headers and best practices for correlation across services; used to specify tracing and correlation-ID propagation strategies.
