OmniSync: A Real-World Architecture for Syncing Salesforce, D365 and Fabric in Near Real-Time (Part 1)
A practical breakdown of building near real-time data sync across cloud CRMs and analytics using serverless Azure services.
Posts in this Series
- OmniSync: A Real-World Architecture for Syncing Salesforce, D365 and Fabric in Near Real-Time (Part 1) (This Post)
- OmniSync: Near Real-Time Lakehouse, Spark Streaming and Power BI in Microsoft Fabric (Part 2)
- OmniSync: Integrating Salesforce with Microsoft Fabric and Dynamics 365 (Part 3)
- OmniSync: Dynamics 365 Integration with Salesforce and Fabric (Part 4)
Introduction
OmniSync started as a side project to build near real-time, multi-system data synchronization using mostly Azure-native tools. It connects Salesforce, Dynamics 365 (Sales), and Microsoft Fabric, with future plans to bring in SAP S/4HANA.
OmniSync aims to sync with clean, scalable patterns that avoid tight coupling and support near real-time updates.
OmniSync uses:
- Logic Apps to orchestrate
- Event Hub + Event Grid for messaging
- Azure Functions for lightweight transformation
- Power BI and Fabric for analytics
Use Case
To put OmniSync into context, let’s imagine a scenario based on a fictional enterprise: Contoso Inc.
They’re a global company with:
- Salesforce as the CRM for North America
- Dynamics 365 Sales running in EMEA
- SAP running backend finance in APAC
- A legacy SQL Server warehouse stitched together with SSIS
And have these issues:
- Different systems have different versions of the same customer
- Sales data is duplicated, delayed, or just plain wrong
- Nobody trusts the reports, especially executives
Contoso wants to fix this. So instead of rebuilding from scratch, the idea is to sync just enough data across platforms in near real time and centralize it for analytics.
Here’s what they’re aiming for:
- Near real-time sync between Salesforce and Dynamics
- Unification of customer and sales data across platforms
- Power BI dashboards driven by Microsoft Fabric
- Minimal coupling: each system should still work on its own
Architecture
OmniSync is built on a loosely coupled, event-driven architecture in which systems communicate through events or APIs.
Here’s how it works:
Integration Layer
- Logic Apps handle orchestration. Each system has its own Logic App that listens to changes (via webhooks, APIs or CDC) and pushes updates to other systems
- Azure Event Hub sits in the middle to send events to Fabric
- Event Grid routes system-generated events like function triggers or retries
- Azure Functions take care of small, stateless transformation or filtering jobs
Data Platform
- Microsoft Fabric handles the analytics side. It ingests data, stores it in a Lakehouse, and exposes it through Power BI
- Fabric follows a medallion architecture:
✅ Bronze: Legacy SQL Server warehouse
✅ Silver: Cleaned business objects (Accounts, Orders, etc.)
✅ Gold: Dashboard-ready data with measures and KPIs
Identity & Security
- SSO on Azure and proper OAuth implementations
- Integration users in Salesforce and D365 prevent update loops
- Azure Key Vault stores secrets and credentials securely
Monitoring and observability
- Azure Monitor as the overall monitoring platform
- Application Insights for a detailed view of individual applications
- Log Analytics to query the relevant logs
Event-Driven
When syncing systems like Salesforce and Dynamics 365, the challenge isn’t just moving data; it’s doing it without collisions, loops, or delays. OmniSync handles this using a clean event-driven architecture (EDA) that combines:
- Change Data Capture (CDC) or webhook-based triggers
- Sync origin tracking to avoid loops
- Integration users to track synchronizations
- Light transformation logic in Azure Functions or inline Logic App steps
- Event Hub to route and decouple publishers from subscribers
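As a sketch of how those pieces fit together, each change event can carry an envelope that identifies its origin before it is published to Event Hub, so downstream subscribers can route and de-duplicate it. The field names here are illustrative assumptions, not OmniSync's actual schema:

```python
import json
import uuid
from datetime import datetime, timezone

def build_sync_event(source_system: str, entity: str, record_id: str, payload: dict) -> dict:
    """Wrap a change record in an envelope before publishing to Event Hub."""
    return {
        "event_id": str(uuid.uuid4()),
        "source_system": source_system,         # e.g. "salesforce" or "d365"
        "entity": entity,                       # e.g. "Account"
        "record_id": record_id,
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "sync_origin": "omnisync-integration",  # used downstream for loop prevention
        "payload": payload,
    }

event = build_sync_event("salesforce", "Account", "001XX000003DHPh", {"Name": "Contoso Ltd"})
print(json.dumps(event, indent=2))
```

The envelope decouples publishers from subscribers: a consumer only needs the envelope fields to decide whether and where to route the payload.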
Loop Prevention & Conflict Handling
When you’re syncing systems in both directions, the biggest problem is avoiding update loops and collisions between records that were never meant to sync.
OmniSync solves this in two key ways:
- Loop prevention using tracking
- Conflict detection using unique identifiers (like customer numbers)
How Loop Prevention Works
Every synced record carries a hidden marker: the user that performed the update.
- If Salesforce sends an update to D365, the Logic App applies it as the integration user.
- When a Logic App monitoring D365 later sees that record, it checks whether the modifying user is the integration one. If it is, the synchronization is skipped. This stops update loops entirely, without relying on timestamps or hashes.
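A minimal sketch of that check (the user name is hypothetical, and in OmniSync this logic lives in a Logic App condition rather than code):

```python
# Hypothetical integration account; in practice this comes from configuration.
INTEGRATION_USERS = {"omnisync.integration@contoso.com"}

def should_sync(change: dict) -> bool:
    """Skip changes made by the integration user itself to break update loops."""
    return change.get("modified_by") not in INTEGRATION_USERS

# A change written by a real user syncs; one written by the integration user is ignored.
assert should_sync({"modified_by": "jane.doe@contoso.com"}) is True
assert should_sync({"modified_by": "omnisync.integration@contoso.com"}) is False
```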
Conflict Handling
But what if the same account gets created in both systems independently?
For instance, when a new Account is created in D365, the corresponding Logic App detects the change. Before inserting the record into Salesforce, a check is performed against the Fabric DataLake to determine if the record already exists. If a match is found, the sync_status field in D365 is updated to “Conflict”.
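Sketched in code, with the lakehouse lookup simplified to an in-memory set of customer numbers (in OmniSync this would be a query against Fabric):

```python
def detect_conflict(customer_number: str, existing_numbers: set) -> str:
    """Return the sync_status to write back: 'Conflict' if the key already exists elsewhere."""
    return "Conflict" if customer_number in existing_numbers else "Synced"

# Customer numbers already known to the Fabric DataLake (illustrative values).
lakehouse_keys = {"CUST-0001", "CUST-0002"}

assert detect_conflict("CUST-0001", lakehouse_keys) == "Conflict"
assert detect_conflict("CUST-0099", lakehouse_keys) == "Synced"
```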
ETL
OmniSync uses a single ETL process to migrate legacy data from a SQL Server data warehouse into the new environment. This is done with Microsoft Fabric’s Dataflow Gen2, a visual, user-friendly tool that makes it easy to extract and stage structured data.
From there, the data enters the Bronze layer of the medallion architecture and flows through to the Gold Lakehouse layer.
This ETL step is just the starting point: once the historical data is loaded, all updates and sync operations are fully event-driven.
We’ll go deeper into this data flow in the follow-up article focused on Fabric + Medallion implementation.
Kappa & Lambda Architectures
OmniSync takes ideas from both architectures and adapts them to fit its needs.
Let’s break down what that means in practice.
Lambda Architecture
The Lambda architecture was designed to combine:
- A batch layer (slow, reliable processing)
- A speed layer (real-time but eventually consistent)
- A serving layer (where queries run)
It works well for big data pipelines — but it’s overkill for sync. Managing two parallel paths (real-time and batch) adds complexity and technical debt that OmniSync doesn’t need. Plus, it assumes you’re rebuilding and querying from large volumes of data — not syncing discrete, structured entities like Accounts and Orders.
Kappa Architecture
The Kappa model removes the batch layer and handles everything via streaming. You store events once, replay them as needed, and build everything on top of that.
In OmniSync, that’s closer to the truth, especially with Event Hub acting as the event source of record. But OmniSync still isn’t doing full stream processing that continuously transforms data at scale; it handles discrete change events with lightweight logic and routing.
OmniSync’s Actual Model: Event + Delta + Lakehouse
What OmniSync really does is combine:
- Event-based triggers (CDC, webhooks)
- Per-record transformation (Functions, Logic Apps)
- Push-based syncing across systems
- “Append-only” storage in Fabric Lakehouse
This gives us the best parts of:
- Kappa’s streaming mindset
- Lambda’s layered thinking (but simpler)
- Fabric’s native support for medallion architecture
And most importantly: it’s a model that works for business systems, not just analytics engines.
Medallion Architecture
Once data is flowing between systems like Salesforce and Dynamics 365, OmniSync uses Microsoft Fabric with a Lakehouse model and Power BI for its data analytics.
We’ll use Dataflows Gen2 for the initial load, Notebooks to move data across the Bronze, Silver, and Gold layers, and Spark streaming for CDC to materialize entities in near real time. Power BI then connects directly to the Gold layer to visualize data coming from Salesforce and Dynamics 365.
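Conceptually, the Bronze-to-Silver step normalizes raw records into clean business objects. Here is a plain-Python sketch of that idea; the real implementation uses Fabric notebooks and Spark, and the field names are illustrative assumptions:

```python
def to_silver_account(bronze_row: dict) -> dict:
    """Normalize a raw Bronze record into a clean Silver 'Account' object."""
    return {
        "account_id": bronze_row["Id"].strip(),                     # trim stray whitespace
        "name": bronze_row.get("Name", "").strip().title(),         # consistent casing
        "country": (bronze_row.get("Country") or "UNKNOWN").upper(),# normalize country codes
        "source_system": bronze_row["Source"],                      # keep lineage
    }

raw = {"Id": " 001XX ", "Name": "  contoso ltd ", "Country": "gb", "Source": "salesforce"}
clean = to_silver_account(raw)
# clean == {"account_id": "001XX", "name": "Contoso Ltd", "country": "GB", "source_system": "salesforce"}
```

The Gold layer would then aggregate these cleaned objects into dashboard-ready measures and KPIs.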
More on the Lakehouse structure and Fabric in the next article.
Scope Simplification
The project intentionally narrowed its scope to keep things manageable.
In a full-scale retail implementation, you’d have to deal with dozens of interconnected business processes, entities, dependencies, and flows.
But OmniSync is a PoC designed to explore new technologies and prove that near real-time, cross-system sync can work.
So instead of modeling everything, it focuses on a smaller, representative set of entities that still reflects real-world complexity:
- Currencies
- Product Categories
- Products
- Stores
- Sales Orders
- Accounts
- Geography
Each of these entities was used to build and test sync flows between Salesforce, Dynamics 365, and Microsoft Fabric, with the goal of validating patterns, not rebuilding an entire enterprise data model.
API Usage
Most of the API interactions in OmniSync are handled indirectly. Thanks to Azure Logic Apps connectors and the built-in abstractions in Salesforce and Dynamics 365, there’s usually no need to manually call APIs or manage HTTP flows yourself.
Still, there were a few cases where direct API calls were unavoidable:
Salesforce CDC Setup
Salesforce’s Change Data Capture requires manual setup steps via API specifically to:
- Create Streaming Channels
- Register Channel Members
A Postman REST collection was used to set these up:
Salesforce CDC API Reference (Postman)
GeoApify: Geolocation for the Lakehouse
To enrich location-based data like Stores and Geography in the DataLake, OmniSync uses GeoApify, a lightweight, reliable geolocation API that converts addresses or coordinates into normalized location data.
Below is a sample of the JSON returned by the Geocoding API when passing latitude and longitude parameters:
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {
"datasource": {
"sourcename": "openstreetmap",
"attribution": "© OpenStreetMap contributors",
"license": "Open Database License",
"url": "https://www.openstreetmap.org/copyright"
},
"country": "United Kingdom",
"country_code": "gb",
"state": "England",
"county": "Greater London",
"city": "London",
"postcode": "W1H 1LJ",
"suburb": "Marylebone",
"street": "Upper Montagu Street",
"housenumber": "38",
"iso3166_2": "GB-ENG",
"lon": -0.160306360235508,
"lat": 51.52016005,
"state_code": "ENG",
"result_type": "building",
"formatted": "38 Upper Montagu Street, London, W1H 1LJ, United Kingdom",
"address_line1": "38 Upper Montagu Street",
"address_line2": "London, W1H 1LJ, United Kingdom",
"category": "building.residential",
"timezone": {
"name": "Europe/London",
"offset_STD": "+00:00",
"offset_STD_seconds": 0,
"offset_DST": "+01:00",
"offset_DST_seconds": 3600,
"abbreviation_STD": "GMT",
… (sample truncated)
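Extracting the useful fields from such a response is straightforward. A small sketch follows; the `sample` dict mimics the shape shown above, and `parse_geoapify` is an illustrative helper, not part of the GeoApify SDK:

```python
def parse_geoapify(response: dict) -> dict:
    """Pull the normalized location fields OmniSync cares about from a GeoApify FeatureCollection."""
    props = response["features"][0]["properties"]
    return {
        "city": props.get("city"),
        "postcode": props.get("postcode"),
        "country_code": props.get("country_code"),
        "formatted": props.get("formatted"),
    }

# Minimal stand-in for the FeatureCollection response shown above.
sample = {"features": [{"properties": {
    "city": "London", "postcode": "W1H 1LJ", "country_code": "gb",
    "formatted": "38 Upper Montagu Street, London, W1H 1LJ, United Kingdom"}}]}

assert parse_geoapify(sample)["city"] == "London"
```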
Logic Apps as the Workflow Engine
OmniSync uses Azure Logic Apps as the low-code orchestrator that ties everything together.
Every synchronization operation, from listening to Salesforce events to pushing data into Dataverse, runs through a Logic App.
Why Logic Apps?
- Low-code, but powerful: Easily hook into HTTP endpoints, Dataverse, Event Hub, SQL, and more
- Built-in error handling: Retry policies, scopes, and parallel branches make it easy to isolate failures
- Easy to understand: non-developers, like business analysts, can follow workflows without needing to read code
Use Cases in OmniSync
- Listening to Salesforce platform events or Dataverse change notifications
- Transformations (XML to JSON…)
- Formatting data (datetimes…)
- Data checks (e.g., whether a record was already inserted)
- Validation logic on synchronization cycles
- Enriching payloads with metadata (sync_status…)
- Routing data into Event Hub, Event Grid or Azure Functions
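As an illustration of the "XML to JSON" kind of transformation in that list, here is a minimal sketch; in OmniSync this would run as an inline Logic App step or a small Azure Function, and the payload shape is an assumption:

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(xml_text: str) -> dict:
    """Flatten a simple one-level XML document into a dict, ready for JSON serialization."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

xml = "<Account><Name>Contoso Ltd</Name><Country>GB</Country></Account>"
print(json.dumps(xml_to_dict(xml)))  # {"Name": "Contoso Ltd", "Country": "GB"}
```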
As a low-code tool made for orchestration, Logic Apps offers a great balance of flexibility and simplicity.
CI/CD
All source code used in OmniSync is stored in several public GitHub repositories and deployed using GitHub Actions across multiple projects. Each repository is responsible for deploying a specific part of the architecture:
- Main infrastructure is provisioned via Bicep templates
- Azure Container Apps are deployed using Bicep as well
- Azure Functions are built and published from source using CI pipelines
This setup automates infrastructure and app deployment from code to cloud.
Fabric, Salesforce and Dynamics 365 source code can be deployed manually through their CLIs or, better, integrated and deployed within their respective platforms, since each provides a GitHub connector on its side.
For the purposes of this PoC, we’re not managing separate Dev, Staging, and Production environments. However, in a real-world scenario, a proper promotion model with multiple environments would be essential, and this architecture is designed to support that when needed.
Costs
OmniSync was designed to be cost-effective, especially on the Azure side.
Of course, you’ll still need valid licenses for Salesforce, Dynamics 365, Microsoft Fabric, and GeoApify, but most of these platforms offer free basic trials, which are more than enough to test and validate the system.
Azure Cost Optimizations
To keep Azure costs under control, OmniSync uses:
- Azure Consumption Logic Apps — pay only when a workflow runs
- Azure Consumption Functions — event-driven, billed per execution
- Azure Container Apps — deployed with minimum CPU and memory, and limited to one replica
- All supporting Azure services (Event Hub, App Insights, Key Vault…) are configured using Free or Basic SKUs
Even Microsoft Fabric’s free trial includes access to an F64 SKU, which far exceeds the compute needs of this PoC and allows you to explore the entire Lakehouse and Power BI setup at full scale for free.
Scalability
OmniSync is built entirely on serverless and event-driven components, so scalability is not an issue by design. The architecture can grow automatically with demand, without needing any manual scaling logic, complex provisioning, or infrastructure refactoring.
On the Azure side, all core services scale seamlessly:
- Azure Logic Apps can run multiple parallel instances of the same workflow with built-in throttling and retry logic.
- Azure Functions automatically spin up additional compute to handle bursts of sync events or large payloads.
- Azure Container Apps are configured with a minimal baseline (one replica, low CPU/memory) but can scale out automatically on demand
- Azure Event Hub and Event Grid are designed for high-throughput, low-latency messaging and can absorb tens of thousands of events per second if needed.
Even though the PoC traffic is minimal, this same stack could support enterprise scale volumes without changes to the architecture.
On the Microsoft Fabric side, scaling is easy. Since it’s a fully managed SaaS platform, you don’t need to worry about infrastructure. If workloads get heavy, such as high sync volume or refresh rates, you can increase the capacity (e.g., from F8 to F16) with a few clicks.
Reliability
Since the project prioritized cost over high availability, the reliability approach was intentionally kept lean, focusing on zone redundancy only (not regional redundancy).
Here’s how redundancy was handled across key Azure services:
- Azure Logic Apps: ❌ Not zone-redundant
- Azure Consumption Functions: ✅ Zone-redundant
- Azure Event Grid: ✅ Zone-redundant
- Azure Container Apps: ❌ Not zone-redundant
- Azure Event Hub: ✅ Zone-redundant
- Azure Storage Accounts: ✅ Zone-redundant
- Azure Container Registry: ✅ Zone-redundant
- Integration Account: ❌ Not zone-redundant
- Azure Key Vault: ✅ Zone-redundant
- Azure Monitor: ✅ Zone-redundant
- Log Analytics: ✅ Zone-redundant
- Application Insights: ✅ Zone-redundant
- Microsoft Entra (Azure AD): ✅ Globally available
Since zone redundancy was considered sufficient for the scope and scale of the PoC, no separate DR strategy was implemented.
For a production deployment, these decisions should be revisited to evaluate multi-region failover, backup strategies, and high-availability tiers.
Performance
While this project prioritized cost optimization, performance was never ignored. Even with the most basic SKUs across the Azure stack, OmniSync delivers solid real-world responsiveness thanks to its event-driven and auto-scaling architecture.
No formal performance or stress testing was conducted, but manual tests syncing entities across the system typically complete in a few seconds, or up to ~20 seconds in edge cases, depending on factors like cold starts (Functions, Container Apps), retry logic and latencies.
Even without tuning for performance, the architecture has proven to be fast enough for operational needs while keeping costs low.
Note: Fabric’s free F64 SKU was not considered as part of the performance baseline, since it vastly exceeds this project’s actual requirements.
If this were to move into production, full load testing and performance profiling would be the logical next step — but that’s outside the scope of this PoC.
Monitoring & Error Handling
OmniSync uses Azure-native monitoring tools to track activity, detect issues, and collect performance metrics across all key components — from Logic Apps to Container Apps to Microsoft Fabric.
Monitoring Setup
All apps and services are wired into Azure Monitor, with the following setup:
- Application Insights is enabled across Azure Functions and Container Apps for telemetry, dependency tracking, and live logs.
- A Log Analytics Workspace is used to centralize monitoring across services.
- Logic Apps Monitoring Solution is deployed and connected to the workspace for advanced Logic App logging beyond the default run history.
This provides visibility across every layer of the sync process: API calls, retries, internal events, and more.
Logs & Metrics
Different services push logs and metrics in different ways:
- Logic Apps: Errors and run history can be viewed directly in the Logic App blade or queried through Log Analytics via the monitoring solution.
- Container Apps: Logs are streamed into Azure Monitor and viewable via the Log stream or Log Analytics.
- Fabric: Metrics (especially for Spark executions) can be queried using KQL inside the Fabric workspace.
Here’s a sample query used to inspect memory usage of Spark workloads:
SparkMetrics_CL
| where fabricWorkspaceId_g == "{FabricWorkspaceId}"
and artifactId_g == "{ArtifactId}"
and fabricLivyId_g == "{LivyId}"
| where name_s endswith "jvm.total.used"
| summarize max(value_d) by bin(TimeGenerated, 30s), executorId_s
| order by TimeGenerated asc
Error Detection & Diagnostics
OmniSync includes multiple built-in mechanisms to track and detect errors:
- Dead Letter Queues (DLQ) on Event Grid handle failures in event delivery when downstream handlers fail.
- Application Insights provides rich exception tracking and telemetry for Azure Functions and Container Apps.
- Azure Monitor Logs (via KQL) are used for deeper insights across services, especially for correlating multi-step sync issues.
- Logic App Run History allows quick inspection of individual failed runs, including step-by-step trace.
These tools offer both real-time observability and post-mortem diagnostics without adding external tooling.
Security
OmniSync implements security with simplicity and control in mind, sticking to platform standards, but also making practical decisions to keep the setup lightweight for a PoC.
Authentication
There’s no unified SSO across all platforms. Ideally, a single Microsoft Entra ID (Azure AD) user would be used to sign in across Salesforce, Dynamics 365, and Fabric using SSO (Single Sign-On).
But to keep things simple:
- Each platform uses its own authentication system
- Even Azure, Dynamics 365, and Fabric don’t share the same Entra user (they should, but the project and trials evolved with three different Entra users)
- Platform-specific users were configured instead of federating identity across systems
Authorization
Authorization was kept minimal for this PoC. In a production setup, roles would be more granular, but here most systems used default admin users or minimally scoped integration users:
- Dynamics 365 Integration User: assigned the Basic User and Audit roles.
- Salesforce Integration User: assigned a custom Integration Role.
- Salesforce Client App: connected via OAuth client credentials with the minimum required permissions.
- Fabric Client App: an Entra App Registration with only “Run Queries and Mutations” and “Power BI Service” access
Other Secrets & Access Keys
The architecture also relies on a few platform-specific keys and credentials:
- Event Grid: accessed using access tokens
- Event Hub: secured with SAS tokens
- Salesforce: OAuth 2.0 client credentials flow for integrations
- SPA: OAuth 2.0 delegated Entra permissions
- GeoApify: external API key for resolving geolocation data
- Azure services: Managed Identity where possible for secure inter-service authentication
Most secrets were intentionally not stored in Azure Key Vault to simplify the PoC. However:
- The GeoApify API key and Container App certificates were securely stored in Azure Key Vault
Network Security & Private Access
Private endpoints were not used in OmniSync. Since the architecture didn’t require private networking or integrate with on-prem/IaaS resources, all traffic was routed publicly over the internet.
This choice simplified the infrastructure and avoided unnecessary VNET configuration, which is acceptable for a proof of concept like this.
Lessons Learned
Working on OmniSync provided a lot of insight into what it takes to integrate multiple enterprise systems together using native Azure tools. Here are some of the key lessons from building the PoC:
- Event-driven patterns and loop prevention: while Logic Apps and Event Grid make near real-time sync simple, loop prevention is essential. Without it, you can quickly end up in an infinite update cycle between systems.
- Loosely coupled architecture: because the services are decoupled (Logic Apps, Event Hub, Functions), it’s easy to add or change systems (like SAP later) without rewriting whole flows.
- Monitoring: even at small scale, monitoring matters. Azure Monitor, App Insights, and simple KQL queries made it easy to trace errors and understand behavior without external tools.
- (Near) real-time sync ≠ instant synchronization: even in an event-driven system, cold starts, retries, and API latency mean synchronizations won’t always be sub-second. But a maximum of ~20 seconds end-to-end is still very usable for operational needs.
- Platform ecosystem: integrating with Salesforce, Dynamics 365 and Fabric forces you to understand and implement features you may not be familiar with, like Flows in Salesforce, Power Apps, or Lakehouses and DAX queries in Power BI.
- Low-code tools: most of the project was built without writing much code, thanks to the low-code and visual tools provided by each platform. Logic Apps, acting as the iPaaS layer, handled most of the orchestration. Overall, these tools significantly reduced development effort and complexity.
- Fabric: turned out to be a strong choice for the analytics layer. Getting started with Lakehouse concepts and Spark notebooks wasn’t easy at first, but it paid off quickly; in the end, Fabric offered a modern, scalable, and efficient platform for centralized reporting and analytics.
Future Plans
OmniSync already connects Salesforce, Dynamics 365 (Sales), and Microsoft Fabric, but SAP S/4HANA integration is still on the roadmap.
The plan is to use:
- SAP Event Mesh to publish change events (e.g. Product, Order, Customer)
- Azure Logic Apps for SAP to consume and route those events
- Reuse of existing OmniSync sync patterns: conflict detection and integration users
This will complete the loop across CRM, ERP, and analytics.
Coming Soon
📌In the next post, I’ll dig into Fabric Integration.
👀 Follow me here on Medium to catch Part 2.
💻 Source code: https://github.com/zodraz/omnisync-docs



