Olivia Parker

Building a Salesforce-Integrated App from Scratch: A Developer's Honest Walkthrough

Nobody tells you upfront how opinionated Salesforce is. You come in expecting an API integration — something you've done dozens of times — and about two days in, you realize you're not just connecting to a database. You're connecting to an entire ecosystem with its own vocabulary, its own security model, its own way of thinking about data relationships. That adjustment period is real, and skipping it mentally is how projects get into trouble early.

I want to walk through what building a Salesforce-integrated application actually looks like from the ground up — not the happy path from the documentation, but the version with the decisions, the friction points, and the things worth knowing before you start. Whether you're a dev lead evaluating the integration yourself or an engineering manager trying to understand what your team is walking into, this is the honest version.

Working alongside an experienced Salesforce development company changes the ramp-up curve dramatically — but understanding the architecture yourself means you can have real conversations about tradeoffs instead of just nodding at deliverables.

Getting Your Head Around the Salesforce Data Model

Before anything else, you need to internalize that Salesforce is not a relational database in the way you're used to thinking about one. It has objects — standard objects like Accounts, Contacts, Leads, Opportunities — and custom objects you define yourself. Relationships between objects work differently than foreign keys in a traditional schema. There are lookup relationships, master-detail relationships, and junction objects for many-to-many scenarios, each with different behavior around ownership, sharing rules, and cascade deletes.

The reason this matters upfront is that your integration architecture depends on it. If you're building an external application that needs to read and write Salesforce data, you need to know which objects you're touching, what their relationships look like, and what the sharing and visibility rules are in that specific Salesforce org. Two different orgs — even at the same company — can have wildly different configurations, custom fields, and validation rules. Assuming the data model is standard without checking is a reliable way to discover problems in staging instead of during planning.

Authentication First — And It's More Nuanced Than You Think

Salesforce supports several OAuth flows, and picking the right one for your use case matters more than most integration guides emphasize.

For server-to-server integrations — a backend service that needs to read or write Salesforce data without a user present — the JWT Bearer Flow is generally the right answer. You set up a Connected App in Salesforce, configure a certificate, and your server authenticates by signing a JWT assertion. No user interaction required, tokens refresh cleanly, and it's designed for production automation use cases.
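As a sketch, the assertion carries a handful of claims and gets exchanged at the standard token endpoint. This is Python with only the standard library; the consumer key and username are placeholders, and the actual RS256 signing step (which needs the Connected App's private key, typically via a JWT library) is left as a comment:

```python
import json
import time
import urllib.parse
import urllib.request

TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"  # test.salesforce.com for sandboxes

def jwt_claims(consumer_key: str, username: str, audience: str, lifetime: int = 180) -> dict:
    """Claims Salesforce expects in the JWT assertion: the Connected App's
    consumer key as issuer, the integration user as subject, a short expiry."""
    return {
        "iss": consumer_key,
        "sub": username,
        "aud": audience,  # the login URL, not your API instance URL
        "exp": int(time.time()) + lifetime,
    }

def request_access_token(signed_assertion: str) -> dict:
    """Exchange the RS256-signed assertion for an access token.
    Signing needs the Connected App's private key, e.g. with PyJWT:
    jwt.encode(claims, private_key, algorithm="RS256")."""
    body = urllib.parse.urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": signed_assertion,
    }).encode()
    with urllib.request.urlopen(urllib.request.Request(TOKEN_URL, data=body)) as resp:
        return json.load(resp)  # contains access_token and instance_url
```

The `instance_url` in the token response is what subsequent API calls should target, not the login URL.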

For user-facing applications where individuals log in and the app acts on their behalf, the standard Web Server OAuth flow applies — the familiar redirect-based authorization you've seen in every OAuth implementation. Salesforce's implementation is standard enough that it integrates cleanly with most OAuth libraries.

Where teams get tripped up is the Username-Password flow, which looks convenient and is genuinely useful for quick prototypes. It's not appropriate for production integrations — it doesn't support MFA, it's deprecated in newer Salesforce security policies, and it creates credential management problems that compound over time. It's worth knowing it exists and worth knowing not to ship it.

The API Landscape Inside Salesforce

Salesforce doesn't have one API. It has several, and understanding which one to use for which scenario is a real architectural decision.

The REST API is the starting point for most integrations. Standard CRUD operations on Salesforce objects, query execution via SOQL, and straightforward JSON responses. For most external application integrations, this covers the majority of what you need.
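A minimal sketch of both halves, a SOQL query and a record create, using only the Python standard library. The instance URL, token, and pinned API version are assumptions you would supply from your auth step:

```python
import json
import urllib.parse
import urllib.request

API_VERSION = "v59.0"  # assumption: pin whichever version your org supports

def soql_query_url(instance_url: str, soql: str) -> str:
    """The REST query endpoint takes the SOQL statement URL-encoded in ?q=."""
    return f"{instance_url}/services/data/{API_VERSION}/query?q={urllib.parse.quote(soql)}"

def run_query(instance_url: str, token: str, soql: str) -> dict:
    req = urllib.request.Request(
        soql_query_url(instance_url, soql),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # {"totalSize": ..., "done": ..., "records": [...]}

def create_contact(instance_url: str, token: str, fields: dict) -> dict:
    """POST to /sobjects/<Object> creates one record and returns its new Id."""
    req = urllib.request.Request(
        f"{instance_url}/services/data/{API_VERSION}/sobjects/Contact",
        data=json.dumps(fields).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```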

The Bulk API exists for exactly what the name suggests — loading or extracting large volumes of records. If you're doing data migrations, historical syncs, or batch processing that involves tens of thousands of records, the standard REST API will get you rate-limited quickly. The Bulk API processes records asynchronously in batches and is built for volume. Knowing when to switch from REST to Bulk saves a lot of pain later.
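The Bulk API 2.0 flow is: create an ingest job, upload the data as CSV, mark the job complete, then poll for results. A sketch of the two request bodies in Python (the surrounding HTTP calls follow the same pattern as the REST examples and are omitted):

```python
import csv
import io

def ingest_job_payload(sobject: str, operation: str = "insert") -> dict:
    """Body for POST /services/data/vXX.X/jobs/ingest, which creates the async job."""
    return {"object": sobject, "operation": operation,
            "contentType": "CSV", "lineEnding": "LF"}

def records_to_csv(records: list[dict]) -> str:
    """Bulk API 2.0 takes the data itself as CSV, PUT to
    .../jobs/ingest/<jobId>/batches. Assumes a non-empty, uniform record list."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]), lineterminator="\n")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

After the upload, you PATCH the job to state `UploadComplete` and poll its status until it reports `JobComplete`, then fetch the per-record success and failure results.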

The Streaming API and Platform Events are the pieces that make Salesforce integrations feel real-time rather than poll-based. Platform Events let Salesforce publish events that your external application subscribes to — so instead of polling for record changes every few minutes, your app gets notified the moment something relevant happens in the org. For integrations where latency matters, this changes the architecture entirely.

SOQL deserves a mention on its own. Salesforce Object Query Language looks enough like SQL to feel familiar, but it has important differences — no joins in the traditional sense, relationship traversal works through dot notation, and there are governor limits on query complexity and result size that don't exist in a standard database. Writing efficient SOQL is a skill that takes time, and inefficient queries are one of the most common performance problems in Salesforce integrations.
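To make the differences concrete, here are the two directions of relationship traversal against the standard Account and Contact objects (custom relationships use their own API names, typically ending in `__r`):

```sql
-- Child-to-parent: traverse the lookup with dot notation, no JOIN keyword
SELECT Id, LastName, Account.Name, Account.Industry
FROM Contact
WHERE Account.Industry = 'Technology'

-- Parent-to-child: a nested subquery uses the child relationship name
SELECT Name, (SELECT LastName, Email FROM Contacts)
FROM Account
LIMIT 200
```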

Governor Limits — The Thing That Will Humble You

If there's one thing about Salesforce development that catches experienced engineers off guard, it's governor limits. Salesforce is a multi-tenant platform — your org shares infrastructure with thousands of others — and governor limits are how Salesforce enforces that no single tenant can monopolize shared resources.

There are limits on the number of API calls per day, limits on the number of SOQL queries per transaction, limits on heap size, on CPU time, on the number of records processed in a single transaction. In an isolated environment, these limits feel abstract. In production, under real load, hitting them unexpectedly is a significant operational event.

The practical implication for integration design is that you need to think about batching, about query efficiency, and about API call volume from the start — not as an optimization you'll do later. An integration that works fine in testing, where you're processing tens of records, can fall apart in production when it's processing thousands.
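One concrete habit is sizing writes to the API rather than to your loop. The composite sObject Collections endpoint accepts up to 200 records per call, so a simple batcher turns 10,000 single-record calls into 50. A sketch, assuming that documented 200-record cap:

```python
from typing import Iterator

SF_COLLECTION_MAX = 200  # sObject Collections caps each request at 200 records

def chunk(records: list[dict], size: int = SF_COLLECTION_MAX) -> Iterator[list[dict]]:
    """Split a large write into API-sized batches, so one logical operation
    costs len(records) / 200 API calls instead of one call per record."""
    for start in range(0, len(records), size):
        yield records[start:start + size]
```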

Handling Data Sync — The Problem That's Harder Than It Looks

Most Salesforce integrations eventually involve keeping data in sync between Salesforce and an external system. This sounds straightforward and reliably isn't.

You need to decide on a sync direction: is Salesforce the system of record, is your external application, or is the sync bidirectional? Bidirectional sync is significantly more complex because you need a strategy for handling conflicts when both systems have updated the same record between sync cycles.
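To show how thin the "simple" answer is, here is last-writer-wins in Python. `SystemModstamp` is a real Salesforce audit field; `updated_at` is an assumed column on the local side, and both timestamps are assumed to be ISO-8601. Real systems often need field-level merging or a designated system of record per field instead:

```python
from datetime import datetime

def resolve_conflict(sf_record: dict, local_record: dict) -> dict:
    """Last-writer-wins: keep whichever side was modified most recently.
    Ties go to Salesforce here; that choice is itself a policy decision."""
    sf_time = datetime.fromisoformat(sf_record["SystemModstamp"])
    local_time = datetime.fromisoformat(local_record["updated_at"])
    return sf_record if sf_time >= local_time else local_record
```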

Change tracking is the other challenge. Salesforce doesn't have a built-in change data capture mechanism that works perfectly for all integration scenarios out of the box. The options — polling modified dates, using Salesforce's Change Data Capture feature on supported objects, or using Platform Events to publish changes as they happen — each have tradeoffs around latency, completeness, and complexity.
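The polling option can be sketched in a few lines. The watermark column here is `SystemModstamp`, which moves on system-initiated updates as well as user edits; querying with `>=` deliberately overlaps the previous pass, so the writes it feeds should be idempotent upserts:

```python
def poll_query(sobject: str, fields: list[str], watermark: str) -> str:
    """SOQL for one polling pass: everything modified since the stored watermark.
    SOQL datetime literals are unquoted, e.g. 2024-05-01T00:00:00Z."""
    return (
        f"SELECT {', '.join(fields)}, SystemModstamp "
        f"FROM {sobject} "
        f"WHERE SystemModstamp >= {watermark} "
        f"ORDER BY SystemModstamp"
    )

def advance_watermark(records: list[dict], current: str) -> str:
    """Persist the newest timestamp seen and use it on the next pass.
    ISO-8601 strings in one timezone compare correctly as plain strings."""
    stamps = [r["SystemModstamp"] for r in records]
    return max(stamps + [current])
```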

The right sync architecture depends entirely on your latency requirements, your data volume, and your tolerance for complexity. There is no universal answer — just tradeoffs worth understanding explicitly.

Error Handling and Observability in Production

Salesforce integrations fail in specific ways that are worth designing for deliberately.

Authentication tokens expire. API limits get hit. Validation rules in the Salesforce org reject records your integration is trying to write, for reasons that aren't always obvious from the error response. Network timeouts happen. Bulk jobs fail partway through.

Production-grade integrations need retry logic with exponential backoff, dead letter queues for records that repeatedly fail, detailed logging that captures enough context to diagnose failures without exposing sensitive data, and alerting that surfaces problems before they compound. These aren't interesting engineering problems — they're table stakes for anything that needs to run reliably.
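None of those pieces are exotic. A minimal retry-with-backoff-and-dead-letter sketch in Python, with the sleep function injectable so the behavior is testable; the dead-letter "queue" is a plain list standing in for whatever durable store you actually use:

```python
import random
import time

def with_retries(operation, record, max_attempts=5, base_delay=1.0,
                 dead_letters=None, sleep=time.sleep):
    """Retry a flaky Salesforce write with exponential backoff and full jitter;
    after max_attempts, park the record in the dead-letter list for later
    inspection instead of blocking the rest of the batch."""
    for attempt in range(max_attempts):
        try:
            return operation(record)
        except Exception:
            if attempt == max_attempts - 1:
                if dead_letters is not None:
                    dead_letters.append(record)
                return None
            # full jitter: sleep somewhere in [0, base_delay * 2^attempt]
            sleep(random.uniform(0, base_delay * (2 ** attempt)))
```

In production you would narrow the `except` to retryable failures (timeouts, `REQUEST_LIMIT_EXCEEDED`) and send validation-rule rejections straight to the dead-letter store, since retrying those never helps.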

Conclusion

Building a Salesforce-integrated application is a legitimate engineering challenge. The API surface is large, the data model is opinionated, the governor limits require architectural forethought, and the sync problems are genuinely hard to get right at scale. None of this is insurmountable — teams ship sophisticated Salesforce integrations regularly — but it rewards experience and punishes assumptions.

If your team is walking into this for the first time, the learning curve is real and the cost of architectural mistakes discovered late is high. Partnering with an established Salesforce development company like Hyperlink InfoSystem, which has built production Salesforce integrations across industries and scales, compresses that learning curve considerably and keeps the architecture decisions grounded in what actually works.

Go in with clear eyes about the complexity. Plan for the governor limits, the sync edge cases, and the auth nuances from day one. The integrations that run smoothly in production are almost always the ones that took those things seriously from the start.
