DEV Community

Sathish Kumar Velayudam

Why Your Salesforce Triggers Are Slower Than They Should Be: A Performance Debugging Guide

You've inherited a trigger that works perfectly fine in sandbox with test data, but in production it's causing timeouts on bulk operations. Or maybe you wrote a trigger following all the best practices you know, but users are complaining about slow save times. The trigger executes without errors—it's just painfully slow.

This article isn't about teaching you to write efficient triggers from scratch. It's about diagnosing performance problems in triggers that already exist. The debugging methodology matters more than memorizing optimization patterns because every slow trigger fails in its own unique way.

The Wrong Way to Start

When a trigger is slow, most developers immediately start refactoring code based on what they think might be wrong. They see a SOQL query and assume it needs optimization. They spot a loop and start rewriting logic. This approach wastes time because you're guessing instead of measuring.

The correct approach is forensic: gather evidence, identify the actual bottleneck, then fix that specific problem. Debug logs are your primary diagnostic tool, but only if you know what to look for.

Setting Up Useful Debug Logs

Salesforce's default debug log settings won't give you the performance data you need. You need to configure logging specifically for performance analysis.

Open Developer Console, go to Debug → Change Log Levels, and create a new log level with these settings:

  • Apex Code: FINEST
  • Apex Profiling: FINEST
  • Callout: FINER
  • Database: FINER
  • System: DEBUG
  • Validation: INFO
  • Visualforce: INFO
  • Workflow: INFO

The critical settings are Apex Code and Apex Profiling at FINEST. These give you execution time for each line of code and cumulative resource consumption. Without these, you're debugging blind.

Apply this log level to your user, then trigger the slow operation. For bulk operations, test with realistic data volumes—50 to 100 records minimum. Performance problems that don't appear with 5 test records will emerge with 50.
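
One quick way to generate that volume is Execute Anonymous in Developer Console. A minimal sketch, assuming the trigger under test is on Contact (the object and field values here are placeholders):

```apex
// Insert 100 records in a single DML statement so the trigger receives
// one bulk batch, mirroring what Data Loader or an integration would send.
List<Contact> testContacts = new List<Contact>();
for (Integer i = 0; i < 100; i++) {
    testContacts.add(new Contact(LastName = 'PerfTest ' + i));
}
insert testContacts;
```

Because all 100 records go through one insert, the trigger fires with the full batch instead of 100 times with single records, which is what exposes bulk-only bottlenecks.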

Reading Debug Logs for Performance Issues

A debug log for a slow trigger will be large, often hitting Salesforce's 20MB limit. Don't try to read it sequentially from top to bottom. Instead, search for specific patterns that reveal performance bottlenecks.

Start by searching for "LIMIT_USAGE_FOR_NS". This appears at the end of your trigger execution and shows exactly how much of each governor limit you consumed:

LIMIT_USAGE_FOR_NS|(default)|
  Number of SOQL queries: 15 out of 100
  Number of query rows: 4823 out of 50000
  Maximum CPU time: 8447 out of 10000
  Maximum heap size: 3891456 out of 6000000

These numbers tell you which resource is your bottleneck. If CPU time is near the limit, you have inefficient processing logic. If query rows is high but SOQL queries is low, you're retrieving too much data per query. If SOQL queries is high, you're making redundant database calls.

In this example, CPU time at 8,447ms out of 10,000ms is the problem. The trigger is consuming 84% of the available processing time, which points to computational overhead rather than database access.

Finding the Expensive Operations

Once you know which resource is constrained, search the log for the operations consuming it. For CPU time issues, search for "CUMULATIVE_LIMIT_USAGE". This appears after each Apex statement and shows running totals:

CUMULATIVE_LIMIT_USAGE
  Number of SOQL queries: 3 out of 100
  Number of query rows: 847 out of 50000
  Maximum CPU time: 2341 out of 10000
CUMULATIVE_LIMIT_USAGE_END

CUMULATIVE_LIMIT_USAGE
  Number of SOQL queries: 3 out of 100
  Number of query rows: 847 out of 50000
  Maximum CPU time: 7298 out of 10000
CUMULATIVE_LIMIT_USAGE_END

The CPU time jumped from 2,341ms to 7,298ms between these two log entries—that's 4,957ms spent on whatever happened in between. Look at the code execution lines between these cumulative usage blocks to find the culprit.
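
If the log alone doesn't isolate the expensive span, you can bracket suspect sections with Limits calls yourself. A sketch, where processRecords is a hypothetical handler method:

```apex
// Measure the CPU cost of one section directly by sampling the counter
// before and after it, then emitting the delta at a high log level so it
// survives even a coarse logging configuration.
Integer cpuBefore = Limits.getCpuTime();
processRecords(Trigger.new); // hypothetical suspect method
System.debug(LoggingLevel.ERROR,
    'processRecords CPU ms: ' + (Limits.getCpuTime() - cpuBefore));
```

Move the brackets inward on each run to narrow the expensive span, then remove them once the culprit is found.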

For SOQL query issues, search for "SOQL_EXECUTE_BEGIN" to see each query and how many rows it returned. A query returning 10,000 rows when you only needed 50 is a problem, even if the query itself is fast.

Common Bottlenecks and How They Appear in Logs

Inefficient loops processing large collections show up as high CPU time with code execution lines showing the same method called hundreds or thousands of times in sequence. If you see your helper method executed 500 times consecutively in the log, you're processing records one at a time instead of in bulk.

SOQL queries inside loops appear as multiple SOQL_EXECUTE_BEGIN entries with identical or similar query text. You'll see the same query pattern repeated with different bind variables. This is the classic N+1 problem and it's immediately visible in logs.

Retrieving unnecessary fields or records shows up as SOQL queries returning far more rows than your DML statements actually use. If you query 5,000 Account records but only update 50, you're over-fetching data.

Complex object relationships that require multiple queries appear as several sequential SOQL queries, where each query uses results from the previous one to build its WHERE clause. Each query executes quickly on its own, but the cumulative database time adds up.
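
When the chained queries follow parent-child relationships, they can often be collapsed into one query that traverses the relationship. A sketch with hypothetical fields:

```apex
// One relationship query replaces two dependent round trips: instead of
// querying Contacts, collecting AccountIds, then querying Accounts,
// traverse Contact -> Account in a single SELECT.
List<Contact> contacts = [
    SELECT Id, Account.Name, Account.OwnerId
    FROM Contact
    WHERE Id IN :Trigger.newMap.keySet()
];
```

Relationship queries count as one SOQL query against the governor limit, no matter how many parent fields they pull in.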

A Real Example: Diagnosing Slow Contact Updates

Here's an actual trigger performance problem I debugged recently. A Contact trigger was timing out when users imported more than 100 contacts via Data Loader.

The debug log showed:

LIMIT_USAGE_FOR_NS|(default)|
  Number of SOQL queries: 87 out of 100
  Number of query rows: 2847 out of 50000  
  Maximum CPU time: 4234 out of 10000

SOQL queries at 87 out of 100 is the constraint. The trigger is making way too many database calls.

Searching for SOQL_EXECUTE_BEGIN revealed the problem:

SOQL_EXECUTE_BEGIN|[47]|Aggregations:0|SELECT Id, Name FROM Account WHERE Id = :tmpVar1
SOQL_EXECUTE_BEGIN|[47]|Aggregations:0|SELECT Id, Name FROM Account WHERE Id = :tmpVar2  
SOQL_EXECUTE_BEGIN|[47]|Aggregations:0|SELECT Id, Name FROM Account WHERE Id = :tmpVar3

Same query at line 47, executed repeatedly with different IDs. Classic SOQL in a loop.

The code looked like this:

for (Contact c : Trigger.new) {
    Account a = [SELECT Id, Name FROM Account WHERE Id = :c.AccountId];
    c.Account_Name__c = a.Name;
}

The fix was straightforward once the problem was identified:

Set<Id> accountIds = new Set<Id>();
for (Contact c : Trigger.new) {
    if (c.AccountId != null) {
        accountIds.add(c.AccountId);
    }
}

Map<Id, Account> accountMap = new Map<Id, Account>(
    [SELECT Id, Name FROM Account WHERE Id IN :accountIds]
);

for (Contact c : Trigger.new) {
    if (c.AccountId != null && accountMap.containsKey(c.AccountId)) {
        c.Account_Name__c = accountMap.get(c.AccountId).Name;
    }
}

The refactored trigger made 1 SOQL query instead of 87. The debug log for 100 contacts showed:

LIMIT_USAGE_FOR_NS|(default)|
  Number of SOQL queries: 1 out of 100
  Number of query rows: 43 out of 50000
  Maximum CPU time: 247 out of 10000  

CPU time dropped from 4,234ms to 247ms—a 94% reduction. The change took 10 minutes to implement once I knew what was wrong. The hour I spent reading debug logs to identify the exact problem was time well spent.

When "Best Practices" Code Is Still Slow

Sometimes you'll find a trigger that follows all the standard optimization patterns but still performs poorly. The code is bulkified, SOQL queries are outside loops, and DML is batched properly. Yet it's still slow.

These cases usually involve one of three issues: complex business logic that's computationally expensive, unavoidable large data volumes, or problematic interactions with other automation in the org.

For complex business logic, the debug log will show high CPU time with no obvious inefficiencies. Every operation is necessary, but there are simply too many operations. The solution might be moving some processing to asynchronous context using Queueable Apex or redesigning the logic to reduce computational overhead.
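
A minimal Queueable sketch of that hand-off, with hypothetical class and field names:

```apex
// Defers the expensive recalculation to its own asynchronous transaction,
// which runs with a fresh set of governor limits after the save completes.
public class ScoreRecalcJob implements Queueable {
    private Set<Id> contactIds;

    public ScoreRecalcJob(Set<Id> contactIds) {
        this.contactIds = contactIds;
    }

    public void execute(QueueableContext ctx) {
        // expensive processing runs here, outside the user's save transaction
    }
}
```

The trigger handler then enqueues it with System.enqueueJob(new ScoreRecalcJob(Trigger.newMap.keySet())) and returns immediately, so the user's save no longer pays for the heavy work.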

For large data volumes, you'll see high query rows or heap size consumption even with efficient queries. Sometimes the fix is adding filters to reduce the dataset, but other times you need to rethink the approach entirely—perhaps using platform events to process data in smaller chunks or using batch Apex for operations that don't need to be synchronous.

For automation conflicts, other triggers, Process Builder processes, or flows are executing as a result of your trigger's DML operations, creating cascading performance problems. The debug log will show your trigger completing quickly, then additional automation firing. Search the log for "FLOW_START_INTERVIEW" or for other trigger handler executions to identify what else is running.
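
If the log shows your own trigger firing again as a result of its DML, a common guard is a static flag. A sketch with a hypothetical handler class:

```apex
// Static variables persist for the whole transaction, so the flag survives
// across re-entrant trigger invocations and short-circuits the rerun.
public class ContactTriggerHandler {
    private static Boolean hasRun = false;

    public static void run(List<Contact> records) {
        if (hasRun) { return; }
        hasRun = true;
        // ... handler logic ...
    }
}
```

Note that this simple form also skips later 200-record chunks of a large batch, since each chunk re-invokes the trigger in the same transaction, so treat it as a diagnostic aid rather than a finished fix.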

The Tools Beyond Debug Logs

Developer Console's Log Inspector provides a visual representation of what's happening in your transaction. After capturing a debug log, open it and switch the Stack Tree panel to the Performance Tree view, which shows each executed unit with its duration, making it easier to spot expensive operations than reading raw logs. The Execution Overview panel's Timeline and Executed Units views summarize where the transaction's time went.

The Query Plan tool helps optimize specific SOQL queries that are performing poorly. In Developer Console, go to Query Editor, enter your query, and click "Query Plan". Salesforce shows you whether your query is using indexes efficiently and suggests improvements.

Event Monitoring (if you have it enabled) provides production performance metrics without needing to reproduce issues in debug logs. The URI Event Type tracks how long each page takes to load, which can help identify slow operations affecting users even if they're not hitting governor limits.

What to Do When You Can't Reproduce the Problem

Sometimes triggers are slow in production but fast in sandbox with test data. The performance problem only appears with real data volumes or specific data patterns that don't exist in your test environment.

In these cases, you need production debug logs, but enabling debug logging for all users impacts org performance. Instead, enable logging only for specific users who are experiencing the problem, or enable it temporarily during a known slow operation window.

For bulk operations like Data Loader imports, ask the user to retry with logging enabled on their account. For record saves through the UI, enable logging on that user's account and ask them to notify you when they're about to perform the slow operation.

If you can't get production logs, use execution context from error emails or monitoring tools. Even without detailed logs, knowing which trigger handler executed, how many records were in the batch, and what the limit usage was helps narrow down the problem.

Making the Fix

Once you've identified the specific bottleneck, the fix usually follows predictable patterns. SOQL in loops becomes bulk queries with maps. Over-fetched data gets additional WHERE clause filters or reduced field lists. Expensive processing moves to async context. Complex nested loops get refactored into set-based operations.
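
The nested-loop refactor looks like this in sketch form (the object names are illustrative):

```apex
// Index one collection by key first, then do constant-time lookups,
// turning an O(n * m) scan into O(n + m).
Map<Id, Opportunity> oppByAccountId = new Map<Id, Opportunity>();
for (Opportunity o : opps) {
    oppByAccountId.put(o.AccountId, o);
}
for (Account a : accounts) {
    Opportunity match = oppByAccountId.get(a.Id);
    // ... use match instead of scanning all of opps for each account ...
}
```
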

The key is testing your fix with realistic data volumes before deploying. Write a test that processes at least 200 records, since Salesforce executes triggers in chunks of up to 200. Measure governor limit consumption in assertions, capturing the Limits values inside the Test.startTest()/Test.stopTest() block, because the counters reset once the test context ends:

@isTest
static void testContactTriggerPerformance() {
    Account testAccount = new Account(Name = 'Perf Test Account');
    insert testAccount;

    List<Contact> contacts = new List<Contact>();
    for (Integer i = 0; i < 200; i++) {
        contacts.add(new Contact(
            LastName = 'Test ' + i,
            AccountId = testAccount.Id
        ));
    }

    Test.startTest();
    insert contacts;
    // Capture limit usage before stopTest() resets the test context
    Integer queriesUsed = Limits.getQueries();
    Integer cpuUsed = Limits.getCpuTime();
    Test.stopTest();

    // Verify governor limit usage stayed reasonable
    System.assert(queriesUsed < 10,
        'Expected fewer than 10 queries, got ' + queriesUsed);
    System.assert(cpuUsed < 5000,
        'Expected CPU time under 5000ms, got ' + cpuUsed);
}

These assertions fail if your optimization didn't actually improve performance, catching regressions before they reach production.

The Debugging Mindset

Debugging slow triggers is detective work. You gather evidence from logs, form hypotheses about what's wrong, test those hypotheses with targeted code changes, and measure results. It's methodical and systematic, not creative problem-solving.

This is different from writing new code where you can design for performance from the start. When debugging, you're working with constraints: existing business logic that must remain intact, data models you can't change, and automation you didn't write. Your job is identifying inefficiencies within those constraints.

The developers who are best at this work don't necessarily know the most optimization tricks. They know how to read debug logs systematically, how to isolate variables when reproducing problems, and how to measure whether their fixes actually helped. These are investigative skills, not coding skills.

The next time you encounter a slow trigger, resist the urge to immediately start refactoring code. Open a debug log first. Find the evidence. Let the data tell you what's actually wrong. Then fix that specific problem and measure the improvement. This disciplined approach solves performance issues faster than intuition ever will.
