Database deadlocks are one of those production issues that don't announce themselves with a flashy error page — they hide in your MySQL slow query log, surface as mysterious 500 errors during peak traffic, and leave your team scratching their heads at 2 AM. If you're running Magento 2 at scale, deadlocks are not a question of if, but when.
This guide covers the full picture: why deadlocks happen in Magento specifically, how to detect them before they cause real damage, and concrete prevention strategies you can implement today.
## What Is a Database Deadlock?
A deadlock occurs when two or more transactions are each waiting for the other to release a lock, creating a circular dependency that can never resolve on its own. MySQL's InnoDB engine detects this situation and automatically rolls back one of the transactions — the "victim" — returning a `Deadlock found when trying to get lock; try restarting transaction` error:
```
SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock;
try restarting transaction
```
Magento will log these and, if retry logic is in place, silently retry. But under high concurrency, deadlocks stack up fast and degrade the entire checkout flow.
## Why Magento 2 Is Especially Prone to Deadlocks
Magento's architecture involves several high-concurrency write patterns that are classic deadlock recipes:
### 1. Inventory Reservation During Checkout
When multiple customers check out concurrently with overlapping cart items, Magento locks inventory rows in `inventory_reservation` and `cataloginventory_stock_item`. If two transactions lock the rows in different orders — which Magento's parallel request handling easily triggers — you get a deadlock.
### 2. Quote and Order Tables
The `quote`, `quote_item`, `sales_order`, and `sales_order_item` tables are constantly written during the checkout flow. Magento updates totals, applies rules, reserves stock, and generates orders — all in heavily nested transactions. These tables see high lock contention during flash sales or email campaigns.
### 3. EAV Attribute Updates
The EAV tables (`catalog_product_entity_*`, `customer_entity_*`) use multi-row inserts and updates. Under concurrent import or mass update jobs running alongside regular traffic, these tables frequently deadlock.
### 4. Indexer Runs During Business Hours
Running indexers (especially `catalog_product_price` or `catalogrule_rule`) while the storefront is serving traffic creates massive lock contention. Indexers can lock entire index tables while customer-facing queries try to read them.
## Detecting Deadlocks

### Check the InnoDB Status
The most direct way to see recent deadlocks:
```sql
SHOW ENGINE INNODB STATUS\G
```
Look for the `LATEST DETECTED DEADLOCK` section. It shows the exact transactions involved, which tables and rows were locked, and which transaction was rolled back.
### Enable the InnoDB Deadlock Log
For persistent logging, add this to your `my.cnf`:

```ini
[mysqld]
innodb_print_all_deadlocks = ON
```
This writes every deadlock to the MySQL error log (typically `/var/log/mysql/error.log`), giving you a historical record to analyze patterns.
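`innodb_print_all_deadlocks` is a dynamic variable, so if a restart isn't convenient you can also enable it at runtime (keep the `my.cnf` entry so the setting survives the next restart):

```sql
-- apply immediately without restarting mysqld
SET GLOBAL innodb_print_all_deadlocks = ON;
```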
### Monitor with Performance Schema
```sql
SELECT * FROM performance_schema.events_errors_summary_global_by_error
WHERE error_name = 'ER_LOCK_DEADLOCK';
```
This shows the cumulative deadlock count since MySQL started — useful for baselining and alerting.
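For lock contention happening right now — as opposed to deadlocks that already resolved — the MySQL `sys` schema ships a view that joins the waiting and blocking transactions for you:

```sql
-- which query is blocked, which query holds the lock, and for how long
SELECT waiting_query, blocking_query, wait_age
FROM sys.innodb_lock_waits;
```

This is useful during an incident: it points directly at the blocking statement instead of making you correlate transaction IDs by hand.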
### Magento Exception Log
Deadlocks that Magento doesn't retry successfully end up in `var/log/exception.log`. Filter for error code `1213`:
```shell
grep "1213" var/log/exception.log | tail -50
```
If you're seeing more than a handful per hour during peak traffic, you have a real problem.
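To see whether those deadlocks cluster around specific hours (indexer runs, cron bursts, campaign sends), you can bucket the log by timestamp. This sketch assumes Magento's default log prefix of the form `[2024-01-15T10:23:45...]`; adjust the `cut` range if your format differs:

```shell
# count deadlock exceptions per hour; cut keeps e.g. "2024-01-15T10" from the [timestamp] prefix
grep "1213" var/log/exception.log | cut -c2-14 | sort | uniq -c | sort -rn
```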
## Prevention Strategies

### 1. Reduce Transaction Scope
The longer a transaction holds locks, the higher the chance of a deadlock. Review custom code and plugins that wrap large operations in single transactions. Split them into smaller, targeted transactions where possible.
```php
// Bad: one big transaction
$this->transactionFactory->create()
    ->addObject($product)
    ->addObject($stockItem)
    ->addObject($priceRule)
    ->save();

// Better: separate, focused saves
$product->save();
$stockItem->save();
```
### 2. Consistent Lock Ordering
Deadlocks often happen because two transactions acquire the same locks in different orders. If you have custom code that locks multiple rows or tables, ensure all code paths always acquire locks in the same order (e.g., always lock by entity ID ascending).
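As a sketch of what that looks like in practice — the helper below is hypothetical, and the line that matters is the `sort()` before any locking query:

```php
<?php
// Hypothetical helper: always lock stock rows in ascending product_id order,
// so two concurrent transactions can never acquire the same rows in opposite order.
function lockStockItems(\Magento\Framework\DB\Adapter\AdapterInterface $connection, array $productIds): array
{
    sort($productIds); // the critical part: one canonical lock order for every code path

    $select = $connection->select()
        ->from('cataloginventory_stock_item')
        ->where('product_id IN (?)', $productIds)
        ->forUpdate(true);

    return $connection->fetchAll($select);
}
```

If every code path that touches these rows goes through the same ordered helper, two checkouts for products (5, 12) and (12, 5) both lock row 5 first — one simply waits instead of deadlocking.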
### 3. Use `SELECT ... FOR UPDATE` Sparingly
Magento (and many third-party modules) overuse `SELECT ... FOR UPDATE`. This pessimistic locking is often unnecessary. Consider whether optimistic locking — check-then-update with a version column — is sufficient for your use case.
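A minimal sketch of the optimistic pattern, assuming a hypothetical custom table with a `version` column:

```sql
-- Read the row with no lock, remember version = 7, compute the new state in PHP,
-- then write back only if nobody else changed the row in the meantime:
UPDATE my_custom_table
   SET payload = '...', version = version + 1
 WHERE entity_id = 42 AND version = 7;
-- 0 affected rows means a concurrent writer won: re-read and retry instead of blocking.
```

No lock is held between the read and the write, so there is nothing to deadlock on — the cost is an occasional retry under contention.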
For inventory specifically, Magento 2.3+ introduced the Inventory Reservation pattern (the `inventory_reservation` table with append-only inserts) precisely to reduce lock contention. Make sure you're on Magento 2.3+ with MSI enabled rather than the legacy CatalogInventory stock management where avoidable.
### 4. Isolate Indexer Runs
Schedule full reindexes to run during off-peak hours. Even better, switch indexers to "Update by Schedule" mode, which processes changes incrementally via cron — this spreads the write load over time instead of creating a burst:
```shell
php bin/magento indexer:set-mode schedule catalog_product_price catalogrule_rule
```
For indexers that must run in batch, use a maintenance window and pull the node out of the load balancer rotation for that period.
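For a batch reindex inside a maintenance window, the sequence might look like this — standard `bin/magento` commands, with the indexer list as an example:

```shell
php bin/magento maintenance:enable
php bin/magento indexer:reindex catalog_product_price catalogrule_rule
php bin/magento maintenance:disable
```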
### 5. Tune InnoDB Lock Wait Timeout
By default, InnoDB waits 50 seconds before giving up on a lock. That's too long for a web request. Tune it down:
```ini
[mysqld]
innodb_lock_wait_timeout = 10
```
Strictly speaking, this timeout governs ordinary lock waits — InnoDB detects true deadlocks immediately — but shortening it means blocked transactions fail fast instead of tying up PHP-FPM workers for 50 seconds, reducing the cascade effect on your web tier. Magento's retry logic will handle most of these failures gracefully.
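`innodb_lock_wait_timeout` is also dynamic, so you can trial the new value at runtime before committing it to `my.cnf`:

```sql
-- global default for new connections; existing sessions keep their old value
SET GLOBAL innodb_lock_wait_timeout = 10;
-- or per-session, to test a single problematic job first
SET SESSION innodb_lock_wait_timeout = 10;
```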
### 6. Switch to READ COMMITTED Isolation
InnoDB's `REPEATABLE READ` isolation level (Magento's default) uses gap locks, which increase deadlock risk. Switching to `READ COMMITTED` reduces gap locking:
```ini
[mysqld]
transaction_isolation = READ-COMMITTED
This is safe for most Magento workloads and is often recommended in high-traffic setups. Test thoroughly in staging first — some edge cases in custom code may rely on the stricter isolation.
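Two things worth checking before and after the switch: the active isolation level, and the binary log format — `READ COMMITTED` requires row-based binary logging, which is the default on MySQL 5.7/8.0 anyway:

```sql
SELECT @@transaction_isolation;  -- expect READ-COMMITTED after the change
SELECT @@binlog_format;          -- should be ROW for READ-COMMITTED
```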
### 7. Connection Pooling
Too many simultaneous database connections increase the probability of lock contention. Use a connection pooler like ProxySQL or tune `max_connections` alongside PHP-FPM pool sizes to prevent connection storms during traffic spikes. See our Database Connection Pooling guide for details.
### 8. Audit Third-Party Modules
Many deadlocks originate in poorly written third-party modules that use direct SQL writes, lock entire tables, or run in hooks that fire during transactions. Use the InnoDB status output to identify which tables are involved in your deadlocks, then audit which modules write to those tables.
```sql
-- Find tables with high lock wait counts
SELECT object_schema, object_name, count_read_with_shared_locks,
       count_write_allow_write, sum_timer_wait
FROM performance_schema.table_lock_waits_summary_by_table
ORDER BY sum_timer_wait DESC
LIMIT 20;
```
## Handling Deadlocks in Custom Code
If you're building custom functionality that writes to Magento's core tables, implement retry logic:
```php
$maxRetries = 3;
$attempt = 0;
while ($attempt < $maxRetries) {
    try {
        $this->doDatabaseWork();
        break;
    } catch (\Zend_Db_Statement_Exception $e) {
        // 1213 = ER_LOCK_DEADLOCK
        if (str_contains($e->getMessage(), '1213') && $attempt < $maxRetries - 1) {
            $attempt++;
            usleep(100000 * (2 ** $attempt)); // exponential backoff: 200ms, 400ms
            continue;
        }
        throw $e;
    }
}
```
Exponential backoff is important — retrying immediately often just causes another deadlock with the same competing transaction.
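Matching on the message string works, but Magento's DB adapter also throws a dedicated exception type for this case, which makes the catch more precise. A variant of the loop above, assuming the work runs through Magento's connection layer:

```php
<?php
use Magento\Framework\DB\Adapter\DeadlockException;

$maxRetries = 3;
for ($attempt = 1; $attempt <= $maxRetries; $attempt++) {
    try {
        $this->doDatabaseWork();
        break;
    } catch (DeadlockException $e) {
        if ($attempt === $maxRetries) {
            throw $e; // give up after the final attempt
        }
        usleep(100000 * (2 ** $attempt)); // exponential backoff: 400ms, 800ms, ...
    }
}
```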
## Summary
Deadlocks in Magento 2 are manageable once you know where to look. The key takeaways:
| Action | Impact |
|---|---|
| Enable `innodb_print_all_deadlocks` | Visibility into deadlock patterns |
| Schedule indexers off-peak | Reduces lock contention significantly |
| Switch indexers to schedule mode | Spreads write load over time |
| Set `transaction_isolation = READ-COMMITTED` | Reduces gap locks |
| Tune `innodb_lock_wait_timeout = 10` | Faster failure, less cascade |
| Audit third-party modules | Often the root cause |
Start with visibility — enable logging, identify your most frequent deadlock tables, then apply targeted fixes. In most cases, a combination of indexer scheduling and isolation level tuning resolves 80% of the deadlock volume. For the remainder, it's usually a specific module or custom code path that needs attention.
Don't let deadlocks silently degrade your checkout. The data is all there in MySQL — you just need to look.