Aaron Jones
Performance Degradation Due to ORM Misuse

Problem Statement: Slow page loads or timeouts occur because developers process records one at a time — calling search(), write(), or sudo() inside loops — instead of using batch ORM operations, which produces classic N+1 query patterns.

Step 1 Confirm it’s an N+1 ORM problem (measure first)
Turn on SQL + performance logs in odoo.conf (dev/test only):

log_level = debug_sql
log_sql = True

Restart Odoo and reproduce the slow screen. If you see hundreds/thousands of repeated SELECTs (same table, different ids), it’s classic N+1.

Quick profiling (optional)

Run with profiling flags:

./odoo-bin -d your_db --dev=all --log-level=debug_sql
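Once you can reproduce the slowdown, it helps to pin the query budget in a test so the N+1 cannot silently return. A rough sketch using Odoo's query-count assertion — the model, the order_count field, and the budget of 3 are illustrative assumptions, not a fixed recipe:

```python
# Sketch: fail CI if a code path issues more queries than allowed.
# order_count and the budget of 3 are hypothetical placeholders.
from odoo.tests.common import TransactionCase


class TestPartnerStats(TransactionCase):
    def test_order_count_query_budget(self):
        partners = self.env['res.partner'].search([], limit=50)
        # An N+1 regression here would blow past the query budget.
        with self.assertQueryCount(3):
            partners.mapped('order_count')
```

Run it like any Odoo test module; if a later change reintroduces a per-record search, the assertion fails immediately instead of surfacing as a slow page in production.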

Step 2 Identify the common ORM mistakes causing slowdowns
Most common slow patterns

Looping records and doing search() inside the loop

Calling write() inside the loop

Calling sudo() repeatedly inside the loop

Reading many fields when you need only a few

Doing heavy compute inside @api.onchange

Not using mapped(), filtered(), read_group()

Step 3 Fix N+1 searches: one search instead of many
Bad (search inside loop)

for partner in partners:
    orders = self.env['sale.order'].search([('partner_id', '=', partner.id)])
    partner.order_count = len(orders)


This runs 1 query per partner.

Good (group query once using read_group)

data = self.env['sale.order'].read_group(
    domain=[('partner_id', 'in', partners.ids)],
    fields=['partner_id'],
    groupby=['partner_id']
)

count_map = {d['partner_id'][0]: d['partner_id_count'] for d in data}

for partner in partners:
    partner.order_count = count_map.get(partner.id, 0)

Result: one grouped query instead of N queries.

Step 4 Fix record-by-record write() calls: batch write
Bad (write per record)

for rec in records:
    rec.write({'state': 'done'})

Good (single write for all records)

records.write({'state': 'done'})

One query instead of N.

Step 5 Avoid repeated sudo() in loops
Bad

for rec in records:
    rec.sudo().write({'x_flag': True})

Good

records.sudo().write({'x_flag': True})

One elevated environment for the whole batch instead of one per record.

Step 6 Fetch only required fields (use search_read or read)

If you don’t need full recordsets, avoid loading everything.

Bad

partners = self.env['res.partner'].search([('customer_rank', '>', 0)])
for p in partners:
    print(p.name, p.email, p.phone)

Browsing full records triggers Odoo's prefetching, which can load many stored fields for the whole batch even though you only use three of them.

Good

rows = self.env['res.partner'].search_read(
    domain=[('customer_rank', '>', 0)],
    fields=['name', 'email', 'phone'],
    limit=500
)
for r in rows:
    print(r['name'], r['email'], r['phone'])


Step 7 Replace Python loops with ORM helpers (mapped, filtered, sorted)
Bad (manual loops)

emails = []
for p in partners:
    if p.email:
        emails.append(p.email)

Good

emails = partners.filtered('email').mapped('email')

Step 8 Optimize computed fields (store + depends + batch compute)

Computed fields can be the biggest hidden performance killer.

Bad (non-stored compute recomputed frequently)

order_total = fields.Float(compute="_compute_order_total")

def _compute_order_total(self):
    for rec in self:
        rec.order_total = sum(rec.order_line.mapped('price_total'))


This can be recomputed constantly when views load.

Good (store it and compute in batch)

order_total = fields.Float(compute="_compute_order_total", store=True)

@api.depends('order_line.price_total')
def _compute_order_total(self):
    for rec in self:
        rec.order_total = sum(rec.order_line.mapped('price_total'))


Stored computed fields reduce repeated recalculation on every page load.

Step 9 Avoid heavy logic in @api.onchange (use constraints or compute)
Bad

@api.onchange('partner_id')
def _onchange_partner(self):
    # heavy search on every field change
    self.warning = self.env['sale.order'].search_count([('partner_id', '=', self.partner_id.id)])


Better

Move heavy logic to:

a button action

a stored compute

or a smart button backed by a batch-computed field
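The smart-button option can be sketched as follows — a minimal, illustrative module snippet, assuming your module adds a custom order_count field (names and the inherited model are assumptions):

```python
# Sketch: a smart-button counter computed in batch. The order_count
# field is a hypothetical addition by your own module.
from odoo import fields, models


class ResPartner(models.Model):
    _inherit = 'res.partner'

    order_count = fields.Integer(compute='_compute_order_count')

    def _compute_order_count(self):
        # One grouped query for the whole recordset instead of one
        # search_count() per partner.
        data = self.env['sale.order'].read_group(
            domain=[('partner_id', 'in', self.ids)],
            fields=['partner_id'],
            groupby=['partner_id'],
        )
        counts = {d['partner_id'][0]: d['partner_id_count'] for d in data}
        for partner in self:
            partner.order_count = counts.get(partner.id, 0)
```

Because the compute runs once per recordset, opening a list of 1,000 partners costs one grouped query, not 1,000 onchange-style searches.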

Step 10 A real “before vs after” optimization (classic N+1 fix)
Scenario:

You are computing a count + total for each partner, but it’s slow.

BEFORE (N+1)

for partner in partners:
    orders = self.env['sale.order'].search([('partner_id', '=', partner.id)])
    partner.order_count = len(orders)
    partner.order_amount_total = sum(orders.mapped('amount_total'))

AFTER (one grouped query instead of N)

order_data = self.env['sale.order'].read_group(
    domain=[('partner_id', 'in', partners.ids)],
    fields=['partner_id', 'amount_total:sum'],
    groupby=['partner_id']
)

stats = {
    d['partner_id'][0]: {
        'count': d['partner_id_count'],
        'sum': d['amount_total']
    }
    for d in order_data
}

for partner in partners:
    s = stats.get(partner.id, {'count': 0, 'sum': 0.0})
    partner.order_count = s['count']
    partner.order_amount_total = s['sum']

Step 11 General “Performance Rules” for Odoo ORM

Use this checklist as your permanent rule set:

Never search() inside a loop → use read_group, mapped, or one search with in domain

Never write() inside a loop → batch recordset.write()

Avoid repeated sudo() → apply once on the recordset

Prefer search_read() for lists where you only need few fields

Use stored computed fields for values shown often in views

Avoid heavy code in onchange (runs many times)

Use SQL only when absolutely required (and still carefully)
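The first rule — one search with an in domain, then group in memory — can be sketched framework-free in plain Python; the dict-shaped rows stand in for search_read() results (field names are illustrative):

```python
from collections import defaultdict


def group_by_partner(order_rows):
    """Bucket pre-fetched order rows by partner id in one pass,
    instead of issuing one query per partner."""
    buckets = defaultdict(list)
    for row in order_rows:
        buckets[row['partner_id']].append(row)
    return dict(buckets)


# Rows as a single search_read() over all partners would return them.
rows = [
    {'id': 1, 'partner_id': 7, 'amount_total': 100.0},
    {'id': 2, 'partner_id': 7, 'amount_total': 50.0},
    {'id': 3, 'partner_id': 9, 'amount_total': 25.0},
]

# Per-partner stats computed from the in-memory buckets: no extra queries.
stats = {pid: len(orders) for pid, orders in group_by_partner(rows).items()}
```

The database is touched once; everything after that is cheap dictionary work, which is exactly the shape every fix in Steps 3–10 follows.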

Conclusion

Odoo performance issues from ORM misuse are usually caused by N+1 query patterns running searches, writes, or sudo operations inside record loops. These patterns quickly explode in production with large datasets, leading to slow pages and timeouts. The reliable fix is to rewrite logic using batch ORM operations, read_group() for grouped aggregations, mapped() for efficient field collection, search_read() for lightweight reads, and stored computed fields for frequently displayed values. Once these changes are applied and verified with SQL debug logs, you’ll see dramatic improvements in page load time and system responsiveness.
