Aymane Harmaz

Database Transactions: Concurrency Control

The number of users who can use the database engine concurrently is an important criterion for classifying database management systems. A DBMS is single-user if at most one user at a time can use the engine, and multiuser if many users can use it concurrently. Most DBMSs need to be multiuser: databases used in banks, insurance agencies, and supermarkets, for example, serve hundreds of thousands of users who operate on the database by submitting transactions to the engine concurrently.

Why Concurrency Control is needed

When concurrent transactions are executed in an uncontrolled way, many issues can arise, such as dirty reads, non-repeatable reads, phantom reads, lost updates, and dirty writes. We will go through each of these issues.

Dirty Read

Is a situation in which a transaction T1 reads an update made by a transaction T2 that has not committed yet. If T2 then fails and rolls back, T1 will have read, and worked with, a value that never existed and is incorrect.

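Real engines at read committed or above prevent this, so here is a minimal in-memory simulation of an engine with no concurrency control at all, where readers can see uncommitted writes (the Database class, the balance key, and the amounts are invented for illustration):

```python
class Database:
    def __init__(self):
        self.committed = {"balance": 100}  # durable, committed state
        self.uncommitted = {}              # in-flight writes, wrongly visible to everyone

    def write(self, key, value):
        self.uncommitted[key] = value      # write without committing

    def read(self, key):
        # No isolation: an uncommitted value, if present, shadows the committed one
        return self.uncommitted.get(key, self.committed[key])

    def rollback(self):
        self.uncommitted.clear()           # the writer aborts and undoes its change

db = Database()
db.write("balance", 500)    # T2: uncommitted update
dirty = db.read("balance")  # T1: reads 500, a value that was never committed
db.rollback()               # T2: fails and rolls back
after = db.read("balance")  # T1 worked with a value that no longer exists
print(dirty, after)         # 500 100
```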

Non-repeatable Read

Is a situation in which a transaction T1 reads a given value from a table; if another transaction T2 later updates that value and commits, and T1 reads the value again, it will see a value different from the one it got initially.

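The interleaving can be sketched with Python's built-in sqlite3 module, assuming two connections opened in autocommit mode so that T1 has no snapshot between its two reads (the accounts table and the values are invented for the example):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
t1 = sqlite3.connect(path, isolation_level=None)  # autocommit: no snapshot is held
t2 = sqlite3.connect(path, isolation_level=None)

t1.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
t1.execute("INSERT INTO accounts VALUES (1, 100)")

first = t1.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
t2.execute("UPDATE accounts SET balance = 150 WHERE id = 1")  # T2 updates and commits
second = t1.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(first, second)  # 100 150 — the same read is not repeatable
```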

Phantom Read

Is a situation in which a transaction T1 reads a set of rows from a table based on some condition in the query's WHERE clause; a transaction T2 then inserts a new row that also satisfies that condition, so if T1 runs the same query again, it will get the newly added row this time.

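The same sqlite3 autocommit setup can sketch the phantom: T1 counts the rows matching a condition, T2 commits an insert that matches it, and T1's repeated query sees an extra "phantom" row (the orders table and amounts are invented):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
t1 = sqlite3.connect(path, isolation_level=None)
t2 = sqlite3.connect(path, isolation_level=None)

t1.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER)")
t1.executemany("INSERT INTO orders (amount) VALUES (?)", [(60,), (40,)])

query = "SELECT COUNT(*) FROM orders WHERE amount > 50"
n1 = t1.execute(query).fetchone()[0]                    # T1: one row matches
t2.execute("INSERT INTO orders (amount) VALUES (80)")   # T2: inserts a matching row
n2 = t1.execute(query).fetchone()[0]                    # T1: same query, one row more
print(n1, n2)  # 1 2
```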

Lost Update

Is a situation in which two transactions T1 and T2 both read the same value and then update it based on what they read; after T1 commits, the change made by T2 is lost, as if it had never been done.

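A read-modify-write race can be reproduced with two autocommit sqlite3 connections: both read the same balance, and the second writer silently overwrites the first (table and amounts are invented for the example):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
t1 = sqlite3.connect(path, isolation_level=None)
t2 = sqlite3.connect(path, isolation_level=None)

t1.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
t1.execute("INSERT INTO accounts VALUES (1, 100)")

b1 = t1.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]  # T1 reads 100
b2 = t2.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]  # T2 reads 100
t2.execute("UPDATE accounts SET balance = ? WHERE id = 1", (b2 + 20,))  # T2 writes 120
t1.execute("UPDATE accounts SET balance = ? WHERE id = 1", (b1 + 10,))  # T1 writes 110: +20 is gone
final = t1.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(final)  # 110, not the expected 130 — T2's update was lost
```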

Dirty Write

Is a situation in which a transaction overwrites a value that another transaction has written but not yet committed; if the first transaction then rolls back, it can also wipe out the second transaction's write.

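Since practically every engine blocks dirty writes with locks, here again is an in-memory simulation of a store with no locking at all, where T1's rollback discards T2's write along with its own (the balance key and amounts are invented):

```python
# A shared store with no write locks: anyone may overwrite anyone's
# uncommitted data, and rollbacks restore whatever the aborting
# transaction happened to see.
store = {"balance": 100}

t1_before = store["balance"]  # T1 remembers the old value so it can roll back
store["balance"] = 500        # T1: uncommitted write
store["balance"] = 700        # T2: dirty write over T1's uncommitted value
store["balance"] = t1_before  # T1 aborts and restores the value it saw: 100
print(store["balance"])       # 100 — T2's write to 700 has vanished
```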

Strategies for dealing with concurrent transactions

We have two options for controlling the execution of concurrent transactions: we can either run the operations that access a specific data item sequentially across transactions, or parallelize that access. There are multiple isolation levels that can be used to implement these choices, and each of them prevents some of the concurrency issues while allowing others.
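The first option, serializing access, can be sketched in sqlite3: issuing BEGIN IMMEDIATE takes the write lock before the read, so two concurrent read-modify-write transactions cannot interleave and no update is lost (the table, column, and amounts are invented for the example):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path, isolation_level=None)
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES (1, 100)")

def add(delta):
    conn = sqlite3.connect(path, isolation_level=None, timeout=5)
    conn.execute("BEGIN IMMEDIATE")  # take the write lock up front, before reading
    b = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
    conn.execute("UPDATE accounts SET balance = ? WHERE id = 1", (b + delta,))
    conn.execute("COMMIT")           # release the lock; the next writer sees this value
    conn.close()

add(10)
add(20)
final = db.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(final)  # 130 — both updates survive
```

A concurrent transaction that tries BEGIN IMMEDIATE while the lock is held simply waits (up to the timeout), which is exactly the "sequential access" strategy described above.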

In the SQL standard, there are 4 isolation levels:

Isolation level     Dirty read      Non-repeatable read   Phantom read
READ UNCOMMITTED    possible        possible              possible
READ COMMITTED      not possible    possible              possible
REPEATABLE READ     not possible    not possible          possible
SERIALIZABLE        not possible    not possible          not possible

In Oracle, there are only 2 isolation levels:

READ COMMITTED (the default) and SERIALIZABLE.

Dirty reads are not allowed, since read committed is the lowest isolation level supported in Oracle.

In PostgreSQL, there are 4 isolation levels, but read uncommitted and read committed behave in the same way: neither allows dirty reads or dirty writes.

In MySQL, there are 4 isolation levels; read uncommitted prevents dirty writes but allows dirty reads.
