DEV Community



Triggers are part of the SQL standard and are very useful in data-intensive applications. A trigger is attached to a database object such as a table, a view, or a foreign table, and fires in response to an event such as an INSERT, UPDATE, or DELETE. Triggers help enforce constraints and monitor data. They are classified by whether they fire before, after, or instead of an operation: BEFORE triggers, AFTER triggers, and INSTEAD OF triggers respectively. They can also be classified as row-level or statement-level triggers. A row-level trigger fires once for each row affected by the statement that fired it, while a statement-level trigger fires only once per statement, even if the statement affected no rows. INSTEAD OF triggers on views must be row-level, and triggers attached to a TRUNCATE operation fire only at the statement level. To create a trigger in PostgreSQL we first declare a stored function and then declare the trigger itself.


There is a whole range of possible ways to define a trigger in PostgreSQL, owing to the many options available. In this article we focus on only a small subset of features to get you started. A minimal trigger definition looks like this:

```sql
CREATE TRIGGER trigger_name { BEFORE | AFTER } { INSERT | UPDATE | DELETE }
    ON table_name
    FOR EACH ROW
    EXECUTE FUNCTION function_name();
```

The placeholders here are trigger_name, the name of the trigger; table_name, the name of the table the trigger is attached to; and function_name, the name of the stored function (in PostgreSQL versions before 11, write EXECUTE PROCEDURE instead of EXECUTE FUNCTION). For the basics of writing stored functions, look up my article on them.


When writing application-level code, informative error messages make debugging much easier and faster. Constraint-violation error messages from databases are not the most informative and require some digging to understand. What if we could perform data validation ourselves and generate custom error messages when our requirements are not met? That would make an application developer's life far easier. With triggers we can do exactly that: we check whether the data meets our requirements before inserting it into the database; if it doesn't, we abort the operation and raise an error, and if it does, we let the operation continue.

Suppose we have an example table called passwd that stores the users of an application. We want to ensure that a user's password is not NULL and is at least 10 characters long, and that their name is not NULL.
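The original listing isn't shown here, so the following is a sketch of what such a table could look like; the column names are assumptions for illustration. We deliberately leave off NOT NULL and CHECK constraints, since the trigger will enforce our rules instead:

```sql
-- Hypothetical passwd table; no column constraints, the trigger enforces them.
CREATE TABLE passwd (
    id       serial PRIMARY KEY,
    name     text,
    password text
);
```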

Instead of defining constraints at the database level, we attach a trigger to the table that listens for INSERT and UPDATE operations. First we define our stored function.
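A minimal sketch of such a function, assuming the passwd table has name and password columns and that we call the function check_password (both names are illustrative):

```sql
CREATE OR REPLACE FUNCTION check_password() RETURNS TRIGGER AS $$
BEGIN
    -- NEW holds the incoming row for an INSERT or UPDATE.
    IF NEW.name IS NULL THEN
        RAISE EXCEPTION 'name cannot be NULL';
    END IF;
    IF NEW.password IS NULL OR length(NEW.password) < 10 THEN
        RAISE EXCEPTION 'password must be at least 10 characters long';
    END IF;
    -- Returning NEW lets the INSERT/UPDATE proceed with this row.
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```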

The stored function returns a TRIGGER; this is required of every stored function used by a trigger. The function takes no arguments. Instead, special variables are passed to it automatically. NEW represents the incoming row for an INSERT or UPDATE operation, while OLD represents the existing row for an UPDATE or DELETE operation. TG_NAME and TG_OP hold the name of the trigger that fired and the operation that fired it: UPDATE, INSERT, or DELETE. Others are TG_WHEN, TG_TABLE_NAME, and TG_ARGV[]: the first holds BEFORE, AFTER, or INSTEAD OF depending on the trigger definition, the second holds the name of the table the trigger fired on, and the third holds the arguments passed in the trigger definition. More variables are passed to trigger functions by default; see the PostgreSQL documentation. Our function uses NEW to check whether the individual values of the new row meet our constraints, and if they don't, it raises an exception with a meaningful message. A trigger function must return either NULL or a row matching the structure of the table it fired for. In our case we return NEW, since it contains the row we're inserting or updating. It's also possible to modify individual fields of NEW and return the modified row, which is what gets written to the database. As an aside, this lets us use triggers to populate columns during an INSERT; full text search setups do this to populate a tsvector column.

Finally, we define the trigger itself.
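A sketch of the definition, assuming the validation function is named check_password as in the earlier sketch:

```sql
CREATE TRIGGER passwd_validate
    BEFORE INSERT OR UPDATE ON passwd
    FOR EACH ROW
    EXECUTE FUNCTION check_password();
```

With this in place, an INSERT such as `INSERT INTO passwd (name, password) VALUES ('alice', 'short');` is aborted with our custom error message rather than a generic constraint violation.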

The trigger fires for both INSERT and UPDATE statements, before a new row is inserted or an existing row is updated; the BEFORE keyword ensures that.


Suppose we have a banking application: how do we track changes made to the underlying data? How do we trace back changes when a rogue individual or a bug compromises our data? We certainly don't want to guess what a value was before a malicious change was made. What we need is a way to record every change: a history of the state of each row in a table. This is best done at the database level with triggers. We create a separate table to keep track of the changes made to our account table.
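The original listings aren't shown here, so here is a sketch under assumed names: an account table, and an account_audit table that records the operation, a timestamp, and a copy of the affected row:

```sql
-- Hypothetical tables for the banking example.
CREATE TABLE account (
    id      serial PRIMARY KEY,
    owner   text,
    balance numeric
);

-- One audit row per change; operation is 'INSERT', 'UPDATE' or 'DELETE'.
CREATE TABLE account_audit (
    audit_id   serial PRIMARY KEY,
    operation  text        NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT now(),
    account_id integer,
    owner      text,
    balance    numeric
);
```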

Again, we start with the stored function for our audit trigger.
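A sketch of the audit function, assuming the account and account_audit tables named above; TG_OP tells us which operation fired the trigger:

```sql
CREATE OR REPLACE FUNCTION audit_account() RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        -- NEW holds the row being inserted into account.
        INSERT INTO account_audit (operation, account_id, owner, balance)
        VALUES (TG_OP, NEW.id, NEW.owner, NEW.balance);
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        -- Record the row's new state after the update.
        INSERT INTO account_audit (operation, account_id, owner, balance)
        VALUES (TG_OP, NEW.id, NEW.owner, NEW.balance);
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        -- OLD holds the row that is about to be deleted.
        INSERT INTO account_audit (operation, account_id, owner, balance)
        VALUES (TG_OP, OLD.id, OLD.owner, OLD.balance);
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```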

Our function is similar to the previous one. It returns a TRIGGER, and it checks the special variable TG_OP to see which operation fired the trigger. If it's an INSERT, we record the new row in our audit table (NEW contains the row being inserted into the account table) and return NEW. We do the same for an UPDATE operation. For a DELETE, we record the values of OLD, which contains the row that's about to be deleted, and return OLD so the delete proceeds.

Finally, our trigger definition.
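A sketch, assuming the audit function is named audit_account as above:

```sql
CREATE TRIGGER account_audit
    AFTER INSERT OR UPDATE OR DELETE ON account
    FOR EACH ROW
    EXECUTE FUNCTION audit_account();
```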

The AFTER keyword means the operation that fired the trigger completes before our trigger runs. We could even use foreign data wrappers to send the audit rows to a remote database, to prevent loss of the audit trail if the database crashes.


Triggers are a powerful feature that can improve the efficiency of any application backed by a database; even MongoDB has added the feature. The main case where triggers become a nuisance is cascading triggers, i.e. triggers that fire other triggers, which in most cases indicates poor database design and can be difficult to maintain. That said, triggers can be a force for real good in any application.

Top comments (2)

andreasneuman

Thanks, very useful.
Which GUI tool for PostgreSQL do you use? I'm looking for a good one.
Recently I've tried dbForge Studio for PostgreSQL, its code formatting features are really helpful. But I'd like to compare several tools.

Isaac Dadzie (@idadzie)

@andreasneuman Try out DBeaver ❤️