The Rule System

Written by Jan Wieck. Updates for 7.1 by Tom Lane.

Production rule systems are conceptually simple, but there are many subtle points involved in actually using them. Some of these points and the theoretical foundations of the Postgres rule system are covered in the literature.

Some other database systems define active database rules. These are usually stored procedures and triggers and are implemented in Postgres as functions and triggers. The query rewrite rule system (the rule system from now on) is totally different from stored procedures and triggers. It modifies queries to take rules into consideration, and then passes the modified query to the query planner for planning and execution. It is very powerful, and can be used for many things such as query language procedures, views, and versions.

What is a Query Tree?

To understand how the rule system works it is necessary to know when it is invoked and what its input and results are. The rule system is located between the query parser and the planner. It takes the output of the parser (one query tree) and the rewrite rules from the pg_rewrite catalog (which are query trees too, with some extra information) and creates zero or more query trees as result. So its input and output are always things the parser itself could have produced, and thus anything it sees is basically representable as an SQL statement.

Now what is a query tree? It is an internal representation of an SQL statement in which the parts that make it up are stored separately. These query trees can be seen when starting the Postgres backend with debug level 4 and typing queries into the interactive backend interface. The rule actions in the pg_rewrite system catalog are also stored as query trees. They are not formatted like the debug output, but they contain exactly the same information.

Reading a query tree requires some experience, and it took me a while when I started working on the rule system. I can remember standing at the coffee machine and seeing the cup in a target list, water and coffee powder in a range table, and all the buttons in a qualification expression. Since SQL representations of query trees are sufficient to understand the rule system, this document will not teach how to read them. But learning the naming conventions will help, because they are used in the descriptions that follow.

The Parts of a Query Tree

When reading the SQL representations of the query trees in this document it is necessary to be able to identify the parts the statement is broken into when it is in the query tree structure. The parts of a query tree are:

the command type
    This is a simple value telling which command (SELECT, INSERT, UPDATE, DELETE) produced the parse tree.

the range table
    The range table is a list of relations that are used in the query. In a SELECT statement these are the relations given after the FROM keyword. Every range table entry identifies a table or view and tells by which name it is referred to in the other parts of the query. In the query tree the range table entries are referenced by index rather than by name, so here it doesn't matter if there are duplicate names, as it would in an SQL statement. This can happen after the range tables of rules have been merged in. The examples in this document will not have this situation.

the result relation
    This is an index into the range table that identifies the relation where the results of the query go.
    SELECT queries normally don't have a result relation. The special case of a SELECT INTO is mostly identical to a CREATE TABLE followed by an INSERT ... SELECT and is not discussed separately here. In INSERT, UPDATE and DELETE queries the result relation is the table (or view!) where the changes take effect.

the target list
    The target list is a list of expressions that define the result of the query. In the case of a SELECT, the expressions are what build the final output of the query. They are the expressions between the SELECT and the FROM keywords. (* is just an abbreviation for all the attribute names of a relation. It is expanded by the parser into the individual attributes, so the rule system never sees it.)

    DELETE queries don't need a target list because they don't produce any result. In fact the planner will add a special CTID entry to the empty target list, but this happens after the rule system and will be discussed later; for the rule system the target list is empty.

    In INSERT queries the target list describes the new rows that should go into the result relation. It consists of the expressions in the VALUES clause or the ones from the SELECT clause in INSERT ... SELECT. Missing columns of the result relation will be filled in by the planner with a constant NULL expression.

    In UPDATE queries, the target list describes the new rows that should replace the old ones. In the rule system, it contains just the expressions from the SET attribute = expression part of the query. The planner will add missing columns by inserting expressions that copy the values from the old row into the new one. And it will add the special CTID entry just as for DELETE.

    Every entry in the target list contains an expression that can be a constant value, a variable pointing to an attribute of one of the relations in the range table, a parameter, or an expression tree made of function calls, constants, variables, operators etc.

the qualification
    The query's qualification is an expression much like one of those contained in the target list entries. The result value of this expression is a Boolean that tells whether the operation (INSERT, UPDATE, DELETE or SELECT) for the final result row should be executed or not. It is the WHERE clause of an SQL statement.

the join tree
    The query's join tree shows the structure of the FROM clause. For a simple query like SELECT * FROM a, b, c the join tree is just a list of the FROM items, because we are allowed to join them in any order. But when JOIN expressions --- particularly outer joins --- are used, we have to join in the order shown by the joins. The join tree shows the structure of the JOIN expressions. The restrictions associated with particular JOIN clauses (from ON or USING expressions) are stored as qualification expressions attached to those join tree nodes. It turns out to be convenient to store the top-level WHERE expression as a qualification attached to the top-level join tree item, too. So really the join tree represents both the FROM and WHERE clauses of a SELECT.

the others
    The other parts of the query tree, like the ORDER BY clause, aren't of interest here. The rule system substitutes entries there while applying rules, but that doesn't have much to do with the fundamentals of the rule system.

Views and the Rule System

Implementation of Views in Postgres

Views in Postgres are implemented using the rule system.
In fact there is absolutely no difference between

CREATE VIEW myview AS SELECT * FROM mytab;

and the two commands

CREATE TABLE myview (same attribute list as for mytab);
CREATE RULE "_RETmyview" AS ON SELECT TO myview DO INSTEAD
    SELECT * FROM mytab;

because this is exactly what the CREATE VIEW command does internally. This has some side effects. One of them is that the information about a view in the Postgres system catalogs is exactly the same as it is for a table. So for the query parser, there is absolutely no difference between a table and a view. They are the same thing: relations. That is the important thing for now.

How SELECT Rules Work

Rules ON SELECT are applied to all queries as the last step, even if the command given is an INSERT, UPDATE or DELETE. And they have different semantics from the others in that they modify the parse tree in place instead of creating a new one. So SELECT rules are described first.

Currently, there can be only one action in an ON SELECT rule, and it must be an unconditional SELECT action that is INSTEAD. This restriction was required to make rules safe enough to open them up to ordinary users, and it restricts rules ON SELECT to act as real view rules.

The examples for this document are two join views that do some calculations, and some more views using them in turn. One of the two first views is customized later by adding rules for INSERT, UPDATE and DELETE operations, so that the final result will be a view that behaves like a real table with some magic functionality. This is not the simplest possible example to start from, and it makes things harder to get into. But it's better to have one example that covers all the points discussed step by step than to have many different ones that might get mixed up in the reader's mind.

The database needed to play with the examples is named al_bundy. You'll see soon why this is the database name. It also needs the procedural language PL/pgSQL installed, because we need a little min() function returning the lower of two integer values. We create that as

CREATE FUNCTION min(integer, integer) RETURNS integer AS
    'BEGIN
        IF $1 < $2 THEN
            RETURN $1;
        END IF;
        RETURN $2;
    END;' LANGUAGE 'plpgsql';

The real tables we need in the first two rule system descriptions are these:

CREATE TABLE shoe_data (
    shoename   char(10),      -- primary key
    sh_avail   integer,       -- available # of pairs
    slcolor    char(10),      -- preferred shoelace color
    slminlen   float,         -- minimum shoelace length
    slmaxlen   float,         -- maximum shoelace length
    slunit     char(8)        -- length unit
);

CREATE TABLE shoelace_data (
    sl_name    char(10),      -- primary key
    sl_avail   integer,       -- available # of pairs
    sl_color   char(10),      -- shoelace color
    sl_len     float,         -- shoelace length
    sl_unit    char(8)        -- length unit
);

CREATE TABLE unit (
    un_name    char(8),       -- the primary key
    un_fact    float          -- factor to transform to cm
);

I think most of us wear shoes and can realize that this is really useful data. Well, there are shoes out in the world that don't require shoelaces, but this doesn't make Al's life easier, and so we ignore it.
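As a quick, optional sanity check (not part of the original example data), the helper function can be exercised directly once PL/pgSQL is installed:

SELECT min(7, 4);    -- expected result: 4, the lower of the two values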
The views are created as CREATE VIEW shoe AS SELECT sh.shoename, sh.sh_avail, sh.slcolor, sh.slminlen, sh.slminlen * un.un_fact AS slminlen_cm, sh.slmaxlen, sh.slmaxlen * un.un_fact AS slmaxlen_cm, sh.slunit FROM shoe_data sh, unit un WHERE sh.slunit = un.un_name; CREATE VIEW shoelace AS SELECT s.sl_name, s.sl_avail, s.sl_color, s.sl_len, s.sl_unit, s.sl_len * u.un_fact AS sl_len_cm FROM shoelace_data s, unit u WHERE s.sl_unit = u.un_name; CREATE VIEW shoe_ready AS SELECT rsh.shoename, rsh.sh_avail, rsl.sl_name, rsl.sl_avail, min(rsh.sh_avail, rsl.sl_avail) AS total_avail FROM shoe rsh, shoelace rsl WHERE rsl.sl_color = rsh.slcolor AND rsl.sl_len_cm >= rsh.slminlen_cm AND rsl.sl_len_cm <= rsh.slmaxlen_cm; The CREATE VIEW command for the shoelace view (which is the simplest one we have) will create a relation shoelace and an entry in pg_rewrite that tells that there is a rewrite rule that must be applied whenever the relation shoelace is referenced in a query's range table. The rule has no rule qualification (discussed later, with the non SELECT rules, since SELECT rules currently cannot have them) and it is INSTEAD. Note that rule qualifications are not the same as query qualifications! The rule's action has a query qualification. The rule's action is one query tree that is a copy of the SELECT statement in the view creation command. Note The two extra range table entries for NEW and OLD (named *NEW* and *CURRENT* for historical reasons in the printed query tree) you can see in the pg_rewrite entry aren't of interest for SELECT rules. Now we populate unit, shoe_data and shoelace_data and Al types the first SELECT in his life: al_bundy=> INSERT INTO unit VALUES ('cm', 1.0); al_bundy=> INSERT INTO unit VALUES ('m', 100.0); al_bundy=> INSERT INTO unit VALUES ('inch', 2.54); al_bundy=> al_bundy=> INSERT INTO shoe_data VALUES al_bundy-> ('sh1', 2, 'black', 70.0, 90.0, 'cm'); al_bundy=> INSERT INTO shoe_data VALUES al_bundy-> ('sh2', 0, 'black', 30.0, 40.0, 'inch'); al_bundy=> INSERT INTO shoe_data VALUES al_bundy-> ('sh3', 4, 'brown', 50.0, 65.0, 'cm'); al_bundy=> INSERT INTO shoe_data VALUES al_bundy-> ('sh4', 3, 'brown', 40.0, 50.0, 'inch'); al_bundy=> al_bundy=> INSERT INTO shoelace_data VALUES al_bundy-> ('sl1', 5, 'black', 80.0, 'cm'); al_bundy=> INSERT INTO shoelace_data VALUES al_bundy-> ('sl2', 6, 'black', 100.0, 'cm'); al_bundy=> INSERT INTO shoelace_data VALUES al_bundy-> ('sl3', 0, 'black', 35.0 , 'inch'); al_bundy=> INSERT INTO shoelace_data VALUES al_bundy-> ('sl4', 8, 'black', 40.0 , 'inch'); al_bundy=> INSERT INTO shoelace_data VALUES al_bundy-> ('sl5', 4, 'brown', 1.0 , 'm'); al_bundy=> INSERT INTO shoelace_data VALUES al_bundy-> ('sl6', 0, 'brown', 0.9 , 'm'); al_bundy=> INSERT INTO shoelace_data VALUES al_bundy-> ('sl7', 7, 'brown', 60 , 'cm'); al_bundy=> INSERT INTO shoelace_data VALUES al_bundy-> ('sl8', 1, 'brown', 40 , 'inch'); al_bundy=> al_bundy=> SELECT * FROM shoelace; sl_name |sl_avail|sl_color |sl_len|sl_unit |sl_len_cm ----------+--------+----------+------+--------+--------- sl1 | 5|black | 80|cm | 80 sl2 | 6|black | 100|cm | 100 sl7 | 7|brown | 60|cm | 60 sl3 | 0|black | 35|inch | 88.9 sl4 | 8|black | 40|inch | 101.6 sl8 | 1|brown | 40|inch | 101.6 sl5 | 4|brown | 1|m | 100 sl6 | 0|brown | 0.9|m | 90 (8 rows) It's the simplest SELECT Al can do on our views, so we take this to explain the basics of view rules. 
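Before walking through the rewrite, it can be instructive to look at what CREATE VIEW actually stored. As a hedged sketch (the exact column set of the pg_rules system view varies between Postgres versions), something like the following shows the reconstructed rule definition for the view:

SELECT rulename, definition
  FROM pg_rules
 WHERE tablename = 'shoelace';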
The SELECT * FROM shoelace was interpreted by the parser and produced the parsetree SELECT shoelace.sl_name, shoelace.sl_avail, shoelace.sl_color, shoelace.sl_len, shoelace.sl_unit, shoelace.sl_len_cm FROM shoelace shoelace; and this is given to the rule system. The rule system walks through the range table and checks if there are rules in pg_rewrite for any relation. When processing the range table entry for shoelace (the only one up to now) it finds the rule _RETshoelace with the parse tree SELECT s.sl_name, s.sl_avail, s.sl_color, s.sl_len, s.sl_unit, float8mul(s.sl_len, u.un_fact) AS sl_len_cm FROM shoelace *OLD*, shoelace *NEW*, shoelace_data s, unit u WHERE bpchareq(s.sl_unit, u.un_name); Note that the parser changed the calculation and qualification into calls to the appropriate functions. But in fact this changes nothing. To expand the view, the rewriter simply creates a subselect rangetable entry containing the rule's action parsetree, and substitutes this range table entry for the original one that referenced the view. The resulting rewritten parse tree is almost the same as if Al had typed SELECT shoelace.sl_name, shoelace.sl_avail, shoelace.sl_color, shoelace.sl_len, shoelace.sl_unit, shoelace.sl_len_cm FROM (SELECT s.sl_name, s.sl_avail, s.sl_color, s.sl_len, s.sl_unit, s.sl_len * u.un_fact AS sl_len_cm FROM shoelace_data s, unit u WHERE s.sl_unit = u.un_name) shoelace; There is one difference however: the sub-query's range table has two extra entries shoelace *OLD*, shoelace *NEW*. These entries don't participate directly in the query, since they aren't referenced by the sub-query's join tree or target list. The rewriter uses them to store the access permission check info that was originally present in the rangetable entry that referenced the view. In this way, the executor will still check that the user has proper permissions to access the view, even though there's no direct use of the view in the rewritten query. That was the first rule applied. The rule system will continue checking the remaining rangetable entries in the top query (in this example there are no more), and it will recursively check the rangetable entries in the added sub-query to see if any of them reference views. (But it won't expand *OLD* or *NEW* --- otherwise we'd have infinite recursion!) In this example, there are no rewrite rules for shoelace_data or unit, so rewriting is complete and the above is the final result given to the planner. Now we face Al with the problem that the Blues Brothers appear in his shop and want to buy some new shoes, and as the Blues Brothers are, they want to wear the same shoes. And they want to wear them immediately, so they need shoelaces too. Al needs to know for which shoes currently in the store he has the matching shoelaces (color and size) and where the total number of exactly matching pairs is greater or equal to two. We teach him what to do and he asks his database: al_bundy=> SELECT * FROM shoe_ready WHERE total_avail >= 2; shoename |sh_avail|sl_name |sl_avail|total_avail ----------+--------+----------+--------+----------- sh1 | 2|sl1 | 5| 2 sh3 | 4|sl7 | 7| 4 (2 rows) Al is a shoe guru and so he knows that only shoes of type sh1 would fit (shoelace sl7 is brown and shoes that need brown shoelaces aren't shoes the Blues Brothers would ever wear). 
The output of the parser this time is the parse tree SELECT shoe_ready.shoename, shoe_ready.sh_avail, shoe_ready.sl_name, shoe_ready.sl_avail, shoe_ready.total_avail FROM shoe_ready shoe_ready WHERE int4ge(shoe_ready.total_avail, 2); The first rule applied will be the one for the shoe_ready view and it results in the parse tree SELECT shoe_ready.shoename, shoe_ready.sh_avail, shoe_ready.sl_name, shoe_ready.sl_avail, shoe_ready.total_avail FROM (SELECT rsh.shoename, rsh.sh_avail, rsl.sl_name, rsl.sl_avail, min(rsh.sh_avail, rsl.sl_avail) AS total_avail FROM shoe rsh, shoelace rsl WHERE rsl.sl_color = rsh.slcolor AND rsl.sl_len_cm >= rsh.slminlen_cm AND rsl.sl_len_cm <= rsh.slmaxlen_cm) shoe_ready WHERE int4ge(shoe_ready.total_avail, 2); Similarly, the rules for shoe and shoelace are substituted into the range table of the sub-query, leading to a three-level final query tree: SELECT shoe_ready.shoename, shoe_ready.sh_avail, shoe_ready.sl_name, shoe_ready.sl_avail, shoe_ready.total_avail FROM (SELECT rsh.shoename, rsh.sh_avail, rsl.sl_name, rsl.sl_avail, min(rsh.sh_avail, rsl.sl_avail) AS total_avail FROM (SELECT sh.shoename, sh.sh_avail, sh.slcolor, sh.slminlen, sh.slminlen * un.un_fact AS slminlen_cm, sh.slmaxlen, sh.slmaxlen * un.un_fact AS slmaxlen_cm, sh.slunit FROM shoe_data sh, unit un WHERE sh.slunit = un.un_name) rsh, (SELECT s.sl_name, s.sl_avail, s.sl_color, s.sl_len, s.sl_unit, s.sl_len * u.un_fact AS sl_len_cm FROM shoelace_data s, unit u WHERE s.sl_unit = u.un_name) rsl WHERE rsl.sl_color = rsh.slcolor AND rsl.sl_len_cm >= rsh.slminlen_cm AND rsl.sl_len_cm <= rsh.slmaxlen_cm) shoe_ready WHERE int4ge(shoe_ready.total_avail, 2); It turns out that the planner will collapse this tree into a two-level query tree: the bottommost selects will be pulled up into the middle select since there's no need to process them separately. But the middle select will remain separate from the top, because it contains aggregate functions. If we pulled those up it would change the behavior of the topmost select, which we don't want. However, collapsing the query tree is an optimization that the rewrite system doesn't have to concern itself with. Note There is currently no recursion stopping mechanism for view rules in the rule system (only for the other kinds of rules). This doesn't hurt much, because the only way to push this into an endless loop (blowing up the backend until it reaches the memory limit) is to create tables and then setup the view rules by hand with CREATE RULE in such a way, that one selects from the other that selects from the one. This could never happen if CREATE VIEW is used because for the first CREATE VIEW, the second relation does not exist and thus the first view cannot select from the second. View Rules in Non-SELECT Statements Two details of the parse tree aren't touched in the description of view rules above. These are the command type and the result relation. In fact, view rules don't need this information. There are only a few differences between a parse tree for a SELECT and one for any other command. Obviously they have another command type and this time the result relation points to the range table entry where the result should go. Everything else is absolutely the same. So having two tables t1 and t2 with attributes a and b, the parse trees for the two statements SELECT t2.b FROM t1, t2 WHERE t1.a = t2.a; UPDATE t1 SET b = t2.b WHERE t1.a = t2.a; are nearly identical. The range tables contain entries for the tables t1 and t2. 
The target lists contain one variable that points to attribute b of the range table entry for table t2. The qualification expressions compare the attributes a of both range table entries for equality. The join trees show a simple join between t1 and t2.

The consequence is that both parse trees result in similar execution plans: they are both joins over the two tables. For the UPDATE the missing columns from t1 are added to the target list by the planner, and the final parse tree will read as

UPDATE t1 SET a = t1.a, b = t2.b WHERE t1.a = t2.a;

and thus the executor running over the join will produce exactly the same result set as

SELECT t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a;

would. But there is a little problem in UPDATE: the executor does not care what the results from the join it is doing are meant for. It just produces a result set of rows. The fact that one is a SELECT command and the other is an UPDATE is handled in the caller of the executor. The caller still knows (by looking at the parse tree) that this is an UPDATE, and it knows that this result should go into table t1. But which of the rows that are there has to be replaced by the new row?

To resolve this problem, another entry is added to the target list in UPDATE (and also in DELETE) statements: the current tuple ID (ctid). This is a system attribute containing the file block number and position in the block for the row. Knowing the table, the ctid can be used to retrieve the original t1 row to be updated. After adding the ctid to the target list, the query actually looks like

SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a;

Now another detail of Postgres enters the stage. Table rows aren't overwritten, and this is why ABORT TRANSACTION is fast. In an UPDATE, the new result row is inserted into the table (after stripping the ctid) and, in the tuple header of the row that the ctid pointed to, the cmax and xmax entries are set to the current command counter and current transaction ID. Thus the old row is hidden, and after the transaction has committed the vacuum cleaner can really move it out.

Knowing all that, we can simply apply view rules in absolutely the same way to any command. There is no difference.

The Power of Views in Postgres

The above demonstrates how the rule system incorporates view definitions into the original parse tree. In the second example a simple SELECT from one view created a final parse tree that is a join of 4 tables (unit is used twice with different names).

Benefits

The benefit of implementing views with the rule system is that the planner has all the information about which tables have to be scanned, plus the relationships between these tables, plus the restrictive qualifications from the views, plus the qualifications from the original query, in one single parse tree. And this is still the situation when the original query is already a join over views. Now the planner has to decide which is the best path to execute the query. The more information the planner has, the better this decision can be. And the rule system as implemented in Postgres ensures that this is all the information available about the query up to now.

What about updating a view?

What happens if a view is named as the target relation for an INSERT, UPDATE, or DELETE? After doing the substitutions described above, we will have a query tree in which the result relation points at a subquery range table entry. This will not work, so the rewriter throws an error if it sees it has produced such a thing.
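As a concrete illustration (hypothetical; the exact error text depends on the Postgres version), with only its SELECT rule in place a view such as shoe_ready will reject any attempt like:

DELETE FROM shoe_ready WHERE total_avail = 0;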
To change this we can define rules that modify the behavior of non-SELECT queries. This is the topic of the next section. Rules on INSERT, UPDATE and DELETE Differences from View Rules Rules that are defined ON INSERT, UPDATE and DELETE are totally different from the view rules described in the previous section. First, their CREATE RULE command allows more: They can have no action. They can have multiple actions. The keyword INSTEAD is optional. The pseudo relations NEW and OLD become useful. They can have rule qualifications. Second, they don't modify the parse tree in place. Instead they create zero or many new parse trees and can throw away the original one. How These Rules Work Keep the syntax CREATE RULE rule_name AS ON event TO object [WHERE rule_qualification] DO [INSTEAD] [action | (actions) | NOTHING]; in mind. In the following, update rules means rules that are defined ON INSERT, UPDATE or DELETE. Update rules get applied by the rule system when the result relation and the command type of a parse tree are equal to the object and event given in the CREATE RULE command. For update rules, the rule system creates a list of parse trees. Initially the parse tree list is empty. There can be zero (NOTHING keyword), one or multiple actions. To simplify, we look at a rule with one action. This rule can have a qualification or not and it can be INSTEAD or not. What is a rule qualification? It is a restriction that tells when the actions of the rule should be done and when not. This qualification can only reference the NEW and/or OLD pseudo relations which are basically the relation given as object (but with a special meaning). So we have four cases that produce the following parse trees for a one-action rule. No qualification and not INSTEAD: The parse tree from the rule action where the original parse tree's qualification has been added. No qualification but INSTEAD: The parse tree from the rule action where the original parse tree's qualification has been added. Qualification given and not INSTEAD: The parse tree from the rule action where the rule qualification and the original parse tree's qualification have been added. Qualification given and INSTEAD: The parse tree from the rule action where the rule qualification and the original parse tree's qualification have been added. The original parse tree where the negated rule qualification has been added. Finally, if the rule is not INSTEAD, the unchanged original parse tree is added to the list. Since only qualified INSTEAD rules already add the original parse tree, we end up with either one or two output parse trees for a rule with one action. For ON INSERT rules, the original query (if not suppressed by INSTEAD) is done before any actions added by rules. This allows the actions to see the inserted row(s). But for ON UPDATE and ON DELETE rules, the original query is done after the actions added by rules. This ensures that the actions can see the to-be-updated or to-be-deleted rows; otherwise, the actions might do nothing because they find no rows matching their qualifications. The parse trees generated from rule actions are thrown into the rewrite system again and maybe more rules get applied resulting in more or less parse trees. So the parse trees in the rule actions must have either another command type or another result relation. Otherwise this recursive process will end up in a loop. There is a compiled in recursion limit of currently 10 iterations. 
If after 10 iterations there are still update rules to apply, the rule system assumes a loop over multiple rule definitions and reports an error.

The parse trees found in the actions of the pg_rewrite system catalog are only templates. Since they can reference the range table entries for NEW and OLD, some substitutions have to be made before they can be used. For any reference to NEW, the target list of the original query is searched for a corresponding entry. If found, that entry's expression replaces the reference. Otherwise NEW means the same as OLD (for an UPDATE) or is replaced by NULL (for an INSERT). Any reference to OLD is replaced by a reference to the range table entry which is the result relation.

After we are done applying update rules, we apply view rules to the produced parse tree(s). Views cannot insert new update actions, so there is no need to apply update rules to the output of view rewriting.

A First Rule Step by Step

We want to trace changes to the sl_avail column in the shoelace_data relation. So we set up a log table and a rule that conditionally writes a log entry when an UPDATE is performed on shoelace_data.

CREATE TABLE shoelace_log (
    sl_name    char(10),      -- shoelace changed
    sl_avail   integer,       -- new available value
    log_who    text,          -- who did it
    log_when   timestamp      -- when
);

CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data
    WHERE NEW.sl_avail != OLD.sl_avail
    DO INSERT INTO shoelace_log VALUES (
        NEW.sl_name,
        NEW.sl_avail,
        current_user,
        current_timestamp
    );

Now Al does

al_bundy=> UPDATE shoelace_data SET sl_avail = 6
al_bundy->     WHERE sl_name = 'sl7';

and we look at the log table.

al_bundy=> SELECT * FROM shoelace_log;
sl_name   |sl_avail|log_who|log_when
----------+--------+-------+--------------------------------
sl7       |       6|Al     |Tue Oct 20 16:14:45 1998 MET DST
(1 row)

That's what we expected. What happened in the background is the following. The parser created the parse tree (keep in mind that for update rules the rule action is the base of operations, and parts of the original parse tree get merged into it)

UPDATE shoelace_data SET sl_avail = 6
  FROM shoelace_data shoelace_data
 WHERE bpchareq(shoelace_data.sl_name, 'sl7');

There is a rule log_shoelace that is ON UPDATE with the rule qualification expression

int4ne(NEW.sl_avail, OLD.sl_avail)

and one action

INSERT INTO shoelace_log VALUES (
       *NEW*.sl_name, *NEW*.sl_avail,
       current_user, current_timestamp)
  FROM shoelace_data *NEW*, shoelace_data *OLD*;

This is a little strange-looking, since you can't normally write INSERT ... VALUES ... FROM. The FROM clause here is just to indicate that there are range table entries in the parse tree for *NEW* and *OLD*. These are needed so that they can be referenced by variables in the INSERT command's query tree.

The rule is a qualified non-INSTEAD rule, so the rule system has to return two parse trees: the modified rule action and the original parse tree. In the first step the range table of the original query is incorporated into the rule's action parse tree. This results in

INSERT INTO shoelace_log VALUES (
       *NEW*.sl_name, *NEW*.sl_avail,
       current_user, current_timestamp)
  FROM shoelace_data *NEW*, shoelace_data *OLD*,
       shoelace_data shoelace_data;

In step 2 the rule qualification is added to it, so the result set is restricted to rows where sl_avail changes.
INSERT INTO shoelace_log VALUES (
       *NEW*.sl_name, *NEW*.sl_avail,
       current_user, current_timestamp)
  FROM shoelace_data *NEW*, shoelace_data *OLD*,
       shoelace_data shoelace_data
 WHERE int4ne(*NEW*.sl_avail, *OLD*.sl_avail);

This is even stranger-looking, since INSERT ... VALUES doesn't have a WHERE clause either, but the planner and executor will have no difficulty with it. They need to support this same functionality anyway for INSERT ... SELECT.

In step 3 the original parse tree's qualification is added, restricting the result set further to only the rows touched by the original parse tree.

INSERT INTO shoelace_log VALUES (
       *NEW*.sl_name, *NEW*.sl_avail,
       current_user, current_timestamp)
  FROM shoelace_data *NEW*, shoelace_data *OLD*,
       shoelace_data shoelace_data
 WHERE int4ne(*NEW*.sl_avail, *OLD*.sl_avail)
   AND bpchareq(shoelace_data.sl_name, 'sl7');

Step 4 replaces NEW references with the target list entries from the original parse tree, or with matching variable references from the result relation.

INSERT INTO shoelace_log VALUES (
       shoelace_data.sl_name, 6,
       current_user, current_timestamp)
  FROM shoelace_data *NEW*, shoelace_data *OLD*,
       shoelace_data shoelace_data
 WHERE int4ne(6, *OLD*.sl_avail)
   AND bpchareq(shoelace_data.sl_name, 'sl7');

Step 5 changes OLD references into result relation references.

INSERT INTO shoelace_log VALUES (
       shoelace_data.sl_name, 6,
       current_user, current_timestamp)
  FROM shoelace_data *NEW*, shoelace_data *OLD*,
       shoelace_data shoelace_data
 WHERE int4ne(6, shoelace_data.sl_avail)
   AND bpchareq(shoelace_data.sl_name, 'sl7');

That's it. Since the rule is not INSTEAD, we also output the original parse tree. In short, the output from the rule system is a list of two parse trees that are the same as the statements:

INSERT INTO shoelace_log VALUES (
       shoelace_data.sl_name, 6,
       current_user, current_timestamp)
  FROM shoelace_data
 WHERE 6 != shoelace_data.sl_avail
   AND shoelace_data.sl_name = 'sl7';

UPDATE shoelace_data SET sl_avail = 6
 WHERE sl_name = 'sl7';

These are executed in this order, and that is exactly what the rule defines. The substitutions and the added qualifications ensure that if the original query were, say,

UPDATE shoelace_data SET sl_color = 'green'
 WHERE sl_name = 'sl7';

no log entry would get written. In that case the original parse tree does not contain a target list entry for sl_avail, so NEW.sl_avail will get replaced by shoelace_data.sl_avail, resulting in the extra query

INSERT INTO shoelace_log VALUES (
       shoelace_data.sl_name, shoelace_data.sl_avail,
       current_user, current_timestamp)
  FROM shoelace_data
 WHERE shoelace_data.sl_avail != shoelace_data.sl_avail
   AND shoelace_data.sl_name = 'sl7';

and that qualification will never be true.

It will also work if the original query modifies multiple rows. So if Al issued the command

UPDATE shoelace_data SET sl_avail = 0
 WHERE sl_color = 'black';

four rows in fact get updated (sl1, sl2, sl3 and sl4). But sl3 already has sl_avail = 0. This time, the original parse tree's qualification is different, and that results in the extra parse tree

INSERT INTO shoelace_log
SELECT shoelace_data.sl_name, 0,
       current_user, current_timestamp
  FROM shoelace_data
 WHERE 0 != shoelace_data.sl_avail
   AND shoelace_data.sl_color = 'black';

This parse tree will certainly insert three new log entries. And that's absolutely correct. Here we can see why it is important that the original parse tree is executed last.
If the UPDATE would have been executed first, all the rows are already set to zero, so the logging INSERT would not find any row where 0 != shoelace_data.sl_avail. Cooperation with Views A simple way to protect view relations from the mentioned possibility that someone can try to INSERT, UPDATE and DELETE on them is to let those parse trees get thrown away. We create the rules CREATE RULE shoe_ins_protect AS ON INSERT TO shoe DO INSTEAD NOTHING; CREATE RULE shoe_upd_protect AS ON UPDATE TO shoe DO INSTEAD NOTHING; CREATE RULE shoe_del_protect AS ON DELETE TO shoe DO INSTEAD NOTHING; If Al now tries to do any of these operations on the view relation shoe, the rule system will apply the rules. Since the rules have no actions and are INSTEAD, the resulting list of parse trees will be empty and the whole query will become nothing because there is nothing left to be optimized or executed after the rule system is done with it. Note This way might irritate frontend applications because absolutely nothing happened on the database and thus, the backend will not return anything for the query. Not even a PGRES_EMPTY_QUERY will be available in libpq. In psql, nothing happens. This might change in the future. A more sophisticated way to use the rule system is to create rules that rewrite the parse tree into one that does the right operation on the real tables. To do that on the shoelace view, we create the following rules: CREATE RULE shoelace_ins AS ON INSERT TO shoelace DO INSTEAD INSERT INTO shoelace_data VALUES ( NEW.sl_name, NEW.sl_avail, NEW.sl_color, NEW.sl_len, NEW.sl_unit); CREATE RULE shoelace_upd AS ON UPDATE TO shoelace DO INSTEAD UPDATE shoelace_data SET sl_name = NEW.sl_name, sl_avail = NEW.sl_avail, sl_color = NEW.sl_color, sl_len = NEW.sl_len, sl_unit = NEW.sl_unit WHERE sl_name = OLD.sl_name; CREATE RULE shoelace_del AS ON DELETE TO shoelace DO INSTEAD DELETE FROM shoelace_data WHERE sl_name = OLD.sl_name; Now there is a pack of shoelaces arriving in Al's shop and it has a big part list. Al is not that good in calculating and so we don't want him to manually update the shoelace view. Instead we setup two little tables, one where he can insert the items from the part list and one with a special trick. The create commands for these are: CREATE TABLE shoelace_arrive ( arr_name char(10), arr_quant integer ); CREATE TABLE shoelace_ok ( ok_name char(10), ok_quant integer ); CREATE RULE shoelace_ok_ins AS ON INSERT TO shoelace_ok DO INSTEAD UPDATE shoelace SET sl_avail = sl_avail + NEW.ok_quant WHERE sl_name = NEW.ok_name; Now Al can sit down and do whatever until al_bundy=> SELECT * FROM shoelace_arrive; arr_name |arr_quant ----------+--------- sl3 | 10 sl6 | 20 sl8 | 20 (3 rows) is exactly what's on the part list. 
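The document doesn't show how shoelace_arrive got populated; as an illustrative sketch consistent with the output above, the part list could have been entered with plain INSERTs:

INSERT INTO shoelace_arrive VALUES ('sl3', 10);
INSERT INTO shoelace_arrive VALUES ('sl6', 20);
INSERT INTO shoelace_arrive VALUES ('sl8', 20);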
We take a quick look at the current data,

al_bundy=> SELECT * FROM shoelace;
sl_name   |sl_avail|sl_color  |sl_len|sl_unit |sl_len_cm
----------+--------+----------+------+--------+---------
sl1       |       5|black     |    80|cm      |       80
sl2       |       6|black     |   100|cm      |      100
sl7       |       6|brown     |    60|cm      |       60
sl3       |       0|black     |    35|inch    |     88.9
sl4       |       8|black     |    40|inch    |    101.6
sl8       |       1|brown     |    40|inch    |    101.6
sl5       |       4|brown     |     1|m       |      100
sl6       |       0|brown     |   0.9|m       |       90
(8 rows)

move the arrived shoelaces in

al_bundy=> INSERT INTO shoelace_ok SELECT * FROM shoelace_arrive;

and check the results

al_bundy=> SELECT * FROM shoelace ORDER BY sl_name;
sl_name   |sl_avail|sl_color  |sl_len|sl_unit |sl_len_cm
----------+--------+----------+------+--------+---------
sl1       |       5|black     |    80|cm      |       80
sl2       |       6|black     |   100|cm      |      100
sl7       |       6|brown     |    60|cm      |       60
sl4       |       8|black     |    40|inch    |    101.6
sl3       |      10|black     |    35|inch    |     88.9
sl8       |      21|brown     |    40|inch    |    101.6
sl5       |       4|brown     |     1|m       |      100
sl6       |      20|brown     |   0.9|m       |       90
(8 rows)

al_bundy=> SELECT * FROM shoelace_log;
sl_name   |sl_avail|log_who|log_when
----------+--------+-------+--------------------------------
sl7       |       6|Al     |Tue Oct 20 19:14:45 1998 MET DST
sl3       |      10|Al     |Tue Oct 20 19:25:16 1998 MET DST
sl6       |      20|Al     |Tue Oct 20 19:25:16 1998 MET DST
sl8       |      21|Al     |Tue Oct 20 19:25:16 1998 MET DST
(4 rows)

It's a long way from the one INSERT ... SELECT to these results. And the description of what happens will be the last in this document (but not the last example :-). First there was the parser's output

INSERT INTO shoelace_ok
SELECT shoelace_arrive.arr_name, shoelace_arrive.arr_quant
  FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok;

Now the first rule shoelace_ok_ins is applied and turns it into

UPDATE shoelace
   SET sl_avail = int4pl(shoelace.sl_avail, shoelace_arrive.arr_quant)
  FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
       shoelace_ok *OLD*, shoelace_ok *NEW*, shoelace shoelace
 WHERE bpchareq(shoelace.sl_name, shoelace_arrive.arr_name);

and throws away the original INSERT on shoelace_ok. This rewritten query is passed to the rule system again, and the second applied rule shoelace_upd produces

UPDATE shoelace_data
   SET sl_name = shoelace.sl_name,
       sl_avail = int4pl(shoelace.sl_avail, shoelace_arrive.arr_quant),
       sl_color = shoelace.sl_color,
       sl_len = shoelace.sl_len,
       sl_unit = shoelace.sl_unit
  FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
       shoelace_ok *OLD*, shoelace_ok *NEW*, shoelace shoelace,
       shoelace *OLD*, shoelace *NEW*, shoelace_data shoelace_data
 WHERE bpchareq(shoelace.sl_name, shoelace_arrive.arr_name)
   AND bpchareq(shoelace_data.sl_name, shoelace.sl_name);

Again it's an INSTEAD rule and the previous parse tree is trashed. Note that this query still uses the view shoelace. But the rule system isn't finished with this loop, so it continues and applies the rule _RETshoelace on it, and we get

UPDATE shoelace_data
   SET sl_name = s.sl_name,
       sl_avail = int4pl(s.sl_avail, shoelace_arrive.arr_quant),
       sl_color = s.sl_color,
       sl_len = s.sl_len,
       sl_unit = s.sl_unit
  FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
       shoelace_ok *OLD*, shoelace_ok *NEW*, shoelace shoelace,
       shoelace *OLD*, shoelace *NEW*, shoelace_data shoelace_data,
       shoelace *OLD*, shoelace *NEW*, shoelace_data s, unit u
 WHERE bpchareq(s.sl_name, shoelace_arrive.arr_name)
   AND bpchareq(shoelace_data.sl_name, s.sl_name);

Again an update rule has been applied, and so the wheel turns on and we are in rewrite round 3.
This time rule log_shoelace gets applied, which produces the extra parse tree

INSERT INTO shoelace_log
SELECT s.sl_name, int4pl(s.sl_avail, shoelace_arrive.arr_quant),
       current_user, current_timestamp
  FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
       shoelace_ok *OLD*, shoelace_ok *NEW*, shoelace shoelace,
       shoelace *OLD*, shoelace *NEW*, shoelace_data shoelace_data,
       shoelace *OLD*, shoelace *NEW*, shoelace_data s, unit u,
       shoelace_data *OLD*, shoelace_data *NEW*, shoelace_log shoelace_log
 WHERE bpchareq(s.sl_name, shoelace_arrive.arr_name)
   AND bpchareq(shoelace_data.sl_name, s.sl_name)
   AND int4ne(int4pl(s.sl_avail, shoelace_arrive.arr_quant), s.sl_avail);

After that the rule system runs out of rules and returns the generated parse trees. So we end up with two final parse trees that are equivalent to the SQL statements

INSERT INTO shoelace_log
SELECT s.sl_name, s.sl_avail + shoelace_arrive.arr_quant,
       current_user, current_timestamp
  FROM shoelace_arrive shoelace_arrive, shoelace_data shoelace_data,
       shoelace_data s
 WHERE s.sl_name = shoelace_arrive.arr_name
   AND shoelace_data.sl_name = s.sl_name
   AND s.sl_avail + shoelace_arrive.arr_quant != s.sl_avail;

UPDATE shoelace_data
   SET sl_avail = shoelace_data.sl_avail + shoelace_arrive.arr_quant
  FROM shoelace_arrive shoelace_arrive, shoelace_data shoelace_data,
       shoelace_data s
 WHERE s.sl_name = shoelace_arrive.arr_name
   AND shoelace_data.sl_name = s.sl_name;

The result is that data coming from one relation, inserted into another, changed into an update on a third, changed again into updating a fourth plus logging that final update in a fifth, gets reduced to two queries.

There is a little detail that's a bit ugly. Looking at the two queries, it turns out that the shoelace_data relation appears twice in the range table where it could definitely be reduced to one. The planner does not handle this, and so the execution plan for the rule system's output of the INSERT will be

Nested Loop
  ->  Merge Join
        ->  Seq Scan
              ->  Sort
                    ->  Seq Scan on s
        ->  Seq Scan
              ->  Sort
                    ->  Seq Scan on shoelace_arrive
  ->  Seq Scan on shoelace_data

while omitting the extra range table entry would result in a

Merge Join
  ->  Seq Scan
        ->  Sort
              ->  Seq Scan on s
  ->  Seq Scan
        ->  Sort
              ->  Seq Scan on shoelace_arrive

which produces exactly the same entries in the log relation. Thus, the rule system caused one extra scan on the shoelace_data relation that is absolutely not necessary. And the same obsolete scan is done once more in the UPDATE. But it was a really hard job to make all that possible at all.

A final demonstration of the Postgres rule system and its power. There is a cute blonde who sells shoelaces. And what Al could never realize, she's not only cute, she's smart too - a little too smart. Thus, it happens from time to time that Al orders shoelaces that are absolutely not sellable. This time he ordered 1000 pairs of magenta shoelaces, and since another kind is currently not available but he committed to buy some, he also prepared his database for pink ones.

al_bundy=> INSERT INTO shoelace VALUES
al_bundy->     ('sl9', 0, 'pink', 35.0, 'inch', 0.0);
al_bundy=> INSERT INTO shoelace VALUES
al_bundy->     ('sl10', 1000, 'magenta', 40.0, 'inch', 0.0);

Since this happens often, we sometimes need to look up shoelace entries that fit absolutely no shoe. We could do that with a complicated statement every time, or we can set up a view for it.
The view for this is

CREATE VIEW shoelace_obsolete AS
    SELECT * FROM shoelace
     WHERE NOT EXISTS
           (SELECT shoename FROM shoe WHERE slcolor = sl_color);

Its output is

al_bundy=> SELECT * FROM shoelace_obsolete;
sl_name   |sl_avail|sl_color  |sl_len|sl_unit |sl_len_cm
----------+--------+----------+------+--------+---------
sl9       |       0|pink      |    35|inch    |     88.9
sl10      |    1000|magenta   |    40|inch    |    101.6

For the 1000 magenta shoelaces we must bill Al before we can throw them away, but that's another problem. The pink entry we delete. To make it a little harder for Postgres, we don't delete it directly. Instead we create one more view

CREATE VIEW shoelace_candelete AS
    SELECT * FROM shoelace_obsolete WHERE sl_avail = 0;

and do it this way:

DELETE FROM shoelace WHERE EXISTS
    (SELECT * FROM shoelace_candelete
      WHERE sl_name = shoelace.sl_name);

Voilà:

al_bundy=> SELECT * FROM shoelace;
sl_name   |sl_avail|sl_color  |sl_len|sl_unit |sl_len_cm
----------+--------+----------+------+--------+---------
sl1       |       5|black     |    80|cm      |       80
sl2       |       6|black     |   100|cm      |      100
sl7       |       6|brown     |    60|cm      |       60
sl4       |       8|black     |    40|inch    |    101.6
sl3       |      10|black     |    35|inch    |     88.9
sl8       |      21|brown     |    40|inch    |    101.6
sl10      |    1000|magenta   |    40|inch    |    101.6
sl5       |       4|brown     |     1|m       |      100
sl6       |      20|brown     |   0.9|m       |       90
(9 rows)

A DELETE on a view, with a subselect qualification that in total uses 4 nested/joined views, where one of them itself has a subselect qualification containing a view and where calculated view columns are used, gets rewritten into one single parse tree that deletes the requested data from a real table. I think there are only a few situations out in the real world where such a construct is necessary. But it makes me feel comfortable that it works. The truth is: doing this I found one more bug while writing this document. But after fixing that I was a little amazed that it works at all.

Rules and Permissions

Due to rewriting of queries by the Postgres rule system, other tables/views than those used in the original query get accessed. Using update rules, this can include write access to tables.

Rewrite rules don't have a separate owner. The owner of a relation (table or view) is automatically the owner of the rewrite rules that are defined for it. The Postgres rule system changes the behavior of the default access control system. Relations that are used due to rules get checked against the permissions of the rule owner, not the user invoking the rule. This means that a user only needs the required permissions for the tables/views he names in his queries.

For example: a user has a list of phone numbers where some of them are private, while the others are of interest for the secretary of the office. He can construct the following:

CREATE TABLE phone_data (person text, phone text, private bool);
CREATE VIEW phone_number AS
    SELECT person, phone FROM phone_data WHERE NOT private;
GRANT SELECT ON phone_number TO secretary;

Nobody except him (and the database superusers) can access the phone_data table. But due to the GRANT, the secretary can SELECT from the phone_number view. The rule system will rewrite the SELECT from phone_number into a SELECT from phone_data and add the qualification that only entries where private is false are wanted. Since the user is the owner of phone_number, the read access to phone_data is now checked against his permissions and the query is considered granted. The check for accessing phone_number is also performed, but this is done against the invoking user, so nobody but the user and the secretary can use it.
The permissions are checked rule by rule. So the secretary is for now the only one who can see the public phone numbers. But the secretary can set up another view and grant access to that to the public. Then, anyone can see the phone_number data through the secretary's view. What the secretary cannot do is create a view that directly accesses phone_data (actually he can, but it will not work since every access aborts the transaction during the permission checks). And as soon as the user notices that the secretary opened up his phone_number view, he can REVOKE his access. Immediately, any access to the secretary's view will fail.

Someone might think that this rule-by-rule checking is a security hole, but in fact it isn't. If it did not work this way, the secretary could set up a table with the same columns as phone_number and copy the data there once per day. Then the data is his own and he can grant access to anyone he wants. A GRANT means "I trust you". If someone you trust does the thing above, it's time to think it over and then REVOKE.

This mechanism also works for update rules. In the examples of the previous section, the owner of the tables in Al's database could GRANT SELECT, INSERT, UPDATE and DELETE on the shoelace view to al, but only SELECT on shoelace_log. The rule action that writes log entries will still be executed successfully, and Al can see the log entries. But he cannot create fake entries, nor can he manipulate or remove existing ones.

Warning: GRANT ALL currently includes RULE permission. This means the granted user could drop the rule, do the changes and reinstall it. I think this should get changed quickly.

Rules versus Triggers

Many things that can be done using triggers can also be implemented using the Postgres rule system. What currently cannot be implemented by rules are some kinds of constraints. It is possible to place a qualified rule that rewrites a query to NOTHING if the value of a column does not appear in another table. But then the data is silently thrown away, and that's not a good idea. If checks for valid values are required, and in the case of an invalid value an error message should be generated, this must be done by a trigger for now.

On the other hand, a trigger that is fired on INSERT on a view can do the same as a rule: put the data somewhere else and suppress the insert in the view. But it cannot do the same thing on UPDATE or DELETE, because there is no real data in the view relation that could be scanned, and thus the trigger would never get called. Only a rule will help.

For the things that can be implemented by both, which is best depends on the usage of the database. A trigger is fired once for each affected row. A rule manipulates the parse tree or generates an additional one. So if many rows are affected in one statement, a rule issuing one extra query will usually do a better job than a trigger that is called for every single row and must execute its operations that many times. For example: there are two tables

CREATE TABLE computer (
    hostname        text,   -- indexed
    manufacturer    text    -- indexed
);

CREATE TABLE software (
    software        text,   -- indexed
    hostname        text    -- indexed
);

Both tables have many thousands of rows and the index on hostname is unique. The hostname column contains the fully qualified domain name of the computer. The rule/trigger should implement a constraint that deletes rows from software that reference the deleted host.
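For concreteness, a minimal sketch of what such a trigger could look like in PL/pgSQL is shown here; the function and trigger names are made up for illustration, and on current PostgreSQL versions the function would be declared RETURNS trigger instead of opaque:

CREATE FUNCTION del_software_for_host() RETURNS opaque AS '
BEGIN
    -- remove all software rows referencing the computer being deleted
    DELETE FROM software WHERE hostname = OLD.hostname;
    RETURN OLD;    -- let the DELETE on computer proceed
END;' LANGUAGE 'plpgsql';

CREATE TRIGGER computer_del_trigger
    BEFORE DELETE ON computer
    FOR EACH ROW EXECUTE PROCEDURE del_software_for_host();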
Since the trigger is called for each individual row deleted from computer, it can use the statement

DELETE FROM software WHERE hostname = $1;

in a prepared and saved plan and pass the hostname in the parameter. The rule would be written as

CREATE RULE computer_del AS ON DELETE TO computer
    DO DELETE FROM software WHERE hostname = OLD.hostname;

Now we look at different types of deletes. In the case of a

DELETE FROM computer WHERE hostname = 'mypc.local.net';

the table computer is scanned by index (fast), and the query issued by the trigger would also be an index scan (fast too). The extra query from the rule would be a

DELETE FROM software WHERE computer.hostname = 'mypc.local.net'
                       AND software.hostname = computer.hostname;

Since appropriate indexes are set up, the planner will create a plan of

Nestloop
  ->  Index Scan using comp_hostidx on computer
  ->  Index Scan using soft_hostidx on software

So there would be not that much difference in speed between the trigger and the rule implementation. With the next delete we want to get rid of all the 2000 computers where the hostname starts with 'old'. There are two possible queries to do that. One is

DELETE FROM computer WHERE hostname >= 'old' AND hostname < 'ole';

where the plan for the rule's query will be a

Hash Join
  ->  Seq Scan on software
  ->  Hash
        ->  Index Scan using comp_hostidx on computer

The other possible query is a

DELETE FROM computer WHERE hostname ~ '^old';

with the execution plan

Nestloop
  ->  Index Scan using comp_hostidx on computer
  ->  Index Scan using soft_hostidx on software

This shows that the planner does not realize that the qualification for the hostname on computer could also be used for an index scan on software when there are multiple qualification expressions combined with AND, something it does do in the regexp version of the query. The trigger will get invoked once for each of the 2000 old computers that have to be deleted, and that will result in one index scan over computer and 2000 index scans over software. The rule implementation will do it with two queries over indexes. Whether the rule will still be faster in the sequential-scan situation depends on the overall size of the software table. 2000 query executions over the SPI manager take some time, even if all the index blocks needed to look them up will soon appear in the cache.

The last query we look at is a

DELETE FROM computer WHERE manufacturer = 'bim';

Again this could result in many rows being deleted from computer. So the trigger will again fire many queries into the executor. But the rule plan will again be the nested loop over two index scans, only using another index on computer:

Nestloop
  ->  Index Scan using comp_manufidx on computer
  ->  Index Scan using soft_hostidx on software

resulting from the rule's query

DELETE FROM software WHERE computer.manufacturer = 'bim'
                       AND software.hostname = computer.hostname;

In any of these cases, the extra queries from the rule system will be more or less independent of the number of affected rows in a query.

Another situation is an UPDATE where whether an action should be performed depends on the change of an attribute. In Postgres version 6.4, the attribute specification for rule events is disabled (it will make its comeback in 6.5 at the latest, maybe earlier - stay tuned). So for now the only way to create a rule as in the shoelace_log example is to do it with a rule qualification.
That results in an extra query that is always performed, even if the attribute of interest cannot change at all because it does not appear in the target list of the initial query. When this is enabled again, it will be one more advantage of rules over triggers. Optimization of a trigger must fail by definition in this case, because the fact that its actions will only be done when a specific attribute is updated is hidden in its functionality. The definition of a trigger only allows it to be specified at row level, so whenever a row is touched the trigger must be called to make its decision. The rule system will know this by looking at the target list and will suppress the additional query completely if the attribute isn't touched. So the rule, qualified or not, will only do its scans if there ever could be something to do.

Rules will only be significantly slower than triggers when their actions result in large, badly qualified joins, a situation where the planner fails. They are a big hammer. Using a big hammer without caution can cause big damage. But used with the right touch, they can hit any nail on the head.