Performance Tips

Query performance can be affected by many things. Some of these can be manipulated by the user, while others are fundamental to the underlying design of the system. This chapter provides some hints about understanding and tuning Postgres performance.


Using EXPLAIN

Postgres devises a query plan for each query it is given. Choosing the right plan to match the query structure and the properties of the data is absolutely critical for good performance. You can use the EXPLAIN command to see what query plan the system creates for any query. Plan-reading is an art that deserves an extensive tutorial, which this is not; but here is some basic information.

The numbers that are currently quoted by EXPLAIN are:

  - Estimated start-up cost (time expended before the output scan can start, e.g., time to do the sorting in a sort node).

  - Estimated total cost (if all tuples are retrieved, which they may not be --- a query with a LIMIT will stop short of paying the total cost, for example).

  - Estimated number of rows output by this plan node (again, without regard for any LIMIT).

  - Estimated average width (in bytes) of rows output by this plan node.

The costs are measured in units of disk page fetches. (CPU effort estimates are converted into disk-page units using some fairly arbitrary fudge factors. If you want to experiment with these factors, see the list of run-time configuration parameters in the Administrator's Guide.)

It's important to note that the cost of an upper-level node includes the cost of all its child nodes. It's also important to realize that the cost only reflects things that the planner/optimizer cares about. In particular, the cost does not consider the time spent transmitting result tuples to the frontend --- which could be a pretty dominant factor in the true elapsed time, but the planner ignores it because it cannot change it by altering the plan. (Every correct plan will output the same tuple set, we trust.)

Rows output is a little tricky because it is not the number of rows processed or scanned by the query --- it is usually less, reflecting the estimated selectivity of any WHERE-clause constraints that are applied at this node. Ideally the top-level rows estimate will approximate the number of rows actually returned, updated, or deleted by the query.

Here are some examples (using the regression test database after a VACUUM ANALYZE, and 7.2 development sources):

regression=# explain select * from tenk1;
NOTICE:  QUERY PLAN:

Seq Scan on tenk1  (cost=0.00..333.00 rows=10000 width=148)

This is about as straightforward as it gets. If you do

select * from pg_class where relname = 'tenk1';

you will find out that tenk1 has 233 disk pages and 10000 tuples. So the cost is estimated at 233 page reads, charged at 1.0 apiece, plus 10000 * cpu_tuple_cost, which is currently 0.01 (try show cpu_tuple_cost).

Now let's modify the query to add a qualification clause:

regression=# explain select * from tenk1 where unique1 < 1000;
NOTICE:  QUERY PLAN:

Seq Scan on tenk1  (cost=0.00..358.00 rows=1003 width=148)

The estimate of output rows has gone down because of the WHERE clause. However, the scan still has to visit all 10000 rows, so the cost hasn't decreased; in fact it has gone up a bit, to reflect the extra CPU time spent checking the WHERE condition. The actual number of rows this query would select is 1000, but the estimate is only approximate.
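To see where these totals come from, the arithmetic can be reproduced by hand. The first query below uses the pg_class columns mentioned above; the 0.0025 figure for cpu_operator_cost is an assumption about the default setting (check it with show cpu_operator_cost), so treat this as a sketch rather than gospel:

select relpages, reltuples from pg_class where relname = 'tenk1';
-- relpages = 233, reltuples = 10000

-- first plan:   233 pages * 1.0  +  10000 tuples * 0.01 (cpu_tuple_cost)      = 333.00
-- second plan:  333.00  +  10000 WHERE checks * 0.0025 (cpu_operator_cost)    = 358.00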
If you try to duplicate this experiment, you will probably get a slightly different estimate; moreover, it will change after each ANALYZE command, because the statistics produced by ANALYZE are taken from a randomized sample of the table.

Modify the query to restrict the qualification even more:

regression=# explain select * from tenk1 where unique1 < 50;
NOTICE:  QUERY PLAN:

Index Scan using tenk1_unique1 on tenk1  (cost=0.00..173.32 rows=47 width=148)

and you will see that if we make the WHERE condition selective enough, the planner eventually decides that an index scan is cheaper than a sequential scan. This plan will only have to visit 50 tuples because of the index, so it wins despite the fact that each individual fetch is more expensive than reading a whole disk page sequentially.

Add another condition to the qualification:

regression=# explain select * from tenk1 where unique1 < 50 and
regression-# stringu1 = 'xxx';
NOTICE:  QUERY PLAN:

Index Scan using tenk1_unique1 on tenk1  (cost=0.00..173.44 rows=1 width=148)

The added clause stringu1 = 'xxx' reduces the output-rows estimate, but not the cost, because we still have to visit the same set of tuples.

Let's try joining two tables, using the fields we have been discussing:

regression=# explain select * from tenk1 t1, tenk2 t2 where t1.unique1 < 50
regression-# and t1.unique2 = t2.unique2;
NOTICE:  QUERY PLAN:

Nested Loop  (cost=0.00..269.11 rows=47 width=296)
  ->  Index Scan using tenk1_unique1 on tenk1 t1
        (cost=0.00..173.32 rows=47 width=148)
  ->  Index Scan using tenk2_unique2 on tenk2 t2
        (cost=0.00..2.01 rows=1 width=148)

In this nested-loop join, the outer scan is the same index scan we had in the example before last, and so its cost and row count are the same, because we are applying the unique1 < 50 WHERE clause at that node. The t1.unique2 = t2.unique2 clause is not relevant yet, so it doesn't affect the row count of the outer scan. For the inner scan, the unique2 value of the current outer-scan tuple is plugged into the inner index scan to produce an index qualification like t2.unique2 = constant. So we get the same inner-scan plan and costs that we'd get from, say, explain select * from tenk2 where unique2 = 42. The cost of the loop node is then set on the basis of the cost of the outer scan, plus one repetition of the inner scan for each outer tuple (47 * 2.01, here), plus a little CPU time for join processing. (Indeed, 173.32 + 47 * 2.01 comes to about 267.8; the remainder of the 269.11 total is that CPU charge.)

In this example the join's output row count is the same as the product of the two scans' row counts, but that's not true in general, because you can have WHERE clauses that mention both relations and so can only be applied at the join point, not to either input scan. For example, if we added WHERE ... AND t1.hundred < t2.hundred, that would decrease the output row count of the join node, but not change either input scan.

One way to look at variant plans is to force the planner to disregard whatever strategy it thought was the winner, using the enable/disable flags for each plan type. (This is a crude tool, but useful; see also the discussion of explicit JOIN syntax below.)
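These flags are ordinary run-time parameters that can be changed with SET. The names below are believed to cover the plan types in 7.1/7.2, but consult the Administrator's Guide for the authoritative list in your version:

set enable_seqscan   = off;   -- discourage sequential scans
set enable_indexscan = off;   -- discourage index scans
set enable_nestloop  = off;   -- discourage nested-loop joins
set enable_mergejoin = off;   -- discourage merge joins
set enable_hashjoin  = off;   -- discourage hash joins
set enable_sort      = off;   -- discourage explicit sorts

Turning a flag off does not make that plan type impossible (the planner can still fall back to it when no alternative exists); it just makes the planner avoid it whenever it has a choice. For example: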
regression=# set enable_nestloop = off;
SET VARIABLE
regression=# explain select * from tenk1 t1, tenk2 t2 where t1.unique1 < 50
regression-# and t1.unique2 = t2.unique2;
NOTICE:  QUERY PLAN:

Hash Join  (cost=173.44..557.03 rows=47 width=296)
  ->  Seq Scan on tenk2 t2
        (cost=0.00..333.00 rows=10000 width=148)
  ->  Hash  (cost=173.32..173.32 rows=47 width=148)
        ->  Index Scan using tenk1_unique1 on tenk1 t1
              (cost=0.00..173.32 rows=47 width=148)

This plan proposes to extract the 50 interesting rows of tenk1 using ye same olde index scan, stash them into an in-memory hash table, and then do a sequential scan of tenk2, probing into the hash table for possible matches of t1.unique2 = t2.unique2 at each tenk2 tuple. The cost to read tenk1 and set up the hash table is entirely start-up cost for the hash join, since we won't get any tuples out until we can start reading tenk2. The total time estimate for the join also includes a hefty charge for the CPU time to probe the hash table 10000 times. Note, however, that we are NOT charging 10000 times 173.32; the hash table setup is only done once in this plan type.

It is worth noting that EXPLAIN results should not be extrapolated to situations other than the one you are actually testing; for example, results on a toy-sized table can't be assumed to apply to large tables. The planner's cost estimates are not linear and so it may well choose a different plan for a larger or smaller table. An extreme example is that on a table that only occupies one disk page, you'll nearly always get a sequential scan plan whether indexes are available or not. The planner realizes that it's going to take one disk page read to process the table in any case, so there's no value in expending additional page reads to look at an index.


Controlling the Planner with Explicit JOINs

Beginning with Postgres 7.1 it is possible to control the query planner to some extent by using explicit JOIN syntax. To see why this matters, we first need some background.

In a simple join query, such as

SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;

the planner is free to join the given tables in any order. For example, it could generate a query plan that joins A to B, using the WHERE clause a.id = b.id, and then joins C to this joined table, using the other WHERE clause. Or it could join B to C and then join A to that result. Or it could join A to C and then join them with B --- but that would be inefficient, since the full Cartesian product of A and C would have to be formed, there being no applicable WHERE clause to allow optimization of the join. (All joins in the Postgres executor happen between two input tables, so it's necessary to build up the result in one or another of these fashions.) The important point is that these different join possibilities give semantically equivalent results but may have hugely different execution costs. Therefore, the planner will explore all of them to try to find the most efficient query plan.

When a query only involves two or three tables, there aren't many join orders to worry about. But the number of possible join orders grows exponentially as the number of tables expands. Beyond ten or so input tables it's no longer practical to do an exhaustive search of all the possibilities, and even for six or seven tables planning may take an annoyingly long time. When there are too many input tables, the Postgres planner will switch from exhaustive search to a genetic probabilistic search through a limited number of possibilities.
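To get a feel for that growth, here is a rough count of the distinct join orders that exist; the figures count only left-deep orders (simple permutations of the input tables), and allowing bushy plan shapes multiplies them further:

 3 tables:   3! = 6 possible join orders
 6 tables:   6! = 720
10 tables:  10! = 3,628,800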
The switch-over threshold is set by the GEQO_THRESHOLD run-time parameter described in the Administrator's Guide. The genetic search takes less time, but it won't necessarily find the best possible plan.

When the query involves outer joins, the planner has much less freedom than it does for plain (inner) joins. For example, consider

SELECT * FROM a LEFT JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id);

Although this query's restrictions are superficially similar to the previous example, the semantics are different, because a row must be emitted for each row of A that has no matching row in the join of B and C. Therefore the planner has no choice of join order here: it must join B to C and then join A to that result. Accordingly, this query takes less time to plan than the previous query.

In Postgres 7.1, the planner treats all explicit JOIN syntaxes as constraining the join order, even though it is not logically necessary to make such a constraint for inner joins. Therefore, although all of these queries give the same result:

SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;

SELECT * FROM a CROSS JOIN b CROSS JOIN c WHERE a.id = b.id AND b.ref = c.id;

SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id);

the second and third take less time to plan than the first. This effect is not worth worrying about for only three tables, but it can be a lifesaver with many tables.

You do not need to constrain the join order completely in order to cut search time, because it's OK to use JOIN operators in a plain FROM list. For example,

SELECT * FROM a CROSS JOIN b, c, d, e WHERE ...;

forces the planner to join A to B before joining them to other tables, but doesn't constrain its choices otherwise. In this example, the number of possible join orders is reduced by a factor of 5.

If you have a mix of outer and inner joins in a complex query, you might not want to constrain the planner's search for a good ordering of inner joins inside an outer join. You can't do that directly in the JOIN syntax, but you can get around the syntactic limitation by using subselects. For example,

SELECT * FROM d LEFT JOIN (SELECT * FROM a, b, c WHERE ...) AS ss ON (...);

Here, joining D must be the last step in the query plan, but the planner is free to consider various join orders for A, B, and C.

Constraining the planner's search in this way is a useful technique both for reducing planning time and for directing the planner to a good query plan. If the planner chooses a bad join order by default, you can force it to choose a better order via JOIN syntax --- assuming that you know of a better order, that is. Experimentation is recommended.


Populating a Database

One may need to do a large number of table insertions when first populating a database. Here are some tips and techniques for making that as efficient as possible.

Disable Auto-commit

Turn off auto-commit and just do one commit at the end. (In plain SQL, this means issuing BEGIN at the start and COMMIT at the end. Some client libraries may do this behind your back, in which case you need to make sure the library does it when you want it done.) If you allow each insertion to be committed separately, Postgres is doing a lot of work for each record added.

Use COPY FROM

Use COPY FROM STDIN to load all the records in one command, instead of using a series of INSERT commands. This reduces parsing, planning, etc. overhead a great deal. If you do this, it is not necessary to fool around with auto-commit, since it is only one command anyway.
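Putting these two tips together, a bulk load might look like the following. This is only a sketch: the table, its columns, and the sample rows are invented for illustration, and each COPY data line must have its column values separated by tab characters and the data must end with a line containing only \.

BEGIN;

CREATE TABLE measurements (id integer, reading float8);

COPY measurements FROM STDIN;
1	42.0
2	37.5
\.

COMMIT;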
Remove Indexes

If you are loading a freshly created table, the fastest way is to create the table, bulk-load it with COPY, and then create any indexes needed for the table. Creating an index on pre-existing data is quicker than updating it incrementally as each record is loaded.

If you are augmenting an existing table, you can DROP INDEX, load the table, and then recreate the index. Of course, the database performance for other users may be adversely affected during the time that the index is missing. One should also think twice before dropping UNIQUE indexes, since the error checking afforded by the UNIQUE constraint will be lost while the index is missing.

ANALYZE Afterwards

It's a good idea to run ANALYZE or VACUUM ANALYZE any time you've added or updated a lot of data, including just after initially populating a table. This ensures that the planner has up-to-date statistics about the table. With no statistics or obsolete statistics, the planner may make poor choices of query plans, leading to bad performance on queries that use your table.
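For example, reloading an existing table might follow the pattern below. Again, this is a sketch: the table, column, index, and file names are hypothetical, and COPY from a server-side file is restricted to database superusers, so adjust it to your setup.

DROP INDEX orders_customer_idx;                             -- remove the index before loading
COPY orders FROM '/tmp/orders.dat';                         -- bulk-load the new data in one command
CREATE INDEX orders_customer_idx ON orders (customer_id);   -- rebuild the index once, afterwards
VACUUM ANALYZE orders;                                      -- refresh the planner's statistics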