Indexes are a common way to enhance database performance. An index allows the database server to find and retrieve specific rows much faster than it could without an index. But indexes also add overhead to the database system as a whole, so they should be used sensibly.

Introduction

The classical example of the need for an index is a table similar to this:

CREATE TABLE test1 (
    id integer,
    content varchar
);

and an application that issues many queries of the form

SELECT content FROM test1 WHERE id = constant;

Ordinarily, the system would have to scan the entire test1 table row by row to find all matching entries. If there are many rows in test1 and only a few rows (possibly zero or one) returned by the query, this is clearly an inefficient method. If the system is instructed to maintain an index on the id column, it can use a more efficient method for locating matching rows. For instance, it might only have to walk a few levels deep into a search tree.

A similar approach is used in most nonfiction books: terms and concepts that readers frequently look up are collected in an alphabetical index at the end of the book. The interested reader can scan the index relatively quickly and flip to the appropriate page, rather than reading the entire book to find the passage of interest. Just as it is the author's task to anticipate the items that readers are most likely to look up, it is the database programmer's task to foresee which indexes will be of advantage.

The following command creates the index on the id column, as discussed:

CREATE INDEX test1_id_index ON test1 (id);

The name test1_id_index can be chosen freely, but you should pick something that lets you remember later what the index was for.

To remove an index, use the DROP INDEX command. Indexes can be added to and removed from tables at any time.

Once the index is created, no further intervention is required: the system will use the index when it thinks doing so would be more efficient than a sequential table scan. But you may have to run the ANALYZE command regularly to update statistics so that the query planner can make educated decisions. See the section Examining Index Usage below for information about how to find out whether an index is used, and when and why the planner may choose not to use an index.

Indexes can also benefit UPDATE and DELETE commands with search conditions. Indexes can likewise be used in join queries; thus, an index defined on a column that is part of a join condition can significantly speed up queries with joins.

When an index is created, the system has to keep it synchronized with the table. This adds overhead to data manipulation operations. Therefore indexes that are non-essential or are not used at all should be removed. Note that a query or data manipulation command can use at most one index per table.

Index Types

PostgreSQL provides several index types: B-tree, R-tree, GiST, and Hash. Each index type is suited to a particular kind of query because of the algorithm it uses.

By default, the CREATE INDEX command creates a B-tree index, which fits the most common situations. In particular, the PostgreSQL query optimizer will consider using a B-tree index whenever an indexed column is involved in a comparison using one of these operators:

<   <=   =   >=   >
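For instance, continuing the test1 example above, a range condition on the indexed column uses only these operators and can therefore be served by the B-tree index (a minimal illustration; the planner still decides, based on cost estimates, whether to actually use it):

SELECT content FROM test1 WHERE id >= 100 AND id <= 200;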
R-tree indexes are especially suited for spatial data. To create an R-tree index, use a command of the form

CREATE INDEX name ON table USING RTREE (column);

The PostgreSQL query optimizer will consider using an R-tree index whenever an indexed column is involved in a comparison using one of these operators:

<<   &<   &>   >>   @   ~=   &&

(Refer to the documentation of the geometric operators for the meaning of these operators.)

The query optimizer will consider using a hash index whenever an indexed column is involved in a comparison using the = operator. The following command is used to create a hash index:

CREATE INDEX name ON table USING HASH (column);

Testing has shown PostgreSQL's hash indexes to be similar to or slower than B-tree indexes, and the index size and build time for hash indexes are much worse. Hash indexes also suffer poor performance under high concurrency. For these reasons, hash index use is discouraged.

The B-tree index is an implementation of Lehman-Yao high-concurrency B-trees. The R-tree index method implements standard R-trees using Guttman's quadratic split algorithm. The hash index is an implementation of Litwin's linear hashing. We mention the algorithms used solely to indicate that all of these access methods are fully dynamic and do not have to be optimized periodically (as is the case with, for example, static hash access methods).

Multicolumn Indexes

An index can be defined on more than one column. For example, if you have a table of this form:

CREATE TABLE test2 (
    major int,
    minor int,
    name varchar
);

(say, you keep your /dev directory in a database...) and you frequently issue queries like

SELECT name FROM test2 WHERE major = constant AND minor = constant;

then it may be appropriate to define an index on the columns major and minor together, e.g.,

CREATE INDEX test2_mm_idx ON test2 (major, minor);

Currently, only the B-tree and GiST implementations support multicolumn indexes. Up to 32 columns may be specified. (This limit can be altered when building PostgreSQL; see the file pg_config.h.)

The query optimizer can use a multicolumn index for queries that involve the first n consecutive columns of the index (when used with appropriate operators), up to the total number of columns specified in the index definition. For example, an index on (a, b, c) can be used in queries involving all of a, b, and c, or in queries involving both a and b, or in queries involving only a, but not in other combinations. (In a query involving a and c the optimizer might choose to use the index for a only and treat c like an ordinary unindexed column.)

Multicolumn indexes can only be used if the clauses involving the indexed columns are joined with AND. For instance,

SELECT name FROM test2 WHERE major = constant OR minor = constant;

cannot make use of the index test2_mm_idx defined above to look up both columns. (It can be used to look up only the major column, however.)

Multicolumn indexes should be used sparingly. Most of the time, an index on a single column is sufficient and saves space and time. Indexes with more than three columns are almost certainly inappropriate.
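To illustrate the leading-columns rule with test2_mm_idx (a small sketch; as always, the planner weighs costs before actually choosing the index):

-- can use the index: the leading column major is constrained
SELECT name FROM test2 WHERE major = 5;

-- cannot use the index: the leading column is not constrained
SELECT name FROM test2 WHERE minor = 42;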
Unique Indexes

Indexes may also be used to enforce the uniqueness of a column's value, or the uniqueness of the combined values of more than one column.

CREATE UNIQUE INDEX name ON table (column, ...);

Currently, only B-tree indexes can be declared unique. When an index is declared unique, multiple table rows with equal indexed values are not allowed. NULL values are not considered equal.

PostgreSQL automatically creates unique indexes when a table is declared with a unique constraint or a primary key, on the columns that make up the primary key or unique constraint (a multicolumn index, if appropriate), to enforce that constraint. A unique index can be added to a table at any later time, to add a unique constraint. But the preferred way to add a unique constraint to a table is ALTER TABLE ... ADD CONSTRAINT. The use of indexes to enforce unique constraints could be considered an implementation detail that should not be accessed directly.

Functional Indexes

For a functional index, the index is defined on the result of a function applied to one or more columns of a single table. Functional indexes can be used to obtain fast access to data based on the result of function calls.

For example, a common way to do case-insensitive comparisons is to use the lower function:

SELECT * FROM test1 WHERE lower(col1) = 'value';

This query can use an index if one has been defined on the result of the lower(col1) operation:

CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1));

The function in the index definition can take more than one argument, but the arguments must be table columns, not constants. Functional indexes are always single-column (namely, the function result) even if the function uses more than one input field; there cannot be multicolumn indexes that contain function calls. These restrictions can easily be worked around by defining a custom function to use in the index definition that computes any desired result internally.

Operator Classes

An index definition may specify an operator class for each column of an index.

CREATE INDEX name ON table (column opclass, ...);

The operator class identifies the operators to be used by the index for that column. For example, a B-tree index on four-byte integers would use the int4_ops class; this operator class includes comparison functions for four-byte integers. In practice the default operator class for the column's data type is usually sufficient. The main point of having operator classes is that for some data types there could be more than one meaningful ordering. For example, we might want to sort a complex-number data type either by absolute value or by real part. We could do this by defining two operator classes for the data type and then selecting the proper class when creating an index.

There are also some operator classes with special purposes: the operator classes box_ops and bigbox_ops both support R-tree indexes on the box data type. The difference between them is that bigbox_ops scales box coordinates down, to avoid floating-point exceptions from doing multiplication, addition, and subtraction on very large floating-point coordinates. If the field on which your rectangles lie is about 20 000 units square or larger, you should use bigbox_ops.

The following query shows all defined operator classes:

SELECT am.amname AS acc_method,
       opc.opcname AS ops_name
FROM pg_am am, pg_opclass opc
WHERE opc.opcamid = am.oid
ORDER BY acc_method, ops_name;

It can be extended to show all the operators included in each class:

SELECT am.amname AS acc_method,
       opc.opcname AS ops_name,
       opr.oprname AS ops_comp
FROM pg_am am, pg_opclass opc, pg_amop amop, pg_operator opr
WHERE opc.opcamid = am.oid AND
      amop.amopclaid = opc.oid AND
      amop.amopopr = opr.oid
ORDER BY acc_method, ops_name, ops_comp;
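Following the syntax shown above, an operator class can be named explicitly when creating an index. A brief sketch (the index name is made up, and since int4_ops is the default operator class for an integer column anyway, naming it here merely makes the choice explicit):

CREATE INDEX test1_id_ops_idx ON test1 (id int4_ops);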
Keys

Author: Written by Herouth Maoz (herouth@oumail.openu.ac.il). This originally appeared on the User's Mailing List on 1998-03-02 in response to the question: "What is the difference between PRIMARY KEY and UNIQUE constraints?".

Subject: Re: [QUESTIONS] PRIMARY KEY | UNIQUE

What's the difference between:

PRIMARY KEY(fields,...)

and

UNIQUE (fields,...)

- Is this an alias?
- If PRIMARY KEY is already unique, then why is there another kind of key named UNIQUE?

A primary key is the field(s) used to identify a specific row. For example, Social Security numbers identify a person.

A merely UNIQUE combination of fields has nothing to do with identifying the row. It's simply an integrity constraint. For example, I have collections of links. Each collection is identified by a unique number, which is the primary key. This key is used in relations.

However, my application requires that each collection also have a unique name. Why? So that a human being who wants to modify a collection will be able to identify it. It's much harder to know, if you have two collections named Life Science, that the one tagged 24433 is the one you need and the one tagged 29882 is not.

So, the user selects the collection by its name. We therefore make sure, within the database, that names are unique. However, no other table in the database relates to the collections table by the collection name. That would be very inefficient.

Moreover, despite being unique, the collection name does not actually define the collection! For example, if somebody decided to change the name of the collection from Life Science to Biology, it would still be the same collection, only with a different name. As long as the name is unique, that's OK.

So:

Primary key:
- Is used for identifying the row and relating to it.
- Is impossible (or hard) to update.
- Should not allow null values.

Unique field(s):
- Are used as an alternative access to the row.
- Are updatable, so long as they are kept unique.
- Null values are acceptable.

As for why no non-unique keys are defined explicitly in standard SQL syntax? Well, you must understand that indexes are implementation-dependent. SQL does not define the implementation, merely the relations between data in the database. PostgreSQL does allow non-unique indexes, but indexes used to enforce SQL keys are always unique.

Thus, you may query a table by any combination of its columns, despite the fact that you don't have an index on those columns. The indexes are merely an implementation aid that each RDBMS offers you, in order to make commonly used queries more efficient. Some RDBMS may give you additional measures, such as keeping a key stored in main memory. They will have a special command, for example

CREATE MEMSTORE ON table COLUMNS cols

(This is not an existing command, just an example.)

In fact, when you create a primary key or a unique combination of fields, nowhere in the SQL specification does it say that an index is created, nor that the retrieval of data by the key is going to be more efficient than a sequential scan!

So, if you want to use a combination of fields that is not unique as a secondary key, you really don't have to specify anything - just start retrieving by that combination! However, if you want to make the retrieval efficient, you'll have to resort to the means your RDBMS provider gives you - be it an index, my imaginary MEMSTORE command, or an intelligent RDBMS that creates indexes without your knowledge based on the fact that you have sent it many queries based on a specific combination of keys... (It learns from experience.)
Partial Indexes

A partial index is an index built over a subset of a table; the subset is defined by a conditional expression (called the predicate of the partial index). The index contains entries only for those table rows that satisfy the predicate.

A major motivation for partial indexes is to avoid indexing common values. Since a query searching for a common value (one that accounts for more than a few percent of all the table rows) will not use the index anyway, there is no point in keeping those rows in the index at all. This reduces the size of the index, which will speed up queries that do use the index. It will also speed up many table update operations because the index does not need to be updated in all cases. The first example below shows a possible application of this idea.

Example: Setting up a Partial Index to Exclude Common Values

Suppose you are storing web server access logs in a database. Most accesses originate from the IP range of your organization, but some are from elsewhere (say, employees on dial-up connections). If your searches by IP are primarily for outside accesses, you probably do not need to index the IP range that corresponds to your organization's subnet.

Assume a table like this:

CREATE TABLE access_log (
    url varchar,
    client_ip inet,
    ...
);

To create a partial index that suits our example, use a command such as this:

CREATE INDEX access_log_client_ip_ix ON access_log (client_ip)
    WHERE NOT (client_ip > inet '192.168.100.0' AND client_ip < inet '192.168.100.255');

A typical query that can use this index would be:

SELECT * FROM access_log WHERE url = '/index.html' AND client_ip = inet '212.78.10.32';

A query that cannot use this index is:

SELECT * FROM access_log WHERE client_ip = inet '192.168.100.23';

Observe that this kind of partial index requires that the common values be predetermined. If the distribution of values is inherent (due to the nature of the application) and static (not changing over time), this is not difficult, but if the common values are merely due to the coincidental data load, this can require a lot of maintenance work.

Another possibility is to exclude values from the index that the typical query workload is not interested in; this is shown in the second example below. This results in the same advantages as listed above, but it prevents the uninteresting values from being accessed via the index at all, even if an index scan might be profitable in that case. Obviously, setting up partial indexes for this kind of scenario will require a lot of care and experimentation.

Example: Setting up a Partial Index to Exclude Uninteresting Values

If you have a table that contains both billed and unbilled orders, where the unbilled orders take up a small fraction of the total table and yet are the most-accessed rows, you can improve performance by creating an index on just the unbilled rows. The command to create the index would look like this:

CREATE INDEX orders_unbilled_index ON orders (order_nr)
    WHERE billed IS NOT TRUE;

A possible query to use this index would be:

SELECT * FROM orders WHERE billed IS NOT TRUE AND order_nr < 10000;

However, the index can also be used in queries that do not involve order_nr at all, e.g.,

SELECT * FROM orders WHERE billed IS NOT TRUE AND amount > 5000.00;

This is not as efficient as a partial index on the amount column would be, since the system has to scan the entire index. Yet, if there are relatively few unbilled orders, using this partial index just to find the unbilled orders could be a win.
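Whether the planner actually chooses the partial index for a given query can be checked with the EXPLAIN command (discussed further in the section Examining Index Usage below), for example:

EXPLAIN SELECT * FROM orders WHERE billed IS NOT TRUE AND order_nr < 10000;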
Note that this query cannot use the index:

SELECT * FROM orders WHERE order_nr = 3501;

The order 3501 might be among the billed or among the unbilled orders.

The second example also illustrates that the indexed column and the column used in the predicate do not need to match. PostgreSQL supports partial indexes with arbitrary predicates, so long as only columns of the table being indexed are involved. However, keep in mind that the predicate must match the conditions used in the queries that are supposed to benefit from the index. To be precise, a partial index can be used in a query only if the system can recognize that the query's WHERE condition mathematically implies the index's predicate. PostgreSQL does not have a sophisticated theorem prover that can recognize mathematically equivalent predicates that are written in different forms. (Not only is such a general theorem prover extremely difficult to create, it would probably be too slow to be of any real use.) The system can recognize simple inequality implications, for example x < 1 implies x < 2; otherwise the predicate condition must exactly match the query's WHERE condition or the index will not be recognized as usable.
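In practice, such an inequality implication might look like this (a sketch reusing the orders table; the index name and the cutoff values are made up for illustration):

CREATE INDEX orders_small_idx ON orders (order_nr) WHERE order_nr < 1000;

-- can use the index: order_nr < 500 implies order_nr < 1000
SELECT * FROM orders WHERE order_nr < 500;

-- cannot use the index: order_nr < 2000 does not imply order_nr < 1000
SELECT * FROM orders WHERE order_nr < 2000;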
A third possible use for partial indexes does not require the index to be used in queries at all. The idea here is to create a unique index over a subset of a table, as in the example below. This enforces uniqueness among the rows that satisfy the index predicate, without constraining those that do not.

Example: Setting up a Partial Unique Index

Suppose that we have a table describing test outcomes. We wish to ensure that there is only one successful entry for a given subject and target combination, but there might be any number of unsuccessful entries. Here is one way to do it:

CREATE TABLE tests (
    subject text,
    target text,
    success bool,
    ...
);

CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target)
    WHERE success;

This is a particularly efficient way of doing it when there are few successful trials and many unsuccessful ones.

Finally, a partial index can also be used to override the system's query plan choices. It may occur that data sets with peculiar distributions cause the system to use an index when it really should not. In that case the index can be set up so that it is not available for the offending query. Normally, PostgreSQL makes reasonable choices about index usage (e.g., it avoids indexes when retrieving common values, so the earlier example really only saves index size; it is not required to avoid index usage), and grossly incorrect plan choices are cause for a bug report.

Keep in mind that setting up a partial index indicates that you know at least as much as the query planner knows, in particular that you know when an index might be profitable. Forming this knowledge requires experience and an understanding of how indexes in PostgreSQL work. In most cases, the advantage of a partial index over a regular index will not be much. More information about partial indexes can be found in the research literature.

Examining Index Usage

Although indexes in PostgreSQL do not need maintenance and tuning, it is still important to check which indexes are actually used by the real-life query workload. Examining index usage is done with the EXPLAIN command; a sketch of its use for this purpose appears at the end of this section.

It is difficult to formulate a general procedure for determining which indexes to set up. There are a number of typical cases that have been shown in the examples throughout the previous sections. A good deal of experimentation will be necessary in most cases. The rest of this section gives some tips for that.

Always run ANALYZE first. This command collects statistics about the distribution of the values in the table. This information is required to estimate the number of rows returned by a query, which the planner needs in order to assign realistic costs to each possible query plan. In the absence of any real statistics, some default values are assumed, which are almost certain to be inaccurate. Examining an application's index usage without having run ANALYZE is therefore a lost cause.

Use real data for experimentation. Using test data for setting up indexes will tell you what indexes you need for the test data, but that is all. It is especially fatal to use proportionally reduced data sets. While selecting 1000 out of 100000 rows could be a candidate for an index, selecting 1 out of 100 rows will hardly be, because the 100 rows probably fit within a single disk page, and there is no plan that can beat sequentially fetching 1 disk page. Also be careful when making up test data, which is often unavoidable when the application is not in production use yet. Values that are very similar, completely random, or inserted in sorted order will skew the statistics away from the distribution that real data would have.

When indexes are not used, it can be useful for testing to force their use. There are run-time parameters that can turn off various plan types (described in the Administrator's Guide). For instance, turning off sequential scans (enable_seqscan) and nested-loop joins (enable_nestloop), which are the most basic plans, will force the system to use a different plan. If the system still chooses a sequential scan or nested-loop join, then there is probably a more fundamental reason why the index is not used; for example, the query condition does not match the index. (What kind of query can use what kind of index is explained in the previous sections.)

If forcing index usage does use the index, then there are two possibilities: either the system is right and using the index is indeed not appropriate, or the cost estimates of the query plans do not reflect reality. So you should time your query with and without indexes. The EXPLAIN ANALYZE command can be useful here.

If it turns out that the cost estimates are wrong, there are, again, two possibilities. The total cost is computed from the per-row costs of each plan node times the selectivity estimate of the plan node. The costs of the plan nodes can be tuned with run-time parameters (described in the Administrator's Guide). An inaccurate selectivity estimate is due to insufficient statistics. It may be possible to improve this by tuning the statistics-gathering parameters (see the ALTER TABLE reference).

If you do not succeed in adjusting the costs to be more appropriate, then you may have to resort to forcing index usage explicitly. You may also want to contact the PostgreSQL developers to examine the issue.
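For example, a session for such an experiment might look like this (a sketch: the table and index come from the examples earlier in this chapter, and run times should be compared over several repetitions):

-- collect statistics so the planner has something to work with
ANALYZE test1;

-- baseline plan and actual run time
EXPLAIN ANALYZE SELECT content FROM test1 WHERE id = 42;

-- temporarily forbid sequential scans to see whether the index gets used
SET enable_seqscan TO off;
EXPLAIN ANALYZE SELECT content FROM test1 WHERE id = 42;

-- restore the default setting
SET enable_seqscan TO on;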