Routine Database Maintenance Tasks

PostgreSQL, like any database software, requires that certain tasks be performed regularly to achieve optimum performance. The tasks discussed here are required, but they are repetitive in nature and can easily be automated using standard Unix tools such as cron scripts. PostgreSQL is low-maintenance compared to other database management systems, and the maintenance it does need is fairly easy, provided you read the documentation.

There are three tasks that must be performed on a periodic basis. The first is making backups: if you do not have a current backup and your system experiences a catastrophic failure, you will lose your data. Backup procedures are covered elsewhere in this documentation. The second is vacuuming, and the third is updating the planner statistics with the ANALYZE command; both are discussed below. Another task that may need periodic attention is log file management, which is also discussed below.

Routine Vacuuming

PostgreSQL's VACUUM command must be run on a regular basis for several reasons:

- To recover or reuse disk space occupied by updated or deleted rows.
- To update the data statistics used by the PostgreSQL query planner.
- To protect against loss of very old data due to transaction ID wraparound.

The frequency and scope of the VACUUM operations performed for each of these reasons will vary depending on the needs of each site. Therefore, database administrators must understand these issues and develop an appropriate maintenance strategy. This section concentrates on explaining the high-level issues; for details about command syntax and so on, see the VACUUM reference page.

The standard form of VACUUM does not interfere with production database operations. Commands such as SELECT, INSERT, UPDATE, and DELETE will continue to function normally, though you will not be able to modify a table's definition (for example, with ALTER TABLE ADD COLUMN) while it is being vacuumed. PostgreSQL 8.0 introduced configuration parameters that can further reduce the performance impact of background vacuuming; see the cost-based vacuum delay parameters in the server configuration documentation. An automated mechanism for performing the necessary VACUUM operations has been added in PostgreSQL 8.1; see the section on the auto-vacuum daemon below.

Recovering disk space

In normal PostgreSQL operation, an UPDATE or DELETE of a row does not immediately remove the old version of the row. This approach is necessary to gain the benefits of multiversion concurrency control (MVCC): the row version must not be deleted while it is still potentially visible to other transactions. But eventually, an outdated or deleted row version is no longer of interest to any transaction. The space it occupies must then be reclaimed for reuse by new rows, to avoid unbounded growth of disk space requirements. This is done by running VACUUM.

Clearly, a table that receives frequent updates or deletes will need to be vacuumed more often than a table that is seldom updated. It may be useful to set up periodic cron tasks that VACUUM only selected tables, skipping tables that are known not to change often. This is only likely to be helpful if you have both large heavily-updated tables and large seldom-updated tables; the extra cost of vacuuming a small table isn't enough to be worth worrying about.
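As a sketch of this kind of selective maintenance (the table names here are purely illustrative), a nightly cron job could feed psql a script along these lines, while a less frequent job handles the database-wide VACUUM:

    -- run nightly against the target database
    VACUUM ANALYZE orders;        -- large, heavily-updated table
    VACUUM ANALYZE order_lines;   -- likewise
    -- seldom-updated lookup tables are left to the weekly database-wide VACUUM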
There are two variants of the VACUUM command. The first form, known as "lazy vacuum" or just VACUUM, marks expired data in tables and indexes for future reuse; it does not attempt to reclaim the space used by this expired data unless the space is at the end of the table and an exclusive table lock can be easily obtained. Unused space at the start or middle of the file does not result in the file being shortened and space being returned to the operating system. This variant of VACUUM can be run concurrently with normal database operations.

The second form is the VACUUM FULL command. This uses a more aggressive algorithm for reclaiming the space consumed by expired row versions, and any space freed by VACUUM FULL is immediately returned to the operating system. Unfortunately, this variant acquires an exclusive lock on each table while processing it, so frequent use of VACUUM FULL can have an extremely negative effect on the performance of concurrent database queries.

The standard form of VACUUM is best used with the goal of maintaining a fairly level steady-state usage of disk space. If you need to return disk space to the operating system you can use VACUUM FULL, but what is the point of releasing disk space that will only have to be allocated again soon? Moderately frequent standard VACUUM runs are a better approach than infrequent VACUUM FULL runs for maintaining heavily-updated tables.

Recommended practice for most sites is to schedule a database-wide VACUUM once a day at a low-usage time of day, supplemented by more frequent vacuuming of heavily-updated tables if necessary. (Some installations with an extremely high rate of data modification VACUUM busy tables as often as once every few minutes.) If you have multiple databases in a cluster, don't forget to VACUUM each one; the vacuumdb program may be helpful.

VACUUM FULL is recommended for cases where you know you have deleted the majority of rows in a table, so that the steady-state size of the table can be shrunk substantially with VACUUM FULL's more aggressive approach. Use plain VACUUM, not VACUUM FULL, for routine vacuuming for space recovery.

If you have a table whose entire contents are deleted on a periodic basis, consider doing it with TRUNCATE rather than using DELETE followed by VACUUM. TRUNCATE removes the entire content of the table immediately, without requiring a subsequent VACUUM or VACUUM FULL to reclaim the now-unused disk space.
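For instance, a job that repopulates a work table from scratch each night (the table name is hypothetical) could avoid the DELETE-plus-VACUUM cycle entirely:

    TRUNCATE staging_events;            -- reclaims the space immediately
    -- rather than:
    --   DELETE FROM staging_events;
    --   VACUUM staging_events;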
Updating planner statistics

The PostgreSQL query planner relies on statistical information about the contents of tables in order to generate good plans for queries. These statistics are gathered by the ANALYZE command, which can be invoked by itself or as an optional step in VACUUM. It is important to have reasonably accurate statistics; otherwise, poor choices of plans may degrade database performance.

As with vacuuming for space recovery, frequent updates of statistics are more useful for heavily-updated tables than for seldom-updated ones. But even for a heavily-updated table, there may be no need for statistics updates if the statistical distribution of the data is not changing much. A simple rule of thumb is to think about how much the minimum and maximum values of the columns in the table change. For example, a timestamp column that contains the time of row update will have a constantly-increasing maximum value as rows are added and updated; such a column will probably need more frequent statistics updates than, say, a column containing URLs for pages accessed on a website. The URL column may receive changes just as often, but the statistical distribution of its values probably changes relatively slowly.

It is possible to run ANALYZE on specific tables and even just specific columns of a table, so the flexibility exists to update some statistics more frequently than others if your application requires it. In practice, however, it is usually best to just analyze the entire database, because it is a fast operation: ANALYZE uses a statistically random sampling of the rows of a table rather than reading every single row.

Although per-column tweaking of ANALYZE frequency may not be very productive, you may well find it worthwhile to do per-column adjustment of the level of detail of the statistics collected by ANALYZE. Columns that are heavily used in WHERE clauses and have highly irregular data distributions may require a finer-grain data histogram than other columns. See ALTER TABLE ... SET STATISTICS; a sketch appears below.

Recommended practice for most sites is to schedule a database-wide ANALYZE once a day at a low-usage time of day; this can usefully be combined with a nightly VACUUM. However, sites with relatively slowly changing table statistics may find that this is overkill, and that less-frequent ANALYZE runs are sufficient.
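As a sketch of such per-column tuning (the table and column names are hypothetical), the statistics target for a skewed, frequently-filtered column can be raised and the column re-analyzed on its own:

    ALTER TABLE orders ALTER COLUMN status SET STATISTICS 200;  -- finer-grained histogram than the default
    ANALYZE orders (status);                                    -- refresh statistics for just this column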
Preventing transaction ID wraparound failures

PostgreSQL's MVCC transaction semantics depend on being able to compare transaction ID (XID) numbers: a row version with an insertion XID greater than the current transaction's XID is "in the future" and should not be visible to the current transaction. But since transaction IDs have limited size (32 bits at this writing), a cluster that runs for a long time (more than 4 billion transactions) would suffer transaction ID wraparound: the XID counter wraps around to zero, and all of a sudden transactions that were in the past appear to be in the future, which means their outputs become invisible. In short, catastrophic data loss. (Actually the data is still there, but that's cold comfort if you can't get at it.)

To avoid this, it is necessary to vacuum every table in every database at least once every billion transactions. In practice this isn't an onerous requirement, but since the consequences of failing to meet it can be complete data loss (not just wasted disk space or slow performance), some special provisions have been made to help database administrators avoid disaster. For each database in the cluster, PostgreSQL keeps track of the time of the last database-wide VACUUM. When any database approaches the billion-transaction danger level, the system begins to emit warning messages. If nothing is done, it will eventually shut down normal operations until appropriate manual maintenance is done. The remainder of this section gives the details.

XID comparison distinguishes two special XIDs, numbers 1 and 2 (BootstrapXID and FrozenXID). These two XIDs are always considered older than every normal XID. Normal XIDs (those greater than 2) are compared using modulo-2^31 arithmetic. This means that for every normal XID, there are two billion XIDs that are older and two billion that are newer; another way to say it is that the normal XID space is circular with no endpoint. Therefore, once a row version has been created with a particular normal XID, the row version will appear to be "in the past" for the next two billion transactions, no matter which normal XID we are talking about. If the row version still exists after more than two billion transactions, it will suddenly appear to be in the future.

To prevent data loss, old row versions must be reassigned the XID FrozenXID sometime before they reach the two-billion-transactions-old mark. Once they are assigned this special XID, they will appear to be "in the past" to all normal transactions regardless of wraparound issues, and so such row versions will be good until deleted, no matter how long that is. This reassignment of XIDs is handled by VACUUM. VACUUM's normal policy is to reassign FrozenXID to any row version with a normal XID more than one billion transactions in the past. This policy preserves the original insertion XID until it is not likely to be of interest anymore. (In fact, most row versions will probably live and die without ever being "frozen".) With this policy, the maximum safe interval between VACUUM runs on any table is exactly one billion transactions: if you wait longer, it's possible that a row version that was not quite old enough to be reassigned last time is now more than two billion transactions old and has wrapped around into the future; that is, it is lost to you. (Of course, it will reappear after another two billion transactions, but that's no help.)

Since periodic VACUUM runs are needed anyway for the reasons described earlier, it's unlikely that any table would go unvacuumed for as long as a billion transactions. But to help administrators ensure this constraint is met, VACUUM stores transaction ID statistics in the system table pg_database. In particular, the datfrozenxid column of a database's pg_database row is updated at the completion of any database-wide VACUUM operation (that is, a VACUUM that does not name a specific table). The value stored in this field is the "freeze" cutoff XID that was used by that VACUUM command: all normal XIDs older than this cutoff XID are guaranteed to have been replaced by FrozenXID within that database. A convenient way to examine this information is to execute the query

    SELECT datname, age(datfrozenxid) FROM pg_database;

The age column measures the number of transactions from the cutoff XID to the current transaction's XID. With the standard freezing policy, the age column will start at one billion for a freshly-vacuumed database. When the age approaches two billion, the database must be vacuumed again to avoid risk of wraparound failures.

Recommended practice is to VACUUM each database at least once every half a billion (500 million) transactions, so as to provide plenty of safety margin. To help meet this rule, each database-wide VACUUM automatically delivers a warning if there are any pg_database entries showing an age of more than 1.5 billion transactions, for example:

    play=# VACUUM;
    WARNING:  database "mydb" must be vacuumed within 177009986 transactions
    HINT:  To avoid a database shutdown, execute a full-database VACUUM in "mydb".
    VACUUM

If the warnings emitted by VACUUM are ignored, PostgreSQL will begin to emit a warning like the above on every transaction start once there are fewer than 10 million transactions left until wraparound. If those warnings are also ignored, the system will shut down and refuse to execute any new transactions once there are fewer than 1 million transactions left until wraparound:

    play=# select 2+2;
    ERROR:  database is shut down to avoid wraparound data loss in database "mydb"
    HINT:  Stop the postmaster and use a standalone backend to VACUUM in "mydb".

The 1-million-transaction safety margin exists to let the administrator recover without data loss by manually executing the required VACUUM commands. However, since the system will not execute commands once it has gone into safety shutdown mode, the only way to do this is to stop the server and use a single-user backend to execute VACUUM. The shutdown mode is not enforced by a single-user backend. See the postgres reference page for details about using a single-user backend.
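Rather than waiting for the server's warnings, the age check shown above can be turned into a simple monitoring query; this is just a sketch, and the 1.5-billion cutoff merely mirrors the warning threshold mentioned earlier:

    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    WHERE age(datfrozenxid) > 1500000000;   -- any rows returned need a database-wide VACUUM soon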
VACUUM with the FREEZE option uses a more aggressive freezing policy: row versions are frozen if they are old enough to be considered good by all open transactions. In particular, if a VACUUM FREEZE is performed in an otherwise-idle database, it is guaranteed that all row versions in that database will be frozen. Hence, as long as the database is not modified in any way, it will not need subsequent vacuuming to avoid transaction ID wraparound problems. This technique is used by initdb to prepare the template0 database. It should also be used to prepare any user-created databases that are to be marked datallowconn = false in pg_database, since there isn't any convenient way to VACUUM a database that you can't connect to.

A database that is marked datallowconn = false in pg_database is assumed to be properly frozen; the automatic warnings and the wraparound protection shutdown do not take such databases into account. Therefore it's up to you to ensure you have correctly frozen a database before you mark it with datallowconn = false.
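The following sketch shows how such a database might be prepared; the database name is hypothetical, the UPDATE requires superuser privileges, and the VACUUM FREEZE must be run while connected to the target database with no other sessions active:

    -- while connected to frozendb, with no other activity in it:
    VACUUM FREEZE;

    -- then, from any database, prevent further connections:
    UPDATE pg_database SET datallowconn = false WHERE datname = 'frozendb';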
The auto-vacuum daemon

Beginning in PostgreSQL 8.1, there is a separate optional server process called the autovacuum daemon, whose purpose is to automate the execution of VACUUM and ANALYZE commands. When enabled, the autovacuum daemon runs periodically and checks for tables that have had a large number of inserted, updated, or deleted tuples. These checks use the row-level statistics collection facility; therefore, the autovacuum daemon cannot be used unless stats_start_collector and stats_row_level are set to true. Also, it's important to allow a connection slot for the autovacuum process when choosing the value of superuser_reserved_connections.

The autovacuum daemon, when enabled, runs every autovacuum_naptime seconds and determines which database to process. Any database that is close to transaction ID wraparound is processed immediately: in this case, autovacuum issues a database-wide VACUUM call (or VACUUM FREEZE if it's a template database) and then terminates. If no database fulfills this criterion, the one that was least recently processed by autovacuum is chosen. In this case each table in the selected database is checked, and individual VACUUM or ANALYZE commands are issued as needed.

For each table, two conditions are used to determine which operation(s) to apply. If the number of obsolete tuples accumulated since the last VACUUM exceeds the "vacuum threshold", the table is vacuumed. The vacuum threshold is defined as:

    vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuples

where the vacuum base threshold is autovacuum_vacuum_threshold, the vacuum scale factor is autovacuum_vacuum_scale_factor, and the number of tuples is pg_class.reltuples. The number of obsolete tuples is obtained from the statistics collector; it is a semi-accurate count updated by each UPDATE and DELETE operation. (It is only semi-accurate because some information may be lost under heavy load.) For ANALYZE, a similar condition is used: the analyze threshold, defined as

    analyze threshold = analyze base threshold + analyze scale factor * number of tuples

is compared to the total number of tuples inserted, updated, or deleted since the last ANALYZE.

The default thresholds and scale factors are taken from postgresql.conf, but it is possible to override them on a table-by-table basis by making entries in the system catalog pg_autovacuum. If a pg_autovacuum row exists for a particular table, the settings it specifies are applied; otherwise the global settings are used. See the server configuration documentation for more details on the global settings.

Besides the base threshold values and scale factors, there are three more parameters that can be set for each table in pg_autovacuum. The first, pg_autovacuum.enabled, can be set to false to instruct the autovacuum daemon to skip that particular table entirely; in that case autovacuum will only touch the table when it vacuums the entire database to prevent transaction ID wraparound. The other two parameters, the vacuum cost delay (pg_autovacuum.vac_cost_delay) and the vacuum cost limit (pg_autovacuum.vac_cost_limit), are used to set table-specific values for the cost-based vacuum delay feature. If any of the values in pg_autovacuum are set to a negative number, or if a row is not present at all in pg_autovacuum for a particular table, the corresponding values from postgresql.conf are used.

There is not currently any support for making pg_autovacuum entries, except by doing manual INSERTs into the catalog. This feature will be improved in future releases, and it is likely that the catalog definition will change. Note that the contents of the pg_autovacuum system catalog are currently not saved in database dumps created by the tools pg_dump and pg_dumpall; if you want to preserve them across a dump/reload cycle, make sure you dump the catalog manually.
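As a worked example of the vacuum threshold formula (the numbers are chosen only for illustration), suppose the base threshold is 1000, the scale factor is 0.4, and pg_class.reltuples for a table is 100000. Then

    vacuum threshold = 1000 + 0.4 * 100000 = 41000

so autovacuum would issue a VACUUM of that table once roughly 41000 of its tuples had become obsolete since the previous vacuum.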
Routine Reindexing

In some situations it is worthwhile to rebuild indexes periodically with the REINDEX command.

In PostgreSQL releases before 7.4, periodic reindexing was frequently necessary to avoid "index bloat", due to the lack of internal space reclamation in B-tree indexes. Any situation in which the range of index keys changed over time (for example, an index on timestamps in a table where old entries are eventually deleted) would result in bloat, because index pages for no-longer-needed portions of the key range were not reclaimed for reuse. Over time, the index could grow to be far larger than the amount of useful data in it.

In PostgreSQL 7.4 and later, index pages that have become completely empty are reclaimed for reuse. There is still a possibility of inefficient use of space: if all but a few index keys on a page have been deleted, the page remains allocated. So a usage pattern in which all but a few keys in each range are eventually deleted will see poor use of space. The potential for bloat is not unlimited (at worst there will be one key per page), but it may still be worthwhile to schedule periodic reindexing for indexes that have such usage patterns.

The potential for bloat in non-B-tree indexes has not been well characterized. It is a good idea to keep an eye on the index's physical size when using any non-B-tree index type. Also, for B-tree indexes, a freshly-constructed index is somewhat faster to access than one that has been updated many times, because logically adjacent pages are usually also physically adjacent in a newly built index. (This consideration does not currently apply to non-B-tree indexes.) It might be worthwhile to reindex periodically just to improve access speed.

Log File Maintenance

It is a good idea to save the database server's log output somewhere, rather than just routing it to /dev/null. The log output is invaluable when it comes time to diagnose problems. However, the log output tends to be voluminous (especially at higher debug levels), so you won't want to save it indefinitely. You need to rotate the log files so that new log files are started and old ones removed after a reasonable period of time.

If you simply direct the stderr of postgres into a file, you will have log output, but the only way to truncate the log file is to stop and restart the server. This may be acceptable if you are using PostgreSQL in a development environment, but few production servers would find this behavior acceptable.

A better approach is to send the server's stderr output to some type of log rotation program. There is a built-in log rotation program, which you can use by setting the configuration parameter redirect_stderr to true in postgresql.conf. The control parameters for this program are described in the server configuration documentation. Alternatively, you might prefer to use an external log rotation program if you have one that you are already using with other server software. For example, the rotatelogs tool included in the Apache distribution can be used with PostgreSQL. To do this, just pipe the server's stderr output to the desired program. If you start the server with pg_ctl, then stderr is already redirected to stdout, so you just need a pipe command, for example:

    pg_ctl start | rotatelogs /var/log/pgsql_log 86400

Another production-grade approach to managing log output is to send it all to syslog and let syslog deal with file rotation. To do this, set the configuration parameter log_destination to syslog (to log to syslog only) in postgresql.conf. Then you can send a SIGHUP signal to the syslog daemon whenever you want to force it to start writing a new log file. If you want to automate log rotation, the logrotate program can be configured to work with log files from syslog.

On many systems, however, syslog is not very reliable, particularly with large log messages; it may truncate or drop messages just when you need them the most. Also, on Linux, syslog will sync each message to disk, yielding poor performance. (You can use a "-" at the start of the file name in the syslog configuration file to disable this syncing.)

Note that all the solutions described above take care of starting new log files at configurable intervals, but they do not handle deletion of old, no-longer-interesting log files. You will probably want to set up a batch job to periodically delete old log files. Another possibility is to configure the rotation program so that old log files are overwritten cyclically.
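As a sketch of the built-in rotation approach described above, a postgresql.conf fragment might look like the following; the directory, file name pattern, and rotation interval are only examples, so check the server configuration documentation for the exact parameter names and defaults in your release:

    redirect_stderr = true
    log_directory = 'pg_log'                  # relative to the data directory
    log_filename = 'postgresql-%Y-%m-%d.log'  # one file per day
    log_rotation_age = 1440                   # rotate after 1440 minutes (one day)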