<!-- $PostgreSQL: pgsql/doc/src/sgml/maintenance.sgml,v 1.57 2006/08/01 00:09:06 momjian Exp $ -->
<chapter id="maintenance">
<title>Routine Database Maintenance Tasks</title>
<indexterm zone="maintenance">
<primary>maintenance</primary>
</indexterm>
<indexterm zone="maintenance">
<primary>Routine maintenance</primary>
</indexterm>
<para>
PostgreSQL, like any database software, requires that certain tasks
be performed regularly to achieve optimum performance. The tasks
discussed here are <emphasis>required</emphasis>, but they
are repetitive in nature and can easily be automated using standard
Unix tools such as <application>cron</application> scripts.
</para>
<para>
<productname>PostgreSQL</productname> is low-maintenance compared
to other database management systems. Keeping up with the required
tasks is also fairly easy, provided you read the documentation.
</para>
<para>
There are three tasks that <emphasis>must</emphasis> be performed on
a periodic basis. The first is making backups: if you do not have a
current backup and your system experiences a catastrophic failure,
you will lose your data. Backup procedures are described in
<xref linkend="backup">.
The second is vacuuming, which is discussed in <xref linkend="routine-vacuuming">.
The third is updating the planner statistics with the
<command>ANALYZE</command> command, as discussed in <xref linkend="vacuum-for-statistics">.
</para>
<para>
Another task that may need periodic attention is log file management.
This is discussed in <xref linkend="logfile-maintenance">.
</para>
<sect1 id="routine-vacuuming">
<title>Routine Vacuuming</title>
<indexterm zone="routine-vacuuming">
<primary>vacuum</primary>
</indexterm>
<para>
<productname>PostgreSQL</productname>'s <command>VACUUM</> command
<emphasis>must</emphasis> be run on a regular basis for several reasons:
<orderedlist>
<listitem>
<simpara>To recover or reuse disk space occupied by updated or deleted
rows.</simpara>
</listitem>
<listitem>
<simpara>To update data statistics used by the
<productname>PostgreSQL</productname> query planner.</simpara>
</listitem>
<listitem>
<simpara>To protect against loss of very old data due to
<firstterm>transaction ID wraparound</>.</simpara>
</listitem>
</orderedlist>
The frequency and scope of the <command>VACUUM</> operations
performed for each of these reasons will vary depending on the
needs of each site. Therefore, database administrators must
understand these issues and develop an appropriate maintenance
strategy. This section concentrates on explaining the high-level
issues; for details about command syntax and so on, see the <xref
linkend="sql-vacuum" endterm="sql-vacuum-title"> reference page.
</para>
<para>
The standard form of <command>VACUUM</> does not interfere with production
database operations. Commands such as <command>SELECT</>, <command>INSERT</>,
<command>UPDATE</>, and <command>DELETE</> will continue to function
normally, though you will not be able to modify the definition of a table
(for example, with <command>ALTER TABLE ADD COLUMN</>) while it is being
vacuumed. <productname>PostgreSQL</productname> 8.0 introduced new
configuration parameters to further reduce the potentially negative
performance impact of background vacuuming. See
<xref linkend="runtime-config-resource-vacuum-cost">.
</para>
<para>
An automated mechanism for performing the necessary <command>VACUUM</>
operations has been added in <productname>PostgreSQL</productname> 8.1.
See <xref linkend="autovacuum">.
</para>
<sect2 id="vacuum-for-space-recovery">
<title>Recovering disk space</title>
<indexterm zone="vacuum-for-space-recovery">
<primary>disk space</primary>
</indexterm>
<para>
In normal <productname>PostgreSQL</productname> operation, an
<command>UPDATE</> or <command>DELETE</> of a row does not
immediately remove the old version of the row.
This approach is necessary to gain the benefits of multiversion
concurrency control (see <xref linkend="mvcc">): the row version
must not be deleted while it is still potentially visible to other
transactions. But eventually, an outdated or deleted row version is no
longer of interest to any transaction. The space it occupies must be
reclaimed for reuse by new rows, to avoid infinite growth of disk
space requirements. This is done by running <command>VACUUM</>.
</para>
<para>
Clearly, a table that receives frequent updates or deletes will need
to be vacuumed more often than tables that are seldom updated. It
may be useful to set up periodic <application>cron</> tasks that
<command>VACUUM</command> only selected tables, skipping tables that are known not to
change often. This is only likely to be helpful if you have both
large heavily-updated tables and large seldom-updated tables &mdash; the
extra cost of vacuuming a small table isn't enough to be worth
worrying about.
</para>
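<para>
As an illustration, a <application>cron</> entry along these lines could
vacuum one busy table every hour (the database and table names here are
only hypothetical examples):
<programlisting>
# vacuum a single heavily-updated table every hour, on the hour
0 * * * *  psql -c 'VACUUM busy_table' mydb
</programlisting>
</para>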
<para>
There are two variants of the <command>VACUUM</command>
command. The first form, known as <quote>lazy vacuum</quote> or
just <command>VACUUM</command>, marks expired data in tables and
indexes for future reuse; it does <emphasis>not</emphasis> attempt
to reclaim the space used by this expired data unless the space is
at the end of the table and an exclusive table lock can be easily
obtained. Unused space at the start or middle of the file does
not result in the file being shortened and space returned to the
operating system. This variant of <command>VACUUM</command> can be
run concurrently with normal database operations.
</para>
<para>
The second form is the <command>VACUUM FULL</command>
command. This uses a more aggressive algorithm for reclaiming the
space consumed by expired row versions. Any space that is freed by
<command>VACUUM FULL</command> is immediately returned to the
operating system. Unfortunately, this variant of the
<command>VACUUM</command> command acquires an exclusive lock on
each table while <command>VACUUM FULL</command> is processing
it. Therefore, frequently using <command>VACUUM FULL</command> can
have an extremely negative effect on the performance of concurrent
database queries.
</para>
<para>
The standard form of <command>VACUUM</> is best used with the goal
of maintaining a fairly level steady-state usage of disk space. If
you need to return disk space to the operating system you can use
<command>VACUUM FULL</> &mdash; but what's the point of releasing disk
space that will only have to be allocated again soon? Moderately
frequent standard <command>VACUUM</> runs are a better approach
than infrequent <command>VACUUM FULL</> runs for maintaining
heavily-updated tables.
</para>
<para>
Recommended practice for most sites is to schedule a database-wide
<command>VACUUM</> once a day at a low-usage time of day,
supplemented by more frequent vacuuming of heavily-updated tables
if necessary. (Some installations with an extremely high
rate of data modification vacuum busy tables as
often as once every few minutes.) If you have multiple databases
in a cluster, don't forget to <command>VACUUM</command> each one;
the program <xref linkend="app-vacuumdb" endterm="app-vacuumdb-title">
may be helpful.
</para>
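<para>
For example, a nightly <application>cron</> job along these lines would
vacuum every database in the cluster (a sketch only; connection options
and scheduling will vary by installation):
<programlisting>
# vacuum all databases each night at 3am
0 3 * * *  vacuumdb --all --quiet
</programlisting>
</para>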
<para>
<command>VACUUM FULL</> is recommended for cases where you know
you have deleted the majority of rows in a table, so that the
steady-state size of the table can be shrunk substantially with
<command>VACUUM FULL</>'s more aggressive approach. Use plain
<command>VACUUM</>, not <command>VACUUM FULL</>, for routine
vacuuming for space recovery.
</para>
<para>
If you have a table whose entire contents are deleted on a periodic
basis, consider doing it with <command>TRUNCATE</command> rather
than using <command>DELETE</command> followed by
<command>VACUUM</command>. <command>TRUNCATE</command> removes the
entire content of the table immediately, without requiring a
subsequent <command>VACUUM</command> or <command>VACUUM
FULL</command> to reclaim the now-unused disk space.
</para>
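<para>
For example, assuming a hypothetical staging table that is emptied and
reloaded each night:
<programlisting>
-- instead of:
DELETE FROM staging_data;
VACUUM staging_data;
-- simply:
TRUNCATE staging_data;
</programlisting>
</para>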
</sect2>
<sect2 id="vacuum-for-statistics">
<title>Updating planner statistics</title>
<indexterm zone="vacuum-for-statistics">
<primary>statistics</primary>
<secondary>of the planner</secondary>
</indexterm>
<indexterm zone="vacuum-for-statistics">
<primary>ANALYZE</primary>
</indexterm>
<para>
The <productname>PostgreSQL</productname> query planner relies on
statistical information about the contents of tables in order to
generate good plans for queries. These statistics are gathered by
the <command>ANALYZE</> command, which can be invoked by itself or
as an optional step in <command>VACUUM</>. It is important to have
reasonably accurate statistics, otherwise poor choices of plans may
degrade database performance.
</para>
<para>
As with vacuuming for space recovery, frequent updates of statistics
are more useful for heavily-updated tables than for seldom-updated
ones. But even for a heavily-updated table, there may be no need for
statistics updates if the statistical distribution of the data is
not changing much. A simple rule of thumb is to think about how much
the minimum and maximum values of the columns in the table change.
For example, a <type>timestamp</type> column that contains the time
of row update will have a constantly-increasing maximum value as
rows are added and updated; such a column will probably need more
frequent statistics updates than, say, a column containing URLs for
pages accessed on a website. The URL column may receive changes just
as often, but the statistical distribution of its values probably
changes relatively slowly.
</para>
<para>
It is possible to run <command>ANALYZE</> on specific tables and even
just specific columns of a table, so the flexibility exists to update some
statistics more frequently than others if your application requires it.
In practice, however, it is usually best to just analyze the entire
database, because it is a fast operation: <command>ANALYZE</> reads a
random sample of the rows of a table rather than every single row.
</para>
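<para>
For example (the table and column names are hypothetical):
<programlisting>
ANALYZE;                       -- the entire database
ANALYZE orders;                -- a single table
ANALYZE orders (order_date);   -- a single column of one table
</programlisting>
</para>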
<tip>
<para>
Although per-column tweaking of <command>ANALYZE</> frequency may not be
very productive, you may well find it worthwhile to do per-column
adjustment of the level of detail of the statistics collected by
<command>ANALYZE</>. Columns that are heavily used in <literal>WHERE</> clauses
and have highly irregular data distributions may require a finer-grain
data histogram than other columns. See <command>ALTER TABLE SET
STATISTICS</>.
</para>
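<para>
For instance, to collect a more detailed histogram for one heavily-queried
column (the names and statistics target used here are only illustrative):
<programlisting>
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 200;
</programlisting>
</para>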
</tip>
<para>
Recommended practice for most sites is to schedule a database-wide
<command>ANALYZE</> once a day at a low-usage time of day; this can
usefully be combined with a nightly <command>VACUUM</>. However,
sites with relatively slowly changing table statistics may find that
this is overkill, and that less-frequent <command>ANALYZE</> runs
are sufficient.
</para>
</sect2>
<sect2 id="vacuum-for-wraparound">
<title>Preventing transaction ID wraparound failures</title>
<indexterm zone="vacuum-for-wraparound">
<primary>transaction ID</primary>
<secondary>wraparound</secondary>
</indexterm>
<para>
<productname>PostgreSQL</productname>'s MVCC transaction semantics
depend on being able to compare transaction ID (<acronym>XID</>)
numbers: a row version with an insertion XID greater than the current
transaction's XID is <quote>in the future</> and should not be visible
to the current transaction. But since transaction IDs have limited size
(32 bits at this writing) a cluster that runs for a long time (more
than 4 billion transactions) would suffer <firstterm>transaction ID
wraparound</>: the XID counter wraps around to zero, and all of a sudden
transactions that were in the past appear to be in the future &mdash; which
means their outputs become invisible. In short, catastrophic data loss.
(Actually the data is still there, but that's cold comfort if you can't
get at it.) To avoid this, it is <emphasis>necessary to vacuum every table
in every database at least once every billion transactions</emphasis>.
</para>
<para>
In practice this isn't an onerous requirement, but since the
consequences of failing to meet it can be complete data loss (not
just wasted disk space or slow performance), some special provisions
have been made to help database administrators avoid disaster.
For each database in the cluster, <productname>PostgreSQL</productname>
keeps track of the time of the last database-wide <command>VACUUM</>.
When any database approaches the billion-transaction danger level,
the system begins to emit warning messages. If nothing is done, it
will eventually shut down normal operations until appropriate
manual maintenance is done. The remainder of this
section gives the details.
</para>
<para>
<productname>PostgreSQL</>'s XID comparison logic distinguishes two special XIDs,
numbers 1 and 2 (<literal>BootstrapXID</> and
<literal>FrozenXID</>). These two XIDs are always considered older
than every normal XID. Normal XIDs (those greater than 2) are
compared using modulo-2<superscript>31</> arithmetic. This means
that for every normal XID, there are two billion XIDs that are
<quote>older</> and two billion that are <quote>newer</>; another
way to say it is that the normal XID space is circular with no
endpoint. Therefore, once a row version has been created with a particular
normal XID, the row version will appear to be <quote>in the past</> for
the next two billion transactions, no matter which normal XID we are
talking about. If the row version still exists after more than two billion
transactions, it will suddenly appear to be in the future. To
prevent data loss, old row versions must be reassigned the XID
<literal>FrozenXID</> sometime before they reach the
two-billion-transactions-old mark. Once they are assigned this
special XID, they will appear to be <quote>in the past</> to all
normal transactions regardless of wraparound issues, and so such
row versions will be good until deleted, no matter how long that is. This
reassignment of XID is handled by <command>VACUUM</>.
</para>
<para>
<command>VACUUM</>'s normal policy is to reassign <literal>FrozenXID</>
to any row version with a normal XID more than one billion transactions in the
past. This policy preserves the original insertion XID until it is not
likely to be of interest anymore. (In fact, most row versions will probably
live and die without ever being <quote>frozen</>.) With this policy,
the maximum safe interval between <command>VACUUM</> runs on any table
is exactly one billion transactions: if you wait longer, it's possible
that a row version that was not quite old enough to be reassigned last time
is now more than two billion transactions old and has wrapped around
into the future &mdash; i.e., is lost to you. (Of course, it'll reappear
after another two billion transactions, but that's no help.)
</para>
<para>
Since periodic <command>VACUUM</> runs are needed anyway for the reasons
described earlier, it's unlikely that any table would not be vacuumed
for as long as a billion transactions. But to help administrators ensure
this constraint is met, <command>VACUUM</> stores transaction ID
statistics in the system table <literal>pg_database</>. In particular,
the <literal>datfrozenxid</> column of a database's
<literal>pg_database</> row is updated at the completion of any
database-wide <command>VACUUM</command> operation (i.e.,
<command>VACUUM</> that does not
name a specific table). The value stored in this field is the freeze
cutoff XID that was used by that <command>VACUUM</> command. All normal
XIDs older than this cutoff XID are guaranteed to have been replaced by
<literal>FrozenXID</> within that database. A convenient way to
examine this information is to execute the query
<programlisting>
SELECT datname, age(datfrozenxid) FROM pg_database;
</programlisting>
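In a freshly-vacuumed database the output might look something like this
(the values shown are purely illustrative):
<programlisting>
  datname  |    age
-----------+------------
 mydb      | 1075880837
 template1 | 1075880837
</programlisting>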
The <literal>age</> column measures the number of transactions from the
cutoff XID to the current transaction's XID.
</para>
<para>
With the standard freezing policy, the <literal>age</> column will start
at one billion for a freshly-vacuumed database. When the <literal>age</>
approaches two billion, the database must be vacuumed again to avoid
risk of wraparound failures. Recommended practice is to <command>VACUUM</command> each
database at least once every half-a-billion (500 million) transactions,
so as to provide plenty of safety margin. To help meet this rule,
each database-wide <command>VACUUM</> automatically delivers a warning
if there are any <literal>pg_database</> entries showing an
<literal>age</> of more than 1.5 billion transactions, for example:
<programlisting>
play=# VACUUM;
WARNING: database "mydb" must be vacuumed within 177009986 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in "mydb".
VACUUM
</programlisting>
</para>
<para>
If the warnings emitted by <command>VACUUM</> are ignored,
<productname>PostgreSQL</productname> will begin to emit a warning
like the above on every transaction start once there are fewer than 10
million transactions left until wraparound. If those warnings are also
ignored, the system will shut down and refuse to execute any new
transactions once there are fewer than 1 million transactions left
until wraparound:
<programlisting>
play=# select 2+2;
ERROR: database is shut down to avoid wraparound data loss in database "mydb"
HINT: Stop the postmaster and use a standalone backend to VACUUM in "mydb".
</programlisting>
The 1-million-transaction safety margin exists to let the
administrator recover without data loss, by manually executing the
required <command>VACUUM</> commands. However, since the system will not
execute commands once it has gone into the safety shutdown mode,
the only way to do this is to stop the server and use a single-user
backend to execute <command>VACUUM</>. The shutdown mode is not enforced
by a single-user backend. See the <xref linkend="app-postgres"> reference
page for details about using a single-user backend.
</para>
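<para>
Such a recovery session might look something like this (the data directory
path is an example, and the exact invocation is described on the
referenced page):
<programlisting>
$ postgres --single -D /usr/local/pgsql/data mydb

backend> VACUUM;
</programlisting>
</para>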
<para>
<command>VACUUM</> with the <command>FREEZE</> option uses a more
aggressive freezing policy: row versions are frozen if they are old enough
to be considered good by all open transactions. In particular, if a
<command>VACUUM FREEZE</> is performed in an otherwise-idle
database, it is guaranteed that <emphasis>all</> row versions in that
database will be frozen. Hence, as long as the database is not
modified in any way, it will not need subsequent vacuuming to avoid
transaction ID wraparound problems. This technique is used by
<command>initdb</> to prepare the <literal>template0</> database.
It should also be used to prepare any user-created databases that
are to be marked <literal>datallowconn</> = <literal>false</> in
<literal>pg_database</>, since there isn't any convenient way to
<command>VACUUM</command> a database that you can't connect to.
</para>
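<para>
For example, to freeze a hypothetical database <literal>mydb</> and then
mark it as not accepting connections (the <command>UPDATE</> must be run
as a superuser, connected to some other database):
<programlisting>
-- connected to mydb, with no other sessions active:
VACUUM FREEZE;
-- then, connected to another database:
UPDATE pg_database SET datallowconn = false WHERE datname = 'mydb';
</programlisting>
</para>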
<warning>
<para>
A database that is marked <literal>datallowconn</> = <literal>false</>
in <literal>pg_database</> is assumed to be properly frozen; the
automatic warnings and wraparound protection shutdown do not take
such databases into account. Therefore it's up to you to ensure
you've correctly frozen a database before you mark it with
<literal>datallowconn</> = <literal>false</>.
</para>
</warning>
</sect2>
<sect2 id="autovacuum">
<title id="autovacuum-title">The auto-vacuum daemon</title>
<indexterm>
<primary>autovacuum</primary>
<secondary>general information</secondary>
</indexterm>
<para>
Beginning in <productname>PostgreSQL</productname> 8.1, there is a
separate optional server process called the <firstterm>autovacuum
daemon</firstterm>, whose purpose is to automate the execution of
<command>VACUUM</command> and <command>ANALYZE</command> commands.
When enabled, the autovacuum daemon runs periodically and checks for
tables that have had a large number of inserted, updated or deleted
tuples. These checks use the row-level statistics collection facility;
therefore, the autovacuum daemon cannot be used unless <xref
linkend="guc-stats-start-collector"> and <xref
linkend="guc-stats-row-level"> are set to <literal>true</literal>. Also,
it's important to allow a slot for the autovacuum process when choosing
the value of <xref linkend="guc-superuser-reserved-connections">.
</para>
<para>
The autovacuum daemon, when enabled, runs every <xref
linkend="guc-autovacuum-naptime"> seconds and determines which database
to process. Any database which is close to transaction ID wraparound
is immediately processed. In this case, autovacuum issues a
database-wide <command>VACUUM</command> call, or <command>VACUUM
FREEZE</command> if it's a template database, and then terminates. If
no database fulfills this criterion, the one that was least recently
processed by autovacuum is chosen. In this case each table in
the selected database is checked, and individual <command>VACUUM</command>
or <command>ANALYZE</command> commands are issued as needed.
</para>
<para>
For each table, two conditions are used to determine which operation(s)
to apply. If the number of obsolete tuples since the last
<command>VACUUM</command> exceeds the <quote>vacuum threshold</quote>, the
table is vacuumed. The vacuum threshold is defined as:
<programlisting>
vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuples
</programlisting>
where the vacuum base threshold is
<xref linkend="guc-autovacuum-vacuum-threshold">,
the vacuum scale factor is
<xref linkend="guc-autovacuum-vacuum-scale-factor">,
and the number of tuples is
<structname>pg_class</structname>.<structfield>reltuples</structfield>.
The number of obsolete tuples is obtained from the statistics
collector; it is a semi-accurate count updated by each
<command>UPDATE</command> and <command>DELETE</command> operation. (It
is only semi-accurate because some information may be lost under heavy
load.) For <command>ANALYZE</>, a similar condition is used: the analyze
threshold, defined as
<programlisting>
analyze threshold = analyze base threshold + analyze scale factor * number of tuples
</programlisting>
is compared to the total number of tuples inserted, updated, or deleted
since the last <command>ANALYZE</command>.
</para>
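<para>
As a worked example, suppose the vacuum base threshold is 1000, the vacuum
scale factor is 0.4, and a table contains 10000 tuples (these numbers are
only illustrative; check the parameters above for the actual defaults).
The table becomes eligible for vacuuming once more than
<programlisting>
1000 + 0.4 * 10000 = 5000
</programlisting>
tuples have been obsoleted since the last <command>VACUUM</command>.
</para>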
<para>
The default thresholds and scale factors are taken from
<filename>postgresql.conf</filename>, but it is possible to override them
on a table-by-table basis by making entries in the system catalog
<link
linkend="catalog-pg-autovacuum"><structname>pg_autovacuum</></link>.
If a <structname>pg_autovacuum</structname> row exists for a particular
table, the settings it specifies are applied; otherwise the global
settings are used. See <xref linkend="runtime-config-autovacuum"> for
more details on the global settings.
</para>
<para>
Besides the base threshold values and scale factors, there are three
more parameters that can be set for each table in
<structname>pg_autovacuum</structname>.
The first, <structname>pg_autovacuum</>.<structfield>enabled</>,
can be set to <literal>false</literal> to instruct the autovacuum daemon
to skip that particular table entirely. In this case
autovacuum will only touch the table when it vacuums the entire database
to prevent transaction ID wraparound.
The other two parameters, the vacuum cost delay
(<structname>pg_autovacuum</structname>.<structfield>vac_cost_delay</structfield>)
and the vacuum cost limit
(<structname>pg_autovacuum</structname>.<structfield>vac_cost_limit</structfield>),
are used to set table-specific values for the
<xref linkend="runtime-config-resource-vacuum-cost" endterm="runtime-config-resource-vacuum-cost-title">
feature.
</para>
<para>
If any of the values in <structname>pg_autovacuum</structname>
are set to a negative number, or if a row is not present at all in
<structname>pg_autovacuum</structname> for any particular table, the
corresponding values from <filename>postgresql.conf</filename> are used.
</para>
<para>
There is not currently any support for making
<structname>pg_autovacuum</structname> entries, except by doing
manual <command>INSERT</>s into the catalog. This feature will be
improved in future releases, and it is likely that the catalog
definition will change.
</para>
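<para>
For instance, an entry disabling autovacuum for one table might be made
like this (a sketch only; verify the column list against the
<structname>pg_autovacuum</> definition for your server version first):
<programlisting>
INSERT INTO pg_autovacuum
       (vacrelid, enabled,
        vac_base_thresh, vac_scale_factor,
        anl_base_thresh, anl_scale_factor,
        vac_cost_delay, vac_cost_limit)
VALUES ('mytable'::regclass, false, -1, -1, -1, -1, -1, -1);
</programlisting>
</para>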
<caution>
<para>
The contents of the <structname>pg_autovacuum</structname> system
catalog are currently not saved in database dumps created by
the tools <command>pg_dump</command> and <command>pg_dumpall</command>.
If you want to preserve them across a dump/reload cycle, make sure you
dump the catalog manually.
</para>
</caution>
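<para>
One way to dump it manually is with <command>COPY</>, for example (the
output file name is arbitrary):
<programlisting>
psql -c "COPY pg_autovacuum TO STDOUT" mydb > pg_autovacuum.dump
</programlisting>
</para>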
</sect2>
</sect1>
<sect1 id="routine-reindex">
<title>Routine Reindexing</title>
<indexterm zone="routine-reindex">
<primary>reindex</primary>
</indexterm>
<para>
In some situations it is worthwhile to rebuild indexes periodically
with the <xref linkend="sql-reindex" endterm="sql-reindex-title">
command.
</para>
<para>
In <productname>PostgreSQL</> releases before 7.4, periodic reindexing
was frequently necessary to avoid <quote>index bloat</>, due to lack of
internal space reclamation in B-tree indexes. Any situation in which the
range of index keys changed over time &mdash; for example, an index on
timestamps in a table where old entries are eventually deleted &mdash;
would result in bloat, because index pages for no-longer-needed portions
of the key range were not reclaimed for re-use. Over time, the index size
could grow without bound relative to the amount of useful data in it.
</para>
<para>
In <productname>PostgreSQL</> 7.4 and later, index pages that have become
completely empty are reclaimed for re-use. There is still a possibility
for inefficient use of space: if all but a few index keys on a page have
been deleted, the page remains allocated. So a usage pattern in which all
but a few keys in each range are eventually deleted will see poor use of
space. The potential for bloat is not indefinite &mdash; at worst there
will be one key per page &mdash; but it may still be worthwhile to schedule
periodic reindexing for indexes that have such usage patterns.
</para>
<para>
The potential for bloat in non-B-tree indexes has not been well
characterized. It is a good idea to keep an eye on the index's physical
size when using any non-B-tree index type.
</para>
<para>
Also, for B-tree indexes a freshly-constructed index is somewhat faster to
access than one that has been updated many times, because logically
adjacent pages are usually also physically adjacent in a newly built index.
(This consideration does not currently apply to non-B-tree indexes.) It
might be worthwhile to reindex periodically just to improve access speed.
</para>
</sect1>
<sect1 id="logfile-maintenance">
<title>Log File Maintenance</title>
<indexterm zone="logfile-maintenance">
<primary>server log</primary>
<secondary>log file maintenance</secondary>
</indexterm>
<para>
It is a good idea to save the database server's log output
somewhere, rather than just routing it to <filename>/dev/null</>.
The log output is invaluable when it comes time to diagnose
problems. However, the log output tends to be voluminous
(especially at higher debug levels) and you won't want to save it
indefinitely. You need to <quote>rotate</> the log files so that
new log files are started and old ones removed after a reasonable
period of time.
</para>
<para>
If you simply direct the <systemitem>stderr</> of
<command>postgres</command> into a
file, you will have log output, but
the only way to truncate the log file is to stop and restart
the server. This may be OK if you are using
<productname>PostgreSQL</productname> in a development environment,
but few production servers would find this behavior acceptable.
</para>
<para>
A better approach is to send the server's
<systemitem>stderr</> output to some type of log rotation program.
There is a built-in log rotation program, which you can use by
setting the configuration parameter <literal>redirect_stderr</> to
<literal>true</> in <filename>postgresql.conf</>. The control
parameters for this program are described in <xref
linkend="runtime-config-logging-where">.
</para>
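<para>
A minimal <filename>postgresql.conf</> setup using the built-in rotation
might look like this (the values are examples, not recommendations):
<programlisting>
redirect_stderr = true
log_directory = 'pg_log'                 # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d.log'
log_rotation_age = 1440                  # start a new file daily (minutes)
</programlisting>
</para>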
<para>
Alternatively, you might prefer to use an external log rotation
program, if you have one that you are already using with other
server software. For example, the <application>rotatelogs</application>
tool included in the <productname>Apache</productname> distribution
can be used with <productname>PostgreSQL</productname>. To do this,
just pipe the server's
<systemitem>stderr</> output to the desired program.
If you start the server with
<command>pg_ctl</>, then <systemitem>stderr</>
is already redirected to <systemitem>stdout</>, so you just need a
pipe command, for example:
<programlisting>
pg_ctl start | rotatelogs /var/log/pgsql_log 86400
</programlisting>
</para>
<para>
Another production-grade approach to managing log output is to
send it all to <application>syslog</> and let
<application>syslog</> deal with file rotation. To do this, set the
configuration parameter <literal>log_destination</> to <literal>syslog</>
(to log to <application>syslog</> only) in
<filename>postgresql.conf</>. Then you can send a <literal>SIGHUP</literal>
signal to the <application>syslog</> daemon whenever you want to force it
to start writing a new log file. If you want to automate log
rotation, the <application>logrotate</application> program can be
configured to work with log files from
<application>syslog</application>.
</para>
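<para>
For example (the facility and identity shown are common choices, not
requirements):
<programlisting>
log_destination = 'syslog'
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'
</programlisting>
</para>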
<para>
On many systems, however, <application>syslog</> is not very reliable,
particularly with large log messages; it may truncate or drop messages
just when you need them the most. Also, on <productname>Linux</>,
<application>syslog</> will sync each message to disk, yielding poor
performance. (You can use a <literal>-</> at the start of the file name
in the <application>syslog</> configuration file to disable this behavior.)
</para>
<para>
Note that all the solutions described above take care of starting new
log files at configurable intervals, but they do not handle deletion
of old, no-longer-interesting log files. You will probably want to set
up a batch job to periodically delete old log files. Another possibility
is to configure the rotation program so that old log files are overwritten
cyclically.
</para>
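<para>
A simple nightly <application>cron</> job can handle the deletion; for
example, to remove log files untouched for more than 30 days (the path is
hypothetical, and <option>-delete</option> assumes GNU
<application>find</>):
<programlisting>
find /var/log/pgsql -name '*.log' -mtime +30 -delete
</programlisting>
</para>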
</sect1>
</chapter>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
sgml-omittag:nil
sgml-shorttag:t
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-indent-step:1
sgml-indent-data:t
sgml-parent-document:nil
sgml-default-dtd-file:"./reference.ced"
sgml-exposed-tags:nil
sgml-local-catalogs:("/usr/lib/sgml/catalog")
sgml-local-ecat-files:nil
End:
-->