<!-- doc/src/sgml/monitoring.sgml -->

<chapter id="monitoring">
 <title>Monitoring Database Activity</title>

 <indexterm zone="monitoring">
  <primary>monitoring</primary>
  <secondary>database activity</secondary>
 </indexterm>

 <indexterm zone="monitoring">
  <primary>database activity</primary>
  <secondary>monitoring</secondary>
 </indexterm>

 <para>
  A database administrator frequently wonders, <quote>What is the system
  doing right now?</quote>
  This chapter discusses how to find that out.
 </para>

 <para>
  Several tools are available for monitoring database activity and
  analyzing performance. Most of this chapter is devoted to describing
  <productname>PostgreSQL</productname>'s cumulative statistics system,
  but one should not neglect regular Unix monitoring programs such as
  <command>ps</command>, <command>top</command>, <command>iostat</command>, and <command>vmstat</command>.
  Also, once one has identified a
  poorly-performing query, further investigation might be needed using
  <productname>PostgreSQL</productname>'s <link linkend="sql-explain"><command>EXPLAIN</command></link> command.
  <xref linkend="using-explain"/> discusses <command>EXPLAIN</command>
  and other methods for understanding the behavior of an individual
  query.
 </para>

 <sect1 id="monitoring-ps">
  <title>Standard Unix Tools</title>

  <indexterm zone="monitoring-ps">
   <primary>ps</primary>
   <secondary>to monitor activity</secondary>
  </indexterm>

  <para>
   On most Unix platforms, <productname>PostgreSQL</productname> modifies its
   command title as reported by <command>ps</command>, so that individual server
   processes can readily be identified. A sample display is

<screen>
$ ps auxww | grep ^postgres
postgres  15551  0.0  0.1  57536  7132 pts/0   S     18:02   0:00 postgres -i
postgres  15554  0.0  0.0  57536  1184 ?       Ss    18:02   0:00 postgres: background writer
postgres  15555  0.0  0.0  57536   916 ?       Ss    18:02   0:00 postgres: checkpointer
postgres  15556  0.0  0.0  57536   916 ?       Ss    18:02   0:00 postgres: walwriter
postgres  15557  0.0  0.0  58504  2244 ?       Ss    18:02   0:00 postgres: autovacuum launcher
postgres  15582  0.0  0.0  58772  3080 ?       Ss    18:04   0:00 postgres: joe runbug 127.0.0.1 idle
postgres  15606  0.0  0.0  58772  3052 ?       Ss    18:07   0:00 postgres: tgl regression [local] SELECT waiting
postgres  15610  0.0  0.0  58772  3056 ?       Ss    18:07   0:00 postgres: tgl regression [local] idle in transaction
</screen>

   (The appropriate invocation of <command>ps</command> varies across different
   platforms, as do the details of what is shown. This example is from a
   recent Linux system.) The first process listed here is the
   primary server process. The command arguments
   shown for it are the same ones used when it was launched. The next four
   processes are background worker processes automatically launched by the
   primary process. (The <quote>autovacuum launcher</quote> process will not
   be present if you have set the system not to run autovacuum.)
   Each of the remaining
   processes is a server process handling one client connection. Each such
   process sets its command line display in the form

<screen>
postgres: <replaceable>user</replaceable> <replaceable>database</replaceable> <replaceable>host</replaceable> <replaceable>activity</replaceable>
</screen>

   The user, database, and (client) host items remain the same for
   the life of the client connection, but the activity indicator changes.
   The activity can be <literal>idle</literal> (i.e., waiting for a client command),
   <literal>idle in transaction</literal> (waiting for client inside a <command>BEGIN</command> block),
   or a command type name such as <literal>SELECT</literal>. Also,
   <literal>waiting</literal> is appended if the server process is presently waiting
   on a lock held by another session. In the above example we can infer
   that process 15606 is waiting for process 15610 to complete its transaction
   and thereby release some lock. (Process 15610 must be the blocker, because
   there is no other active session. In more complicated cases it would be
   necessary to look into the
   <link linkend="view-pg-locks"><structname>pg_locks</structname></link>
   system view to determine who is blocking whom.)
  </para>
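
  <para>
   The blocker can also be identified directly from SQL. For example, using
   the hypothetical process IDs from the display above, the
   <function>pg_blocking_pids</function> function returns an array of the
   process IDs of the sessions blocking the specified server process:
<screen>
SELECT pg_blocking_pids(15606);
</screen>
   In this example it would return <literal>{15610}</literal>.
  </para>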

  <para>
   If <xref linkend="guc-cluster-name"/> has been configured, the
   cluster name will also be shown in <command>ps</command> output:
<screen>
$ psql -c 'SHOW cluster_name'
 cluster_name
--------------
 server1
(1 row)

$ ps aux|grep server1
postgres  27093  0.0  0.0  30096  2752 ?       Ss    11:34   0:00 postgres: server1: background writer
...
</screen>
  </para>

  <para>
   If you have turned off <xref linkend="guc-update-process-title"/> then the
   activity indicator is not updated; the process title is set only once
   when a new process is launched. On some platforms this saves a measurable
   amount of per-command overhead; on others it's insignificant.
  </para>

  <tip>
   <para>
    <productname>Solaris</productname> requires special handling. You must
    use <command>/usr/ucb/ps</command>, rather than
    <command>/bin/ps</command>. You also must use two <option>w</option>
    flags, not just one. In addition, your original invocation of the
    <command>postgres</command> command must have a shorter
    <command>ps</command> status display than that provided by each
    server process. If you fail to do all three things, the <command>ps</command>
    output for each server process will be the original <command>postgres</command>
    command line.
   </para>
  </tip>
 </sect1>

 <sect1 id="monitoring-stats">
  <title>The Cumulative Statistics System</title>

  <indexterm zone="monitoring-stats">
   <primary>statistics</primary>
  </indexterm>

  <para>
   <productname>PostgreSQL</productname>'s <firstterm>cumulative statistics
   system</firstterm> supports collection and reporting of information about
   server activity. Presently, accesses to tables and indexes in both
   disk-block and individual-row terms are counted. The total number of rows
   in each table, and information about vacuum and analyze actions for each
   table, are also counted. If enabled, calls to user-defined functions and
   the total time spent in each one are counted as well.
  </para>

  <para>
   <productname>PostgreSQL</productname> also supports reporting dynamic
   information about exactly what is going on in the system right now, such as
   the exact command currently being executed by other server processes, and
   which other connections exist in the system. This facility is independent
   of the cumulative statistics system.
  </para>
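
  <para>
   As a quick illustration, the current activity of every server process can
   be examined with a simple query such as:
<screen>
SELECT pid, state, query FROM pg_stat_activity;
</screen>
   (The <structname>pg_stat_activity</structname> view is described in more
   detail below.)
  </para>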

  <sect2 id="monitoring-stats-setup">
   <title>Statistics Collection Configuration</title>

   <para>
    Since collection of statistics adds some overhead to query execution,
    the system can be configured to collect or not collect information.
    This is controlled by configuration parameters that are normally set in
    <filename>postgresql.conf</filename>. (See <xref linkend="runtime-config"/> for
    details about setting configuration parameters.)
   </para>

   <para>
    The parameter <xref linkend="guc-track-activities"/> enables monitoring
    of the current command being executed by any server process.
   </para>

   <para>
    The parameter <xref linkend="guc-track-counts"/> controls whether
    cumulative statistics are collected about table and index accesses.
   </para>

   <para>
    The parameter <xref linkend="guc-track-functions"/> enables tracking of
    usage of user-defined functions.
   </para>

   <para>
    The parameter <xref linkend="guc-track-io-timing"/> enables monitoring
    of block read and write times.
   </para>

   <para>
    The parameter <xref linkend="guc-track-wal-io-timing"/> enables monitoring
    of WAL write times.
   </para>
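
   <para>
    Taken together, the corresponding entries in
    <filename>postgresql.conf</filename> might look like this (the values
    shown are illustrative, not recommendations):
<screen>
track_activities = on
track_counts = on
track_functions = all        # none, pl, all
track_io_timing = off
track_wal_io_timing = off
</screen>
   </para>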

   <para>
    Normally these parameters are set in <filename>postgresql.conf</filename> so
    that they apply to all server processes, but it is possible to turn
    them on or off in individual sessions using the <xref
    linkend="sql-set"/> command. (To prevent
    ordinary users from hiding their activity from the administrator,
    only superusers are allowed to change these parameters with
    <command>SET</command>.)
   </para>
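
   <para>
    For example, a superuser could collect I/O timing data for just the
    current session with:
<screen>
SET track_io_timing = on;
</screen>
   </para>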

   <para>
    Cumulative statistics are collected in shared memory. Every
    <productname>PostgreSQL</productname> process collects statistics locally,
    then updates the shared data at appropriate intervals. When a server,
    including a physical replica, shuts down cleanly, a permanent copy of the
    statistics data is stored in the <filename>pg_stat</filename> subdirectory,
    so that statistics can be retained across server restarts. In contrast,
    when starting from an unclean shutdown (e.g., after an immediate shutdown,
    a server crash, starting from a base backup, or point-in-time recovery),
    all statistics counters are reset.
   </para>

  </sect2>

  <sect2 id="monitoring-stats-views">
   <title>Viewing Statistics</title>

   <para>
    Several predefined views, listed in <xref
    linkend="monitoring-stats-dynamic-views-table"/>, are available to show
    the current state of the system. There are also several other
    views, listed in <xref
    linkend="monitoring-stats-views-table"/>, available to show the accumulated
    statistics. Alternatively, one can
    build custom views using the underlying cumulative statistics functions, as
    discussed in <xref linkend="monitoring-stats-functions"/>.
   </para>

   <para>
    When using the cumulative statistics views and functions to monitor
    collected data, it is important to realize that the information does not
    update instantaneously. Each individual server process flushes out
    accumulated statistics to shared memory just before going idle, but not
    more frequently than once per <varname>PGSTAT_MIN_INTERVAL</varname>
    milliseconds (1 second unless altered while building the server); so a
    query or transaction still in progress does not affect the displayed totals
    and the displayed information lags behind actual activity. However,
    current-query information collected by <varname>track_activities</varname>
    is always up-to-date.
   </para>

   <para>
    Another important point is that when a server process is asked to display
    any of the accumulated statistics, accessed values are cached until the end
    of its current transaction in the default configuration. So the statistics
    will show static information as long as you continue the current
    transaction. Similarly, information about the current queries of all
    sessions is collected when any such information is first requested within a
    transaction, and the same information will be displayed throughout the
    transaction. This is a feature, not a bug, because it allows you to perform
    several queries on the statistics and correlate the results without
    worrying that the numbers are changing underneath you.

    When analyzing statistics interactively, or with expensive queries, the
    time delta between accesses to individual statistics can lead to
    significant skew in the cached statistics. To minimize skew,
    <varname>stats_fetch_consistency</varname> can be set to
    <literal>snapshot</literal>, at the price of increased memory usage for
    caching not-needed statistics data. Conversely, if it's known that
    statistics are only accessed once, caching accessed statistics is
    unnecessary and can be avoided by setting
    <varname>stats_fetch_consistency</varname> to <literal>none</literal>.

    You can invoke <function>pg_stat_clear_snapshot</function>() to discard the
    current transaction's statistics snapshot or cached values (if any). The
    next use of statistical information will (when in snapshot mode) cause a
    new snapshot to be built or (when in cache mode) accessed statistics to be
    cached.
   </para>
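
   <para>
    For example, to take a consistent snapshot of the statistics for the
    duration of a transaction, and later discard it:
<screen>
BEGIN;
SET LOCAL stats_fetch_consistency = snapshot;
SELECT xact_commit FROM pg_stat_database WHERE datname = current_database();
-- further queries within the transaction see the same snapshot
SELECT pg_stat_clear_snapshot();  -- next access builds a fresh snapshot
COMMIT;
</screen>
   </para>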

   <para>
    A transaction can also see its own statistics (not yet flushed out to the
    shared memory statistics) in the views
    <structname>pg_stat_xact_all_tables</structname>,
    <structname>pg_stat_xact_sys_tables</structname>,
    <structname>pg_stat_xact_user_tables</structname>, and
    <structname>pg_stat_xact_user_functions</structname>. These numbers do not act as
    stated above; instead they update continuously throughout the transaction.
   </para>
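
   <para>
    For example, within a transaction the per-transaction row counts can be
    observed changing immediately after a data modification (a hypothetical
    table name is used here):
<screen>
BEGIN;
INSERT INTO my_table VALUES (1);
SELECT n_tup_ins FROM pg_stat_xact_user_tables WHERE relname = 'my_table';
COMMIT;
</screen>
   </para>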

   <para>
    Some of the information in the dynamic statistics views shown in <xref
    linkend="monitoring-stats-dynamic-views-table"/> is security restricted.
    Ordinary users can only see all the information about their own sessions
    (sessions belonging to a role that they are a member of). In rows about
    other sessions, many columns will be null. Note, however, that the
    existence of a session and its general properties such as its session user
    and database are visible to all users. Superusers and roles with privileges of
    built-in role <literal>pg_read_all_stats</literal> (see also <xref
    linkend="predefined-roles"/>) can see all the information about all sessions.
   </para>

   <table id="monitoring-stats-dynamic-views-table">
    <title>Dynamic Statistics Views</title>

    <tgroup cols="2">
     <thead>
      <row>
       <entry>View Name</entry>
       <entry>Description</entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry>
        <structname>pg_stat_activity</structname>
        <indexterm><primary>pg_stat_activity</primary></indexterm>
       </entry>
       <entry>
        One row per server process, showing information related to
        the current activity of that process, such as state and current query.
        See <link linkend="monitoring-pg-stat-activity-view">
        <structname>pg_stat_activity</structname></link> for details.
       </entry>
      </row>

      <row>
       <entry><structname>pg_stat_replication</structname><indexterm><primary>pg_stat_replication</primary></indexterm></entry>
       <entry>One row per WAL sender process, showing statistics about
        replication to that sender's connected standby server.
        See <link linkend="monitoring-pg-stat-replication-view">
        <structname>pg_stat_replication</structname></link> for details.
       </entry>
      </row>

      <row>
       <entry><structname>pg_stat_wal_receiver</structname><indexterm><primary>pg_stat_wal_receiver</primary></indexterm></entry>
       <entry>Only one row, showing statistics about the WAL receiver from
        that receiver's connected server.
        See <link linkend="monitoring-pg-stat-wal-receiver-view">
        <structname>pg_stat_wal_receiver</structname></link> for details.
       </entry>
      </row>

      <row>
       <entry><structname>pg_stat_recovery_prefetch</structname><indexterm><primary>pg_stat_recovery_prefetch</primary></indexterm></entry>
       <entry>Only one row, showing statistics about blocks prefetched during recovery.
        See <link linkend="monitoring-pg-stat-recovery-prefetch">
        <structname>pg_stat_recovery_prefetch</structname></link> for details.
       </entry>
      </row>

      <row>
       <entry><structname>pg_stat_subscription</structname><indexterm><primary>pg_stat_subscription</primary></indexterm></entry>
       <entry>At least one row per subscription, showing information about
        the subscription workers.
        See <link linkend="monitoring-pg-stat-subscription">
        <structname>pg_stat_subscription</structname></link> for details.
       </entry>
      </row>

      <row>
       <entry><structname>pg_stat_ssl</structname><indexterm><primary>pg_stat_ssl</primary></indexterm></entry>
       <entry>One row per connection (regular and replication), showing information about
        SSL used on this connection.
        See <link linkend="monitoring-pg-stat-ssl-view">
        <structname>pg_stat_ssl</structname></link> for details.
       </entry>
      </row>

      <row>
       <entry><structname>pg_stat_gssapi</structname><indexterm><primary>pg_stat_gssapi</primary></indexterm></entry>
       <entry>One row per connection (regular and replication), showing information about
        GSSAPI authentication and encryption used on this connection.
        See <link linkend="monitoring-pg-stat-gssapi-view">
        <structname>pg_stat_gssapi</structname></link> for details.
       </entry>
      </row>

      <row>
       <entry><structname>pg_stat_progress_analyze</structname><indexterm><primary>pg_stat_progress_analyze</primary></indexterm></entry>
       <entry>One row for each backend (including autovacuum worker processes) running
        <command>ANALYZE</command>, showing current progress.
        See <xref linkend='analyze-progress-reporting'/>.
       </entry>
      </row>

      <row>
       <entry><structname>pg_stat_progress_create_index</structname><indexterm><primary>pg_stat_progress_create_index</primary></indexterm></entry>
       <entry>One row for each backend running <command>CREATE INDEX</command> or <command>REINDEX</command>, showing
        current progress.
        See <xref linkend='create-index-progress-reporting'/>.
       </entry>
      </row>
|
|
|
|
|
2017-04-13 05:09:14 +02:00
|
|
|
<row>
|
2017-10-09 03:44:17 +02:00
|
|
|
<entry><structname>pg_stat_progress_vacuum</structname><indexterm><primary>pg_stat_progress_vacuum</primary></indexterm></entry>
|
2017-04-13 05:09:14 +02:00
|
|
|
<entry>One row for each backend (including autovacuum worker processes) running
|
2017-10-09 03:44:17 +02:00
|
|
|
<command>VACUUM</command>, showing current progress.
|
2017-11-23 15:39:47 +01:00
|
|
|
See <xref linkend='vacuum-progress-reporting'/>.
|
2017-04-13 05:09:14 +02:00
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
|
2019-03-25 15:59:04 +01:00
|
|
|
<row>
|
|
|
|
<entry><structname>pg_stat_progress_cluster</structname><indexterm><primary>pg_stat_progress_cluster</primary></indexterm></entry>
|
|
|
|
<entry>One row for each backend running
|
|
|
|
<command>CLUSTER</command> or <command>VACUUM FULL</command>, showing current progress.
|
|
|
|
See <xref linkend='cluster-progress-reporting'/>.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
|
2020-03-03 04:03:43 +01:00
|
|
|
<row>
|
|
|
|
<entry><structname>pg_stat_progress_basebackup</structname><indexterm><primary>pg_stat_progress_basebackup</primary></indexterm></entry>
|
|
|
|
<entry>One row for each WAL sender process streaming a base backup,
|
|
|
|
showing current progress.
|
|
|
|
See <xref linkend='basebackup-progress-reporting'/>.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
|
2021-01-06 21:46:26 +01:00
|
|
|
<row>
|
|
|
|
<entry><structname>pg_stat_progress_copy</structname><indexterm><primary>pg_stat_progress_copy</primary></indexterm></entry>
|
|
|
|
<entry>One row for each backend running <command>COPY</command>, showing current progress.
|
|
|
|
See <xref linkend='copy-progress-reporting'/>.
|
|
|
|
</entry>
|
|
|
|
</row>
|
2014-11-16 13:47:44 +01:00
|
|
|
</tbody>
|
|
|
|
</tgroup>
|
|
|
|
</table>
|
|
|
|
|

  <table id="monitoring-stats-views-table">
   <title>Collected Statistics Views</title>

   <tgroup cols="2">
    <thead>
     <row>
      <entry>View Name</entry>
      <entry>Description</entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry><structname>pg_stat_archiver</structname><indexterm><primary>pg_stat_archiver</primary></indexterm></entry>
      <entry>One row only, showing statistics about the
       WAL archiver process's activity. See
       <link linkend="monitoring-pg-stat-archiver-view">
       <structname>pg_stat_archiver</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_stat_bgwriter</structname><indexterm><primary>pg_stat_bgwriter</primary></indexterm></entry>
      <entry>One row only, showing statistics about the
       background writer process's activity. See
       <link linkend="monitoring-pg-stat-bgwriter-view">
       <structname>pg_stat_bgwriter</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_stat_wal</structname><indexterm><primary>pg_stat_wal</primary></indexterm></entry>
      <entry>One row only, showing statistics about WAL activity. See
       <link linkend="monitoring-pg-stat-wal-view">
       <structname>pg_stat_wal</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_stat_database</structname><indexterm><primary>pg_stat_database</primary></indexterm></entry>
      <entry>One row per database, showing database-wide statistics. See
       <link linkend="monitoring-pg-stat-database-view">
       <structname>pg_stat_database</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_stat_database_conflicts</structname><indexterm><primary>pg_stat_database_conflicts</primary></indexterm></entry>
      <entry>
       One row per database, showing database-wide statistics about
       query cancels due to conflict with recovery on standby servers.
       See <link linkend="monitoring-pg-stat-database-conflicts-view">
       <structname>pg_stat_database_conflicts</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_stat_all_tables</structname><indexterm><primary>pg_stat_all_tables</primary></indexterm></entry>
      <entry>
       One row for each table in the current database, showing statistics
       about accesses to that specific table.
       See <link linkend="monitoring-pg-stat-all-tables-view">
       <structname>pg_stat_all_tables</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_stat_sys_tables</structname><indexterm><primary>pg_stat_sys_tables</primary></indexterm></entry>
      <entry>Same as <structname>pg_stat_all_tables</structname>, except that only
       system tables are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_stat_user_tables</structname><indexterm><primary>pg_stat_user_tables</primary></indexterm></entry>
      <entry>Same as <structname>pg_stat_all_tables</structname>, except that only user
       tables are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_stat_xact_all_tables</structname><indexterm><primary>pg_stat_xact_all_tables</primary></indexterm></entry>
      <entry>Similar to <structname>pg_stat_all_tables</structname>, but counts actions
       taken so far within the current transaction (which are <emphasis>not</emphasis>
       yet included in <structname>pg_stat_all_tables</structname> and related views).
       The columns for numbers of live and dead rows and vacuum and
       analyze actions are not present in this view.</entry>
     </row>

     <row>
      <entry><structname>pg_stat_xact_sys_tables</structname><indexterm><primary>pg_stat_xact_sys_tables</primary></indexterm></entry>
      <entry>Same as <structname>pg_stat_xact_all_tables</structname>, except that only
       system tables are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_stat_xact_user_tables</structname><indexterm><primary>pg_stat_xact_user_tables</primary></indexterm></entry>
      <entry>Same as <structname>pg_stat_xact_all_tables</structname>, except that only
       user tables are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_stat_all_indexes</structname><indexterm><primary>pg_stat_all_indexes</primary></indexterm></entry>
      <entry>
       One row for each index in the current database, showing statistics
       about accesses to that specific index.
       See <link linkend="monitoring-pg-stat-all-indexes-view">
       <structname>pg_stat_all_indexes</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_stat_sys_indexes</structname><indexterm><primary>pg_stat_sys_indexes</primary></indexterm></entry>
      <entry>Same as <structname>pg_stat_all_indexes</structname>, except that only
       indexes on system tables are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_stat_user_indexes</structname><indexterm><primary>pg_stat_user_indexes</primary></indexterm></entry>
      <entry>Same as <structname>pg_stat_all_indexes</structname>, except that only
       indexes on user tables are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_statio_all_tables</structname><indexterm><primary>pg_statio_all_tables</primary></indexterm></entry>
      <entry>
       One row for each table in the current database, showing statistics
       about I/O on that specific table.
       See <link linkend="monitoring-pg-statio-all-tables-view">
       <structname>pg_statio_all_tables</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_statio_sys_tables</structname><indexterm><primary>pg_statio_sys_tables</primary></indexterm></entry>
      <entry>Same as <structname>pg_statio_all_tables</structname>, except that only
       system tables are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_statio_user_tables</structname><indexterm><primary>pg_statio_user_tables</primary></indexterm></entry>
      <entry>Same as <structname>pg_statio_all_tables</structname>, except that only
       user tables are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_statio_all_indexes</structname><indexterm><primary>pg_statio_all_indexes</primary></indexterm></entry>
      <entry>
       One row for each index in the current database,
       showing statistics about I/O on that specific index.
       See <link linkend="monitoring-pg-statio-all-indexes-view">
       <structname>pg_statio_all_indexes</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_statio_sys_indexes</structname><indexterm><primary>pg_statio_sys_indexes</primary></indexterm></entry>
      <entry>Same as <structname>pg_statio_all_indexes</structname>, except that only
       indexes on system tables are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_statio_user_indexes</structname><indexterm><primary>pg_statio_user_indexes</primary></indexterm></entry>
      <entry>Same as <structname>pg_statio_all_indexes</structname>, except that only
       indexes on user tables are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_statio_all_sequences</structname><indexterm><primary>pg_statio_all_sequences</primary></indexterm></entry>
      <entry>
       One row for each sequence in the current database,
       showing statistics about I/O on that specific sequence.
       See <link linkend="monitoring-pg-statio-all-sequences-view">
       <structname>pg_statio_all_sequences</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_statio_sys_sequences</structname><indexterm><primary>pg_statio_sys_sequences</primary></indexterm></entry>
      <entry>Same as <structname>pg_statio_all_sequences</structname>, except that only
       system sequences are shown. (Presently, no system sequences are defined,
       so this view is always empty.)</entry>
     </row>

     <row>
      <entry><structname>pg_statio_user_sequences</structname><indexterm><primary>pg_statio_user_sequences</primary></indexterm></entry>
      <entry>Same as <structname>pg_statio_all_sequences</structname>, except that only
       user sequences are shown.</entry>
     </row>

     <row>
      <entry><structname>pg_stat_user_functions</structname><indexterm><primary>pg_stat_user_functions</primary></indexterm></entry>
      <entry>
       One row for each tracked function, showing statistics
       about executions of that function. See
       <link linkend="monitoring-pg-stat-user-functions-view">
       <structname>pg_stat_user_functions</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_stat_xact_user_functions</structname><indexterm><primary>pg_stat_xact_user_functions</primary></indexterm></entry>
      <entry>Similar to <structname>pg_stat_user_functions</structname>, but counts only
       calls during the current transaction (which are <emphasis>not</emphasis>
       yet included in <structname>pg_stat_user_functions</structname>).</entry>
     </row>

     <row>
      <entry><structname>pg_stat_slru</structname><indexterm><primary>pg_stat_slru</primary></indexterm></entry>
      <entry>One row per SLRU, showing statistics of operations. See
       <link linkend="monitoring-pg-stat-slru-view">
       <structname>pg_stat_slru</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_stat_replication_slots</structname><indexterm><primary>pg_stat_replication_slots</primary></indexterm></entry>
      <entry>One row per replication slot, showing statistics about the
       replication slot's usage. See
       <link linkend="monitoring-pg-stat-replication-slots-view">
       <structname>pg_stat_replication_slots</structname></link> for details.
      </entry>
     </row>

     <row>
      <entry><structname>pg_stat_subscription_stats</structname><indexterm><primary>pg_stat_subscription_stats</primary></indexterm></entry>
      <entry>One row per subscription, showing statistics about errors.
       See <link linkend="monitoring-pg-stat-subscription-stats">
       <structname>pg_stat_subscription_stats</structname></link> for details.
      </entry>
     </row>
    </tbody>
   </tgroup>
  </table>

  <para>
   The per-index statistics are particularly useful to determine which
   indexes are being used and how effective they are.
  </para>
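
  <para>
   For example, a query along these lines (a minimal sketch; adjust the
   ordering and filtering as needed) can help spot indexes that are rarely
   or never used:
<programlisting>
SELECT schemaname, relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
 ORDER BY idx_scan ASC, schemaname, relname;
</programlisting>
  </para>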

  <para>
   The <structname>pg_statio_</structname> views are primarily useful to
   determine the effectiveness of the buffer cache. When the number
   of actual disk reads is much smaller than the number of buffer
   hits, then the cache is satisfying most read requests without
   invoking a kernel call. However, these statistics do not give the
   entire story: due to the way in which <productname>PostgreSQL</productname>
   handles disk I/O, data that is not in the
   <productname>PostgreSQL</productname> buffer cache might still reside in the
   kernel's I/O cache, and might therefore still be fetched without
   requiring a physical read. Users interested in obtaining more
   detailed information on <productname>PostgreSQL</productname> I/O behavior are
   advised to use the <productname>PostgreSQL</productname> statistics views
   in combination with operating system utilities that allow insight
   into the kernel's handling of I/O.
  </para>
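
  <para>
   For example, an approximate buffer cache hit ratio for user tables in
   the current database can be computed from
   <structname>pg_statio_user_tables</structname> (a rough sketch; it
   ignores the kernel cache effects discussed above):
<programlisting>
SELECT sum(heap_blks_hit)::float8
         / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS hit_ratio
  FROM pg_statio_user_tables;
</programlisting>
  </para>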

 </sect2>

 <sect2 id="monitoring-pg-stat-activity-view">
  <title><structname>pg_stat_activity</structname></title>

  <indexterm>
   <primary>pg_stat_activity</primary>
  </indexterm>

  <para>
   The <structname>pg_stat_activity</structname> view will have one row
   per server process, showing information related to
   the current activity of that process.
  </para>
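
  <para>
   For example, a simple query such as the following (a minimal sketch)
   shows what each non-idle backend is currently doing and what, if
   anything, it is waiting for:
<programlisting>
SELECT pid, usename, state, wait_event_type, wait_event, query
  FROM pg_stat_activity
 WHERE state IS DISTINCT FROM 'idle';
</programlisting>
  </para>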

  <table id="pg-stat-activity-view" xreflabel="pg_stat_activity">
   <title><structname>pg_stat_activity</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>datid</structfield> <type>oid</type>
      </para>
      <para>
       OID of the database this backend is connected to
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>datname</structfield> <type>name</type>
      </para>
      <para>
       Name of the database this backend is connected to
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>pid</structfield> <type>integer</type>
      </para>
      <para>
       Process ID of this backend
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>leader_pid</structfield> <type>integer</type>
      </para>
      <para>
       Process ID of the parallel group leader, if this process is a
       parallel query worker. <literal>NULL</literal> if this process is a
       parallel group leader or does not participate in parallel query.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>usesysid</structfield> <type>oid</type>
      </para>
      <para>
       OID of the user logged into this backend
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>usename</structfield> <type>name</type>
      </para>
      <para>
       Name of the user logged into this backend
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>application_name</structfield> <type>text</type>
      </para>
      <para>
       Name of the application that is connected
       to this backend
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>client_addr</structfield> <type>inet</type>
      </para>
      <para>
       IP address of the client connected to this backend.
       If this field is null, it indicates either that the client is
       connected via a Unix socket on the server machine or that this is an
       internal process such as autovacuum.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>client_hostname</structfield> <type>text</type>
      </para>
      <para>
       Host name of the connected client, as reported by a
       reverse DNS lookup of <structfield>client_addr</structfield>. This field will
       only be non-null for IP connections, and only when <xref linkend="guc-log-hostname"/> is enabled.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>client_port</structfield> <type>integer</type>
      </para>
      <para>
       TCP port number that the client is using for communication
       with this backend, or <literal>-1</literal> if a Unix socket is used.
       If this field is null, it indicates that this is an internal server process.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>backend_start</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Time when this process was started. For client backends,
       this is the time the client connected to the server.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>xact_start</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Time when this process' current transaction was started, or null
       if no transaction is active. If the current
       query is the first of its transaction, this column is equal to the
       <structfield>query_start</structfield> column.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>query_start</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Time when the currently active query was started, or if
       <structfield>state</structfield> is not <literal>active</literal>, when the last query
       was started
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>state_change</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Time when the <structfield>state</structfield> was last changed
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>wait_event_type</structfield> <type>text</type>
      </para>
      <para>
       The type of event for which the backend is waiting, if any;
       otherwise NULL. See <xref linkend="wait-event-table"/>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>wait_event</structfield> <type>text</type>
      </para>
      <para>
       Wait event name if backend is currently waiting, otherwise NULL.
       See <xref linkend="wait-event-activity-table"/> through
       <xref linkend="wait-event-timeout-table"/>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>state</structfield> <type>text</type>
      </para>
      <para>
       Current overall state of this backend.
       Possible values are:
       <itemizedlist>
        <listitem>
         <para>
          <literal>active</literal>: The backend is executing a query.
         </para>
        </listitem>
        <listitem>
         <para>
          <literal>idle</literal>: The backend is waiting for a new client command.
         </para>
        </listitem>
        <listitem>
         <para>
          <literal>idle in transaction</literal>: The backend is in a transaction,
          but is not currently executing a query.
         </para>
        </listitem>
        <listitem>
         <para>
          <literal>idle in transaction (aborted)</literal>: This state is similar to
          <literal>idle in transaction</literal>, except one of the statements in
          the transaction caused an error.
         </para>
        </listitem>
        <listitem>
         <para>
          <literal>fastpath function call</literal>: The backend is executing a
          fast-path function.
         </para>
        </listitem>
        <listitem>
         <para>
          <literal>disabled</literal>: This state is reported if <xref linkend="guc-track-activities"/> is disabled in this backend.
         </para>
        </listitem>
       </itemizedlist>
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>backend_xid</structfield> <type>xid</type>
      </para>
      <para>
       Top-level transaction identifier of this backend, if any.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>backend_xmin</structfield> <type>xid</type>
      </para>
      <para>
       The current backend's <literal>xmin</literal> horizon.
      </para></entry>
     </row>
|
|
|
|
|
Make use of in-core query id added by commit 5fd9dfa5f5
Use the in-core query id computation for pg_stat_activity,
log_line_prefix, and EXPLAIN VERBOSE.
Similar to other fields in pg_stat_activity, only the queryid from the
top level statements are exposed, and if the backends status isn't
active then the queryid from the last executed statements is displayed.
Add a %Q placeholder to include the queryid in log_line_prefix, which
will also only expose top level statements.
For EXPLAIN VERBOSE, if a query identifier has been computed, either by
enabling compute_query_id or using a third-party module, display it.
Bump catalog version.
Discussion: https://postgr.es/m/20210407125726.tkvjdbw76hxnpwfi@nol
Author: Julien Rouhaud
Reviewed-by: Alvaro Herrera, Nitin Jadhav, Zhihong Yu
2021-04-07 20:03:56 +02:00
|
|
|
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>query_id</structfield> <type>bigint</type>
      </para>
      <para>
       Identifier of this backend's most recent query. If
       <structfield>state</structfield> is <literal>active</literal> this
       field shows the identifier of the currently executing query. In
       all other states, it shows the identifier of the last query that was
       executed. Query identifiers are not computed by default, so this
       field will be null unless the <xref linkend="guc-compute-query-id"/>
       parameter is enabled or a third-party module that computes query
       identifiers is configured.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>query</structfield> <type>text</type>
      </para>
      <para>
       Text of this backend's most recent query. If
       <structfield>state</structfield> is <literal>active</literal> this field shows the
       currently executing query. In all other states, it shows the last query
       that was executed. By default the query text is truncated at 1024
       bytes; this value can be changed via the parameter
       <xref linkend="guc-track-activity-query-size"/>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>backend_type</structfield> <type>text</type>
      </para>
      <para>
       Type of current backend. Possible types are
       <literal>autovacuum launcher</literal>, <literal>autovacuum worker</literal>,
       <literal>logical replication launcher</literal>,
       <literal>logical replication worker</literal>,
       <literal>parallel worker</literal>, <literal>background writer</literal>,
       <literal>client backend</literal>, <literal>checkpointer</literal>,
       <literal>archiver</literal>,
       <literal>startup</literal>, <literal>walreceiver</literal>,
       <literal>walsender</literal> and <literal>walwriter</literal>.
       In addition, background workers registered by extensions may have
       additional types.
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
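
  <para>
   As a sketch of how these columns can be used together, the following
   query lists the current query text and query identifier of each active
   client backend.  (The <structfield>query_id</structfield> column is
   non-null only if <xref linkend="guc-compute-query-id"/> is enabled or a
   third-party module computes query identifiers.)
<programlisting>
SELECT pid, query_id, query
FROM pg_stat_activity
WHERE state = 'active' AND backend_type = 'client backend';
</programlisting>
  </para>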

  <note>
   <para>
    The <structfield>wait_event</structfield> and <structfield>state</structfield> columns are
    independent.  If a backend is in the <literal>active</literal> state,
    it may or may not be <literal>waiting</literal> on some event.  If the state
    is <literal>active</literal> and <structfield>wait_event</structfield> is non-null, it
    means that a query is being executed, but is being blocked somewhere
    in the system.
   </para>
  </note>
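
  <para>
   For example, this query (a sketch) lists backends that are executing a
   query but are currently blocked, together with what they are waiting
   for:
<programlisting>
SELECT pid, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE state = 'active' AND wait_event IS NOT NULL;
</programlisting>
  </para>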

  <table id="wait-event-table">
   <title>Wait Event Types</title>
   <tgroup cols="2">
    <thead>
     <row>
      <entry>Wait Event Type</entry>
      <entry>Description</entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry><literal>Activity</literal></entry>
      <entry>The server process is idle.  This event type indicates a process
       waiting for activity in its main processing loop.
       <literal>wait_event</literal> will identify the specific wait point;
       see <xref linkend="wait-event-activity-table"/>.
      </entry>
     </row>
     <row>
      <entry><literal>BufferPin</literal></entry>
      <entry>The server process is waiting for exclusive access to
       a data buffer.  Buffer pin waits can be protracted if
       another process holds an open cursor that last read data from the
       buffer in question.  See <xref linkend="wait-event-bufferpin-table"/>.
      </entry>
     </row>
     <row>
      <entry><literal>Client</literal></entry>
      <entry>The server process is waiting for activity on a socket
       connected to a user application.  Thus, the server expects something
       to happen that is independent of its internal processes.
       <literal>wait_event</literal> will identify the specific wait point;
       see <xref linkend="wait-event-client-table"/>.
      </entry>
     </row>
     <row>
      <entry><literal>Extension</literal></entry>
      <entry>The server process is waiting for some condition defined by an
       extension module.
       See <xref linkend="wait-event-extension-table"/>.
      </entry>
     </row>
     <row>
      <entry><literal>IO</literal></entry>
      <entry>The server process is waiting for an I/O operation to complete.
       <literal>wait_event</literal> will identify the specific wait point;
       see <xref linkend="wait-event-io-table"/>.
      </entry>
     </row>
     <row>
      <entry><literal>IPC</literal></entry>
      <entry>The server process is waiting for some interaction with
       another server process.  <literal>wait_event</literal> will
       identify the specific wait point;
       see <xref linkend="wait-event-ipc-table"/>.
      </entry>
     </row>
     <row>
      <entry><literal>Lock</literal></entry>
      <entry>The server process is waiting for a heavyweight lock.
       Heavyweight locks, also known as lock manager locks or simply locks,
       primarily protect SQL-visible objects such as tables.  However,
       they are also used to ensure mutual exclusion for certain internal
       operations such as relation extension.  <literal>wait_event</literal>
       will identify the type of lock awaited;
       see <xref linkend="wait-event-lock-table"/>.
      </entry>
     </row>
     <row>
      <entry><literal>LWLock</literal></entry>
      <entry>The server process is waiting for a lightweight lock.
       Most such locks protect a particular data structure in shared memory.
       <literal>wait_event</literal> will contain a name identifying the purpose
       of the lightweight lock.  (Some locks have specific names; others
       are part of a group of locks each with a similar purpose.)
       See <xref linkend="wait-event-lwlock-table"/>.
      </entry>
     </row>
     <row>
      <entry><literal>Timeout</literal></entry>
      <entry>The server process is waiting for a timeout
       to expire.  <literal>wait_event</literal> will identify the specific wait
       point; see <xref linkend="wait-event-timeout-table"/>.
      </entry>
     </row>
    </tbody>
   </tgroup>
  </table>
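
  <para>
   As an illustrative sketch, repeatedly sampling
   <structname>pg_stat_activity</structname> and counting wait events gives
   a rough profile of where the server is currently spending its time:
<programlisting>
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE wait_event IS NOT NULL
GROUP BY wait_event_type, wait_event
ORDER BY count(*) DESC;
</programlisting>
  </para>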

  <table id="wait-event-activity-table">
   <title>Wait Events of Type <literal>Activity</literal></title>
   <tgroup cols="2">
    <thead>
     <row>
      <entry><literal>Activity</literal> Wait Event</entry>
      <entry>Description</entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry><literal>ArchiverMain</literal></entry>
      <entry>Waiting in main loop of archiver process.</entry>
     </row>
     <row>
      <entry><literal>AutoVacuumMain</literal></entry>
      <entry>Waiting in main loop of autovacuum launcher process.</entry>
     </row>
     <row>
      <entry><literal>BgWriterHibernate</literal></entry>
      <entry>Waiting in background writer process, hibernating.</entry>
     </row>
     <row>
      <entry><literal>BgWriterMain</literal></entry>
      <entry>Waiting in main loop of background writer process.</entry>
     </row>
     <row>
      <entry><literal>CheckpointerMain</literal></entry>
      <entry>Waiting in main loop of checkpointer process.</entry>
     </row>
     <row>
      <entry><literal>LogicalApplyMain</literal></entry>
      <entry>Waiting in main loop of logical replication apply process.</entry>
     </row>
     <row>
      <entry><literal>LogicalLauncherMain</literal></entry>
      <entry>Waiting in main loop of logical replication launcher process.</entry>
     </row>
     <row>
      <entry><literal>RecoveryWalStream</literal></entry>
      <entry>Waiting in main loop of startup process for WAL to arrive, during
       streaming recovery.</entry>
     </row>
     <row>
      <entry><literal>SysLoggerMain</literal></entry>
      <entry>Waiting in main loop of syslogger process.</entry>
     </row>
     <row>
      <entry><literal>WalReceiverMain</literal></entry>
      <entry>Waiting in main loop of WAL receiver process.</entry>
     </row>
     <row>
      <entry><literal>WalSenderMain</literal></entry>
      <entry>Waiting in main loop of WAL sender process.</entry>
     </row>
     <row>
      <entry><literal>WalWriterMain</literal></entry>
      <entry>Waiting in main loop of WAL writer process.</entry>
     </row>
    </tbody>
   </tgroup>
  </table>

  <table id="wait-event-bufferpin-table">
   <title>Wait Events of Type <literal>BufferPin</literal></title>
   <tgroup cols="2">
    <thead>
     <row>
      <entry><literal>BufferPin</literal> Wait Event</entry>
      <entry>Description</entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry><literal>BufferPin</literal></entry>
      <entry>Waiting to acquire an exclusive pin on a buffer.</entry>
     </row>
    </tbody>
   </tgroup>
  </table>

  <table id="wait-event-client-table">
   <title>Wait Events of Type <literal>Client</literal></title>
   <tgroup cols="2">
    <thead>
     <row>
      <entry><literal>Client</literal> Wait Event</entry>
      <entry>Description</entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry><literal>ClientRead</literal></entry>
      <entry>Waiting to read data from the client.</entry>
     </row>
     <row>
      <entry><literal>ClientWrite</literal></entry>
      <entry>Waiting to write data to the client.</entry>
     </row>
     <row>
      <entry><literal>GSSOpenServer</literal></entry>
      <entry>Waiting to read data from the client while establishing a GSSAPI
       session.</entry>
     </row>
     <row>
      <entry><literal>LibPQWalReceiverConnect</literal></entry>
      <entry>Waiting in WAL receiver to establish connection to remote
       server.</entry>
     </row>
     <row>
      <entry><literal>LibPQWalReceiverReceive</literal></entry>
      <entry>Waiting in WAL receiver to receive data from remote server.</entry>
     </row>
     <row>
      <entry><literal>SSLOpenServer</literal></entry>
      <entry>Waiting for SSL while attempting connection.</entry>
     </row>
     <row>
      <entry><literal>WalSenderWaitForWAL</literal></entry>
      <entry>Waiting for WAL to be flushed in WAL sender process.</entry>
     </row>
     <row>
      <entry><literal>WalSenderWriteData</literal></entry>
      <entry>Waiting for any activity when processing replies from WAL
       receiver in WAL sender process.</entry>
     </row>
    </tbody>
   </tgroup>
  </table>

  <table id="wait-event-extension-table">
   <title>Wait Events of Type <literal>Extension</literal></title>
   <tgroup cols="2">
    <thead>
     <row>
      <entry><literal>Extension</literal> Wait Event</entry>
      <entry>Description</entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry><literal>Extension</literal></entry>
      <entry>Waiting in an extension.</entry>
     </row>
    </tbody>
   </tgroup>
  </table>

  <table id="wait-event-io-table">
   <title>Wait Events of Type <literal>IO</literal></title>
   <tgroup cols="2">
    <thead>
     <row>
      <entry><literal>IO</literal> Wait Event</entry>
      <entry>Description</entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry><literal>BaseBackupRead</literal></entry>
      <entry>Waiting for base backup to read from a file.</entry>
     </row>
     <row>
      <entry><literal>BufFileRead</literal></entry>
      <entry>Waiting for a read from a buffered file.</entry>
     </row>
     <row>
      <entry><literal>BufFileWrite</literal></entry>
      <entry>Waiting for a write to a buffered file.</entry>
     </row>
     <row>
      <entry><literal>BufFileTruncate</literal></entry>
      <entry>Waiting for a buffered file to be truncated.</entry>
     </row>
     <row>
      <entry><literal>ControlFileRead</literal></entry>
      <entry>Waiting for a read from the <filename>pg_control</filename>
       file.</entry>
     </row>
     <row>
      <entry><literal>ControlFileSync</literal></entry>
      <entry>Waiting for the <filename>pg_control</filename> file to reach
       durable storage.</entry>
     </row>
     <row>
      <entry><literal>ControlFileSyncUpdate</literal></entry>
      <entry>Waiting for an update to the <filename>pg_control</filename> file
       to reach durable storage.</entry>
     </row>
     <row>
      <entry><literal>ControlFileWrite</literal></entry>
      <entry>Waiting for a write to the <filename>pg_control</filename>
       file.</entry>
     </row>
     <row>
      <entry><literal>ControlFileWriteUpdate</literal></entry>
      <entry>Waiting for a write to update the <filename>pg_control</filename>
       file.</entry>
     </row>
     <row>
      <entry><literal>CopyFileRead</literal></entry>
      <entry>Waiting for a read during a file copy operation.</entry>
     </row>
     <row>
      <entry><literal>CopyFileWrite</literal></entry>
      <entry>Waiting for a write during a file copy operation.</entry>
     </row>
     <row>
      <entry><literal>DSMFillZeroWrite</literal></entry>
      <entry>Waiting to fill a dynamic shared memory backing file with
       zeroes.</entry>
     </row>
     <row>
      <entry><literal>DataFileExtend</literal></entry>
      <entry>Waiting for a relation data file to be extended.</entry>
     </row>
     <row>
      <entry><literal>DataFileFlush</literal></entry>
      <entry>Waiting for a relation data file to reach durable storage.</entry>
     </row>
     <row>
      <entry><literal>DataFileImmediateSync</literal></entry>
      <entry>Waiting for an immediate synchronization of a relation data file to
       durable storage.</entry>
     </row>
     <row>
      <entry><literal>DataFilePrefetch</literal></entry>
      <entry>Waiting for an asynchronous prefetch from a relation data
       file.</entry>
     </row>
     <row>
      <entry><literal>DataFileRead</literal></entry>
      <entry>Waiting for a read from a relation data file.</entry>
     </row>
     <row>
      <entry><literal>DataFileSync</literal></entry>
      <entry>Waiting for changes to a relation data file to reach durable storage.</entry>
     </row>
     <row>
      <entry><literal>DataFileTruncate</literal></entry>
      <entry>Waiting for a relation data file to be truncated.</entry>
     </row>
     <row>
      <entry><literal>DataFileWrite</literal></entry>
      <entry>Waiting for a write to a relation data file.</entry>
     </row>
     <row>
      <entry><literal>LockFileAddToDataDirRead</literal></entry>
      <entry>Waiting for a read while adding a line to the data directory lock
       file.</entry>
     </row>
     <row>
      <entry><literal>LockFileAddToDataDirSync</literal></entry>
      <entry>Waiting for data to reach durable storage while adding a line to the
       data directory lock file.</entry>
     </row>
     <row>
      <entry><literal>LockFileAddToDataDirWrite</literal></entry>
      <entry>Waiting for a write while adding a line to the data directory
       lock file.</entry>
     </row>
     <row>
      <entry><literal>LockFileCreateRead</literal></entry>
      <entry>Waiting to read while creating the data directory lock
       file.</entry>
     </row>
     <row>
      <entry><literal>LockFileCreateSync</literal></entry>
      <entry>Waiting for data to reach durable storage while creating the data
       directory lock file.</entry>
     </row>
     <row>
      <entry><literal>LockFileCreateWrite</literal></entry>
      <entry>Waiting for a write while creating the data directory lock
       file.</entry>
     </row>
     <row>
      <entry><literal>LockFileReCheckDataDirRead</literal></entry>
      <entry>Waiting for a read during recheck of the data directory lock
       file.</entry>
     </row>
     <row>
      <entry><literal>LogicalRewriteCheckpointSync</literal></entry>
      <entry>Waiting for logical rewrite mappings to reach durable storage
       during a checkpoint.</entry>
     </row>
     <row>
      <entry><literal>LogicalRewriteMappingSync</literal></entry>
      <entry>Waiting for mapping data to reach durable storage during a logical
       rewrite.</entry>
     </row>
     <row>
      <entry><literal>LogicalRewriteMappingWrite</literal></entry>
      <entry>Waiting for a write of mapping data during a logical
       rewrite.</entry>
     </row>
     <row>
      <entry><literal>LogicalRewriteSync</literal></entry>
      <entry>Waiting for logical rewrite mappings to reach durable
       storage.</entry>
     </row>
     <row>
      <entry><literal>LogicalRewriteTruncate</literal></entry>
      <entry>Waiting for truncate of mapping data during a logical
       rewrite.</entry>
     </row>
     <row>
      <entry><literal>LogicalRewriteWrite</literal></entry>
      <entry>Waiting for a write of logical rewrite mappings.</entry>
     </row>
     <row>
      <entry><literal>RelationMapRead</literal></entry>
      <entry>Waiting for a read of the relation map file.</entry>
     </row>
     <row>
      <entry><literal>RelationMapSync</literal></entry>
      <entry>Waiting for the relation map file to reach durable storage.</entry>
     </row>
     <row>
      <entry><literal>RelationMapWrite</literal></entry>
      <entry>Waiting for a write to the relation map file.</entry>
     </row>
     <row>
      <entry><literal>ReorderBufferRead</literal></entry>
      <entry>Waiting for a read during reorder buffer management.</entry>
     </row>
     <row>
      <entry><literal>ReorderBufferWrite</literal></entry>
      <entry>Waiting for a write during reorder buffer management.</entry>
     </row>
     <row>
      <entry><literal>ReorderLogicalMappingRead</literal></entry>
      <entry>Waiting for a read of a logical mapping during reorder buffer
       management.</entry>
     </row>
     <row>
      <entry><literal>ReplicationSlotRead</literal></entry>
      <entry>Waiting for a read from a replication slot control file.</entry>
     </row>
     <row>
      <entry><literal>ReplicationSlotRestoreSync</literal></entry>
      <entry>Waiting for a replication slot control file to reach durable storage
       while restoring it to memory.</entry>
     </row>
     <row>
      <entry><literal>ReplicationSlotSync</literal></entry>
      <entry>Waiting for a replication slot control file to reach durable
       storage.</entry>
     </row>
     <row>
      <entry><literal>ReplicationSlotWrite</literal></entry>
      <entry>Waiting for a write to a replication slot control file.</entry>
     </row>
     <row>
      <entry><literal>SLRUFlushSync</literal></entry>
      <entry>Waiting for SLRU data to reach durable storage during a checkpoint
       or database shutdown.</entry>
     </row>
     <row>
      <entry><literal>SLRURead</literal></entry>
      <entry>Waiting for a read of an SLRU page.</entry>
     </row>
     <row>
      <entry><literal>SLRUSync</literal></entry>
      <entry>Waiting for SLRU data to reach durable storage following a page
       write.</entry>
     </row>
     <row>
      <entry><literal>SLRUWrite</literal></entry>
      <entry>Waiting for a write of an SLRU page.</entry>
     </row>
     <row>
      <entry><literal>SnapbuildRead</literal></entry>
      <entry>Waiting for a read of a serialized historical catalog
       snapshot.</entry>
     </row>
     <row>
      <entry><literal>SnapbuildSync</literal></entry>
      <entry>Waiting for a serialized historical catalog snapshot to reach
       durable storage.</entry>
     </row>
     <row>
      <entry><literal>SnapbuildWrite</literal></entry>
      <entry>Waiting for a write of a serialized historical catalog
       snapshot.</entry>
     </row>
     <row>
      <entry><literal>TimelineHistoryFileSync</literal></entry>
      <entry>Waiting for a timeline history file received via streaming
       replication to reach durable storage.</entry>
     </row>
     <row>
      <entry><literal>TimelineHistoryFileWrite</literal></entry>
      <entry>Waiting for a write of a timeline history file received via
       streaming replication.</entry>
     </row>
     <row>
      <entry><literal>TimelineHistoryRead</literal></entry>
      <entry>Waiting for a read of a timeline history file.</entry>
     </row>
     <row>
      <entry><literal>TimelineHistorySync</literal></entry>
      <entry>Waiting for a newly created timeline history file to reach durable
       storage.</entry>
     </row>
     <row>
      <entry><literal>TimelineHistoryWrite</literal></entry>
      <entry>Waiting for a write of a newly created timeline history
       file.</entry>
     </row>
     <row>
      <entry><literal>TwophaseFileRead</literal></entry>
      <entry>Waiting for a read of a two phase state file.</entry>
     </row>
     <row>
      <entry><literal>TwophaseFileSync</literal></entry>
      <entry>Waiting for a two phase state file to reach durable storage.</entry>
     </row>
     <row>
      <entry><literal>TwophaseFileWrite</literal></entry>
      <entry>Waiting for a write of a two phase state file.</entry>
     </row>
     <row>
      <entry><literal>VersionFileWrite</literal></entry>
      <entry>Waiting for the version file to be written while creating a database.</entry>
     </row>
     <row>
      <entry><literal>WALBootstrapSync</literal></entry>
      <entry>Waiting for WAL to reach durable storage during
       bootstrapping.</entry>
     </row>
     <row>
      <entry><literal>WALBootstrapWrite</literal></entry>
      <entry>Waiting for a write of a WAL page during bootstrapping.</entry>
     </row>
     <row>
      <entry><literal>WALCopyRead</literal></entry>
      <entry>Waiting for a read when creating a new WAL segment by copying an
       existing one.</entry>
     </row>
     <row>
      <entry><literal>WALCopySync</literal></entry>
      <entry>Waiting for a new WAL segment created by copying an existing one to
       reach durable storage.</entry>
     </row>
     <row>
      <entry><literal>WALCopyWrite</literal></entry>
      <entry>Waiting for a write when creating a new WAL segment by copying an
       existing one.</entry>
     </row>
     <row>
      <entry><literal>WALInitSync</literal></entry>
      <entry>Waiting for a newly initialized WAL file to reach durable
       storage.</entry>
     </row>
     <row>
      <entry><literal>WALInitWrite</literal></entry>
      <entry>Waiting for a write while initializing a new WAL file.</entry>
     </row>
     <row>
      <entry><literal>WALRead</literal></entry>
      <entry>Waiting for a read from a WAL file.</entry>
     </row>
     <row>
      <entry><literal>WALSenderTimelineHistoryRead</literal></entry>
      <entry>Waiting for a read from a timeline history file during a walsender
       timeline command.</entry>
     </row>
     <row>
      <entry><literal>WALSync</literal></entry>
      <entry>Waiting for a WAL file to reach durable storage.</entry>
     </row>
     <row>
      <entry><literal>WALSyncMethodAssign</literal></entry>
      <entry>Waiting for data to reach durable storage while assigning a new
       WAL sync method.</entry>
     </row>
     <row>
      <entry><literal>WALWrite</literal></entry>
      <entry>Waiting for a write to a WAL file.</entry>
     </row>
    </tbody>
   </tgroup>
  </table>

   <table id="wait-event-ipc-table">
    <title>Wait Events of Type <literal>IPC</literal></title>
    <tgroup cols="2">
     <thead>
      <row>
       <entry><literal>IPC</literal> Wait Event</entry>
       <entry>Description</entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry><literal>AppendReady</literal></entry>
       <entry>Waiting for subplan nodes of an <literal>Append</literal> plan
        node to be ready.</entry>
      </row>
      <row>
       <entry><literal>ArchiveCleanupCommand</literal></entry>
       <entry>Waiting for <xref linkend="guc-archive-cleanup-command"/> to
        complete.</entry>
      </row>
      <row>
       <entry><literal>ArchiveCommand</literal></entry>
       <entry>Waiting for <xref linkend="guc-archive-command"/> to
        complete.</entry>
      </row>
      <row>
       <entry><literal>BackendTermination</literal></entry>
       <entry>Waiting for the termination of another backend.</entry>
      </row>
      <row>
       <entry><literal>BackupWaitWalArchive</literal></entry>
       <entry>Waiting for WAL files required for a backup to be successfully
        archived.</entry>
      </row>
      <row>
       <entry><literal>BgWorkerShutdown</literal></entry>
       <entry>Waiting for background worker to shut down.</entry>
      </row>
      <row>
       <entry><literal>BgWorkerStartup</literal></entry>
       <entry>Waiting for background worker to start up.</entry>
      </row>
      <row>
       <entry><literal>BtreePage</literal></entry>
       <entry>Waiting for the page number needed to continue a parallel B-tree
        scan to become available.</entry>
      </row>
      <row>
       <entry><literal>BufferIO</literal></entry>
       <entry>Waiting for buffer I/O to complete.</entry>
      </row>
      <row>
       <entry><literal>CheckpointDone</literal></entry>
       <entry>Waiting for a checkpoint to complete.</entry>
      </row>
      <row>
       <entry><literal>CheckpointStart</literal></entry>
       <entry>Waiting for a checkpoint to start.</entry>
      </row>
      <row>
       <entry><literal>ExecuteGather</literal></entry>
       <entry>Waiting for activity from a child process while
        executing a <literal>Gather</literal> plan node.</entry>
      </row>
      <row>
       <entry><literal>HashBatchAllocate</literal></entry>
       <entry>Waiting for an elected Parallel Hash participant to allocate a hash
        table.</entry>
      </row>
      <row>
       <entry><literal>HashBatchElect</literal></entry>
       <entry>Waiting to elect a Parallel Hash participant to allocate a hash
        table.</entry>
      </row>
      <row>
       <entry><literal>HashBatchLoad</literal></entry>
       <entry>Waiting for other Parallel Hash participants to finish loading a
        hash table.</entry>
      </row>
      <row>
       <entry><literal>HashBuildAllocate</literal></entry>
       <entry>Waiting for an elected Parallel Hash participant to allocate the
        initial hash table.</entry>
      </row>
      <row>
       <entry><literal>HashBuildElect</literal></entry>
       <entry>Waiting to elect a Parallel Hash participant to allocate the
        initial hash table.</entry>
      </row>
      <row>
       <entry><literal>HashBuildHashInner</literal></entry>
       <entry>Waiting for other Parallel Hash participants to finish hashing the
        inner relation.</entry>
      </row>
      <row>
       <entry><literal>HashBuildHashOuter</literal></entry>
       <entry>Waiting for other Parallel Hash participants to finish partitioning
        the outer relation.</entry>
      </row>
      <row>
       <entry><literal>HashGrowBatchesAllocate</literal></entry>
       <entry>Waiting for an elected Parallel Hash participant to allocate more
        batches.</entry>
      </row>
      <row>
       <entry><literal>HashGrowBatchesDecide</literal></entry>
       <entry>Waiting to elect a Parallel Hash participant to decide on future
        batch growth.</entry>
      </row>
      <row>
       <entry><literal>HashGrowBatchesElect</literal></entry>
       <entry>Waiting to elect a Parallel Hash participant to allocate more
        batches.</entry>
      </row>
      <row>
       <entry><literal>HashGrowBatchesFinish</literal></entry>
       <entry>Waiting for an elected Parallel Hash participant to decide on
        future batch growth.</entry>
      </row>
      <row>
       <entry><literal>HashGrowBatchesRepartition</literal></entry>
       <entry>Waiting for other Parallel Hash participants to finish
        repartitioning.</entry>
      </row>
      <row>
       <entry><literal>HashGrowBucketsAllocate</literal></entry>
       <entry>Waiting for an elected Parallel Hash participant to finish
        allocating more buckets.</entry>
      </row>
      <row>
       <entry><literal>HashGrowBucketsElect</literal></entry>
       <entry>Waiting to elect a Parallel Hash participant to allocate more
        buckets.</entry>
      </row>
      <row>
       <entry><literal>HashGrowBucketsReinsert</literal></entry>
       <entry>Waiting for other Parallel Hash participants to finish inserting
        tuples into new buckets.</entry>
      </row>
      <row>
       <entry><literal>LogicalSyncData</literal></entry>
       <entry>Waiting for a logical replication remote server to send data for
        initial table synchronization.</entry>
      </row>
      <row>
       <entry><literal>LogicalSyncStateChange</literal></entry>
       <entry>Waiting for a logical replication remote server to change
        state.</entry>
      </row>
      <row>
       <entry><literal>MessageQueueInternal</literal></entry>
       <entry>Waiting for another process to be attached to a shared message
        queue.</entry>
      </row>
      <row>
       <entry><literal>MessageQueuePutMessage</literal></entry>
       <entry>Waiting to write a protocol message to a shared message queue.</entry>
      </row>
      <row>
       <entry><literal>MessageQueueReceive</literal></entry>
       <entry>Waiting to receive bytes from a shared message queue.</entry>
      </row>
      <row>
       <entry><literal>MessageQueueSend</literal></entry>
       <entry>Waiting to send bytes to a shared message queue.</entry>
      </row>
      <row>
       <entry><literal>ParallelBitmapScan</literal></entry>
       <entry>Waiting for parallel bitmap scan to become initialized.</entry>
      </row>
      <row>
       <entry><literal>ParallelCreateIndexScan</literal></entry>
       <entry>Waiting for parallel <command>CREATE INDEX</command> workers to
        finish heap scan.</entry>
      </row>
      <row>
       <entry><literal>ParallelFinish</literal></entry>
       <entry>Waiting for parallel workers to finish computing.</entry>
      </row>
      <row>
       <entry><literal>ProcArrayGroupUpdate</literal></entry>
       <entry>Waiting for the group leader to clear the transaction ID at
        end of a parallel operation.</entry>
      </row>
      <row>
       <entry><literal>ProcSignalBarrier</literal></entry>
       <entry>Waiting for a barrier event to be processed by all
        backends.</entry>
      </row>
      <row>
       <entry><literal>Promote</literal></entry>
       <entry>Waiting for standby promotion.</entry>
      </row>
      <row>
       <entry><literal>RecoveryConflictSnapshot</literal></entry>
       <entry>Waiting for recovery conflict resolution for a vacuum
        cleanup.</entry>
      </row>
      <row>
       <entry><literal>RecoveryConflictTablespace</literal></entry>
       <entry>Waiting for recovery conflict resolution for dropping a
        tablespace.</entry>
      </row>
      <row>
       <entry><literal>RecoveryEndCommand</literal></entry>
       <entry>Waiting for <xref linkend="guc-recovery-end-command"/> to
        complete.</entry>
      </row>
      <row>
       <entry><literal>RecoveryPause</literal></entry>
       <entry>Waiting for recovery to be resumed.</entry>
      </row>
      <row>
       <entry><literal>ReplicationOriginDrop</literal></entry>
       <entry>Waiting for a replication origin to become inactive so it can be
        dropped.</entry>
      </row>
      <row>
       <entry><literal>ReplicationSlotDrop</literal></entry>
       <entry>Waiting for a replication slot to become inactive so it can be
        dropped.</entry>
      </row>
      <row>
       <entry><literal>RestoreCommand</literal></entry>
       <entry>Waiting for <xref linkend="guc-restore-command"/> to
        complete.</entry>
      </row>
      <row>
       <entry><literal>SafeSnapshot</literal></entry>
       <entry>Waiting to obtain a valid snapshot for a <literal>READ ONLY
        DEFERRABLE</literal> transaction.</entry>
      </row>
      <row>
       <entry><literal>SyncRep</literal></entry>
       <entry>Waiting for confirmation from a remote server during synchronous
        replication.</entry>
      </row>
      <row>
       <entry><literal>WalReceiverExit</literal></entry>
       <entry>Waiting for the WAL receiver to exit.</entry>
      </row>
      <row>
       <entry><literal>WalReceiverWaitStart</literal></entry>
       <entry>Waiting for startup process to send initial data for streaming
        replication.</entry>
      </row>
      <row>
       <entry><literal>XactGroupUpdate</literal></entry>
       <entry>Waiting for the group leader to update transaction status at
        end of a parallel operation.</entry>
      </row>
     </tbody>
    </tgroup>
   </table>
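
   <para>
    For example, a session stuck on one of these events can be spotted
    through the wait event columns of
    <structname>pg_stat_activity</structname> (a minimal illustration;
    the rows returned will vary with server activity):
<programlisting>
SELECT pid, wait_event_type, wait_event
FROM pg_stat_activity
WHERE wait_event IS NOT NULL;
</programlisting>
   </para>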

   <table id="wait-event-lock-table">
    <title>Wait Events of Type <literal>Lock</literal></title>
    <tgroup cols="2">
     <thead>
      <row>
       <entry><literal>Lock</literal> Wait Event</entry>
       <entry>Description</entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry><literal>advisory</literal></entry>
       <entry>Waiting to acquire an advisory user lock.</entry>
      </row>
      <row>
       <entry><literal>extend</literal></entry>
       <entry>Waiting to extend a relation.</entry>
      </row>
      <row>
       <entry><literal>frozenid</literal></entry>
       <entry>Waiting to
        update <structname>pg_database</structname>.<structfield>datfrozenxid</structfield>
        and <structname>pg_database</structname>.<structfield>datminmxid</structfield>.</entry>
      </row>
      <row>
       <entry><literal>object</literal></entry>
       <entry>Waiting to acquire a lock on a non-relation database object.</entry>
      </row>
      <row>
       <entry><literal>page</literal></entry>
       <entry>Waiting to acquire a lock on a page of a relation.</entry>
      </row>
      <row>
       <entry><literal>relation</literal></entry>
       <entry>Waiting to acquire a lock on a relation.</entry>
      </row>
      <row>
       <entry><literal>spectoken</literal></entry>
       <entry>Waiting to acquire a speculative insertion lock.</entry>
      </row>
      <row>
       <entry><literal>transactionid</literal></entry>
       <entry>Waiting for a transaction to finish.</entry>
      </row>
      <row>
       <entry><literal>tuple</literal></entry>
       <entry>Waiting to acquire a lock on a tuple.</entry>
      </row>
      <row>
       <entry><literal>userlock</literal></entry>
       <entry>Waiting to acquire a user lock.</entry>
      </row>
      <row>
       <entry><literal>virtualxid</literal></entry>
       <entry>Waiting to acquire a virtual transaction ID lock.</entry>
      </row>
     </tbody>
    </tgroup>
   </table>
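
   <para>
    A simple way to observe a <literal>Lock</literal> wait is to make two
    sessions contend for the same advisory lock; the second session then
    reports <literal>advisory</literal> as its
    <structfield>wait_event</structfield> in
    <structname>pg_stat_activity</structname> (an illustration with an
    arbitrary lock key):
<programlisting>
-- session 1: acquire and hold the lock
SELECT pg_advisory_lock(12345);

-- session 2: blocks, showing wait_event_type = 'Lock',
-- wait_event = 'advisory'
SELECT pg_advisory_lock(12345);
</programlisting>
   </para>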
|
|
|
|
|
|
|
|
<table id="wait-event-lwlock-table">
|
|
|
|
<title>Wait Events of Type <literal>LWLock</literal></title>
|
|
|
|
<tgroup cols="2">
|
|
|
|
<thead>
|
|
|
|
<row>
|
|
|
|
<entry><literal>LWLock</literal> Wait Event</entry>
|
|
|
|
<entry>Description</entry>
|
|
|
|
</row>
|
|
|
|
</thead>
|
|
|
|
|
|
|
|
<tbody>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>AddinShmemInit</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to manage an extension's space allocation in shared
|
|
|
|
memory.</entry>
|
2020-05-14 05:03:39 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>AutoFile</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to update the <filename>postgresql.auto.conf</filename>
|
|
|
|
file.</entry>
|
2020-05-14 05:03:39 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>Autovacuum</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update the current state of autovacuum
|
|
|
|
workers.</entry>
|
2020-05-14 05:03:39 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>AutovacuumSchedule</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to ensure that a table selected for autovacuum
|
2020-05-14 05:36:58 +02:00
|
|
|
still needs vacuuming.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>BackgroundWorker</literal></entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
<entry>Waiting to read or update background worker state.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>BtreeVacuum</literal></entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
<entry>Waiting to read or update vacuum-related information for a
|
|
|
|
B-tree index.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>BufferContent</literal></entry>
|
|
|
|
<entry>Waiting to access a data page in memory.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<row>
|
|
|
|
<entry><literal>BufferMapping</literal></entry>
|
|
|
|
<entry>Waiting to associate a data block with a buffer in the buffer
|
|
|
|
pool.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>CheckpointerComm</literal></entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
<entry>Waiting to manage fsync requests.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>CommitTs</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update the last value set for a
|
|
|
|
transaction commit timestamp.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>CommitTsBuffer</literal></entry>
|
|
|
|
<entry>Waiting for I/O on a commit timestamp SLRU buffer.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>CommitTsSLRU</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to access the commit timestamp SLRU cache.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>ControlFile</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update the <filename>pg_control</filename>
|
|
|
|
file or create a new WAL file.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>DynamicSharedMemoryControl</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update dynamic shared memory allocation
|
|
|
|
information.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>LockFastPath</literal></entry>
|
|
|
|
<entry>Waiting to read or update a process' fast-path lock
|
|
|
|
information.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>LockManager</literal></entry>
|
|
|
|
<entry>Waiting to read or update information
|
|
|
|
about <quote>heavyweight</quote> locks.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>LogicalRepWorker</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update the state of logical replication
|
|
|
|
workers.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>MultiXactGen</literal></entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
<entry>Waiting to read or update shared multixact state.</entry>
|
|
|
|
</row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<row>
|
|
|
|
<entry><literal>MultiXactMemberBuffer</literal></entry>
|
|
|
|
<entry>Waiting for I/O on a multixact member SLRU buffer.</entry>
|
|
|
|
</row>
|
2020-05-14 05:36:58 +02:00
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>MultiXactMemberSLRU</literal></entry>
|
Rename SLRU structures and associated LWLocks.
Originally, the names assigned to SLRUs had no purpose other than
being shmem lookup keys, so not a lot of thought went into them.
As of v13, though, we're exposing them in the pg_stat_slru view and
the pg_stat_reset_slru function, so it seems advisable to take a bit
more care. Rename them to names based on the associated on-disk
storage directories (which fortunately we *did* think about, to some
extent; since those are also visible to DBAs, consistency seems like
a good thing). Also rename the associated LWLocks, since those names
are likewise user-exposed now as wait event names.
For the most part I only touched symbols used in the respective modules'
SimpleLruInit() calls, not the names of other related objects. This
renaming could have been taken further, and maybe someday we will do so.
But for now it seems undesirable to change the names of any globally
visible functions or structs, so some inconsistency is unavoidable.
(But I *did* terminate "oldserxid" with prejudice, as I found that
name both unreadable and not descriptive of the SLRU's contents.)
Table 27.12 needs re-alphabetization now, but I'll leave that till
after the other LWLock renamings I have in mind.
Discussion: https://postgr.es/m/28683.1589405363@sss.pgh.pa.us
2020-05-15 20:28:19 +02:00
|
|
|
<entry>Waiting to access the multixact member SLRU cache.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<row>
|
|
|
|
<entry><literal>MultiXactOffsetBuffer</literal></entry>
|
|
|
|
<entry>Waiting for I/O on a multixact offset SLRU buffer.</entry>
|
|
|
|
</row>
|
2020-05-14 05:36:58 +02:00
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>MultiXactOffsetSLRU</literal></entry>
|
Rename SLRU structures and associated LWLocks.
Originally, the names assigned to SLRUs had no purpose other than
being shmem lookup keys, so not a lot of thought went into them.
As of v13, though, we're exposing them in the pg_stat_slru view and
the pg_stat_reset_slru function, so it seems advisable to take a bit
more care. Rename them to names based on the associated on-disk
storage directories (which fortunately we *did* think about, to some
extent; since those are also visible to DBAs, consistency seems like
a good thing). Also rename the associated LWLocks, since those names
are likewise user-exposed now as wait event names.
For the most part I only touched symbols used in the respective modules'
SimpleLruInit() calls, not the names of other related objects. This
renaming could have been taken further, and maybe someday we will do so.
But for now it seems undesirable to change the names of any globally
visible functions or structs, so some inconsistency is unavoidable.
(But I *did* terminate "oldserxid" with prejudice, as I found that
name both unreadable and not descriptive of the SLRU's contents.)
Table 27.12 needs re-alphabetization now, but I'll leave that till
after the other LWLock renamings I have in mind.
Discussion: https://postgr.es/m/28683.1589405363@sss.pgh.pa.us
2020-05-15 20:28:19 +02:00
|
|
|
<entry>Waiting to access the multixact offset SLRU cache.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>MultiXactTruncation</literal></entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
<entry>Waiting to read or truncate multixact information.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>NotifyBuffer</literal></entry>
|
|
|
|
<entry>Waiting for I/O on a <command>NOTIFY</command> message SLRU
|
|
|
|
buffer.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>NotifyQueue</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update <command>NOTIFY</command> messages.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
2020-08-15 19:15:53 +02:00
|
|
|
<row>
|
|
|
|
<entry><literal>NotifyQueueTail</literal></entry>
|
|
|
|
<entry>Waiting to update limit on <command>NOTIFY</command> message
|
|
|
|
storage.</entry>
|
|
|
|
</row>
|
2020-05-14 05:36:58 +02:00
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>NotifySLRU</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to access the <command>NOTIFY</command> message SLRU
|
|
|
|
cache.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>OidGen</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to allocate a new OID.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>OldSnapshotTimeMap</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update old snapshot control information.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>ParallelAppend</literal></entry>
|
|
|
|
<entry>Waiting to choose the next subplan during Parallel Append plan
|
|
|
|
execution.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>ParallelHashJoin</literal></entry>
|
|
|
|
<entry>Waiting to synchronize workers during Parallel Hash Join plan
|
|
|
|
execution.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>ParallelQueryDSA</literal></entry>
|
|
|
|
<entry>Waiting for parallel query dynamic shared memory allocation.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>PerSessionDSA</literal></entry>
|
|
|
|
<entry>Waiting for parallel query dynamic shared memory allocation.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>PerSessionRecordType</literal></entry>
|
|
|
|
<entry>Waiting to access a parallel query's information about composite
|
|
|
|
types.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>PerSessionRecordTypmod</literal></entry>
|
|
|
|
<entry>Waiting to access a parallel query's information about type
|
|
|
|
modifiers that identify anonymous record types.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>PerXactPredicateList</literal></entry>
|
|
|
|
<entry>Waiting to access the list of predicate locks held by the current
|
|
|
|
serializable transaction during a parallel query.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>PredicateLockManager</literal></entry>
|
|
|
|
<entry>Waiting to access predicate lock information used by
|
2020-05-14 05:36:58 +02:00
|
|
|
serializable transactions.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>ProcArray</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to access the shared per-process data structures
|
|
|
|
(typically, to get a snapshot or report a session's transaction
|
|
|
|
ID).</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>RelationMapping</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update
|
|
|
|
a <filename>pg_filenode.map</filename> file (used to track the
|
|
|
|
filenode assignments of certain system catalogs).</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>RelCacheInit</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update a <filename>pg_internal.init</filename>
|
|
|
|
relation cache initialization file.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>ReplicationOrigin</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to create, drop or use a replication origin.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>ReplicationOriginState</literal></entry>
|
|
|
|
<entry>Waiting to read or update the progress of one replication
|
|
|
|
origin.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>ReplicationSlotAllocation</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to allocate or free a replication slot.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>ReplicationSlotControl</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update replication slot state.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>ReplicationSlotIO</literal></entry>
|
|
|
|
<entry>Waiting for I/O on a replication slot.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>SerialBuffer</literal></entry>
|
|
|
|
<entry>Waiting for I/O on a serializable transaction conflict SLRU
|
|
|
|
buffer.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>SerializableFinishedList</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to access the list of finished serializable
|
|
|
|
transactions.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>SerializablePredicateList</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to access the list of predicate locks held by
|
|
|
|
serializable transactions.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
2022-04-07 06:29:46 +02:00
|
|
|
<row>
|
|
|
|
<entry><literal>PgStatsDSA</literal></entry>
|
|
|
|
<entry>Waiting for stats dynamic shared memory allocator access</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>PgStatsHash</literal></entry>
|
|
|
|
<entry>Waiting for stats shared memory hash table access.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>PgStatsData</literal></entry>
|
|
|
|
<entry>Waiting for shared memory stats data access.</entry>
|
|
|
|
</row>
|
2020-05-14 05:36:58 +02:00
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>SerializableXactHash</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update information about serializable
|
|
|
|
transactions.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>SerialSLRU</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to access the serializable transaction conflict SLRU
|
|
|
|
cache.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>SharedTidBitmap</literal></entry>
|
|
|
|
<entry>Waiting to access a shared TID bitmap during a parallel bitmap
|
|
|
|
index scan.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>SharedTupleStore</literal></entry>
|
|
|
|
<entry>Waiting to access a shared tuple store during parallel
|
|
|
|
query.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>ShmemIndex</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to find or allocate space in shared memory.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>SInvalRead</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to retrieve messages from the shared catalog invalidation
|
|
|
|
queue.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>SInvalWrite</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to add a message to the shared catalog invalidation
|
|
|
|
queue.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>SubtransBuffer</literal></entry>
|
|
|
|
<entry>Waiting for I/O on a sub-transaction SLRU buffer.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>SubtransSLRU</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to access the sub-transaction SLRU cache.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>SyncRep</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update information about the state of
|
|
|
|
synchronous replication.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>SyncScan</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to select the starting location of a synchronized table
|
|
|
|
scan.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>TablespaceCreate</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to create or drop a tablespace.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>TwoPhaseState</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to read or update the state of prepared transactions.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>WALBufMapping</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to replace a page in WAL buffers.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>WALInsert</literal></entry>
|
|
|
|
<entry>Waiting to insert WAL data into a memory buffer.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>WALWrite</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting for WAL buffers to be written to disk.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
2020-08-15 19:15:53 +02:00
|
|
|
<row>
|
|
|
|
<entry><literal>WrapLimitsVacuum</literal></entry>
|
|
|
|
<entry>Waiting to update limits on transaction ID and multixact
|
|
|
|
consumption.</entry>
|
|
|
|
</row>
|
2020-05-14 05:36:58 +02:00
|
|
|
<row>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry><literal>XactBuffer</literal></entry>
|
|
|
|
<entry>Waiting for I/O on a transaction status SLRU buffer.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>XactSLRU</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to access the transaction status SLRU cache.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>XactTruncation</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to execute <function>pg_xact_status</function> or update
|
|
|
|
the oldest transaction ID available to it.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
2020-05-16 01:55:56 +02:00
|
|
|
<entry><literal>XidGen</literal></entry>
|
2020-05-16 00:11:03 +02:00
|
|
|
<entry>Waiting to allocate a new transaction ID.</entry>
|
2020-05-14 05:03:39 +02:00
|
|
|
</row>
|
|
|
|
</tbody>
|
2012-01-19 14:19:20 +01:00
|
|
|
</tgroup>
|
|
|
|
</table>
|
|
|
|
|
2016-03-10 18:44:09 +01:00
|
|
|
<note>
|
|
|
|
<para>
|
2020-05-14 05:36:58 +02:00
|
|
|
Extensions can add <literal>LWLock</literal> types to the list shown in
|
|
|
|
<xref linkend="wait-event-lwlock-table"/>. In some cases, the name
|
|
|
|
assigned by an extension will not be available in all server processes;
|
2020-05-16 03:47:21 +02:00
|
|
|
so an <literal>LWLock</literal> wait event might be reported as
|
2020-05-14 05:36:58 +02:00
|
|
|
just <quote><literal>extension</literal></quote> rather than the
|
|
|
|
extension-assigned name.
|
2016-03-10 18:44:09 +01:00
|
|
|
</para>
|
|
|
|
</note>
|
|
|
|
|
2020-05-14 05:36:58 +02:00
|
|
|
<table id="wait-event-timeout-table">
|
|
|
|
<title>Wait Events of Type <literal>Timeout</literal></title>
|
|
|
|
<tgroup cols="2">
|
|
|
|
<thead>
|
|
|
|
<row>
|
|
|
|
<entry><literal>Timeout</literal> Wait Event</entry>
|
|
|
|
<entry>Description</entry>
|
|
|
|
</row>
|
|
|
|
</thead>
|
|
|
|
|
|
|
|
<tbody>
|
|
|
|
<row>
|
|
|
|
<entry><literal>BaseBackupThrottle</literal></entry>
|
|
|
|
<entry>Waiting during base backup when throttling activity.</entry>
|
|
|
|
</row>
|
2022-03-16 01:37:58 +01:00
|
|
|
<row>
|
2022-03-16 03:53:28 +01:00
|
|
|
<entry><literal>CheckpointWriteDelay</literal></entry>
|
2022-03-16 01:37:58 +01:00
|
|
|
<entry>Waiting between writes while performing a checkpoint.</entry>
|
|
|
|
</row>
|
2020-05-14 05:36:58 +02:00
|
|
|
<row>
|
|
|
|
<entry><literal>PgSleep</literal></entry>
|
2020-05-16 03:47:21 +02:00
|
|
|
<entry>Waiting due to a call to <function>pg_sleep</function> or
|
|
|
|
a sibling function.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>RecoveryApplyDelay</literal></entry>
|
2020-05-16 03:47:21 +02:00
|
|
|
<entry>Waiting to apply WAL during recovery because of a delay
|
|
|
|
setting.</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>RecoveryRetrieveRetryInterval</literal></entry>
|
2020-05-16 03:47:21 +02:00
|
|
|
<entry>Waiting during recovery when WAL data is not available from any
|
|
|
|
source (<filename>pg_wal</filename>, archive or stream).</entry>
|
2020-05-14 05:36:58 +02:00
|
|
|
</row>
|
2022-03-16 03:35:16 +01:00
|
|
|
<row>
|
|
|
|
<entry><literal>RegisterSyncRequest</literal></entry>
|
|
|
|
<entry>Waiting while sending synchronization requests to the
|
|
|
|
checkpointer, because the request queue is full.</entry>
|
|
|
|
</row>
|
2020-05-14 05:36:58 +02:00
|
|
|
<row>
|
|
|
|
<entry><literal>VacuumDelay</literal></entry>
|
|
|
|
<entry>Waiting in a cost-based vacuum delay point.</entry>
|
|
|
|
</row>
|
2021-07-02 05:58:34 +02:00
|
|
|
<row>
|
|
|
|
<entry><literal>VacuumTruncate</literal></entry>
|
|
|
|
<entry>Waiting to acquire an exclusive lock to truncate off any
|
|
|
|
empty pages at the end of a vacuumed table.</entry>
|
|
|
|
</row>
|
2020-05-14 05:36:58 +02:00
|
|
|
</tbody>
|
|
|
|
</tgroup>
|
|
|
|
</table>
|
|
|
|
|
2016-03-10 18:44:09 +01:00
|
|
|
<para>
|
2020-05-14 05:36:58 +02:00
|
|
|
Here is an example of how wait events can be viewed:
|
2016-03-10 18:44:09 +01:00
|
|
|
|
|
|
|
<programlisting>
|
|
|
|
SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event IS NOT NULL;
|
2020-05-29 10:14:33 +02:00
|
|
|
pid | wait_event_type | wait_event
|
|
|
|
------+-----------------+------------
|
2016-03-10 18:44:09 +01:00
|
|
|
2540 | Lock | relation
|
2020-05-29 10:14:33 +02:00
|
|
|
6644 | LWLock | ProcArray
|
2016-03-10 18:44:09 +01:00
|
|
|
(2 rows)
|
|
|
|
</programlisting>
|
|
|
|
</para>
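<para>
 Wait events can also be aggregated to get a quick profile of what
 sessions are currently waiting on. For example (the output depends
 entirely on current activity):
<programlisting>
SELECT wait_event_type, wait_event, count(*)
  FROM pg_stat_activity
 WHERE wait_event IS NOT NULL
 GROUP BY wait_event_type, wait_event
 ORDER BY count(*) DESC;
</programlisting>
</para>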
|
2016-05-05 19:27:59 +02:00
|
|
|
|
2020-05-29 10:14:33 +02:00
|
|
|
</sect2>
|
|
|
|
|
|
|
|
<sect2 id="monitoring-pg-stat-replication-view">
|
|
|
|
<title><structname>pg_stat_replication</structname></title>
|
|
|
|
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_replication</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
The <structname>pg_stat_replication</structname> view will contain one row
|
|
|
|
per WAL sender process, showing statistics about replication to that
|
|
|
|
sender's connected standby server. Only directly connected standbys are
|
|
|
|
listed; no information is available about downstream standby servers.
|
|
|
|
</para>
|
|
|
|
|
2014-11-16 13:47:44 +01:00
|
|
|
<table id="pg-stat-replication-view" xreflabel="pg_stat_replication">
|
|
|
|
<title><structname>pg_stat_replication</structname> View</title>
|
2020-05-14 05:03:39 +02:00
|
|
|
<tgroup cols="1">
|
2014-11-16 13:47:44 +01:00
|
|
|
<thead>
|
2020-05-14 05:03:39 +02:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
Column Type
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Description
|
|
|
|
</para></entry>
|
2014-11-16 13:47:44 +01:00
|
|
|
</row>
|
|
|
|
</thead>
|
|
|
|
|
2020-05-14 05:03:39 +02:00
|
|
|
<tbody>
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>pid</structfield> <type>integer</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Process ID of a WAL sender process
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
2014-11-16 13:47:44 +01:00
|
|
|
|
2020-05-14 05:03:39 +02:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>usesysid</structfield> <type>oid</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
OID of the user logged into this WAL sender process
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
2014-11-16 13:47:44 +01:00
|
|
|
|
2020-05-14 05:03:39 +02:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>usename</structfield> <type>name</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Name of the user logged into this WAL sender process
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
2017-03-23 15:05:28 +01:00
|
|
|
|
2020-05-14 05:03:39 +02:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>application_name</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Name of the application that is connected
|
|
|
|
to this WAL sender
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
2017-03-23 15:05:28 +01:00
|
|
|
|
2020-05-14 05:03:39 +02:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>client_addr</structfield> <type>inet</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
IP address of the client connected to this WAL sender.
|
|
|
|
If this field is null, it indicates that the client is
|
|
|
|
connected via a Unix socket on the server machine.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>client_hostname</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Host name of the connected client, as reported by a
|
|
|
|
reverse DNS lookup of <structfield>client_addr</structfield>. This field will
|
|
|
|
only be non-null for IP connections, and only when <xref linkend="guc-log-hostname"/> is enabled.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>client_port</structfield> <type>integer</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
TCP port number that the client is using for communication
|
|
|
|
with this WAL sender, or <literal>-1</literal> if a Unix socket is used
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>backend_start</structfield> <type>timestamp with time zone</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time when this process was started, i.e., when the
|
|
|
|
client connected to this WAL sender
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>backend_xmin</structfield> <type>xid</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
This standby's <literal>xmin</literal> horizon reported
|
|
|
|
by <xref linkend="guc-hot-standby-feedback"/>.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>state</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Current WAL sender state.
|
|
|
|
Possible values are:
|
|
|
|
<itemizedlist>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>startup</literal>: This WAL sender is starting up.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>catchup</literal>: This WAL sender's connected standby is
|
|
|
|
catching up with the primary.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>streaming</literal>: This WAL sender is streaming changes
|
|
|
|
after its connected standby server has caught up with the primary.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>backup</literal>: This WAL sender is sending a backup.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>stopping</literal>: This WAL sender is stopping.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</itemizedlist>
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>sent_lsn</structfield> <type>pg_lsn</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Last write-ahead log location sent on this connection
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>write_lsn</structfield> <type>pg_lsn</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Last write-ahead log location written to disk by this standby
|
|
|
|
server
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>flush_lsn</structfield> <type>pg_lsn</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Last write-ahead log location flushed to disk by this standby
|
|
|
|
server
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>replay_lsn</structfield> <type>pg_lsn</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Last write-ahead log location replayed into the database on this
|
|
|
|
standby server
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>write_lag</structfield> <type>interval</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time elapsed between flushing recent WAL locally and receiving
|
|
|
|
notification that this standby server has written it (but not yet
|
|
|
|
flushed it or applied it). This can be used to gauge the delay that
|
|
|
|
<literal>synchronous_commit</literal> level
|
|
|
|
<literal>remote_write</literal> incurred while committing if this
|
|
|
|
server was configured as a synchronous standby.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>flush_lag</structfield> <type>interval</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time elapsed between flushing recent WAL locally and receiving
|
|
|
|
notification that this standby server has written and flushed it
|
|
|
|
(but not yet applied it). This can be used to gauge the delay that
|
|
|
|
<literal>synchronous_commit</literal> level
|
|
|
|
<literal>on</literal> incurred while committing if this
|
|
|
|
server was configured as a synchronous standby.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>replay_lag</structfield> <type>interval</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time elapsed between flushing recent WAL locally and receiving
|
|
|
|
notification that this standby server has written, flushed and
|
|
|
|
applied it. This can be used to gauge the delay that
|
|
|
|
<literal>synchronous_commit</literal> level
|
|
|
|
<literal>remote_apply</literal> incurred while committing if this
|
|
|
|
server was configured as a synchronous standby.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>sync_priority</structfield> <type>integer</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Priority of this standby server for being chosen as the
|
|
|
|
synchronous standby in priority-based synchronous replication.
|
|
|
|
This has no effect in quorum-based synchronous replication.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>sync_state</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Synchronous state of this standby server.
|
|
|
|
Possible values are:
|
|
|
|
<itemizedlist>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>async</literal>: This standby server is asynchronous.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>potential</literal>: This standby server is now asynchronous,
|
|
|
|
but can potentially become synchronous if one of the current
|
|
|
|
synchronous ones fails.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>sync</literal>: This standby server is synchronous.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>quorum</literal>: This standby server is considered a candidate
|
|
|
|
for quorum standbys.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</itemizedlist>
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>reply_time</structfield> <type>timestamp with time zone</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Send time of last reply message received from standby server
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
</tbody>
|
|
|
|
</tgroup>
|
|
|
|
</table>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
The lag times reported in the <structname>pg_stat_replication</structname>
|
|
|
|
view are measurements of the time taken for recent WAL to be written,
|
|
|
|
flushed and replayed and for the sender to know about it. These times
|
|
|
|
represent the commit delay that was (or would have been) introduced by each
|
|
|
|
synchronous commit level, if the remote server was configured as a
|
|
|
|
synchronous standby. For an asynchronous standby, the
|
|
|
|
<structfield>replay_lag</structfield> column approximates the delay
|
|
|
|
before recent transactions became visible to queries. If the standby
|
|
|
|
server has entirely caught up with the sending server and there is no more
|
|
|
|
WAL activity, the most recently measured lag times will continue to be
|
|
|
|
displayed for a short time and then show NULL.
|
|
|
|
</para>
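<para>
 For example, the current lag for each connected standby can be
 inspected with:
<programlisting>
SELECT application_name, state, sync_state,
       write_lag, flush_lag, replay_lag
  FROM pg_stat_replication;
</programlisting>
 On a fully caught-up, idle system these lag columns will revert to
 NULL after a short time, as described above.
</para>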
|
|
|
|
|
|
|
|
<para>
|
|
|
|
Lag times work automatically for physical replication. Logical decoding
|
|
|
|
plugins can emit tracking messages; if they do not, the tracking
|
|
|
|
mechanism will simply display NULL lag.
|
|
|
|
</para>
|
|
|
|
|
|
|
|
<note>
|
|
|
|
<para>
|
|
|
|
The reported lag times are not predictions of how long it will take for
|
|
|
|
the standby to catch up with the sending server assuming the current
|
|
|
|
rate of replay. Such a system would show similar times while new WAL is
|
|
|
|
being generated, but would differ when the sender becomes idle. In
|
|
|
|
particular, when the standby has caught up completely,
|
|
|
|
<structname>pg_stat_replication</structname> shows the time taken to
|
|
|
|
write, flush and replay the most recent reported WAL location rather than
|
|
|
|
zero as some users might expect. This is consistent with the goal of
|
|
|
|
measuring synchronous commit and transaction visibility delays for
|
|
|
|
recent write transactions.
|
|
|
|
To reduce confusion for users expecting a different model of lag, the
|
|
|
|
lag columns revert to NULL after a short time on a fully replayed idle
|
|
|
|
system. Monitoring systems should choose whether to represent this
|
2017-03-23 15:05:28 +01:00
|
|
|
as missing data, as zero, or to continue displaying the last known value.
|
|
|
|
</para>
|
|
|
|
</note>
|
|
|
|
|
2020-05-29 10:14:33 +02:00
|
|
|
</sect2>
|
|
|
|
|
2020-10-08 05:39:08 +02:00
|
|
|
<sect2 id="monitoring-pg-stat-replication-slots-view">
|
|
|
|
<title><structname>pg_stat_replication_slots</structname></title>
|
|
|
|
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_replication_slots</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
The <structname>pg_stat_replication_slots</structname> view will contain
|
|
|
|
one row per logical replication slot, showing statistics about its usage.
|
|
|
|
</para>
|
|
|
|
|
|
|
|
<table id="pg-stat-replication-slots-view" xreflabel="pg_stat_replication_slots">
|
|
|
|
<title><structname>pg_stat_replication_slots</structname> View</title>
|
|
|
|
<tgroup cols="1">
|
|
|
|
<thead>
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
Column Type
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Description
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
</thead>
|
|
|
|
|
|
|
|
<tbody>
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
2020-10-20 06:54:36 +02:00
|
|
|
<structfield>slot_name</structfield> <type>text</type>
|
2020-10-08 05:39:08 +02:00
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
A unique, cluster-wide identifier for the replication slot
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>spill_txns</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2021-01-29 06:24:49 +01:00
|
|
|
Number of transactions spilled to disk once the memory used by
|
|
|
|
logical decoding to decode changes from WAL has exceeded
|
2020-10-27 03:52:39 +01:00
|
|
|
<literal>logical_decoding_work_mem</literal>. The counter is
|
2021-07-16 10:35:38 +02:00
|
|
|
incremented for both top-level transactions and subtransactions.
|
2020-10-08 05:39:08 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>spill_count</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2021-01-29 06:24:49 +01:00
|
|
|
Number of times transactions were spilled to disk while decoding
|
|
|
|
changes from WAL for this slot. This counter is incremented each time
|
|
|
|
a transaction is spilled, and the same transaction may be spilled
|
|
|
|
multiple times.
|
2020-10-08 05:39:08 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>spill_bytes</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2020-10-27 03:52:39 +01:00
|
|
|
Amount of decoded transaction data spilled to disk while performing
|
|
|
|
decoding of changes from WAL for this slot. This and other spill
|
2020-10-28 02:58:51 +01:00
|
|
|
counters can be used to gauge the I/O which occurred during logical
|
|
|
|
decoding and allow tuning <literal>logical_decoding_work_mem</literal>.
|
2020-10-08 05:39:08 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
2020-10-29 04:41:51 +01:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>stream_txns</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2021-01-29 06:24:49 +01:00
|
|
|
Number of in-progress transactions streamed to the decoding output
|
|
|
|
plugin after the memory used by logical decoding to decode changes
|
|
|
|
from WAL for this slot has exceeded
|
|
|
|
<literal>logical_decoding_work_mem</literal>. Streaming only
|
2021-07-16 10:35:38 +02:00
|
|
|
works with top-level transactions (subtransactions can't be streamed
|
2021-01-29 06:24:49 +01:00
|
|
|
independently), so the counter is not incremented for subtransactions.
|
2020-10-29 04:41:51 +01:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>stream_count</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of times in-progress transactions were streamed to the decoding
|
2021-01-29 06:24:49 +01:00
|
|
|
output plugin while decoding changes from WAL for this slot. This
|
|
|
|
counter is incremented each time a transaction is streamed, and the
|
|
|
|
same transaction may be streamed multiple times.
|
2020-10-29 04:41:51 +01:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>stream_bytes</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2021-05-06 07:51:26 +02:00
|
|
|
Amount of transaction data decoded for streaming in-progress
|
|
|
|
transactions to the decoding output plugin while decoding changes from
|
|
|
|
WAL for this slot. This and other streaming counters for this slot can
|
|
|
|
be used to tune <literal>logical_decoding_work_mem</literal>.
|
2020-10-29 04:41:51 +01:00
|
|
|
</para>
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
|
2021-04-16 04:04:43 +02:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>total_txns</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of decoded transactions sent to the decoding output plugin for
|
2021-07-16 10:35:38 +02:00
|
|
|
this slot. This counts top-level transactions only, and is not incremented
|
2021-04-19 05:32:47 +02:00
|
|
|
for subtransactions. Note that this includes the transactions that are
|
|
|
|
streamed and/or spilled.
|
2021-04-16 04:04:43 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>total_bytes</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2021-05-06 07:51:26 +02:00
|
|
|
Amount of transaction data decoded for sending transactions to the
|
|
|
|
decoding output plugin while decoding changes from WAL for this slot.
|
|
|
|
Note that this includes data that is streamed and/or spilled.
|
2021-04-16 04:04:43 +02:00
|
|
|
</para>
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
|
2020-10-08 05:39:08 +02:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time at which these statistics were last reset
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
</tbody>
|
|
|
|
</tgroup>
|
|
|
|
</table>
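<para>
 For example, slots whose decoded changes are frequently spilled to
 disk can be identified with the following query; consistently large
 <structfield>spill_bytes</structfield> values suggest that increasing
 <literal>logical_decoding_work_mem</literal> might help:
<programlisting>
SELECT slot_name, spill_txns, spill_count,
       pg_size_pretty(spill_bytes) AS spill_size
  FROM pg_stat_replication_slots
 ORDER BY spill_bytes DESC;
</programlisting>
</para>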
|
|
|
|
|
|
|
|
</sect2>
|
|
|
|
|
2020-05-29 10:14:33 +02:00
|
|
|
<sect2 id="monitoring-pg-stat-wal-receiver-view">
|
|
|
|
<title><structname>pg_stat_wal_receiver</structname></title>
|
|
|
|
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_wal_receiver</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
The <structname>pg_stat_wal_receiver</structname> view will contain only
|
|
|
|
one row, showing statistics about the WAL receiver from that receiver's
|
|
|
|
connected server.
|
|
|
|
</para>
|
|
|
|
|
2016-01-07 20:21:19 +01:00
|
|
|
<table id="pg-stat-wal-receiver-view" xreflabel="pg_stat_wal_receiver">
|
|
|
|
<title><structname>pg_stat_wal_receiver</structname> View</title>
|
2020-05-14 05:03:39 +02:00
|
|
|
<tgroup cols="1">
|
2016-01-07 20:21:19 +01:00
|
|
|
<thead>
|
2020-05-14 05:03:39 +02:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
Column Type
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Description
|
|
|
|
</para></entry>
|
2016-01-07 20:21:19 +01:00
|
|
|
</row>
|
|
|
|
</thead>
|
|
|
|
|
2020-05-14 05:03:39 +02:00
|
|
|
<tbody>
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>pid</structfield> <type>integer</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
       Process ID of the WAL receiver process
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>status</structfield> <type>text</type>
      </para>
      <para>
       Activity status of the WAL receiver process
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>receive_start_lsn</structfield> <type>pg_lsn</type>
      </para>
      <para>
       First write-ahead log location used when WAL receiver is
       started
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>receive_start_tli</structfield> <type>integer</type>
      </para>
      <para>
       First timeline number used when WAL receiver is started
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>written_lsn</structfield> <type>pg_lsn</type>
      </para>
      <para>
       Last write-ahead log location already received and written to disk,
       but not flushed. This should not be used for data integrity checks.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>flushed_lsn</structfield> <type>pg_lsn</type>
      </para>
      <para>
       Last write-ahead log location already received and flushed to
       disk, the initial value of this field being the first log location used
       when WAL receiver is started
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>received_tli</structfield> <type>integer</type>
      </para>
      <para>
       Timeline number of last write-ahead log location received and
       flushed to disk, the initial value of this field being the timeline
       number of the first log location used when WAL receiver is started
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>last_msg_send_time</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Send time of last message received from origin WAL sender
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>last_msg_receipt_time</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Receipt time of last message received from origin WAL sender
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>latest_end_lsn</structfield> <type>pg_lsn</type>
      </para>
      <para>
       Last write-ahead log location reported to origin WAL sender
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>latest_end_time</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Time of last write-ahead log location reported to origin WAL sender
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>slot_name</structfield> <type>text</type>
      </para>
      <para>
       Replication slot name used by this WAL receiver
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>sender_host</structfield> <type>text</type>
      </para>
      <para>
       Host of the <productname>PostgreSQL</productname> instance
       this WAL receiver is connected to. This can be a host name,
       an IP address, or a directory path if the connection is via
       Unix socket.  (The path case can be distinguished because it
       will always be an absolute path, beginning with <literal>/</literal>.)
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>sender_port</structfield> <type>integer</type>
      </para>
      <para>
       Port number of the <productname>PostgreSQL</productname> instance
       this WAL receiver is connected to.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>conninfo</structfield> <type>text</type>
      </para>
      <para>
       Connection string used by this WAL receiver,
       with security-sensitive fields obfuscated.
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>

 </sect2>

 <sect2 id="monitoring-pg-stat-recovery-prefetch">
  <title><structname>pg_stat_recovery_prefetch</structname></title>

  <indexterm>
   <primary>pg_stat_recovery_prefetch</primary>
  </indexterm>

  <para>
   The <structname>pg_stat_recovery_prefetch</structname> view will contain
   only one row.  The columns <structfield>wal_distance</structfield>,
   <structfield>block_distance</structfield> and
   <structfield>io_depth</structfield> show current values, and the
   other columns show cumulative counters that can be reset
   with the <function>pg_stat_reset_shared</function> function.
  </para>
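
  <para>
   For example, the cumulative counters could be reset with a call such as
   the following (shown here only as an illustration):
<programlisting>
SELECT pg_stat_reset_shared('recovery_prefetch');
</programlisting>
  </para>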

  <table id="pg-stat-recovery-prefetch-view" xreflabel="pg_stat_recovery_prefetch">
   <title><structname>pg_stat_recovery_prefetch</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry">
       <para role="column_definition">
        <structfield>prefetch</structfield> <type>bigint</type>
       </para>
       <para>
        Number of blocks prefetched because they were not in the buffer pool
       </para>
      </entry>
     </row>

     <row>
      <entry role="catalog_table_entry">
       <para role="column_definition">
        <structfield>hit</structfield> <type>bigint</type>
       </para>
       <para>
        Number of blocks not prefetched because they were already in the buffer pool
       </para>
      </entry>
     </row>

     <row>
      <entry role="catalog_table_entry">
       <para role="column_definition">
        <structfield>skip_init</structfield> <type>bigint</type>
       </para>
       <para>
        Number of blocks not prefetched because they would be zero-initialized
       </para>
      </entry>
     </row>

     <row>
      <entry role="catalog_table_entry">
       <para role="column_definition">
        <structfield>skip_new</structfield> <type>bigint</type>
       </para>
       <para>
        Number of blocks not prefetched because they didn't exist yet
       </para>
      </entry>
     </row>

     <row>
      <entry role="catalog_table_entry">
       <para role="column_definition">
        <structfield>skip_fpw</structfield> <type>bigint</type>
       </para>
       <para>
        Number of blocks not prefetched because a full page image was included in the WAL
       </para>
      </entry>
     </row>

     <row>
      <entry role="catalog_table_entry">
       <para role="column_definition">
        <structfield>skip_rep</structfield> <type>bigint</type>
       </para>
       <para>
        Number of blocks not prefetched because they were already recently prefetched
       </para>
      </entry>
     </row>

     <row>
      <entry role="catalog_table_entry">
       <para role="column_definition">
        <structfield>wal_distance</structfield> <type>int</type>
       </para>
       <para>
        How many bytes ahead the prefetcher is looking
       </para>
      </entry>
     </row>

     <row>
      <entry role="catalog_table_entry">
       <para role="column_definition">
        <structfield>block_distance</structfield> <type>int</type>
       </para>
       <para>
        How many blocks ahead the prefetcher is looking
       </para>
      </entry>
     </row>

     <row>
      <entry role="catalog_table_entry">
       <para role="column_definition">
        <structfield>io_depth</structfield> <type>int</type>
       </para>
       <para>
        How many prefetches have been initiated but are not yet known to have completed
       </para>
      </entry>
     </row>
    </tbody>
   </tgroup>
  </table>

 </sect2>

 <sect2 id="monitoring-pg-stat-subscription">
  <title><structname>pg_stat_subscription</structname></title>

  <indexterm>
   <primary>pg_stat_subscription</primary>
  </indexterm>

  <table id="pg-stat-subscription" xreflabel="pg_stat_subscription">
   <title><structname>pg_stat_subscription</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>subid</structfield> <type>oid</type>
      </para>
      <para>
       OID of the subscription
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>subname</structfield> <type>name</type>
      </para>
      <para>
       Name of the subscription
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>pid</structfield> <type>integer</type>
      </para>
      <para>
       Process ID of the subscription worker process
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>relid</structfield> <type>oid</type>
      </para>
      <para>
       OID of the relation that the worker is synchronizing; null for the
       main apply worker
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>received_lsn</structfield> <type>pg_lsn</type>
      </para>
      <para>
       Last write-ahead log location received, the initial value of
       this field being 0
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>last_msg_send_time</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Send time of last message received from origin WAL sender
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>last_msg_receipt_time</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Receipt time of last message received from origin WAL sender
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>latest_end_lsn</structfield> <type>pg_lsn</type>
      </para>
      <para>
       Last write-ahead log location reported to origin WAL sender
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>latest_end_time</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Time of last write-ahead log location reported to origin WAL
       sender
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>

 </sect2>

 <sect2 id="monitoring-pg-stat-subscription-stats">
  <title><structname>pg_stat_subscription_stats</structname></title>

  <indexterm>
   <primary>pg_stat_subscription_stats</primary>
  </indexterm>

  <para>
   The <structname>pg_stat_subscription_stats</structname> view will contain
   one row per subscription.
  </para>

  <table id="pg-stat-subscription-stats" xreflabel="pg_stat_subscription_stats">
   <title><structname>pg_stat_subscription_stats</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>subid</structfield> <type>oid</type>
      </para>
      <para>
       OID of the subscription
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>subname</structfield> <type>name</type>
      </para>
      <para>
       Name of the subscription
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>apply_error_count</structfield> <type>bigint</type>
      </para>
      <para>
       Number of times an error occurred while applying changes
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>sync_error_count</structfield> <type>bigint</type>
      </para>
      <para>
       Number of times an error occurred during the initial table
       synchronization
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>stats_reset</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Time at which these statistics were last reset
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>

 </sect2>

 <sect2 id="monitoring-pg-stat-ssl-view">
  <title><structname>pg_stat_ssl</structname></title>

  <indexterm>
   <primary>pg_stat_ssl</primary>
  </indexterm>

  <para>
   The <structname>pg_stat_ssl</structname> view will contain one row per
   backend or WAL sender process, showing statistics about SSL usage on
   this connection.  It can be joined to <structname>pg_stat_activity</structname>
   or <structname>pg_stat_replication</structname> on the
   <structfield>pid</structfield> column to get more details about the
   connection.
  </para>
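
  <para>
   For example, such a join might look like the following (an illustrative
   query, not part of the view definitions themselves):
<programlisting>
SELECT a.pid, a.usename, s.ssl, s.version, s.cipher
FROM pg_stat_activity a
JOIN pg_stat_ssl s ON s.pid = a.pid;
</programlisting>
  </para>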

  <table id="pg-stat-ssl-view" xreflabel="pg_stat_ssl">
   <title><structname>pg_stat_ssl</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>pid</structfield> <type>integer</type>
      </para>
      <para>
       Process ID of a backend or WAL sender process
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>ssl</structfield> <type>boolean</type>
      </para>
      <para>
       True if SSL is used on this connection
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>version</structfield> <type>text</type>
      </para>
      <para>
       Version of SSL in use, or NULL if SSL is not in use
       on this connection
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>cipher</structfield> <type>text</type>
      </para>
      <para>
       Name of SSL cipher in use, or NULL if SSL is not in use
       on this connection
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>bits</structfield> <type>integer</type>
      </para>
      <para>
       Number of bits in the encryption algorithm used, or NULL
       if SSL is not used on this connection
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>client_dn</structfield> <type>text</type>
      </para>
      <para>
       Distinguished Name (DN) field from the client certificate
       used, or NULL if no client certificate was supplied or if SSL
       is not in use on this connection.  This field is truncated if the
       DN field is longer than <symbol>NAMEDATALEN</symbol> (64 characters
       in a standard build).
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>client_serial</structfield> <type>numeric</type>
      </para>
      <para>
       Serial number of the client certificate, or NULL if no client
       certificate was supplied or if SSL is not in use on this connection.  The
       combination of certificate serial number and certificate issuer uniquely
       identifies a certificate (unless the issuer erroneously reuses serial
       numbers).
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>issuer_dn</structfield> <type>text</type>
      </para>
      <para>
       DN of the issuer of the client certificate, or NULL if no client
       certificate was supplied or if SSL is not in use on this connection.
       This field is truncated like <structfield>client_dn</structfield>.
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>

 </sect2>

 <sect2 id="monitoring-pg-stat-gssapi-view">
  <title><structname>pg_stat_gssapi</structname></title>

  <indexterm>
   <primary>pg_stat_gssapi</primary>
  </indexterm>

  <para>
   The <structname>pg_stat_gssapi</structname> view will contain one row per
   backend, showing information about GSSAPI usage on this connection.  It can
   be joined to <structname>pg_stat_activity</structname> or
   <structname>pg_stat_replication</structname> on the
   <structfield>pid</structfield> column to get more details about the
   connection.
  </para>
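
  <para>
   For example, one might check which connections used GSSAPI, and whether
   they are encrypted, with a query along these lines (an illustrative
   query, not part of the view itself):
<programlisting>
SELECT a.pid, a.usename, g.gss_authenticated, g.principal, g.encrypted
FROM pg_stat_activity a
JOIN pg_stat_gssapi g ON g.pid = a.pid;
</programlisting>
  </para>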

  <table id="pg-stat-gssapi-view" xreflabel="pg_stat_gssapi">
   <title><structname>pg_stat_gssapi</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>pid</structfield> <type>integer</type>
      </para>
      <para>
       Process ID of a backend
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>gss_authenticated</structfield> <type>boolean</type>
      </para>
      <para>
       True if GSSAPI authentication was used for this connection
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>principal</structfield> <type>text</type>
      </para>
      <para>
       Principal used to authenticate this connection, or NULL
       if GSSAPI was not used to authenticate this connection.  This
       field is truncated if the principal is longer than
       <symbol>NAMEDATALEN</symbol> (64 characters in a standard build).
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>encrypted</structfield> <type>boolean</type>
      </para>
      <para>
       True if GSSAPI encryption is in use on this connection
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>

 </sect2>

 <sect2 id="monitoring-pg-stat-archiver-view">
  <title><structname>pg_stat_archiver</structname></title>

  <indexterm>
   <primary>pg_stat_archiver</primary>
  </indexterm>

  <para>
   The <structname>pg_stat_archiver</structname> view will always have a
   single row, containing data about the archiver process of the cluster.
  </para>

  <table id="pg-stat-archiver-view" xreflabel="pg_stat_archiver">
   <title><structname>pg_stat_archiver</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
|
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>archived_count</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of WAL files that have been successfully archived
|
|
|
|
</para></entry>
|
2014-01-28 18:58:22 +01:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
2014-01-28 18:58:22 +01:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>last_archived_wal</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
Fix race condition in 028_pitr_timelines.pl test, add note to docs.
The 028_pitr_timelines.pl test would sometimes hang, waiting for a WAL
segment that was just filled up to be archived. It was because the
test used 'pg_stat_archiver.last_archived_wal' to check if a file was
archived, but the order that WAL files are archived when a standby is
promoted is not fully deterministic, and 'last_archived_wal' tracks
the last segment that was archived, not the highest-numbered WAL
segment. Because of that, if the archiver archived segment 3, and then
2, 'last_archived_wal' say 2, and the test query would think that 3
has not been archived yet.
Normally, WAL files are marked ready for archival in order, and the
archiver process will process them in order, so that issue doesn't
arise. We have used the same query on 'last_archived_wal' in a few
other tests with no problem. But when a standby is promoted, things
are a bit chaotic. After promotion, the server will try to archive all
the WAL segments from the old timeline that are in pg_wal, as well as
the history file and any new WAL segments on the new timeline. The
end-of-recovery checkpoint will create the .ready files for all the
WAL files on the old timeline, but at the same time, the new timeline
is opened up for business. A file from the new timeline can therefore
be archived before the files from the old timeline have been marked as
ready for archival.
It turns out that we don't really need to wait for the archival in
this particular test, because the standby server is about to be
stopped, and stopping a server will wait for the end-of-recovery
checkpoint and all WAL archivals to finish, anyway. So we can just
remove it from the test.
Add a note to the docs on 'pg_stat_archiver' view that files can be
archived out of order.
Reviewed-by: Tom Lane
Discussion: https://www.postgresql.org/message-id/3186114.1644960507@sss.pgh.pa.us
2022-02-16 00:37:48 +01:00
|
|
|
Name of the WAL file most recently successfully archived
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
2014-01-28 18:58:22 +01:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
2014-01-28 18:58:22 +01:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>last_archived_time</structfield> <type>timestamp with time zone</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
Fix race condition in 028_pitr_timelines.pl test, add note to docs.
The 028_pitr_timelines.pl test would sometimes hang, waiting for a WAL
segment that was just filled up to be archived. It was because the
test used 'pg_stat_archiver.last_archived_wal' to check if a file was
archived, but the order that WAL files are archived when a standby is
promoted is not fully deterministic, and 'last_archived_wal' tracks
the last segment that was archived, not the highest-numbered WAL
segment. Because of that, if the archiver archived segment 3, and then
2, 'last_archived_wal' say 2, and the test query would think that 3
has not been archived yet.
Normally, WAL files are marked ready for archival in order, and the
archiver process will process them in order, so that issue doesn't
arise. We have used the same query on 'last_archived_wal' in a few
other tests with no problem. But when a standby is promoted, things
are a bit chaotic. After promotion, the server will try to archive all
the WAL segments from the old timeline that are in pg_wal, as well as
the history file and any new WAL segments on the new timeline. The
end-of-recovery checkpoint will create the .ready files for all the
WAL files on the old timeline, but at the same time, the new timeline
is opened up for business. A file from the new timeline can therefore
be archived before the files from the old timeline have been marked as
ready for archival.
It turns out that we don't really need to wait for the archival in
this particular test, because the standby server is about to be
stopped, and stopping a server will wait for the end-of-recovery
checkpoint and all WAL archivals to finish, anyway. So we can just
remove it from the test.
Add a note to the docs on 'pg_stat_archiver' view that files can be
archived out of order.
Reviewed-by: Tom Lane
Discussion: https://www.postgresql.org/message-id/3186114.1644960507@sss.pgh.pa.us
2022-02-16 00:37:48 +01:00
|
|
|
Time of the most recent successful archive operation
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
2014-01-28 18:58:22 +01:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
2014-01-28 18:58:22 +01:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>failed_count</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of failed attempts for archiving WAL files
|
|
|
|
</para></entry>
|
2014-01-28 18:58:22 +01:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
2014-01-28 18:58:22 +01:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>last_failed_wal</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
Fix race condition in 028_pitr_timelines.pl test, add note to docs.
The 028_pitr_timelines.pl test would sometimes hang, waiting for a WAL
segment that was just filled up to be archived. It was because the
test used 'pg_stat_archiver.last_archived_wal' to check if a file was
archived, but the order that WAL files are archived when a standby is
promoted is not fully deterministic, and 'last_archived_wal' tracks
the last segment that was archived, not the highest-numbered WAL
segment. Because of that, if the archiver archived segment 3, and then
2, 'last_archived_wal' say 2, and the test query would think that 3
has not been archived yet.
Normally, WAL files are marked ready for archival in order, and the
archiver process will process them in order, so that issue doesn't
arise. We have used the same query on 'last_archived_wal' in a few
other tests with no problem. But when a standby is promoted, things
are a bit chaotic. After promotion, the server will try to archive all
the WAL segments from the old timeline that are in pg_wal, as well as
the history file and any new WAL segments on the new timeline. The
end-of-recovery checkpoint will create the .ready files for all the
WAL files on the old timeline, but at the same time, the new timeline
is opened up for business. A file from the new timeline can therefore
be archived before the files from the old timeline have been marked as
ready for archival.
It turns out that we don't really need to wait for the archival in
this particular test, because the standby server is about to be
stopped, and stopping a server will wait for the end-of-recovery
checkpoint and all WAL archivals to finish, anyway. So we can just
remove it from the test.
Add a note to the docs on 'pg_stat_archiver' view that files can be
archived out of order.
Reviewed-by: Tom Lane
Discussion: https://www.postgresql.org/message-id/3186114.1644960507@sss.pgh.pa.us
2022-02-16 00:37:48 +01:00
|
|
|
Name of the WAL file of the most recent failed archival operation
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
2014-01-28 18:58:22 +01:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
2014-01-28 18:58:22 +01:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>last_failed_time</structfield> <type>timestamp with time zone</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
Fix race condition in 028_pitr_timelines.pl test, add note to docs.
The 028_pitr_timelines.pl test would sometimes hang, waiting for a WAL
segment that was just filled up to be archived. It was because the
test used 'pg_stat_archiver.last_archived_wal' to check if a file was
archived, but the order that WAL files are archived when a standby is
promoted is not fully deterministic, and 'last_archived_wal' tracks
the last segment that was archived, not the highest-numbered WAL
segment. Because of that, if the archiver archived segment 3, and then
2, 'last_archived_wal' say 2, and the test query would think that 3
has not been archived yet.
Normally, WAL files are marked ready for archival in order, and the
archiver process will process them in order, so that issue doesn't
arise. We have used the same query on 'last_archived_wal' in a few
other tests with no problem. But when a standby is promoted, things
are a bit chaotic. After promotion, the server will try to archive all
the WAL segments from the old timeline that are in pg_wal, as well as
the history file and any new WAL segments on the new timeline. The
end-of-recovery checkpoint will create the .ready files for all the
WAL files on the old timeline, but at the same time, the new timeline
is opened up for business. A file from the new timeline can therefore
be archived before the files from the old timeline have been marked as
ready for archival.
It turns out that we don't really need to wait for the archival in
this particular test, because the standby server is about to be
stopped, and stopping a server will wait for the end-of-recovery
checkpoint and all WAL archivals to finish, anyway. So we can just
remove it from the test.
Add a note to the docs on 'pg_stat_archiver' view that files can be
archived out of order.
Reviewed-by: Tom Lane
Discussion: https://www.postgresql.org/message-id/3186114.1644960507@sss.pgh.pa.us
2022-02-16 00:37:48 +01:00
|
|
|
Time of the most recent failed archival operation
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
2014-01-28 18:58:22 +01:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
2014-01-28 18:58:22 +01:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time at which these statistics were last reset
|
|
|
|
</para></entry>
|
2014-01-28 18:58:22 +01:00
|
|
|
</row>
|
|
|
|
</tbody>
|
|
|
|
</tgroup>
|
|
|
|
</table>
|
|
|
|
|
Fix race condition in 028_pitr_timelines.pl test, add note to docs.
The 028_pitr_timelines.pl test would sometimes hang, waiting for a WAL
segment that was just filled up to be archived. It was because the
test used 'pg_stat_archiver.last_archived_wal' to check if a file was
archived, but the order that WAL files are archived when a standby is
promoted is not fully deterministic, and 'last_archived_wal' tracks
the last segment that was archived, not the highest-numbered WAL
segment. Because of that, if the archiver archived segment 3, and then
2, 'last_archived_wal' say 2, and the test query would think that 3
has not been archived yet.
Normally, WAL files are marked ready for archival in order, and the
archiver process will process them in order, so that issue doesn't
arise. We have used the same query on 'last_archived_wal' in a few
other tests with no problem. But when a standby is promoted, things
are a bit chaotic. After promotion, the server will try to archive all
the WAL segments from the old timeline that are in pg_wal, as well as
the history file and any new WAL segments on the new timeline. The
end-of-recovery checkpoint will create the .ready files for all the
WAL files on the old timeline, but at the same time, the new timeline
is opened up for business. A file from the new timeline can therefore
be archived before the files from the old timeline have been marked as
ready for archival.
It turns out that we don't really need to wait for the archival in
this particular test, because the standby server is about to be
stopped, and stopping a server will wait for the end-of-recovery
checkpoint and all WAL archivals to finish, anyway. So we can just
remove it from the test.
Add a note to the docs on 'pg_stat_archiver' view that files can be
archived out of order.
Reviewed-by: Tom Lane
Discussion: https://www.postgresql.org/message-id/3186114.1644960507@sss.pgh.pa.us
2022-02-16 00:37:48 +01:00
|
|
|
<para>
|
|
|
|
Normally, WAL files are archived in order, oldest to newest, but that is
|
|
|
|
not guaranteed, and does not hold under special circumstances like when
|
|
|
|
promoting a standby or after crash recovery. Therefore it is not safe to
|
|
|
|
assume that all files older than
|
|
|
|
<structfield>last_archived_wal</structfield> have also been successfully
|
|
|
|
archived.
|
|
|
|
</para>
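  <para>
   As a simple health check, the counters and timestamps in this view can be
   combined to see whether archiving is succeeding. For example, the
   following illustrative query reports the totals and whether the most
   recent archival attempt failed:
<programlisting>
SELECT archived_count,
       failed_count,
       last_archived_wal,
       last_failed_time &gt; last_archived_time AS last_attempt_failed
  FROM pg_stat_archiver;
</programlisting>
   Note that <structfield>last_failed_time</structfield> or
   <structfield>last_archived_time</structfield> will be null if no attempt
   of that kind has occurred yet, in which case the comparison yields null.
  </para>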
 </sect2>

 <sect2 id="monitoring-pg-stat-bgwriter-view">
  <title><structname>pg_stat_bgwriter</structname></title>

  <indexterm>
   <primary>pg_stat_bgwriter</primary>
  </indexterm>

  <para>
   The <structname>pg_stat_bgwriter</structname> view will always have a
   single row, containing global data for the cluster.
  </para>

  <table id="pg-stat-bgwriter-view" xreflabel="pg_stat_bgwriter">
   <title><structname>pg_stat_bgwriter</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>checkpoints_timed</structfield> <type>bigint</type>
      </para>
      <para>
       Number of scheduled checkpoints that have been performed
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>checkpoints_req</structfield> <type>bigint</type>
      </para>
      <para>
       Number of requested checkpoints that have been performed
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>checkpoint_write_time</structfield> <type>double precision</type>
      </para>
      <para>
       Total amount of time that has been spent in the portion of
       checkpoint processing where files are written to disk, in milliseconds
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>checkpoint_sync_time</structfield> <type>double precision</type>
      </para>
      <para>
       Total amount of time that has been spent in the portion of
       checkpoint processing where files are synchronized to disk, in
       milliseconds
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>buffers_checkpoint</structfield> <type>bigint</type>
      </para>
      <para>
       Number of buffers written during checkpoints
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>buffers_clean</structfield> <type>bigint</type>
      </para>
      <para>
       Number of buffers written by the background writer
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>maxwritten_clean</structfield> <type>bigint</type>
      </para>
      <para>
       Number of times the background writer stopped a cleaning
       scan because it had written too many buffers
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>buffers_backend</structfield> <type>bigint</type>
      </para>
      <para>
       Number of buffers written directly by a backend
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>buffers_backend_fsync</structfield> <type>bigint</type>
      </para>
      <para>
       Number of times a backend had to execute its own
       <function>fsync</function> call (normally the background writer handles those
       even when the backend does its own write)
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>buffers_alloc</structfield> <type>bigint</type>
      </para>
      <para>
       Number of buffers allocated
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>stats_reset</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Time at which these statistics were last reset
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
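  <para>
   The buffer counters in this view are often combined to see where buffer
   writes are coming from. For example, the following illustrative query
   computes the fraction of all buffer writes performed directly by
   backends; a persistently high value can hint that the background writer
   or checkpointer is not keeping up:
<programlisting>
SELECT buffers_backend::float8 /
       NULLIF(buffers_checkpoint + buffers_clean + buffers_backend, 0)
       AS backend_write_fraction
  FROM pg_stat_bgwriter;
</programlisting>
  </para>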
 </sect2>

 <sect2 id="monitoring-pg-stat-wal-view">
  <title><structname>pg_stat_wal</structname></title>

  <indexterm>
   <primary>pg_stat_wal</primary>
  </indexterm>

  <para>
   The <structname>pg_stat_wal</structname> view will always have a
   single row, containing data about WAL activity of the cluster.
  </para>

  <table id="pg-stat-wal-view" xreflabel="pg_stat_wal">
   <title><structname>pg_stat_wal</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>wal_records</structfield> <type>bigint</type>
      </para>
      <para>
       Total number of WAL records generated
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>wal_fpi</structfield> <type>bigint</type>
      </para>
      <para>
       Total number of WAL full page images generated
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>wal_bytes</structfield> <type>numeric</type>
      </para>
      <para>
       Total amount of WAL generated in bytes
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>wal_buffers_full</structfield> <type>bigint</type>
      </para>
      <para>
       Number of times WAL data was written to disk because WAL buffers became full
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>wal_write</structfield> <type>bigint</type>
      </para>
      <para>
       Number of times WAL buffers were written out to disk via
       <function>XLogWrite</function> request.
       See <xref linkend="wal-configuration"/> for more information about
       the internal WAL function <function>XLogWrite</function>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>wal_sync</structfield> <type>bigint</type>
      </para>
      <para>
       Number of times WAL files were synced to disk via
       <function>issue_xlog_fsync</function> request
       (if <xref linkend="guc-fsync"/> is <literal>on</literal> and
       <xref linkend="guc-wal-sync-method"/> is either
       <literal>fdatasync</literal>, <literal>fsync</literal> or
       <literal>fsync_writethrough</literal>, otherwise zero).
       See <xref linkend="wal-configuration"/> for more information about
       the internal WAL function <function>issue_xlog_fsync</function>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>wal_write_time</structfield> <type>double precision</type>
      </para>
      <para>
       Total amount of time spent writing WAL buffers to disk via
       <function>XLogWrite</function> request, in milliseconds
       (if <xref linkend="guc-track-wal-io-timing"/> is enabled,
       otherwise zero). This includes the sync time when
       <varname>wal_sync_method</varname> is either
       <literal>open_datasync</literal> or <literal>open_sync</literal>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>wal_sync_time</structfield> <type>double precision</type>
      </para>
      <para>
       Total amount of time spent syncing WAL files to disk via
       <function>issue_xlog_fsync</function> request, in milliseconds
       (if <varname>track_wal_io_timing</varname> is enabled,
       <varname>fsync</varname> is <literal>on</literal>, and
       <varname>wal_sync_method</varname> is either
       <literal>fdatasync</literal>, <literal>fsync</literal> or
       <literal>fsync_writethrough</literal>, otherwise zero).
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>stats_reset</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Time at which these statistics were last reset
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
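  <para>
   When <xref linkend="guc-track-wal-io-timing"/> is enabled, the time and
   count columns of this view can be combined to estimate the average
   duration of WAL write and sync requests, as in this illustrative query:
<programlisting>
SELECT wal_write_time / NULLIF(wal_write, 0) AS avg_write_ms,
       wal_sync_time  / NULLIF(wal_sync, 0)  AS avg_sync_ms
  FROM pg_stat_wal;
</programlisting>
  </para>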
 </sect2>

 <sect2 id="monitoring-pg-stat-database-view">
  <title><structname>pg_stat_database</structname></title>

  <indexterm>
   <primary>pg_stat_database</primary>
  </indexterm>

  <para>
   The <structname>pg_stat_database</structname> view will contain one row
   for each database in the cluster, plus one for shared objects, showing
   database-wide statistics.
  </para>
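  <para>
   A commonly derived figure from this view is the buffer cache hit ratio,
   computed from the <structfield>blks_hit</structfield> and
   <structfield>blks_read</structfield> columns described below, as in this
   illustrative query:
<programlisting>
SELECT datname,
       blks_hit::float8 / NULLIF(blks_hit + blks_read, 0) AS cache_hit_ratio
  FROM pg_stat_database;
</programlisting>
  </para>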
  <table id="pg-stat-database-view" xreflabel="pg_stat_database">
   <title><structname>pg_stat_database</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>datid</structfield> <type>oid</type>
      </para>
      <para>
       OID of this database, or 0 for objects belonging to a shared
       relation
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>datname</structfield> <type>name</type>
      </para>
      <para>
       Name of this database, or <literal>NULL</literal> for shared
       objects.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>numbackends</structfield> <type>integer</type>
      </para>
      <para>
       Number of backends currently connected to this database, or
       <literal>NULL</literal> for shared objects. This is the only column
       in this view that returns a value reflecting current state; all other
       columns return the accumulated values since the last reset.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>xact_commit</structfield> <type>bigint</type>
      </para>
      <para>
       Number of transactions in this database that have been
       committed
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>xact_rollback</structfield> <type>bigint</type>
      </para>
      <para>
       Number of transactions in this database that have been
       rolled back
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>blks_read</structfield> <type>bigint</type>
      </para>
      <para>
       Number of disk blocks read in this database
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>blks_hit</structfield> <type>bigint</type>
      </para>
      <para>
       Number of times disk blocks were found already in the buffer
       cache, so that a read was not necessary (this only includes hits in the
       PostgreSQL buffer cache, not the operating system's file system cache)
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>tup_returned</structfield> <type>bigint</type>
      </para>
      <para>
       Number of live rows fetched by sequential scans and index entries
       returned by index scans in this database
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>tup_fetched</structfield> <type>bigint</type>
      </para>
      <para>
       Number of live rows fetched by index scans in this database
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>tup_inserted</structfield> <type>bigint</type>
      </para>
      <para>
       Number of rows inserted by queries in this database
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>tup_updated</structfield> <type>bigint</type>
      </para>
      <para>
       Number of rows updated by queries in this database
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>tup_deleted</structfield> <type>bigint</type>
      </para>
      <para>
       Number of rows deleted by queries in this database
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>conflicts</structfield> <type>bigint</type>
      </para>
      <para>
       Number of queries canceled due to conflicts with recovery
       in this database. (Conflicts occur only on standby servers; see
       <link linkend="monitoring-pg-stat-database-conflicts-view">
       <structname>pg_stat_database_conflicts</structname></link> for details.)
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>temp_files</structfield> <type>bigint</type>
      </para>
      <para>
       Number of temporary files created by queries in this database.
       All temporary files are counted, regardless of why the temporary file
       was created (e.g., sorting or hashing), and regardless of the
       <xref linkend="guc-log-temp-files"/> setting.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>temp_bytes</structfield> <type>bigint</type>
      </para>
      <para>
       Total amount of data written to temporary files by queries in
       this database. All temporary files are counted, regardless of why
       the temporary file was created, and
       regardless of the <xref linkend="guc-log-temp-files"/> setting.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>deadlocks</structfield> <type>bigint</type>
      </para>
      <para>
       Number of deadlocks detected in this database
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>checksum_failures</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of data page checksum failures detected in this
|
|
|
|
database (or on a shared object), or NULL if data checksums are not
|
|
|
|
enabled.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>checksum_last_failure</structfield> <type>timestamp with time zone</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time at which the last data page checksum failure was detected in
|
|
|
|
this database (or on a shared object), or NULL if data checksums are not
|
|
|
|
enabled.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>blk_read_time</structfield> <type>double precision</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time spent reading data file blocks by backends in this database,
|
2020-05-22 16:33:58 +02:00
|
|
|
in milliseconds (if <xref linkend="guc-track-io-timing"/> is enabled,
|
|
|
|
otherwise zero)
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>blk_write_time</structfield> <type>double precision</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time spent writing data file blocks by backends in this database,
|
2020-05-22 16:33:58 +02:00
|
|
|
in milliseconds (if <xref linkend="guc-track-io-timing"/> is enabled,
|
|
|
|
otherwise zero)
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
2021-01-17 13:34:09 +01:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>session_time</structfield> <type>double precision</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time spent by database sessions in this database, in milliseconds
|
|
|
|
(note that statistics are only updated when the state of a session
|
|
|
|
changes, so if sessions have been idle for a long time, this idle time
|
|
|
|
won't be included)
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>active_time</structfield> <type>double precision</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time spent executing SQL statements in this database, in milliseconds
|
|
|
|
(this corresponds to the states <literal>active</literal> and
|
|
|
|
<literal>fastpath function call</literal> in
|
|
|
|
<link linkend="monitoring-pg-stat-activity-view">
|
|
|
|
<structname>pg_stat_activity</structname></link>)
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>idle_in_transaction_time</structfield> <type>double precision</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time spent idling while in a transaction in this database, in milliseconds
|
|
|
|
(this corresponds to the states <literal>idle in transaction</literal> and
|
|
|
|
<literal>idle in transaction (aborted)</literal> in
|
|
|
|
<link linkend="monitoring-pg-stat-activity-view">
|
|
|
|
<structname>pg_stat_activity</structname></link>)
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>sessions</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Total number of sessions established to this database
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>sessions_abandoned</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of database sessions to this database that were terminated
|
|
|
|
because connection to the client was lost
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>sessions_fatal</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of database sessions to this database that were terminated
|
|
|
|
by fatal errors
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>sessions_killed</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of database sessions to this database that were terminated
|
|
|
|
by operator intervention
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
2020-05-14 05:03:39 +02:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Time at which these statistics were last reset
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
</tbody>
|
2012-02-25 15:20:49 +01:00
|
|
|
</tgroup>
|
|
|
|
</table>
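
   <para>
    As an illustration, the cumulative counters in this view can be combined
    in ordinary SQL.  The following query is an example only, not part of the
    statistics system itself; it estimates the shared-buffer cache hit
    percentage per database from <structfield>blks_hit</structfield> and the
    companion <structfield>blks_read</structfield> counter:
<programlisting>
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
         AS cache_hit_pct
FROM pg_stat_database;
</programlisting>
   </para>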

  </sect2>

  <sect2 id="monitoring-pg-stat-database-conflicts-view">
   <title><structname>pg_stat_database_conflicts</structname></title>

   <indexterm>
    <primary>pg_stat_database_conflicts</primary>
   </indexterm>

   <para>
    The <structname>pg_stat_database_conflicts</structname> view will contain
    one row per database, showing database-wide statistics about
    query cancels occurring due to conflicts with recovery on standby servers.
    This view will only contain information on standby servers, since
    conflicts do not occur on primary servers.
   </para>

   <table id="pg-stat-database-conflicts-view" xreflabel="pg_stat_database_conflicts">
    <title><structname>pg_stat_database_conflicts</structname> View</title>
    <tgroup cols="1">
     <thead>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        Column Type
       </para>
       <para>
        Description
       </para></entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>datid</structfield> <type>oid</type>
       </para>
       <para>
        OID of a database
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>datname</structfield> <type>name</type>
       </para>
       <para>
        Name of this database
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>confl_tablespace</structfield> <type>bigint</type>
       </para>
       <para>
        Number of queries in this database that have been canceled due to
        dropped tablespaces
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>confl_lock</structfield> <type>bigint</type>
       </para>
       <para>
        Number of queries in this database that have been canceled due to
        lock timeouts
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>confl_snapshot</structfield> <type>bigint</type>
       </para>
       <para>
        Number of queries in this database that have been canceled due to
        old snapshots
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>confl_bufferpin</structfield> <type>bigint</type>
       </para>
       <para>
        Number of queries in this database that have been canceled due to
        pinned buffers
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>confl_deadlock</structfield> <type>bigint</type>
       </para>
       <para>
        Number of queries in this database that have been canceled due to
        deadlocks
       </para></entry>
      </row>
     </tbody>
    </tgroup>
   </table>
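
   <para>
    As an illustration (the query below is an example only, not part of the
    statistics system), the per-cause counters can be summed to rank
    databases by recovery-conflict cancellations on a standby:
<programlisting>
SELECT datname,
       confl_tablespace + confl_lock + confl_snapshot
         + confl_bufferpin + confl_deadlock AS total_cancels
FROM pg_stat_database_conflicts
ORDER BY total_cancels DESC;
</programlisting>
   </para>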

  </sect2>

  <sect2 id="monitoring-pg-stat-all-tables-view">
   <title><structname>pg_stat_all_tables</structname></title>

   <indexterm>
    <primary>pg_stat_all_tables</primary>
   </indexterm>

   <para>
    The <structname>pg_stat_all_tables</structname> view will contain
    one row for each table in the current database (including TOAST
    tables), showing statistics about accesses to that specific table. The
    <structname>pg_stat_user_tables</structname> and
    <structname>pg_stat_sys_tables</structname> views
    contain the same information,
    but filtered to only show user and system tables respectively.
   </para>

   <table id="pg-stat-all-tables-view" xreflabel="pg_stat_all_tables">
    <title><structname>pg_stat_all_tables</structname> View</title>
    <tgroup cols="1">
     <thead>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        Column Type
       </para>
       <para>
        Description
       </para></entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>relid</structfield> <type>oid</type>
       </para>
       <para>
        OID of a table
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>schemaname</structfield> <type>name</type>
       </para>
       <para>
        Name of the schema that this table is in
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>relname</structfield> <type>name</type>
       </para>
       <para>
        Name of this table
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>seq_scan</structfield> <type>bigint</type>
       </para>
       <para>
        Number of sequential scans initiated on this table
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>seq_tup_read</structfield> <type>bigint</type>
       </para>
       <para>
        Number of live rows fetched by sequential scans
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>idx_scan</structfield> <type>bigint</type>
       </para>
       <para>
        Number of index scans initiated on this table
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>idx_tup_fetch</structfield> <type>bigint</type>
       </para>
       <para>
        Number of live rows fetched by index scans
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>n_tup_ins</structfield> <type>bigint</type>
       </para>
       <para>
        Number of rows inserted
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>n_tup_upd</structfield> <type>bigint</type>
       </para>
       <para>
        Number of rows updated (includes HOT updated rows)
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>n_tup_del</structfield> <type>bigint</type>
       </para>
       <para>
        Number of rows deleted
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>n_tup_hot_upd</structfield> <type>bigint</type>
       </para>
       <para>
        Number of rows HOT updated (i.e., with no separate index
        update required)
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>n_live_tup</structfield> <type>bigint</type>
       </para>
       <para>
        Estimated number of live rows
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>n_dead_tup</structfield> <type>bigint</type>
       </para>
       <para>
        Estimated number of dead rows
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>n_mod_since_analyze</structfield> <type>bigint</type>
       </para>
       <para>
        Estimated number of rows modified since this table was last analyzed
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>n_ins_since_vacuum</structfield> <type>bigint</type>
       </para>
       <para>
        Estimated number of rows inserted since this table was last vacuumed
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>last_vacuum</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
        Last time at which this table was manually vacuumed
        (not counting <command>VACUUM FULL</command>)
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>last_autovacuum</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
        Last time at which this table was vacuumed by the autovacuum
        daemon
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>last_analyze</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
        Last time at which this table was manually analyzed
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>last_autoanalyze</structfield> <type>timestamp with time zone</type>
       </para>
       <para>
        Last time at which this table was analyzed by the autovacuum
        daemon
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>vacuum_count</structfield> <type>bigint</type>
       </para>
       <para>
        Number of times this table has been manually vacuumed
        (not counting <command>VACUUM FULL</command>)
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>autovacuum_count</structfield> <type>bigint</type>
       </para>
       <para>
        Number of times this table has been vacuumed by the autovacuum
        daemon
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>analyze_count</structfield> <type>bigint</type>
       </para>
       <para>
        Number of times this table has been manually analyzed
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>autoanalyze_count</structfield> <type>bigint</type>
       </para>
       <para>
        Number of times this table has been analyzed by the autovacuum
        daemon
       </para></entry>
      </row>
     </tbody>
    </tgroup>
   </table>
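
   <para>
    As an illustration, these counters are often used to spot tables that
    may need vacuuming or better indexing.  The following example query
    (illustrative only) lists the tables with the most dead rows, together
    with their scan counts and last autovacuum time:
<programlisting>
SELECT schemaname, relname, n_live_tup, n_dead_tup,
       seq_scan, idx_scan, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
</programlisting>
   </para>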

  </sect2>

  <sect2 id="monitoring-pg-stat-all-indexes-view">
   <title><structname>pg_stat_all_indexes</structname></title>

   <indexterm>
    <primary>pg_stat_all_indexes</primary>
   </indexterm>

   <para>
    The <structname>pg_stat_all_indexes</structname> view will contain
    one row for each index in the current database,
    showing statistics about accesses to that specific index. The
    <structname>pg_stat_user_indexes</structname> and
    <structname>pg_stat_sys_indexes</structname> views
    contain the same information,
    but filtered to only show user and system indexes respectively.
   </para>

   <table id="pg-stat-all-indexes-view" xreflabel="pg_stat_all_indexes">
    <title><structname>pg_stat_all_indexes</structname> View</title>
    <tgroup cols="1">
     <thead>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        Column Type
       </para>
       <para>
        Description
       </para></entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>relid</structfield> <type>oid</type>
       </para>
       <para>
        OID of the table for this index
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>indexrelid</structfield> <type>oid</type>
       </para>
       <para>
        OID of this index
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>schemaname</structfield> <type>name</type>
       </para>
       <para>
        Name of the schema this index is in
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>relname</structfield> <type>name</type>
       </para>
       <para>
        Name of the table for this index
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>indexrelname</structfield> <type>name</type>
       </para>
       <para>
        Name of this index
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>idx_scan</structfield> <type>bigint</type>
       </para>
       <para>
        Number of index scans initiated on this index
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>idx_tup_read</structfield> <type>bigint</type>
       </para>
       <para>
        Number of index entries returned by scans on this index
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>idx_tup_fetch</structfield> <type>bigint</type>
       </para>
       <para>
        Number of live table rows fetched by simple index scans using this
        index
       </para></entry>
      </row>
     </tbody>
    </tgroup>
   </table>

   <para>
    Indexes can be used by simple index scans, <quote>bitmap</quote> index scans,
    and the optimizer. In a bitmap scan
    the output of several indexes can be combined via AND or OR rules,
    so it is difficult to associate individual heap row fetches
    with specific indexes when a bitmap scan is used. Therefore, a bitmap
    scan increments the
    <structname>pg_stat_all_indexes</structname>.<structfield>idx_tup_read</structfield>
    count(s) for the index(es) it uses, and it increments the
    <structname>pg_stat_all_tables</structname>.<structfield>idx_tup_fetch</structfield>
    count for the table, but it does not affect
    <structname>pg_stat_all_indexes</structname>.<structfield>idx_tup_fetch</structfield>.
    The optimizer also accesses indexes to check for supplied constants
    whose values are outside the recorded range of the optimizer statistics
    because the optimizer statistics might be stale.
   </para>

   <note>
    <para>
     The <structfield>idx_tup_read</structfield> and <structfield>idx_tup_fetch</structfield> counts
     can be different even without any use of bitmap scans,
     because <structfield>idx_tup_read</structfield> counts
     index entries retrieved from the index while <structfield>idx_tup_fetch</structfield>
     counts live rows fetched from the table. The latter will be less if any
     dead or not-yet-committed rows are fetched using the index, or if any
     heap fetches are avoided by means of an index-only scan.
    </para>
   </note>
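
   <para>
    As an illustration of this distinction, the two counters can be compared
    side by side per index with a query such as the following (an example
    only, not part of the statistics system):
<programlisting>
SELECT schemaname, relname, indexrelname,
       idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_tup_read DESC
LIMIT 10;
</programlisting>
   </para>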

  </sect2>

  <sect2 id="monitoring-pg-statio-all-tables-view">
   <title><structname>pg_statio_all_tables</structname></title>

   <indexterm>
    <primary>pg_statio_all_tables</primary>
   </indexterm>

   <para>
    The <structname>pg_statio_all_tables</structname> view will contain
    one row for each table in the current database (including TOAST
    tables), showing statistics about I/O on that specific table. The
    <structname>pg_statio_user_tables</structname> and
    <structname>pg_statio_sys_tables</structname> views
    contain the same information,
    but filtered to only show user and system tables respectively.
   </para>

   <table id="pg-statio-all-tables-view" xreflabel="pg_statio_all_tables">
    <title><structname>pg_statio_all_tables</structname> View</title>
    <tgroup cols="1">
     <thead>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        Column Type
       </para>
       <para>
        Description
       </para></entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>relid</structfield> <type>oid</type>
       </para>
       <para>
        OID of a table
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>schemaname</structfield> <type>name</type>
       </para>
       <para>
        Name of the schema that this table is in
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>relname</structfield> <type>name</type>
       </para>
       <para>
        Name of this table
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>heap_blks_read</structfield> <type>bigint</type>
       </para>
       <para>
        Number of disk blocks read from this table
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>heap_blks_hit</structfield> <type>bigint</type>
       </para>
       <para>
        Number of buffer hits in this table
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>idx_blks_read</structfield> <type>bigint</type>
       </para>
       <para>
        Number of disk blocks read from all indexes on this table
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>idx_blks_hit</structfield> <type>bigint</type>
       </para>
       <para>
        Number of buffer hits in all indexes on this table
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>toast_blks_read</structfield> <type>bigint</type>
       </para>
       <para>
        Number of disk blocks read from this table's TOAST table (if any)
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>toast_blks_hit</structfield> <type>bigint</type>
       </para>
       <para>
        Number of buffer hits in this table's TOAST table (if any)
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>tidx_blks_read</structfield> <type>bigint</type>
       </para>
       <para>
        Number of disk blocks read from this table's TOAST table indexes (if any)
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>tidx_blks_hit</structfield> <type>bigint</type>
       </para>
       <para>
        Number of buffer hits in this table's TOAST table indexes (if any)
       </para></entry>
      </row>
     </tbody>
    </tgroup>
   </table>
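
   <para>
    As an illustration, a per-table buffer cache hit percentage can be
    derived from these counters (example query only, not part of the
    statistics system):
<programlisting>
SELECT schemaname, relname,
       round(100.0 * heap_blks_hit
             / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS heap_hit_pct
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;
</programlisting>
   </para>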

  </sect2>
|
|
|
|
|
|
|
|
<sect2 id="monitoring-pg-statio-all-indexes-view">
|
|
|
|
<title><structname>pg_statio_all_indexes</structname></title>
|
|
|
|
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_statio_all_indexes</primary>
|
|
|
|
</indexterm>
|
|
|
|
  <para>
   The <structname>pg_statio_all_indexes</structname> view will contain
   one row for each index in the current database,
   showing statistics about I/O on that specific index. The
   <structname>pg_statio_user_indexes</structname> and
   <structname>pg_statio_sys_indexes</structname> views
   contain the same information,
   but filtered to only show user and system indexes respectively.
  </para>

  <table id="pg-statio-all-indexes-view" xreflabel="pg_statio_all_indexes">
   <title><structname>pg_statio_all_indexes</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>relid</structfield> <type>oid</type>
      </para>
      <para>
       OID of the table for this index
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>indexrelid</structfield> <type>oid</type>
      </para>
      <para>
       OID of this index
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>schemaname</structfield> <type>name</type>
      </para>
      <para>
       Name of the schema this index is in
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>relname</structfield> <type>name</type>
      </para>
      <para>
       Name of the table for this index
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>indexrelname</structfield> <type>name</type>
      </para>
      <para>
       Name of this index
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>idx_blks_read</structfield> <type>bigint</type>
      </para>
      <para>
       Number of disk blocks read from this index
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>idx_blks_hit</structfield> <type>bigint</type>
      </para>
      <para>
       Number of buffer hits in this index
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
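
  <para>
   For example, these counters can be combined to estimate how often index
   reads were satisfied from the buffer cache. The following query is an
   illustrative sketch (the ratio is approximate, since reads satisfied from
   the operating system's file system cache still count as misses here):
  </para>

<programlisting>
SELECT indexrelname,
       idx_blks_hit,
       idx_blks_read,
       round(idx_blks_hit * 100.0 / nullif(idx_blks_hit + idx_blks_read, 0), 2)
         AS hit_percent
  FROM pg_statio_user_indexes
 ORDER BY idx_blks_read DESC;
</programlisting>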
 </sect2>

 <sect2 id="monitoring-pg-statio-all-sequences-view">
  <title><structname>pg_statio_all_sequences</structname></title>

  <indexterm>
   <primary>pg_statio_all_sequences</primary>
  </indexterm>

  <para>
   The <structname>pg_statio_all_sequences</structname> view will contain
   one row for each sequence in the current database,
   showing statistics about I/O on that specific sequence.
  </para>

  <table id="pg-statio-all-sequences-view" xreflabel="pg_statio_all_sequences">
   <title><structname>pg_statio_all_sequences</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>relid</structfield> <type>oid</type>
      </para>
      <para>
       OID of a sequence
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>schemaname</structfield> <type>name</type>
      </para>
      <para>
       Name of the schema this sequence is in
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>relname</structfield> <type>name</type>
      </para>
      <para>
       Name of this sequence
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>blks_read</structfield> <type>bigint</type>
      </para>
      <para>
       Number of disk blocks read from this sequence
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>blks_hit</structfield> <type>bigint</type>
      </para>
      <para>
       Number of buffer hits in this sequence
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
 </sect2>

 <sect2 id="monitoring-pg-stat-user-functions-view">
  <title><structname>pg_stat_user_functions</structname></title>

  <indexterm>
   <primary>pg_stat_user_functions</primary>
  </indexterm>

  <para>
   The <structname>pg_stat_user_functions</structname> view will contain
   one row for each tracked function, showing statistics about executions of
   that function. The <xref linkend="guc-track-functions"/> parameter
   controls exactly which functions are tracked.
  </para>

  <table id="pg-stat-user-functions-view" xreflabel="pg_stat_user_functions">
   <title><structname>pg_stat_user_functions</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>funcid</structfield> <type>oid</type>
      </para>
      <para>
       OID of a function
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>schemaname</structfield> <type>name</type>
      </para>
      <para>
       Name of the schema this function is in
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>funcname</structfield> <type>name</type>
      </para>
      <para>
       Name of this function
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>calls</structfield> <type>bigint</type>
      </para>
      <para>
       Number of times this function has been called
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>total_time</structfield> <type>double precision</type>
      </para>
      <para>
       Total time spent in this function and all other functions
       called by it, in milliseconds
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>self_time</structfield> <type>double precision</type>
      </para>
      <para>
       Total time spent in this function itself, not including
       other functions called by it, in milliseconds
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
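
  <para>
   For example, a quick way to find the most expensive tracked functions is
   to sort this view by time spent. The following query is an illustrative
   sketch:
  </para>

<programlisting>
SELECT funcname,
       calls,
       total_time,
       self_time,
       self_time / nullif(calls, 0) AS avg_self_ms
  FROM pg_stat_user_functions
 ORDER BY self_time DESC
 LIMIT 10;
</programlisting>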
 </sect2>

 <sect2 id="monitoring-pg-stat-slru-view">
  <title><structname>pg_stat_slru</structname></title>

  <indexterm>
   <primary>SLRU</primary>
  </indexterm>

  <indexterm>
   <primary>pg_stat_slru</primary>
  </indexterm>

  <para>
   <productname>PostgreSQL</productname> accesses certain on-disk information
   via <firstterm>SLRU</firstterm> (simple least-recently-used) caches.
   The <structname>pg_stat_slru</structname> view will contain
   one row for each tracked SLRU cache, showing statistics about access
   to cached pages.
  </para>

  <table id="pg-stat-slru-view" xreflabel="pg_stat_slru">
   <title><structname>pg_stat_slru</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>name</structfield> <type>text</type>
      </para>
      <para>
       Name of the SLRU
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>blks_zeroed</structfield> <type>bigint</type>
      </para>
      <para>
       Number of blocks zeroed during initializations
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>blks_hit</structfield> <type>bigint</type>
      </para>
      <para>
       Number of times disk blocks were found already in the SLRU,
       so that a read was not necessary (this only includes hits in the
       SLRU, not the operating system's file system cache)
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>blks_read</structfield> <type>bigint</type>
      </para>
      <para>
       Number of disk blocks read for this SLRU
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>blks_written</structfield> <type>bigint</type>
      </para>
      <para>
       Number of disk blocks written for this SLRU
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>blks_exists</structfield> <type>bigint</type>
      </para>
      <para>
       Number of blocks checked for existence for this SLRU
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>flushes</structfield> <type>bigint</type>
      </para>
      <para>
       Number of flushes of dirty data for this SLRU
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>truncates</structfield> <type>bigint</type>
      </para>
      <para>
       Number of truncates for this SLRU
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>stats_reset</structfield> <type>timestamp with time zone</type>
      </para>
      <para>
       Time at which these statistics were last reset
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
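
  <para>
   For example, per-cache hit ratios can be computed directly from this
   view. The following query is an illustrative sketch:
  </para>

<programlisting>
SELECT name,
       blks_hit,
       blks_read,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2)
         AS hit_percent
  FROM pg_stat_slru
 ORDER BY blks_read DESC;
</programlisting>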
 </sect2>

 <sect2 id="monitoring-stats-functions">
  <title>Statistics Functions</title>

  <para>
   Other ways of looking at the statistics can be set up by writing
   queries that use the same underlying statistics access functions used by
   the standard views shown above. For details such as the functions' names,
   consult the definitions of the standard views. (For example, in
   <application>psql</application> you could issue <literal>\d+ pg_stat_activity</literal>.)
   The access functions for per-database statistics take a database OID as an
   argument to identify which database to report on.
   The per-table and per-index functions take a table or index OID.
   The functions for per-function statistics take a function OID.
   Note that only tables, indexes, and functions in the current database
   can be seen with these functions.
  </para>
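
  <para>
   For example, to look at just the current session by calling the access
   functions directly (both functions are described in
   <xref linkend="monitoring-stats-funcs-table"/>):
  </para>

<programlisting>
SELECT * FROM pg_stat_get_activity(pg_backend_pid());
</programlisting>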

  <para>
   Additional functions related to the cumulative statistics system are listed
   in <xref linkend="monitoring-stats-funcs-table"/>.
  </para>
|
2001-10-17 01:57:06 +02:00
|
|
|
|
2020-05-07 20:25:18 +02:00
|
|
|
<table id="monitoring-stats-funcs-table">
|
|
|
|
<title>Additional Statistics Functions</title>
|
|
|
|
<tgroup cols="1">
|
|
|
|
<thead>
|
|
|
|
<row>
|
|
|
|
<entry role="func_table_entry"><para role="func_signature">
|
|
|
|
Function
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Description
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
</thead>
|
2010-08-08 18:27:06 +02:00
|
|
|
|
2020-05-07 20:25:18 +02:00
|
|
|
<tbody>
|
|
|
|
<row>
|
2008-05-07 16:41:56 +02:00
|
|
|
<!-- See also the entry for this in func.sgml -->
|
2020-05-07 20:25:18 +02:00
|
|
|
<entry role="func_table_entry"><para role="func_signature">
|
|
|
|
<function>pg_backend_pid</function> ()
|
|
|
|
<returnvalue>integer</returnvalue>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Returns the process ID of the server process attached to the current
|
|
|
|
session.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
2002-08-04 21:51:30 +02:00
|
|
|
|
2020-05-07 20:25:18 +02:00
|
|
|
<row>
|
|
|
|
<entry role="func_table_entry"><para role="func_signature">
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_get_activity</primary>
|
|
|
|
</indexterm>
|
|
|
|
<function>pg_stat_get_activity</function> ( <type>integer</type> )
|
|
|
|
<returnvalue>setof record</returnvalue>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Returns a record of information about the backend with the specified
|
|
|
|
process ID, or one record for each active backend in the system
|
|
|
|
if <literal>NULL</literal> is specified. The fields returned are a
|
|
|
|
subset of those in the <structname>pg_stat_activity</structname> view.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
2008-05-07 16:41:56 +02:00
|
|
|
|
2020-05-07 20:25:18 +02:00
|
|
|
<row>
|
|
|
|
<entry role="func_table_entry"><para role="func_signature">
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_get_snapshot_timestamp</primary>
|
|
|
|
</indexterm>
|
|
|
|
<function>pg_stat_get_snapshot_timestamp</function> ()
|
|
|
|
<returnvalue>timestamp with time zone</returnvalue>
|
|
|
|
</para>
|
|
|
|
<para>
|
2022-04-07 06:29:46 +02:00
|
|
|
Returns the timestamp of the current statistics snapshot, or NULL if
|
2022-04-08 06:35:35 +02:00
|
|
|
no statistics snapshot has been taken. A snapshot is taken the first
|
|
|
|
time cumulative statistics are accessed in a transaction if
|
|
|
|
<varname>stats_fetch_consistency</varname> is set to
|
|
|
|
<literal>snapshot</literal>
|
2020-05-07 20:25:18 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
2015-02-20 03:36:50 +01:00
|
|
|
|
2020-05-07 20:25:18 +02:00
|
|
|
<row>
|
|
|
|
<entry role="func_table_entry"><para role="func_signature">
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_clear_snapshot</primary>
|
|
|
|
</indexterm>
|
|
|
|
<function>pg_stat_clear_snapshot</function> ()
|
|
|
|
<returnvalue>void</returnvalue>
|
|
|
|
</para>
|
|
|
|
<para>
|
2022-04-08 06:35:35 +02:00
|
|
|
Discards the current statistics snapshot or cached information.
|
2020-05-07 20:25:18 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
2007-02-08 00:11:30 +01:00
|
|
|
|
2020-05-07 20:25:18 +02:00
|
|
|
<row>
|
|
|
|
<entry role="func_table_entry"><para role="func_signature">
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_reset</primary>
|
|
|
|
</indexterm>
|
|
|
|
<function>pg_stat_reset</function> ()
|
|
|
|
<returnvalue>void</returnvalue>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Resets all statistics counters for the current database to zero.
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
This function is restricted to superusers by default, but other users
|
|
|
|
can be granted EXECUTE to run the function.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
2010-01-19 15:11:32 +01:00
|
|
|
|
2020-05-07 20:25:18 +02:00
|
|
|
<row>
|
|
|
|
<entry role="func_table_entry"><para role="func_signature">
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_reset_shared</primary>
|
|
|
|
</indexterm>
|
|
|
|
<function>pg_stat_reset_shared</function> ( <type>text</type> )
|
|
|
|
<returnvalue>void</returnvalue>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Resets some cluster-wide statistics counters to zero, depending on the
|
|
|
|
argument. The argument can be <literal>bgwriter</literal> to reset
|
|
|
|
all the counters shown in
|
|
|
|
the <structname>pg_stat_bgwriter</structname>
|
2020-10-02 03:17:11 +02:00
|
|
|
view, <literal>archiver</literal> to reset all the counters shown in
|
2022-04-07 09:28:40 +02:00
|
|
|
the <structname>pg_stat_archiver</structname> view,
|
|
|
|
<literal>wal</literal> to reset all the counters shown in the
|
|
|
|
<structname>pg_stat_wal</structname> view or
|
|
|
|
<literal>recovery_prefetch</literal> to reset all the counters shown
|
|
|
|
in the <structname>pg_stat_recovery_prefetch</structname> view.
|
2020-05-07 20:25:18 +02:00
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
This function is restricted to superusers by default, but other users
|
|
|
|
can be granted EXECUTE to run the function.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
2010-01-28 15:25:41 +01:00
|
|
|
|
2020-05-07 20:25:18 +02:00
|
|
|
<row>
|
|
|
|
<entry role="func_table_entry"><para role="func_signature">
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_reset_single_table_counters</primary>
|
|
|
|
</indexterm>
|
|
|
|
<function>pg_stat_reset_single_table_counters</function> ( <type>oid</type> )
|
|
|
|
<returnvalue>void</returnvalue>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Resets statistics for a single table or index in the current database
|
2021-09-02 07:01:06 +02:00
|
|
|
or shared across all databases in the cluster to zero.
|
2020-05-07 20:25:18 +02:00
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
This function is restricted to superusers by default, but other users
|
|
|
|
can be granted EXECUTE to run the function.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
2010-01-28 15:25:41 +01:00
|
|
|
|
2020-05-07 20:25:18 +02:00
|
|
|
<row>
|
|
|
|
<entry role="func_table_entry"><para role="func_signature">
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_reset_single_function_counters</primary>
|
|
|
|
</indexterm>
|
|
|
|
<function>pg_stat_reset_single_function_counters</function> ( <type>oid</type> )
|
|
|
|
<returnvalue>void</returnvalue>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Resets statistics for a single function in the current database to
|
|
|
|
zero.
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
This function is restricted to superusers by default, but other users
|
|
|
|
can be granted EXECUTE to run the function.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
Collect statistics about SLRU caches
There's a number of SLRU caches used to access important data like clog,
commit timestamps, multixact, asynchronous notifications, etc. Until now
we had no easy way to monitor these shared caches, compute hit ratios,
number of reads/writes etc.
This commit extends the statistics collector to track this information
for a predefined list of SLRUs, and also introduces a new system view
pg_stat_slru displaying the data.
The list of built-in SLRUs is fixed, but additional SLRUs may be defined
in extensions. Unfortunately, there's no suitable registry of SLRUs, so
this patch simply defines a fixed list of SLRUs with entries for the
built-in ones and one entry for all additional SLRUs. Extensions adding
their own SLRU are fairly rare, so this seems acceptable.
This patch only allows monitoring of SLRUs, not tuning. The SLRU sizes
are still fixed (hard-coded in the code) and it's not entirely clear
which of the SLRUs might need a GUC to tune size. In a way, allowing us
to determine that is one of the goals of this patch.
Bump catversion as the patch introduces new functions and system view.
Author: Tomas Vondra
Reviewed-by: Alvaro Herrera
Discussion: https://www.postgresql.org/message-id/flat/20200119143707.gyinppnigokesjok@development
2020-04-02 02:11:38 +02:00
|
|
|
|
2020-05-07 20:25:18 +02:00
|
|
|
<row>
|
|
|
|
<entry role="func_table_entry"><para role="func_signature">
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_reset_slru</primary>
|
|
|
|
</indexterm>
|
|
|
|
<function>pg_stat_reset_slru</function> ( <type>text</type> )
|
|
|
|
<returnvalue>void</returnvalue>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Resets statistics to zero for a single SLRU cache, or for all SLRUs in
|
|
|
|
the cluster. If the argument is NULL, all counters shown in
|
|
|
|
the <structname>pg_stat_slru</structname> view for all SLRU caches are
|
Rename SLRU structures and associated LWLocks.
Originally, the names assigned to SLRUs had no purpose other than
being shmem lookup keys, so not a lot of thought went into them.
As of v13, though, we're exposing them in the pg_stat_slru view and
the pg_stat_reset_slru function, so it seems advisable to take a bit
more care. Rename them to names based on the associated on-disk
storage directories (which fortunately we *did* think about, to some
extent; since those are also visible to DBAs, consistency seems like
a good thing). Also rename the associated LWLocks, since those names
are likewise user-exposed now as wait event names.
For the most part I only touched symbols used in the respective modules'
SimpleLruInit() calls, not the names of other related objects. This
renaming could have been taken further, and maybe someday we will do so.
But for now it seems undesirable to change the names of any globally
visible functions or structs, so some inconsistency is unavoidable.
(But I *did* terminate "oldserxid" with prejudice, as I found that
name both unreadable and not descriptive of the SLRU's contents.)
Table 27.12 needs re-alphabetization now, but I'll leave that till
after the other LWLock renamings I have in mind.
Discussion: https://postgr.es/m/28683.1589405363@sss.pgh.pa.us
2020-05-15 20:28:19 +02:00
        reset.  The argument can be one of
        <literal>CommitTs</literal>,
        <literal>MultiXactMember</literal>,
        <literal>MultiXactOffset</literal>,
        <literal>Notify</literal>,
        <literal>Serial</literal>,
        <literal>Subtrans</literal>, or
        <literal>Xact</literal>
        to reset the counters for only that entry.
        If the argument is <literal>other</literal> (or indeed, any
        unrecognized name), then the counters for all other SLRU caches, such
        as extension-defined caches, are reset.
       </para>
       <para>
        This function is restricted to superusers by default, but other users
        can be granted EXECUTE to run the function.
       </para></entry>
      </row>

      <row>
       <entry role="func_table_entry"><para role="func_signature">
        <indexterm>
         <primary>pg_stat_reset_replication_slot</primary>
        </indexterm>
        <function>pg_stat_reset_replication_slot</function> ( <type>text</type> )
        <returnvalue>void</returnvalue>
       </para>
       <para>
        Resets statistics of the replication slot defined by the argument. If
        the argument is <literal>NULL</literal>, resets statistics for all
        the replication slots.
       </para>
       <para>
        This function is restricted to superusers by default, but other users
        can be granted EXECUTE to run the function.
       </para></entry>
      </row>

      <row>
       <entry role="func_table_entry"><para role="func_signature">
        <indexterm>
         <primary>pg_stat_reset_subscription_stats</primary>
        </indexterm>
        <function>pg_stat_reset_subscription_stats</function> ( <type>oid</type> )
        <returnvalue>void</returnvalue>
       </para>
       <para>
        Resets statistics for a single subscription shown in the
        <structname>pg_stat_subscription_stats</structname> view to zero. If
        the argument is <literal>NULL</literal>, resets statistics for all
        subscriptions.
       </para>
       <para>
        This function is restricted to superusers by default, but other users
        can be granted EXECUTE to run the function.
       </para></entry>
      </row>
     </tbody>
    </tgroup>
   </table>
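
  <para>
   For example, the counters for just the <literal>Xact</literal> SLRU
   cache could be reset with:
<programlisting>
SELECT pg_stat_reset_slru('Xact');
</programlisting>
  </para>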

  <para>
   <function>pg_stat_get_activity</function>, the underlying function of
   the <structname>pg_stat_activity</structname> view, returns a set of records
   containing all the available information about each backend process.
   Sometimes it may be more convenient to obtain just a subset of this
   information.  In such cases, an older set of per-backend statistics
   access functions can be used; these are shown in
   <xref linkend="monitoring-stats-backend-funcs-table"/>.
   These access functions use a backend ID number, which ranges from one
   to the number of currently active backends.
   The function <function>pg_stat_get_backend_idset</function> provides a
   convenient way to generate one row for each active backend for
   invoking these functions.  For example, to show the <acronym>PID</acronym>s and
   current queries of all backends:

<programlisting>
SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
       pg_stat_get_backend_activity(s.backendid) AS query
FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;
</programlisting>
  </para>

  <table id="monitoring-stats-backend-funcs-table">
   <title>Per-Backend Statistics Functions</title>

   <tgroup cols="1">
    <thead>
     <row>
      <entry role="func_table_entry"><para role="func_signature">
       Function
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_idset</primary>
       </indexterm>
       <function>pg_stat_get_backend_idset</function> ()
       <returnvalue>setof integer</returnvalue>
      </para>
      <para>
       Returns the set of currently active backend ID numbers (from 1 to the
       number of active backends).
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_activity</primary>
       </indexterm>
       <function>pg_stat_get_backend_activity</function> ( <type>integer</type> )
       <returnvalue>text</returnvalue>
      </para>
      <para>
       Returns the text of this backend's most recent query.
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_activity_start</primary>
       </indexterm>
       <function>pg_stat_get_backend_activity_start</function> ( <type>integer</type> )
       <returnvalue>timestamp with time zone</returnvalue>
      </para>
      <para>
       Returns the time when the backend's most recent query was started.
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_client_addr</primary>
       </indexterm>
       <function>pg_stat_get_backend_client_addr</function> ( <type>integer</type> )
       <returnvalue>inet</returnvalue>
      </para>
      <para>
       Returns the IP address of the client connected to this backend.
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_client_port</primary>
       </indexterm>
       <function>pg_stat_get_backend_client_port</function> ( <type>integer</type> )
       <returnvalue>integer</returnvalue>
      </para>
      <para>
       Returns the TCP port number that the client is using for communication.
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_dbid</primary>
       </indexterm>
       <function>pg_stat_get_backend_dbid</function> ( <type>integer</type> )
       <returnvalue>oid</returnvalue>
      </para>
      <para>
       Returns the OID of the database this backend is connected to.
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_pid</primary>
       </indexterm>
       <function>pg_stat_get_backend_pid</function> ( <type>integer</type> )
       <returnvalue>integer</returnvalue>
      </para>
      <para>
       Returns the process ID of this backend.
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_start</primary>
       </indexterm>
       <function>pg_stat_get_backend_start</function> ( <type>integer</type> )
       <returnvalue>timestamp with time zone</returnvalue>
      </para>
      <para>
       Returns the time when this process was started.
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_userid</primary>
       </indexterm>
       <function>pg_stat_get_backend_userid</function> ( <type>integer</type> )
       <returnvalue>oid</returnvalue>
      </para>
      <para>
       Returns the OID of the user logged into this backend.
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_wait_event_type</primary>
       </indexterm>
       <function>pg_stat_get_backend_wait_event_type</function> ( <type>integer</type> )
       <returnvalue>text</returnvalue>
      </para>
      <para>
       Returns the wait event type name if this backend is currently waiting,
       otherwise NULL.  See <xref linkend="wait-event-table"/> for details.
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_wait_event</primary>
       </indexterm>
       <function>pg_stat_get_backend_wait_event</function> ( <type>integer</type> )
       <returnvalue>text</returnvalue>
      </para>
      <para>
       Returns the wait event name if this backend is currently waiting,
       otherwise NULL.  See <xref linkend="wait-event-activity-table"/> through
       <xref linkend="wait-event-timeout-table"/>.
      </para></entry>
     </row>

     <row>
      <entry role="func_table_entry"><para role="func_signature">
       <indexterm>
        <primary>pg_stat_get_backend_xact_start</primary>
       </indexterm>
       <function>pg_stat_get_backend_xact_start</function> ( <type>integer</type> )
       <returnvalue>timestamp with time zone</returnvalue>
      </para>
      <para>
       Returns the time when the backend's current transaction was started.
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
</sect2>
</sect1>

 <sect1 id="monitoring-locks">
  <title>Viewing Locks</title>

  <indexterm zone="monitoring-locks">
   <primary>lock</primary>
   <secondary>monitoring</secondary>
  </indexterm>

  <para>
   Another useful tool for monitoring database activity is the
   <structname>pg_locks</structname> system table.  It allows the
   database administrator to view information about the outstanding
   locks in the lock manager.  For example, this capability can be used
   to:

   <itemizedlist>
    <listitem>
     <para>
      View all the locks currently outstanding, all the locks on
      relations in a particular database, all the locks on a
      particular relation, or all the locks held by a particular
      <productname>PostgreSQL</productname> session.
     </para>
    </listitem>

    <listitem>
     <para>
      Determine the relation in the current database with the most
      ungranted locks (which might be a source of contention among
      database clients).
     </para>
    </listitem>

    <listitem>
     <para>
      Determine the effect of lock contention on overall database
      performance, as well as the extent to which contention varies
      with overall database traffic.
     </para>
    </listitem>
   </itemizedlist>

   Details of the <structname>pg_locks</structname> view appear in
   <xref linkend="view-pg-locks"/>.
   For more information on locking and managing concurrency with
   <productname>PostgreSQL</productname>, refer to <xref linkend="mvcc"/>.
  </para>
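
  <para>
   For example, the relation in the current database with the most
   ungranted locks could be found with a query like:
<programlisting>
SELECT relation::regclass, count(*) AS ungranted_locks
FROM pg_locks
WHERE NOT granted AND relation IS NOT NULL
GROUP BY relation
ORDER BY ungranted_locks DESC
LIMIT 1;
</programlisting>
  </para>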
</sect1>

 <sect1 id="progress-reporting">
  <title>Progress Reporting</title>

  <para>
   <productname>PostgreSQL</productname> has the ability to report the progress of
   certain commands during command execution.  Currently, the only commands
   which support progress reporting are <command>ANALYZE</command>,
   <command>CLUSTER</command>,
   <command>CREATE INDEX</command>, <command>VACUUM</command>,
   <command>COPY</command>,
   and <xref linkend="protocol-replication-base-backup"/> (i.e., the
   replication command that <xref linkend="app-pgbasebackup"/> issues to take
   a base backup).
   This may be expanded in the future.
  </para>

  <sect2 id="analyze-progress-reporting">
   <title>ANALYZE Progress Reporting</title>

   <indexterm>
    <primary>pg_stat_progress_analyze</primary>
   </indexterm>

   <para>
    Whenever <command>ANALYZE</command> is running, the
    <structname>pg_stat_progress_analyze</structname> view will contain a
    row for each backend that is currently running that command.  The tables
    below describe the information that will be reported and provide
    information about how to interpret it.
   </para>

   <table id="pg-stat-progress-analyze-view" xreflabel="pg_stat_progress_analyze">
    <title><structname>pg_stat_progress_analyze</structname> View</title>
    <tgroup cols="1">
     <thead>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        Column Type
       </para>
       <para>
        Description
       </para></entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>pid</structfield> <type>integer</type>
       </para>
       <para>
        Process ID of backend.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>datid</structfield> <type>oid</type>
       </para>
       <para>
        OID of the database to which this backend is connected.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>datname</structfield> <type>name</type>
       </para>
       <para>
        Name of the database to which this backend is connected.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>relid</structfield> <type>oid</type>
       </para>
       <para>
        OID of the table being analyzed.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>phase</structfield> <type>text</type>
       </para>
       <para>
        Current processing phase. See <xref linkend="analyze-phases"/>.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sample_blks_total</structfield> <type>bigint</type>
       </para>
       <para>
        Total number of heap blocks that will be sampled.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>sample_blks_scanned</structfield> <type>bigint</type>
       </para>
       <para>
        Number of heap blocks scanned.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>ext_stats_total</structfield> <type>bigint</type>
       </para>
       <para>
        Number of extended statistics.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>ext_stats_computed</structfield> <type>bigint</type>
       </para>
       <para>
        Number of extended statistics computed. This counter only advances
        when the phase is <literal>computing extended statistics</literal>.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>child_tables_total</structfield> <type>bigint</type>
       </para>
       <para>
        Number of child tables.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>child_tables_done</structfield> <type>bigint</type>
       </para>
       <para>
        Number of child tables scanned. This counter only advances when the
        phase is <literal>acquiring inherited sample rows</literal>.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>current_child_table_relid</structfield> <type>oid</type>
       </para>
       <para>
        OID of the child table currently being scanned. This field is
        only valid when the phase is
        <literal>acquiring inherited sample rows</literal>.
       </para></entry>
      </row>
     </tbody>
    </tgroup>
   </table>

   <table id="analyze-phases">
    <title>ANALYZE phases</title>
    <tgroup cols="2">
     <colspec colname="col1" colwidth="1*"/>
     <colspec colname="col2" colwidth="2*"/>
     <thead>
      <row>
       <entry>Phase</entry>
       <entry>Description</entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry><literal>initializing</literal></entry>
       <entry>
        The command is preparing to begin scanning the heap.  This phase is
        expected to be very brief.
       </entry>
      </row>
      <row>
       <entry><literal>acquiring sample rows</literal></entry>
       <entry>
        The command is currently scanning the table given by
        <structfield>relid</structfield> to obtain sample rows.
       </entry>
      </row>
      <row>
       <entry><literal>acquiring inherited sample rows</literal></entry>
       <entry>
        The command is currently scanning child tables to obtain sample rows.
        Columns <structfield>child_tables_total</structfield>,
        <structfield>child_tables_done</structfield>, and
        <structfield>current_child_table_relid</structfield> contain the
        progress information for this phase.
       </entry>
      </row>
      <row>
       <entry><literal>computing statistics</literal></entry>
       <entry>
        The command is computing statistics from the sample rows obtained
        during the table scan.
       </entry>
      </row>
      <row>
       <entry><literal>computing extended statistics</literal></entry>
       <entry>
        The command is computing extended statistics from the sample rows
        obtained during the table scan.
       </entry>
      </row>
      <row>
       <entry><literal>finalizing analyze</literal></entry>
       <entry>
        The command is updating <structname>pg_class</structname>.  When this
        phase is completed, <command>ANALYZE</command> will end.
       </entry>
      </row>
     </tbody>
    </tgroup>
   </table>
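
   <para>
    For example, the progress of the block-sampling step could be watched
    with a query like:
<programlisting>
SELECT pid, phase,
       round(100.0 * sample_blks_scanned / nullif(sample_blks_total, 0), 1)
         AS "sample %"
  FROM pg_stat_progress_analyze;
</programlisting>
   </para>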

   <note>
    <para>
     Note that when <command>ANALYZE</command> is run on a partitioned table,
     all of its partitions are also recursively analyzed.
     In that case, <command>ANALYZE</command> progress is reported first for
     the parent table, while its inheritance statistics are collected,
     followed by progress for each partition.
    </para>
   </note>
  </sect2>

  <sect2 id="create-index-progress-reporting">
   <title>CREATE INDEX Progress Reporting</title>

   <indexterm>
    <primary>pg_stat_progress_create_index</primary>
   </indexterm>

   <para>
    Whenever <command>CREATE INDEX</command> or <command>REINDEX</command> is running, the
    <structname>pg_stat_progress_create_index</structname> view will contain
    one row for each backend that is currently creating indexes.  The tables
    below describe the information that will be reported and provide information
    about how to interpret it.
   </para>

   <table id="pg-stat-progress-create-index-view" xreflabel="pg_stat_progress_create_index">
    <title><structname>pg_stat_progress_create_index</structname> View</title>
    <tgroup cols="1">
     <thead>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        Column Type
       </para>
       <para>
        Description
       </para></entry>
      </row>
     </thead>

     <tbody>
      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>pid</structfield> <type>integer</type>
       </para>
       <para>
        Process ID of backend.
       </para></entry>
      </row>

      <row>
       <entry role="catalog_table_entry"><para role="column_definition">
        <structfield>datid</structfield> <type>oid</type>
       </para>
       <para>
        OID of the database to which this backend is connected.
       </para></entry>
      </row>
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>datname</structfield> <type>name</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Name of the database to which this backend is connected.
|
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>relid</structfield> <type>oid</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
OID of the table on which the index is being created.
|
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
2019-04-07 11:30:14 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>index_relid</structfield> <type>oid</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
OID of the index being created or reindexed. During a
|
|
|
|
non-concurrent <command>CREATE INDEX</command>, this is 0.
|
|
|
|
</para></entry>
|
2019-04-07 11:30:14 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
2019-06-04 09:16:02 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>command</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2019-06-04 09:16:02 +02:00
|
|
|
The command that is running: <literal>CREATE INDEX</literal>,
|
|
|
|
<literal>CREATE INDEX CONCURRENTLY</literal>,
|
|
|
|
<literal>REINDEX</literal>, or <literal>REINDEX CONCURRENTLY</literal>.
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
2019-06-04 09:16:02 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>phase</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Current processing phase of index creation. See <xref linkend="create-index-phases"/>.
|
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>lockers_total</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Total number of lockers to wait for, when applicable.
|
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>lockers_done</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of lockers already waited for.
|
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>current_locker_pid</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Process ID of the locker currently being waited for.
|
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>blocks_total</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Total number of blocks to be processed in the current phase.
|
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>blocks_done</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of blocks already processed in the current phase.
|
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>tuples_total</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Total number of tuples to be processed in the current phase.
|
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>tuples_done</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of tuples already processed in the current phase.
|
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
<row>
|
2020-05-14 05:03:39 +02:00
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>partitions_total</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
When creating an index on a partitioned table, this column is set to
|
|
|
|
the total number of partitions on which the index is to be created.
|
2021-02-20 02:25:14 +01:00
|
|
|
This field is <literal>0</literal> during a <literal>REINDEX</literal>.
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
</row>
|
2020-05-14 05:03:39 +02:00
|
|
|
|
Report progress of CREATE INDEX operations
This uses the progress reporting infrastructure added by c16dc1aca5e0,
adding support for CREATE INDEX and CREATE INDEX CONCURRENTLY.
There are two pieces to this: one is index-AM-agnostic, and the other is
AM-specific. The latter is fairly elaborate for btrees, including
reportage for parallel index builds and the separate phases that btree
index creation uses; other index AMs, which are much simpler in their
building procedures, have simplistic reporting only, but that seems
sufficient, at least for non-concurrent builds.
The index-AM-agnostic part is fairly complete, providing insight into
the CONCURRENTLY wait phases as well as block-based progress during the
index validation table scan. (The index validation index scan requires
patching each AM, which has not been included here.)
Reviewers: Rahila Syed, Pavan Deolasee, Tatsuro Yamada
Discussion: https://postgr.es/m/20181220220022.mg63bhk26zdpvmcj@alvherre.pgsql
2019-04-02 20:18:08 +02:00
|
|
|
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>partitions_done</structfield> <type>bigint</type>
      </para>
      <para>
       When creating an index on a partitioned table, this column is set to
       the number of partitions on which the index has been created.
       This field is <literal>0</literal> during a <literal>REINDEX</literal>.
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
  <table id="create-index-phases">
   <title>CREATE INDEX Phases</title>
   <tgroup cols="2">
    <colspec colname="col1" colwidth="1*"/>
    <colspec colname="col2" colwidth="2*"/>
    <thead>
     <row>
      <entry>Phase</entry>
      <entry>Description</entry>
     </row>
    </thead>
    <tbody>
     <row>
      <entry><literal>initializing</literal></entry>
      <entry>
       <command>CREATE INDEX</command> or <command>REINDEX</command> is preparing to create the index.  This
       phase is expected to be very brief.
      </entry>
     </row>
     <row>
      <entry><literal>waiting for writers before build</literal></entry>
      <entry>
       <command>CREATE INDEX CONCURRENTLY</command> or <command>REINDEX CONCURRENTLY</command> is waiting for transactions
       with write locks that can potentially see the table to finish.
       This phase is skipped when not in concurrent mode.
       Columns <structname>lockers_total</structname>, <structname>lockers_done</structname>
       and <structname>current_locker_pid</structname> contain the progress
       information for this phase.
      </entry>
     </row>
     <row>
      <entry><literal>building index</literal></entry>
      <entry>
       The index is being built by the access method-specific code.  In this phase,
       access methods that support progress reporting fill in their own progress data,
       and the subphase is indicated in this column.  Typically,
       <structname>blocks_total</structname> and <structname>blocks_done</structname>
       will contain progress data, as well as potentially
       <structname>tuples_total</structname> and <structname>tuples_done</structname>.
      </entry>
     </row>
     <row>
      <entry><literal>waiting for writers before validation</literal></entry>
      <entry>
       <command>CREATE INDEX CONCURRENTLY</command> or <command>REINDEX CONCURRENTLY</command> is waiting for transactions
       with write locks that can potentially write into the table to finish.
       This phase is skipped when not in concurrent mode.
       Columns <structname>lockers_total</structname>, <structname>lockers_done</structname>
       and <structname>current_locker_pid</structname> contain the progress
       information for this phase.
      </entry>
     </row>
     <row>
      <entry><literal>index validation: scanning index</literal></entry>
      <entry>
       <command>CREATE INDEX CONCURRENTLY</command> is scanning the index searching
       for tuples that need to be validated.
       This phase is skipped when not in concurrent mode.
       Columns <structname>blocks_total</structname> (set to the total size of the index)
       and <structname>blocks_done</structname> contain the progress information for this phase.
      </entry>
     </row>
     <row>
      <entry><literal>index validation: sorting tuples</literal></entry>
      <entry>
       <command>CREATE INDEX CONCURRENTLY</command> is sorting the output of the
       index scanning phase.
      </entry>
     </row>
     <row>
      <entry><literal>index validation: scanning table</literal></entry>
      <entry>
       <command>CREATE INDEX CONCURRENTLY</command> is scanning the table
       to validate the index tuples collected in the previous two phases.
       This phase is skipped when not in concurrent mode.
       Columns <structname>blocks_total</structname> (set to the total size of the table)
       and <structname>blocks_done</structname> contain the progress information for this phase.
      </entry>
     </row>
     <row>
      <entry><literal>waiting for old snapshots</literal></entry>
      <entry>
       <command>CREATE INDEX CONCURRENTLY</command> or <command>REINDEX CONCURRENTLY</command> is waiting for transactions
       that can potentially see the table to release their snapshots.  This
       phase is skipped when not in concurrent mode.
       Columns <structname>lockers_total</structname>, <structname>lockers_done</structname>
       and <structname>current_locker_pid</structname> contain the progress
       information for this phase.
      </entry>
     </row>
     <row>
      <entry><literal>waiting for readers before marking dead</literal></entry>
      <entry>
       <command>REINDEX CONCURRENTLY</command> is waiting for transactions
       with read locks on the table to finish, before marking the old index dead.
       This phase is skipped when not in concurrent mode.
       Columns <structname>lockers_total</structname>, <structname>lockers_done</structname>
       and <structname>current_locker_pid</structname> contain the progress
       information for this phase.
      </entry>
     </row>
     <row>
      <entry><literal>waiting for readers before dropping</literal></entry>
      <entry>
       <command>REINDEX CONCURRENTLY</command> is waiting for transactions
       with read locks on the table to finish, before dropping the old index.
       This phase is skipped when not in concurrent mode.
       Columns <structname>lockers_total</structname>, <structname>lockers_done</structname>
       and <structname>current_locker_pid</structname> contain the progress
       information for this phase.
      </entry>
     </row>
    </tbody>
   </tgroup>
  </table>
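
  <para>
   For example, one way to watch an ongoing index build is a query such as
   the following (shown here only as an illustration; the percentages are
   meaningful only during phases that report block- or tuple-based progress,
   and the <function>nullif</function> calls guard against division by zero
   when a total is not yet known):
<programlisting>
SELECT pid, phase,
       round(100.0 * blocks_done / nullif(blocks_total, 0), 1) AS "% blocks",
       round(100.0 * tuples_done / nullif(tuples_total, 0), 1) AS "% tuples"
FROM pg_stat_progress_create_index;
</programlisting>
  </para>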
</sect2>
 <sect2 id="vacuum-progress-reporting">
  <title>VACUUM Progress Reporting</title>

  <indexterm>
   <primary>pg_stat_progress_vacuum</primary>
  </indexterm>

  <para>
   Whenever <command>VACUUM</command> is running, the
   <structname>pg_stat_progress_vacuum</structname> view will contain
   one row for each backend (including autovacuum worker processes) that is
   currently vacuuming.  The tables below describe the information
   that will be reported and provide information about how to interpret it.
   Progress for <command>VACUUM FULL</command> commands is reported via
   <structname>pg_stat_progress_cluster</structname>
   because both <command>VACUUM FULL</command> and <command>CLUSTER</command>
   rewrite the table, while regular <command>VACUUM</command> only modifies it
   in place.  See <xref linkend='cluster-progress-reporting'/>.
  </para>
  <table id="pg-stat-progress-vacuum-view" xreflabel="pg_stat_progress_vacuum">
   <title><structname>pg_stat_progress_vacuum</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>pid</structfield> <type>integer</type>
      </para>
      <para>
       Process ID of backend.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>datid</structfield> <type>oid</type>
      </para>
      <para>
       OID of the database to which this backend is connected.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>datname</structfield> <type>name</type>
      </para>
      <para>
       Name of the database to which this backend is connected.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>relid</structfield> <type>oid</type>
      </para>
      <para>
       OID of the table being vacuumed.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>phase</structfield> <type>text</type>
      </para>
      <para>
       Current processing phase of vacuum.  See <xref linkend="vacuum-phases"/>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>heap_blks_total</structfield> <type>bigint</type>
      </para>
      <para>
       Total number of heap blocks in the table.  This number is reported
       as of the beginning of the scan; blocks added later will not be (and
       need not be) visited by this <command>VACUUM</command>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>heap_blks_scanned</structfield> <type>bigint</type>
      </para>
      <para>
       Number of heap blocks scanned.  Because the
       <link linkend="storage-vm">visibility map</link> is used to optimize scans,
       some blocks will be skipped without inspection; skipped blocks are
       included in this total, so that this number will eventually become
       equal to <structfield>heap_blks_total</structfield> when the vacuum is complete.
       This counter only advances when the phase is <literal>scanning heap</literal>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>heap_blks_vacuumed</structfield> <type>bigint</type>
      </para>
      <para>
       Number of heap blocks vacuumed.  Unless the table has no indexes, this
       counter only advances when the phase is <literal>vacuuming heap</literal>.
       Blocks that contain no dead tuples are skipped, so the counter may
       sometimes skip forward in large increments.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>index_vacuum_count</structfield> <type>bigint</type>
      </para>
      <para>
       Number of completed index vacuum cycles.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>max_dead_tuples</structfield> <type>bigint</type>
      </para>
      <para>
       Number of dead tuples that we can store before needing to perform
       an index vacuum cycle, based on
       <xref linkend="guc-maintenance-work-mem"/>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>num_dead_tuples</structfield> <type>bigint</type>
      </para>
      <para>
       Number of dead tuples collected since the last index vacuum cycle.
      </para></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
  <table id="vacuum-phases">
   <title>VACUUM Phases</title>
   <tgroup cols="2">
    <colspec colname="col1" colwidth="1*"/>
    <colspec colname="col2" colwidth="2*"/>
    <thead>
     <row>
      <entry>Phase</entry>
      <entry>Description</entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry><literal>initializing</literal></entry>
      <entry>
       <command>VACUUM</command> is preparing to begin scanning the heap.  This
       phase is expected to be very brief.
      </entry>
     </row>
     <row>
      <entry><literal>scanning heap</literal></entry>
      <entry>
       <command>VACUUM</command> is currently scanning the heap.  It will prune and
       defragment each page if required, and possibly perform freezing
       activity.  The <structfield>heap_blks_scanned</structfield> column can be used
       to monitor the progress of the scan.
      </entry>
     </row>
     <row>
      <entry><literal>vacuuming indexes</literal></entry>
      <entry>
       <command>VACUUM</command> is currently vacuuming the indexes.  If a table has
       any indexes, this will happen at least once per vacuum, after the heap
       has been completely scanned.  It may happen multiple times per vacuum
       if <xref linkend="guc-maintenance-work-mem"/> (or, in the case of autovacuum,
       <xref linkend="guc-autovacuum-work-mem"/> if set) is insufficient to store
       the number of dead tuples found.
      </entry>
     </row>
     <row>
      <entry><literal>vacuuming heap</literal></entry>
      <entry>
       <command>VACUUM</command> is currently vacuuming the heap.  Vacuuming the heap
       is distinct from scanning the heap, and occurs after each instance of
       vacuuming indexes.  If <structfield>heap_blks_scanned</structfield> is less than
       <structfield>heap_blks_total</structfield>, the system will return to scanning
       the heap after this phase is completed; otherwise, it will begin
       cleaning up indexes after this phase is completed.
      </entry>
     </row>
     <row>
      <entry><literal>cleaning up indexes</literal></entry>
      <entry>
       <command>VACUUM</command> is currently cleaning up indexes.  This occurs after
       the heap has been completely scanned and all vacuuming of the indexes
       and the heap has been completed.
      </entry>
     </row>
     <row>
      <entry><literal>truncating heap</literal></entry>
      <entry>
       <command>VACUUM</command> is currently truncating the heap so as to return
       empty pages at the end of the relation to the operating system.  This
       occurs after cleaning up indexes.
      </entry>
     </row>
     <row>
      <entry><literal>performing final cleanup</literal></entry>
      <entry>
       <command>VACUUM</command> is performing final cleanup.  During this phase,
       <command>VACUUM</command> will vacuum the free space map, update statistics
       in <literal>pg_class</literal>, and report statistics to the cumulative
       statistics system.  When this phase is completed, <command>VACUUM</command> will end.
      </entry>
     </row>
    </tbody>
   </tgroup>
  </table>
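
  <para>
   As an illustration, the fraction of the heap already scanned and the fill
   level of the dead-tuple storage can be computed with a query like the
   following (both denominators can be zero early in a vacuum, hence the
   <function>nullif</function> calls):
<programlisting>
SELECT pid, phase,
       round(100.0 * heap_blks_scanned / nullif(heap_blks_total, 0), 1)
         AS "% heap scanned",
       round(100.0 * num_dead_tuples / nullif(max_dead_tuples, 0), 1)
         AS "% dead tuple space used",
       index_vacuum_count
FROM pg_stat_progress_vacuum;
</programlisting>
  </para>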
</sect2>
 <sect2 id="cluster-progress-reporting">
  <title>CLUSTER Progress Reporting</title>

  <indexterm>
   <primary>pg_stat_progress_cluster</primary>
  </indexterm>

  <para>
   Whenever <command>CLUSTER</command> or <command>VACUUM FULL</command> is
   running, the <structname>pg_stat_progress_cluster</structname> view will
   contain a row for each backend that is currently running either command.
   The tables below describe the information that will be reported and
   provide information about how to interpret it.
  </para>
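
  <para>
   For example, a query such as the following (shown here only as an
   illustration; not every phase updates the tuple counters) lists the
   backends that are rewriting tables and their current phase:
<programlisting>
SELECT pid, command, phase, heap_tuples_scanned
FROM pg_stat_progress_cluster;
</programlisting>
  </para>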
  <table id="pg-stat-progress-cluster-view" xreflabel="pg_stat_progress_cluster">
   <title><structname>pg_stat_progress_cluster</structname> View</title>
   <tgroup cols="1">
    <thead>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       Column Type
      </para>
      <para>
       Description
      </para></entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>pid</structfield> <type>integer</type>
      </para>
      <para>
       Process ID of backend.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>datid</structfield> <type>oid</type>
      </para>
      <para>
       OID of the database to which this backend is connected.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>datname</structfield> <type>name</type>
      </para>
      <para>
       Name of the database to which this backend is connected.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>relid</structfield> <type>oid</type>
      </para>
      <para>
       OID of the table being clustered.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>command</structfield> <type>text</type>
      </para>
      <para>
       The command that is running.  Either <literal>CLUSTER</literal> or <literal>VACUUM FULL</literal>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>phase</structfield> <type>text</type>
      </para>
      <para>
       Current processing phase.  See <xref linkend="cluster-phases"/>.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>cluster_index_relid</structfield> <type>oid</type>
      </para>
      <para>
       If the table is being scanned using an index, this is the OID of the
       index being used; otherwise, it is zero.
      </para></entry>
     </row>

     <row>
      <entry role="catalog_table_entry"><para role="column_definition">
       <structfield>heap_tuples_scanned</structfield> <type>bigint</type>
      </para>
      <para>
|
2019-03-25 15:59:04 +01:00
|
|
|
Number of heap tuples scanned.
|
|
|
|
This counter only advances when the phase is
|
|
|
|
<literal>seq scanning heap</literal>,
|
|
|
|
<literal>index scanning heap</literal>
|
|
|
|
or <literal>writing new heap</literal>.
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>heap_tuples_written</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2019-03-25 15:59:04 +01:00
|
|
|
Number of heap tuples written.
|
|
|
|
This counter only advances when the phase is
|
|
|
|
<literal>seq scanning heap</literal>,
|
|
|
|
<literal>index scanning heap</literal>
|
|
|
|
or <literal>writing new heap</literal>.
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>heap_blks_total</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2019-03-25 15:59:04 +01:00
|
|
|
Total number of heap blocks in the table. This number is reported
|
|
|
|
as of the beginning of <literal>seq scanning heap</literal>.
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>heap_blks_scanned</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2019-03-25 15:59:04 +01:00
|
|
|
Number of heap blocks scanned. This counter only advances when the
|
|
|
|
phase is <literal>seq scanning heap</literal>.
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>index_rebuild_count</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2019-03-25 15:59:04 +01:00
|
|
|
Number of indexes rebuilt. This counter only advances when the phase
|
|
|
|
is <literal>rebuilding index</literal>.
|
2020-05-14 05:03:39 +02:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
</tbody>
|
2019-03-25 15:59:04 +01:00
|
|
|
</tgroup>
|
|
|
|
</table>
|
|
|
|
|
|
|
|
<table id="cluster-phases">
|
2019-09-08 10:26:35 +02:00
|
|
|
<title>CLUSTER and VACUUM FULL Phases</title>
|
2019-03-25 15:59:04 +01:00
|
|
|
<tgroup cols="2">
|
2020-05-06 18:23:43 +02:00
|
|
|
<colspec colname="col1" colwidth="1*"/>
|
|
|
|
<colspec colname="col2" colwidth="2*"/>
|
2019-03-25 15:59:04 +01:00
|
|
|
<thead>
|
|
|
|
<row>
|
|
|
|
<entry>Phase</entry>
|
|
|
|
<entry>Description</entry>
|
|
|
|
</row>
|
|
|
|
</thead>
|
|
|
|
|
|
|
|
<tbody>
|
|
|
|
<row>
|
|
|
|
<entry><literal>initializing</literal></entry>
|
|
|
|
<entry>
|
|
|
|
The command is preparing to begin scanning the heap. This phase is
|
|
|
|
expected to be very brief.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>seq scanning heap</literal></entry>
|
|
|
|
<entry>
|
|
|
|
The command is currently scanning the table using a sequential scan.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>index scanning heap</literal></entry>
|
|
|
|
<entry>
|
|
|
|
<command>CLUSTER</command> is currently scanning the table using an index scan.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>sorting tuples</literal></entry>
|
|
|
|
<entry>
|
2019-04-08 22:27:35 +02:00
|
|
|
<command>CLUSTER</command> is currently sorting tuples.
|
2019-03-25 15:59:04 +01:00
|
|
|
</entry>
|
|
|
|
</row>
|
2019-10-28 06:23:42 +01:00
|
|
|
<row>
|
|
|
|
<entry><literal>writing new heap</literal></entry>
|
|
|
|
<entry>
|
|
|
|
<command>CLUSTER</command> is currently writing the new heap.
|
|
|
|
</entry>
|
|
|
|
</row>
|
2019-03-25 15:59:04 +01:00
|
|
|
<row>
|
|
|
|
<entry><literal>swapping relation files</literal></entry>
|
|
|
|
<entry>
|
|
|
|
The command is currently swapping newly-built files into place.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>rebuilding index</literal></entry>
|
|
|
|
<entry>
|
|
|
|
The command is currently rebuilding an index.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>performing final cleanup</literal></entry>
|
|
|
|
<entry>
|
2019-04-08 22:27:35 +02:00
|
|
|
The command is performing final cleanup. When this phase is
|
2019-03-25 15:59:04 +01:00
|
|
|
completed, <command>CLUSTER</command>
|
|
|
|
or <command>VACUUM FULL</command> will end.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
</tbody>
|
|
|
|
</tgroup>
|
|
|
|
</table>
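
  <para>
   For example, the progress of a running <command>CLUSTER</command> can be
   watched with a query such as the following (a sketch; adjust the column
   list as needed):
<programlisting>
SELECT pid, datname, command, phase,
       heap_blks_scanned, heap_blks_total, index_rebuild_count
FROM pg_stat_progress_cluster;
</programlisting>
  </para>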
|
2020-03-03 04:03:43 +01:00
|
|
|
</sect2>
|
|
|
|
|
|
|
|
<sect2 id="basebackup-progress-reporting">
|
|
|
|
<title>Base Backup Progress Reporting</title>
|
|
|
|
|
2020-12-01 09:19:23 +01:00
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_progress_basebackup</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
2020-03-03 04:03:43 +01:00
|
|
|
<para>
|
|
|
|
Whenever an application like <application>pg_basebackup</application>
|
|
|
|
is taking a base backup, the
|
|
|
|
<structname>pg_stat_progress_basebackup</structname>
|
|
|
|
view will contain a row for each WAL sender process that is currently
|
2020-04-14 07:45:43 +02:00
|
|
|
running the <command>BASE_BACKUP</command> replication command
|
2020-03-03 04:03:43 +01:00
|
|
|
and streaming the backup. The tables below describe the information
|
|
|
|
that will be reported and provide information about how to interpret it.
|
|
|
|
</para>
|
|
|
|
|
|
|
|
<table id="pg-stat-progress-basebackup-view" xreflabel="pg_stat_progress_basebackup">
|
|
|
|
<title><structname>pg_stat_progress_basebackup</structname> View</title>
|
2020-05-14 05:03:39 +02:00
|
|
|
<tgroup cols="1">
|
2020-03-03 04:03:43 +01:00
|
|
|
<thead>
|
2020-05-14 05:03:39 +02:00
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
Column Type
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Description
|
|
|
|
</para></entry>
|
2020-03-03 04:03:43 +01:00
|
|
|
</row>
|
|
|
|
</thead>
|
|
|
|
|
2020-05-14 05:03:39 +02:00
|
|
|
<tbody>
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>pid</structfield> <type>integer</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Process ID of a WAL sender process.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>phase</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Current processing phase. See <xref linkend="basebackup-phases"/>.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>backup_total</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Total amount of data that will be streamed. This is estimated and
|
|
|
|
reported as of the beginning of the
|
|
|
|
<literal>streaming database files</literal> phase. Note that
|
|
|
|
this is only an approximation since the database
|
|
|
|
might change during the <literal>streaming database files</literal> phase
|
|
|
|
and WAL files might be included in the backup later. This is always
|
|
|
|
the same value as <structfield>backup_streamed</structfield>
|
|
|
|
once the amount of data streamed exceeds the estimated
|
|
|
|
total size. If the estimation is disabled in
|
|
|
|
<application>pg_basebackup</application>
|
|
|
|
(i.e., the <literal>--no-estimate-size</literal> option is specified),
|
|
|
|
this is <literal>NULL</literal>.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>backup_streamed</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Amount of data streamed. This counter only advances
|
|
|
|
when the phase is <literal>streaming database files</literal> or
|
|
|
|
<literal>transferring wal files</literal>.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>tablespaces_total</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Total number of tablespaces that will be streamed.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>tablespaces_streamed</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of tablespaces streamed. This counter only
|
|
|
|
advances when the phase is <literal>streaming database files</literal>.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
</tbody>
|
2020-03-03 04:03:43 +01:00
|
|
|
</tgroup>
|
|
|
|
</table>
|
|
|
|
|
|
|
|
<table id="basebackup-phases">
|
|
|
|
<title>Base Backup Phases</title>
|
|
|
|
<tgroup cols="2">
|
2020-05-06 18:23:43 +02:00
|
|
|
<colspec colname="col1" colwidth="1*"/>
|
|
|
|
<colspec colname="col2" colwidth="2*"/>
|
2020-03-03 04:03:43 +01:00
|
|
|
<thead>
|
|
|
|
<row>
|
|
|
|
<entry>Phase</entry>
|
|
|
|
<entry>Description</entry>
|
|
|
|
</row>
|
|
|
|
</thead>
|
|
|
|
<tbody>
|
|
|
|
<row>
|
|
|
|
<entry><literal>initializing</literal></entry>
|
|
|
|
<entry>
|
|
|
|
The WAL sender process is preparing to begin the backup.
|
|
|
|
This phase is expected to be very brief.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>waiting for checkpoint to finish</literal></entry>
|
|
|
|
<entry>
|
|
|
|
The WAL sender process is currently performing
|
2022-04-06 20:41:03 +02:00
|
|
|
<function>pg_backup_start</function> to prepare to
|
2020-09-21 18:43:42 +02:00
|
|
|
take a base backup, and waiting for the start-of-backup
|
2020-03-03 04:03:43 +01:00
|
|
|
checkpoint to finish.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>estimating backup size</literal></entry>
|
|
|
|
<entry>
|
|
|
|
The WAL sender process is currently estimating the total amount
|
|
|
|
of database files that will be streamed as a base backup.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>streaming database files</literal></entry>
|
|
|
|
<entry>
|
|
|
|
The WAL sender process is currently streaming database files
|
|
|
|
as a base backup.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>waiting for wal archiving to finish</literal></entry>
|
|
|
|
<entry>
|
|
|
|
The WAL sender process is currently performing
|
2022-04-06 20:41:03 +02:00
|
|
|
<function>pg_backup_stop</function> to finish the backup,
|
2020-03-03 04:03:43 +01:00
|
|
|
and waiting for all the WAL files required for the base backup
|
|
|
|
to be successfully archived.
|
|
|
|
If either <literal>--wal-method=none</literal> or
|
|
|
|
<literal>--wal-method=stream</literal> is specified in
|
|
|
|
<application>pg_basebackup</application>, the backup will end
|
|
|
|
when this phase is completed.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
|
|
|
<entry><literal>transferring wal files</literal></entry>
|
|
|
|
<entry>
|
|
|
|
The WAL sender process is currently transferring all WAL logs
|
|
|
|
generated during the backup. This phase occurs after
|
|
|
|
the <literal>waiting for wal archiving to finish</literal> phase if
|
|
|
|
<literal>--wal-method=fetch</literal> is specified in
|
|
|
|
<application>pg_basebackup</application>. The backup will end
|
|
|
|
when this phase is completed.
|
|
|
|
</entry>
|
|
|
|
</row>
|
|
|
|
</tbody>
|
|
|
|
</tgroup>
|
|
|
|
</table>
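
  <para>
   For example, an approximate completion percentage for a backup that is in
   the <literal>streaming database files</literal> phase can be computed with
   a query such as this (a sketch; the result is <literal>NULL</literal> when
   size estimation has been disabled):
<programlisting>
SELECT pid, phase,
       round(100.0 * backup_streamed / nullif(backup_total, 0), 1)
         AS percent_done
FROM pg_stat_progress_basebackup;
</programlisting>
  </para>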
|
2019-03-25 15:59:04 +01:00
|
|
|
|
2016-03-15 18:31:18 +01:00
|
|
|
</sect2>
|
2021-01-06 21:46:26 +01:00
|
|
|
|
|
|
|
<sect2 id="copy-progress-reporting">
|
|
|
|
<title>COPY Progress Reporting</title>
|
|
|
|
|
|
|
|
<indexterm>
|
|
|
|
<primary>pg_stat_progress_copy</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
Whenever <command>COPY</command> is running, the
|
|
|
|
<structname>pg_stat_progress_copy</structname> view will contain one row
|
2021-01-07 17:31:58 +01:00
|
|
|
for each backend that is currently running a <command>COPY</command> command.
|
|
|
|
The table below describes the information that will be reported and provides
|
|
|
|
information about how to interpret it.
|
2021-01-06 21:46:26 +01:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<table id="pg-stat-progress-copy-view" xreflabel="pg_stat_progress_copy">
|
|
|
|
<title><structname>pg_stat_progress_copy</structname> View</title>
|
|
|
|
<tgroup cols="1">
|
|
|
|
<thead>
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
Column Type
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Description
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
</thead>
|
|
|
|
|
|
|
|
<tbody>
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>pid</structfield> <type>integer</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Process ID of backend.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
2021-01-07 17:31:58 +01:00
|
|
|
<structfield>datid</structfield> <type>oid</type>
|
2021-01-06 21:46:26 +01:00
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
OID of the database to which this backend is connected.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>datname</structfield> <type>name</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Name of the database to which this backend is connected.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>relid</structfield> <type>oid</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
2021-03-09 06:21:03 +01:00
|
|
|
OID of the table on which the <command>COPY</command> command is
|
|
|
|
executed. It is set to <literal>0</literal> if copying from a
|
|
|
|
<command>SELECT</command> query.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>command</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
The command that is running: <literal>COPY FROM</literal>, or
|
|
|
|
<literal>COPY TO</literal>.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>type</structfield> <type>text</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
The I/O type that the data is read from or written to:
|
|
|
|
<literal>FILE</literal>, <literal>PROGRAM</literal>,
|
|
|
|
<literal>PIPE</literal> (for <command>COPY FROM STDIN</command> and
|
|
|
|
<command>COPY TO STDOUT</command>), or <literal>CALLBACK</literal>
|
|
|
|
(used for example during the initial table synchronization in
|
|
|
|
logical replication).
|
2021-01-06 21:46:26 +01:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>bytes_processed</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of bytes already processed by the <command>COPY</command> command.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>bytes_total</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Size of the source file for the <command>COPY FROM</command> command in bytes.
|
2021-03-09 06:21:03 +01:00
|
|
|
It is set to <literal>0</literal> if not available.
|
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
|
|
|
<structfield>tuples_processed</structfield> <type>bigint</type>
|
|
|
|
</para>
|
|
|
|
<para>
|
|
|
|
Number of tuples already processed by the <command>COPY</command> command.
|
2021-01-06 21:46:26 +01:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
|
|
|
|
<row>
|
|
|
|
<entry role="catalog_table_entry"><para role="column_definition">
|
2021-03-09 06:21:03 +01:00
|
|
|
<structfield>tuples_excluded</structfield> <type>bigint</type>
|
2021-01-06 21:46:26 +01:00
|
|
|
</para>
|
|
|
|
<para>
|
2021-03-09 06:21:03 +01:00
|
|
|
Number of tuples not processed because they were excluded by the
|
|
|
|
<command>WHERE</command> clause of the <command>COPY</command> command.
|
2021-01-06 21:46:26 +01:00
|
|
|
</para></entry>
|
|
|
|
</row>
|
|
|
|
</tbody>
|
|
|
|
</tgroup>
|
|
|
|
</table>
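
  <para>
   For example, the progress of a long-running <command>COPY</command> can be
   inspected with a query such as the following (a sketch; recall that
   <structfield>bytes_total</structfield> is <literal>0</literal> when the
   input size is not available):
<programlisting>
SELECT pid, relid::regclass, command, type,
       bytes_processed, bytes_total, tuples_processed, tuples_excluded
FROM pg_stat_progress_copy;
</programlisting>
  </para>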
|
|
|
|
</sect2>
|
|
|
|
|
2016-03-15 18:31:18 +01:00
|
|
|
</sect1>
|
|
|
|
|
2006-12-02 01:42:54 +01:00
|
|
|
<sect1 id="dynamic-trace">
|
|
|
|
<title>Dynamic Tracing</title>
|
|
|
|
|
|
|
|
<indexterm zone="dynamic-trace">
|
|
|
|
<primary>DTrace</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
<productname>PostgreSQL</productname> provides facilities to support
|
|
|
|
dynamic tracing of the database server. This allows an external
|
|
|
|
utility to be called at specific points in the code and thereby trace
|
2009-03-10 01:08:28 +01:00
|
|
|
execution.
|
2006-12-02 01:42:54 +01:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
2009-03-10 01:08:28 +01:00
|
|
|
A number of probes or trace points are already inserted into the source
|
|
|
|
code. These probes are intended to be used by database developers and
|
2009-03-23 02:52:38 +01:00
|
|
|
administrators. By default, the probes are not compiled into
|
|
|
|
<productname>PostgreSQL</productname>; the user needs to explicitly tell
|
|
|
|
the configure script to make the probes available.
|
2006-12-02 01:42:54 +01:00
|
|
|
</para>
|
|
|
|
|
2009-03-23 02:52:38 +01:00
|
|
|
<para>
|
2013-04-11 05:05:13 +02:00
|
|
|
Currently, the
|
|
|
|
<ulink url="https://en.wikipedia.org/wiki/DTrace">DTrace</ulink>
|
|
|
|
utility is supported, which, at the time of this writing, is available
|
2016-09-25 21:40:57 +02:00
|
|
|
on Solaris, macOS, FreeBSD, NetBSD, and Oracle Linux. The
|
2020-07-18 15:43:35 +02:00
|
|
|
<ulink url="https://sourceware.org/systemtap/">SystemTap</ulink> project
|
2013-04-11 05:05:13 +02:00
|
|
|
for Linux provides a DTrace equivalent and can also be used. Supporting other dynamic
|
2009-03-23 02:52:38 +01:00
|
|
|
tracing utilities is theoretically possible by changing the definitions for
|
2017-10-09 03:44:17 +02:00
|
|
|
the macros in <filename>src/include/utils/probes.h</filename>.
|
2006-12-02 01:42:54 +01:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<sect2 id="compiling-for-trace">
|
2006-12-02 10:29:51 +01:00
|
|
|
<title>Compiling for Dynamic Tracing</title>
|
2006-12-02 01:42:54 +01:00
|
|
|
|
|
|
|
<para>
|
2008-03-25 23:50:27 +01:00
|
|
|
By default, probes are not available, so you will need to
|
2006-12-02 01:42:54 +01:00
|
|
|
explicitly tell the configure script to make the probes available
|
|
|
|
in <productname>PostgreSQL</productname>. To include DTrace support
|
2017-10-09 03:44:17 +02:00
|
|
|
specify <option>--enable-dtrace</option> to configure. See <xref
|
2017-11-23 15:39:47 +01:00
|
|
|
linkend="install-procedure"/> for further information.
|
2006-12-08 20:16:17 +01:00
|
|
|
</para>
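
  <para>
   For example, a build with probes enabled might be configured and built
   like this (illustrative only; adjust the installation prefix and any
   other options for your environment):
<screen>
./configure --enable-dtrace --prefix=/usr/local/pgsql
make
make install
</screen>
  </para>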
|
2006-12-02 01:42:54 +01:00
|
|
|
</sect2>
|
|
|
|
|
|
|
|
<sect2 id="trace-points">
|
2008-03-25 23:50:27 +01:00
|
|
|
<title>Built-in Probes</title>
|
2006-12-02 01:42:54 +01:00
|
|
|
|
|
|
|
<para>
|
2009-03-23 02:52:38 +01:00
|
|
|
A number of standard probes are provided in the source code,
|
2017-11-23 15:39:47 +01:00
|
|
|
as shown in <xref linkend="dtrace-probe-point-table"/>;
|
|
|
|
<xref linkend="typedefs-table"/>
|
2011-05-04 19:24:07 +02:00
|
|
|
shows the types used in the probes. More probes can certainly be
|
2017-10-09 03:44:17 +02:00
|
|
|
added to enhance <productname>PostgreSQL</productname>'s observability.
|
2006-12-02 01:42:54 +01:00
|
|
|
</para>
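
  <para>
   As an illustration (assuming a platform where <command>dtrace</command>
   is available and the server was built with probe support), the
   transaction-start probe can be counted across all running backends with a
   one-line invocation such as:
<screen>
# dtrace -n 'postgresql*:::transaction-start { @starts = count(); }'
</screen>
  </para>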
|
|
|
|
|
2009-03-23 02:52:38 +01:00
|
|
|
<table id="dtrace-probe-point-table">
|
|
|
|
<title>Built-in DTrace Probes</title>
|
2006-12-02 01:42:54 +01:00
|
|
|
<tgroup cols="3">
|
2020-05-06 18:23:43 +02:00
|
|
|
<colspec colname="col1" colwidth="2*"/>
|
|
|
|
<colspec colname="col2" colwidth="3*"/>
|
|
|
|
<colspec colname="col3" colwidth="3*"/>
|
2006-12-02 01:42:54 +01:00
|
|
|
<thead>
|
|
|
|
<row>
|
|
|
|
<entry>Name</entry>
|
|
|
|
<entry>Parameters</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
<entry>Description</entry>
|
2006-12-02 01:42:54 +01:00
|
|
|
</row>
|
|
|
|
</thead>
|
|
|
|
|
|
|
|
<tbody>
|
2009-03-10 01:08:28 +01:00
|
|
|
|
2006-12-02 01:42:54 +01:00
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>transaction-start</literal></entry>
|
|
|
|
<entry><literal>(LocalTransactionId)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires at the start of a new transaction.
|
2010-04-15 22:56:13 +02:00
|
|
|
arg0 is the transaction ID.</entry>
|
2006-12-02 01:42:54 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>transaction-commit</literal></entry>
|
|
|
|
<entry><literal>(LocalTransactionId)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when a transaction completes successfully.
|
2010-04-15 22:56:13 +02:00
|
|
|
arg0 is the transaction ID.</entry>
|
2006-12-02 01:42:54 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>transaction-abort</literal></entry>
|
|
|
|
<entry><literal>(LocalTransactionId)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when a transaction completes unsuccessfully.
|
2010-04-15 22:56:13 +02:00
|
|
|
arg0 is the transaction ID.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>query-start</literal></entry>
|
|
|
|
<entry><literal>(const char *)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the processing of a query is started.
|
|
|
|
arg0 is the query string.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>query-done</literal></entry>
|
|
|
|
<entry><literal>(const char *)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the processing of a query is complete.
|
|
|
|
arg0 is the query string.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>query-parse-start</literal></entry>
|
|
|
|
<entry><literal>(const char *)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the parsing of a query is started.
|
|
|
|
arg0 is the query string.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>query-parse-done</literal></entry>
|
|
|
|
<entry><literal>(const char *)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the parsing of a query is complete.
|
|
|
|
arg0 is the query string.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>query-rewrite-start</literal></entry>
|
|
|
|
<entry><literal>(const char *)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the rewriting of a query is started.
|
|
|
|
arg0 is the query string.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>query-rewrite-done</literal></entry>
|
|
|
|
<entry><literal>(const char *)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the rewriting of a query is complete.
|
|
|
|
arg0 is the query string.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>query-plan-start</literal></entry>
|
|
|
|
<entry><literal>()</literal></entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
<entry>Probe that fires when the planning of a query is started.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>query-plan-done</literal></entry>
|
|
|
|
<entry><literal>()</literal></entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
<entry>Probe that fires when the planning of a query is complete.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>query-execute-start</literal></entry>
|
|
|
|
<entry><literal>()</literal></entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
<entry>Probe that fires when the execution of a query is started.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>query-execute-done</literal></entry>
|
|
|
|
<entry><literal>()</literal></entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
<entry>Probe that fires when the execution of a query is complete.</entry>
|
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>statement-status</literal></entry>
|
|
|
|
<entry><literal>(const char *)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires anytime the server process updates its
|
2017-10-09 03:44:17 +02:00
|
|
|
<structname>pg_stat_activity</structname>.<structfield>status</structfield>.
|
2009-03-23 02:52:38 +01:00
|
|
|
arg0 is the new status string.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>checkpoint-start</literal></entry>
|
|
|
|
<entry><literal>(int)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when a checkpoint is started.
|
|
|
|
arg0 holds the bitwise flags used to distinguish different checkpoint
|
|
|
|
types, such as shutdown, immediate or force.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>checkpoint-done</literal></entry>
|
|
|
|
<entry><literal>(int, int, int, int, int)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when a checkpoint is complete.
|
|
|
|
(The probes listed next fire in sequence during checkpoint processing.)
|
|
|
|
arg0 is the number of buffers written. arg1 is the total number of
|
2015-12-12 17:31:28 +01:00
|
|
|
buffers. arg2, arg3 and arg4 contain the number of WAL files added,
|
2009-03-23 02:52:38 +01:00
|
|
|
removed and recycled respectively.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>clog-checkpoint-start</literal></entry>
|
|
|
|
<entry><literal>(bool)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the CLOG portion of a checkpoint is started.
|
|
|
|
arg0 is true for normal checkpoint, false for shutdown
|
|
|
|
checkpoint.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>clog-checkpoint-done</literal></entry>
|
|
|
|
<entry><literal>(bool)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the CLOG portion of a checkpoint is
|
2015-12-12 17:31:28 +01:00
|
|
|
complete. arg0 has the same meaning as for <literal>clog-checkpoint-start</literal>.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>subtrans-checkpoint-start</literal></entry>
|
|
|
|
<entry><literal>(bool)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the SUBTRANS portion of a checkpoint is
|
|
|
|
started.
|
|
|
|
arg0 is true for normal checkpoint, false for shutdown
|
|
|
|
checkpoint.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>subtrans-checkpoint-done</literal></entry>
|
|
|
|
<entry><literal>(bool)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the SUBTRANS portion of a checkpoint is
|
|
|
|
complete. arg0 has the same meaning as for
|
2015-12-12 17:31:28 +01:00
|
|
|
<literal>subtrans-checkpoint-start</literal>.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>multixact-checkpoint-start</literal></entry>
|
|
|
|
<entry><literal>(bool)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the MultiXact portion of a checkpoint is
|
|
|
|
started.
|
|
|
|
arg0 is true for normal checkpoint, false for shutdown
|
|
|
|
checkpoint.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
<row>
|
2015-12-12 17:31:28 +01:00
|
|
|
<entry><literal>multixact-checkpoint-done</literal></entry>
|
|
|
|
<entry><literal>(bool)</literal></entry>
|
2009-03-23 02:52:38 +01:00
|
|
|
<entry>Probe that fires when the MultiXact portion of a checkpoint is
|
|
|
|
complete. arg0 has the same meaning as for
|
2015-12-12 17:31:28 +01:00
|
|
|
<literal>multixact-checkpoint-start</literal>.</entry>
|
2009-03-10 01:08:28 +01:00
|
|
|
</row>
|
|
|
|
     <row>
      <entry><literal>buffer-checkpoint-start</literal></entry>
      <entry><literal>(int)</literal></entry>
      <entry>Probe that fires when the buffer-writing portion of a checkpoint
       is started.
       arg0 holds the bitwise flags used to distinguish different checkpoint
       types, such as shutdown, immediate or force.</entry>
     </row>
     <row>
      <entry><literal>buffer-sync-start</literal></entry>
      <entry><literal>(int, int)</literal></entry>
      <entry>Probe that fires when we begin to write dirty buffers during
       checkpoint (after identifying which buffers must be written).
       arg0 is the total number of buffers.
       arg1 is the number that are currently dirty and need to be written.</entry>
     </row>
     <row>
      <entry><literal>buffer-sync-written</literal></entry>
      <entry><literal>(int)</literal></entry>
      <entry>Probe that fires after each buffer is written during checkpoint.
       arg0 is the ID number of the buffer.</entry>
     </row>
     <row>
      <entry><literal>buffer-sync-done</literal></entry>
      <entry><literal>(int, int, int)</literal></entry>
      <entry>Probe that fires when all dirty buffers have been written.
       arg0 is the total number of buffers.
       arg1 is the number of buffers actually written by the checkpoint process.
       arg2 is the number that were expected to be written (arg1 of
       <literal>buffer-sync-start</literal>); any difference reflects other processes flushing
       buffers during the checkpoint.</entry>
     </row>
     <row>
      <entry><literal>buffer-checkpoint-sync-start</literal></entry>
      <entry><literal>()</literal></entry>
      <entry>Probe that fires after dirty buffers have been written to the
       kernel, and before starting to issue fsync requests.</entry>
     </row>
     <row>
      <entry><literal>buffer-checkpoint-done</literal></entry>
      <entry><literal>()</literal></entry>
      <entry>Probe that fires when syncing of buffers to disk is
       complete.</entry>
     </row>
     <row>
      <entry><literal>twophase-checkpoint-start</literal></entry>
      <entry><literal>()</literal></entry>
      <entry>Probe that fires when the two-phase portion of a checkpoint is
       started.</entry>
     </row>
     <row>
      <entry><literal>twophase-checkpoint-done</literal></entry>
      <entry><literal>()</literal></entry>
      <entry>Probe that fires when the two-phase portion of a checkpoint is
       complete.</entry>
     </row>
     <row>
      <entry><literal>buffer-read-start</literal></entry>
      <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool)</literal></entry>
      <entry>Probe that fires when a buffer read is started.
       arg0 and arg1 contain the fork and block numbers of the page (but
       arg1 will be -1 if this is a relation extension request).
       arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
       identifying the relation.
       arg5 is the ID of the backend which created the temporary relation for a
       local buffer, or <symbol>InvalidBackendId</symbol> (-1) for a shared buffer.
       arg6 is true for a relation extension request, false for normal
       read.</entry>
     </row>
     <row>
      <entry><literal>buffer-read-done</literal></entry>
      <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool, bool)</literal></entry>
      <entry>Probe that fires when a buffer read is complete.
       arg0 and arg1 contain the fork and block numbers of the page (if this
       is a relation extension request, arg1 now contains the block number
       of the newly added block).
       arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
       identifying the relation.
       arg5 is the ID of the backend which created the temporary relation for a
       local buffer, or <symbol>InvalidBackendId</symbol> (-1) for a shared buffer.
       arg6 is true for a relation extension request, false for normal
       read.
       arg7 is true if the buffer was found in the pool, false if not.</entry>
     </row>
     <row>
      <entry><literal>buffer-flush-start</literal></entry>
      <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid)</literal></entry>
      <entry>Probe that fires before issuing any write request for a shared
       buffer.
       arg0 and arg1 contain the fork and block numbers of the page.
       arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
       identifying the relation.</entry>
     </row>
     <row>
      <entry><literal>buffer-flush-done</literal></entry>
      <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid)</literal></entry>
      <entry>Probe that fires when a write request is complete. (Note
       that this just reflects the time to pass the data to the kernel;
       it's typically not actually been written to disk yet.)
       The arguments are the same as for <literal>buffer-flush-start</literal>.</entry>
     </row>
     <row>
      <entry><literal>buffer-write-dirty-start</literal></entry>
      <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid)</literal></entry>
      <entry>Probe that fires when a server process begins to write a dirty
       buffer. (If this happens often, it implies that
       <xref linkend="guc-shared-buffers"/> is too
       small or the background writer control parameters need adjustment.)
       arg0 and arg1 contain the fork and block numbers of the page.
       arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
       identifying the relation.</entry>
     </row>
     <row>
      <entry><literal>buffer-write-dirty-done</literal></entry>
      <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid)</literal></entry>
      <entry>Probe that fires when a dirty-buffer write is complete.
       The arguments are the same as for <literal>buffer-write-dirty-start</literal>.</entry>
     </row>
     <row>
      <entry><literal>wal-buffer-write-dirty-start</literal></entry>
      <entry><literal>()</literal></entry>
      <entry>Probe that fires when a server process begins to write a
       dirty WAL buffer because no more WAL buffer space is available.
       (If this happens often, it implies that
       <xref linkend="guc-wal-buffers"/> is too small.)</entry>
     </row>
     <row>
      <entry><literal>wal-buffer-write-dirty-done</literal></entry>
      <entry><literal>()</literal></entry>
      <entry>Probe that fires when a dirty WAL buffer write is complete.</entry>
     </row>
     <row>
      <entry><literal>wal-insert</literal></entry>
      <entry><literal>(unsigned char, unsigned char)</literal></entry>
      <entry>Probe that fires when a WAL record is inserted.
       arg0 is the resource manager (rmid) for the record.
       arg1 contains the info flags.</entry>
     </row>
     <row>
      <entry><literal>wal-switch</literal></entry>
      <entry><literal>()</literal></entry>
      <entry>Probe that fires when a WAL segment switch is requested.</entry>
     </row>
     <row>
      <entry><literal>smgr-md-read-start</literal></entry>
      <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int)</literal></entry>
      <entry>Probe that fires when beginning to read a block from a relation.
       arg0 and arg1 contain the fork and block numbers of the page.
       arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
       identifying the relation.
       arg5 is the ID of the backend which created the temporary relation for a
       local buffer, or <symbol>InvalidBackendId</symbol> (-1) for a shared buffer.</entry>
     </row>
     <row>
      <entry><literal>smgr-md-read-done</literal></entry>
      <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)</literal></entry>
      <entry>Probe that fires when a block read is complete.
       arg0 and arg1 contain the fork and block numbers of the page.
       arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
       identifying the relation.
       arg5 is the ID of the backend which created the temporary relation for a
       local buffer, or <symbol>InvalidBackendId</symbol> (-1) for a shared buffer.
       arg6 is the number of bytes actually read, while arg7 is the number
       requested (if these are different it indicates trouble).</entry>
     </row>
     <row>
      <entry><literal>smgr-md-write-start</literal></entry>
      <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int)</literal></entry>
      <entry>Probe that fires when beginning to write a block to a relation.
       arg0 and arg1 contain the fork and block numbers of the page.
       arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
       identifying the relation.
       arg5 is the ID of the backend which created the temporary relation for a
       local buffer, or <symbol>InvalidBackendId</symbol> (-1) for a shared buffer.</entry>
     </row>
     <row>
      <entry><literal>smgr-md-write-done</literal></entry>
      <entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)</literal></entry>
      <entry>Probe that fires when a block write is complete.
       arg0 and arg1 contain the fork and block numbers of the page.
       arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
       identifying the relation.
       arg5 is the ID of the backend which created the temporary relation for a
       local buffer, or <symbol>InvalidBackendId</symbol> (-1) for a shared buffer.
       arg6 is the number of bytes actually written, while arg7 is the number
       requested (if these are different it indicates trouble).</entry>
     </row>
     <row>
      <entry><literal>sort-start</literal></entry>
      <entry><literal>(int, bool, int, int, bool, int)</literal></entry>
      <entry>Probe that fires when a sort operation is started.
       arg0 indicates heap, index or datum sort.
       arg1 is true for unique-value enforcement.
       arg2 is the number of key columns.
       arg3 is the number of kilobytes of work memory allowed.
       arg4 is true if random access to the sort result is required.
       arg5 indicates serial when <literal>0</literal>, parallel worker when
       <literal>1</literal>, or parallel leader when <literal>2</literal>.</entry>
     </row>
     <row>
      <entry><literal>sort-done</literal></entry>
      <entry><literal>(bool, long)</literal></entry>
      <entry>Probe that fires when a sort is complete.
       arg0 is true for external sort, false for internal sort.
       arg1 is the number of disk blocks used for an external sort,
       or kilobytes of memory used for an internal sort.</entry>
     </row>
     <row>
      <entry><literal>lwlock-acquire</literal></entry>
      <entry><literal>(char *, LWLockMode)</literal></entry>
      <entry>Probe that fires when an LWLock has been acquired.
       arg0 is the LWLock's tranche.
       arg1 is the requested lock mode, either exclusive or shared.</entry>
     </row>
     <row>
      <entry><literal>lwlock-release</literal></entry>
      <entry><literal>(char *)</literal></entry>
      <entry>Probe that fires when an LWLock has been released (but note
       that any released waiters have not yet been awakened).
       arg0 is the LWLock's tranche.</entry>
     </row>
     <row>
      <entry><literal>lwlock-wait-start</literal></entry>
      <entry><literal>(char *, LWLockMode)</literal></entry>
      <entry>Probe that fires when an LWLock was not immediately available and
       a server process has begun to wait for the lock to become available.
       arg0 is the LWLock's tranche.
       arg1 is the requested lock mode, either exclusive or shared.</entry>
     </row>
     <row>
      <entry><literal>lwlock-wait-done</literal></entry>
      <entry><literal>(char *, LWLockMode)</literal></entry>
      <entry>Probe that fires when a server process has been released from its
       wait for an LWLock (it does not actually have the lock yet).
       arg0 is the LWLock's tranche.
       arg1 is the requested lock mode, either exclusive or shared.</entry>
     </row>
     <row>
      <entry><literal>lwlock-condacquire</literal></entry>
      <entry><literal>(char *, LWLockMode)</literal></entry>
      <entry>Probe that fires when an LWLock was successfully acquired when the
       caller specified no waiting.
       arg0 is the LWLock's tranche.
       arg1 is the requested lock mode, either exclusive or shared.</entry>
     </row>
     <row>
      <entry><literal>lwlock-condacquire-fail</literal></entry>
      <entry><literal>(char *, LWLockMode)</literal></entry>
      <entry>Probe that fires when an LWLock was not successfully acquired when
       the caller specified no waiting.
       arg0 is the LWLock's tranche.
       arg1 is the requested lock mode, either exclusive or shared.</entry>
     </row>
     <row>
      <entry><literal>lock-wait-start</literal></entry>
      <entry><literal>(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)</literal></entry>
      <entry>Probe that fires when a request for a heavyweight lock (lmgr lock)
       has begun to wait because the lock is not available.
       arg0 through arg3 are the tag fields identifying the object being
       locked. arg4 indicates the type of object being locked.
       arg5 indicates the lock type being requested.</entry>
     </row>
     <row>
      <entry><literal>lock-wait-done</literal></entry>
      <entry><literal>(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)</literal></entry>
      <entry>Probe that fires when a request for a heavyweight lock (lmgr lock)
       has finished waiting (i.e., has acquired the lock).
       The arguments are the same as for <literal>lock-wait-start</literal>.</entry>
     </row>
     <row>
      <entry><literal>deadlock-found</literal></entry>
      <entry><literal>()</literal></entry>
      <entry>Probe that fires when a deadlock is found by the deadlock
       detector.</entry>
     </row>
    </tbody>
   </tgroup>
  </table>
  <table id="typedefs-table">
   <title>Defined Types Used in Probe Parameters</title>
   <tgroup cols="2">
    <thead>
     <row>
      <entry>Type</entry>
      <entry>Definition</entry>
     </row>
    </thead>

    <tbody>
     <row>
      <entry><type>LocalTransactionId</type></entry>
      <entry><type>unsigned int</type></entry>
     </row>
     <row>
      <entry><type>LWLockMode</type></entry>
      <entry><type>int</type></entry>
     </row>
     <row>
      <entry><type>LOCKMODE</type></entry>
      <entry><type>int</type></entry>
     </row>
     <row>
      <entry><type>BlockNumber</type></entry>
      <entry><type>unsigned int</type></entry>
     </row>
     <row>
      <entry><type>Oid</type></entry>
      <entry><type>unsigned int</type></entry>
     </row>
     <row>
      <entry><type>ForkNumber</type></entry>
      <entry><type>int</type></entry>
     </row>
     <row>
      <entry><type>bool</type></entry>
      <entry><type>unsigned char</type></entry>
     </row>
    </tbody>
   </tgroup>
  </table>
  </sect2>

  <sect2 id="using-trace-points">
   <title>Using Probes</title>
   <para>
    The example below shows a DTrace script for analyzing transaction
    counts in the system, as an alternative to snapshotting
    <structname>pg_stat_database</structname> before and after a performance test:
<programlisting>
#!/usr/sbin/dtrace -qs

postgresql$1:::transaction-start
{
      @start["Start"] = count();
      self->ts = timestamp;
}

postgresql$1:::transaction-abort
{
      @abort["Abort"] = count();
}

postgresql$1:::transaction-commit
/self->ts/
{
      @commit["Commit"] = count();
      @time["Total time (ns)"] = sum(timestamp - self->ts);
      self->ts = 0;
}
</programlisting>
    When executed, the example D script gives output such as:
<screen>
# ./txn_count.d `pgrep -n postgres` or ./txn_count.d &lt;PID&gt;
^C

Start                                          71
Commit                                         70
Total time (ns)                        2312105013
</screen>
   </para>
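   <para>
    As a further illustration (a sketch only, not part of the standard
    distribution), the <literal>lwlock-wait-start</literal> and
    <literal>lwlock-wait-done</literal> probes listed above can be combined
    to total the time spent waiting on each LWLock tranche:
<programlisting>
#!/usr/sbin/dtrace -qs

postgresql$1:::lwlock-wait-start
{
      self->ts = timestamp;
}

postgresql$1:::lwlock-wait-done
/self->ts/
{
      @waits[copyinstr(arg0)] = sum(timestamp - self->ts);
      self->ts = 0;
}
</programlisting>
    Here <function>copyinstr</function> copies the tranche name (a
    <type>char *</type> argument) out of the traced process's address space,
    and the aggregation is printed per tranche when the script exits.
   </para>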

   <note>
    <para>
     SystemTap uses a different notation for trace scripts than DTrace does,
     even though the underlying trace points are compatible. One point worth
     noting is that at this writing, SystemTap scripts must reference probe
     names using double underscores in place of hyphens. This is expected to
     be fixed in future SystemTap releases.
    </para>
   </note>
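   <para>
    For illustration, the transaction-counting example above could be
    written in SystemTap notation roughly as follows (a sketch only,
    assuming a <filename>postgres</filename> binary built with probe
    support; note the double underscores in the probe names):
<programlisting>
global starts, commits

probe process("postgres").mark("transaction__start")
{
      starts &lt;&lt;&lt; 1
}

probe process("postgres").mark("transaction__commit")
{
      commits &lt;&lt;&lt; 1
}

probe end
{
      printf("Start  %d\nCommit %d\n", @count(starts), @count(commits))
}
</programlisting>
   </para>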
   <para>
    You should remember that DTrace scripts need to be carefully written and
    debugged, otherwise the trace information collected might
    be meaningless. In most cases where problems are found it is the
    instrumentation that is at fault, not the underlying system. When
    discussing information found using dynamic tracing, be sure to include
    the script used, so that it too can be checked and discussed.
   </para>
  </sect2>

  <sect2 id="defining-trace-points">
   <title>Defining New Probes</title>

   <para>
    New probes can be defined within the code wherever the developer
    desires, though this will require a recompilation. Below are the steps
    for inserting new probes:
   </para>

   <procedure>
    <step>
     <para>
      Decide on probe names and data to be made available through the probes
     </para>
    </step>

    <step>
     <para>
      Add the probe definitions to <filename>src/backend/utils/probes.d</filename>
     </para>
    </step>

    <step>
     <para>
      Include <filename>pg_trace.h</filename> if it is not already present in the
      module(s) containing the probe points, and insert
      <literal>TRACE_POSTGRESQL</literal> probe macros at the desired locations
      in the source code
     </para>
    </step>

    <step>
     <para>
      Recompile and verify that the new probes are available
     </para>
    </step>
   </procedure>

   <formalpara>
    <title>Example:</title>
    <para>
     Here is an example of how you would add a probe to trace all new
     transactions by transaction ID.
    </para>
   </formalpara>

   <procedure>
    <step>
     <para>
      Decide that the probe will be named <literal>transaction-start</literal> and
      requires a parameter of type <type>LocalTransactionId</type>
     </para>
    </step>

    <step>
     <para>
      Add the probe definition to <filename>src/backend/utils/probes.d</filename>:
<programlisting>
probe transaction__start(LocalTransactionId);
</programlisting>
      Note the use of the double underline in the probe name. In a DTrace
      script using the probe, the double underline needs to be replaced with a
      hyphen, so <literal>transaction-start</literal> is the name to document for
      users.
     </para>
    </step>

    <step>
     <para>
      At compile time, <literal>transaction__start</literal> is converted to a macro
      called <literal>TRACE_POSTGRESQL_TRANSACTION_START</literal> (notice the
      underscores are single here), which is available by including
      <filename>pg_trace.h</filename>. Add the macro call to the appropriate location
      in the source code. In this case, it looks like the following:

<programlisting>
TRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId);
</programlisting>
     </para>
    </step>

    <step>
     <para>
      After recompiling and running the new binary, check that your newly added
      probe is available by executing the following DTrace command. You
      should see similar output:
<screen>
# dtrace -ln transaction-start
   ID    PROVIDER            MODULE                     FUNCTION NAME
18705 postgresql49878     postgres     StartTransactionCommand transaction-start
18755 postgresql49877     postgres     StartTransactionCommand transaction-start
18805 postgresql49876     postgres     StartTransactionCommand transaction-start
18855 postgresql49875     postgres     StartTransactionCommand transaction-start
18986 postgresql49873     postgres     StartTransactionCommand transaction-start
</screen>
     </para>
    </step>
   </procedure>
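
   <para>
    Once the probe is in place, it can be exercised with a one-line
    DTrace invocation such as the following (a sketch only; the probe
    fires once per transaction, printing the local transaction ID
    passed as arg0):
<programlisting>
# dtrace -n 'postgresql*:::transaction-start { printf("txid %d", arg0); }'
</programlisting>
   </para>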

   <para>
    There are a few things to be careful about when adding trace macros
    to the C code:

    <itemizedlist>
     <listitem>
      <para>
       You should take care that the data types specified for a probe's
       parameters match the data types of the variables used in the macro.
       Otherwise, you will get compilation errors.
      </para>
     </listitem>

     <listitem>
      <para>
       On most platforms, if <productname>PostgreSQL</productname> is
       built with <option>--enable-dtrace</option>, the arguments to a trace
       macro will be evaluated whenever control passes through the
       macro, <emphasis>even if no tracing is being done</emphasis>. This is
       usually not worth worrying about if you are just reporting the
       values of a few local variables. But beware of putting expensive
       function calls into the arguments. If you need to do that,
       consider protecting the macro with a check to see if the trace
       is actually enabled:

<programlisting>
if (TRACE_POSTGRESQL_TRANSACTION_START_ENABLED())
    TRACE_POSTGRESQL_TRANSACTION_START(some_function(...));
</programlisting>

       Each trace macro has a corresponding <literal>ENABLED</literal> macro.
      </para>
     </listitem>
    </itemizedlist>
   </para>

</sect2>
</sect1>
</chapter>