<!-- doc/src/sgml/monitoring.sgml -->
<chapter id="monitoring">
<title>Monitoring Database Activity</title>
<indexterm zone="monitoring">
<primary>monitoring</primary>
<secondary>database activity</secondary>
</indexterm>
<indexterm zone="monitoring">
<primary>database activity</primary>
<secondary>monitoring</secondary>
</indexterm>
<para>
A database administrator frequently wonders, <quote>What is the system
doing right now?</quote>
This chapter discusses how to find that out.
</para>
<para>
Several tools are available for monitoring database activity and
analyzing performance. Most of this chapter is devoted to describing
<productname>PostgreSQL</productname>'s cumulative statistics system,
but one should not neglect regular Unix monitoring programs such as
<command>ps</command>, <command>top</command>, <command>iostat</command>, and <command>vmstat</command>.
Also, once one has identified a
poorly-performing query, further investigation might be needed using
<productname>PostgreSQL</productname>'s <link linkend="sql-explain"><command>EXPLAIN</command></link> command.
<xref linkend="using-explain"/> discusses <command>EXPLAIN</command>
and other methods for understanding the behavior of an individual
query.
</para>
<sect1 id="monitoring-ps">
<title>Standard Unix Tools</title>
<indexterm zone="monitoring-ps">
<primary>ps</primary>
<secondary>to monitor activity</secondary>
</indexterm>
<para>
On most Unix platforms, <productname>PostgreSQL</productname> modifies its
command title as reported by <command>ps</command>, so that individual server
processes can readily be identified. A sample display is
<screen>
$ ps auxww | grep ^postgres
postgres 15551 0.0 0.1 57536 7132 pts/0 S 18:02 0:00 postgres -i
postgres 15554 0.0 0.0 57536 1184 ? Ss 18:02 0:00 postgres: background writer
postgres 15555 0.0 0.0 57536 916 ? Ss 18:02 0:00 postgres: checkpointer
postgres 15556 0.0 0.0 57536 916 ? Ss 18:02 0:00 postgres: walwriter
postgres 15557 0.0 0.0 58504 2244 ? Ss 18:02 0:00 postgres: autovacuum launcher
postgres 15582 0.0 0.0 58772 3080 ? Ss 18:04 0:00 postgres: joe runbug 127.0.0.1 idle
postgres 15606 0.0 0.0 58772 3052 ? Ss 18:07 0:00 postgres: tgl regression [local] SELECT waiting
postgres 15610 0.0 0.0 58772 3056 ? Ss 18:07 0:00 postgres: tgl regression [local] idle in transaction
</screen>
(The appropriate invocation of <command>ps</command> varies across different
platforms, as do the details of what is shown. This example is from a
recent Linux system.) The first process listed here is the
primary server process. The command arguments
shown for it are the same ones used when it was launched. The next four
processes are background worker processes automatically launched by the
primary process. (The <quote>autovacuum launcher</quote> process will not
be present if you have set the system not to run autovacuum.)
Each of the remaining
processes is a server process handling one client connection. Each such
process sets its command line display in the form
<screen>
postgres: <replaceable>user</replaceable> <replaceable>database</replaceable> <replaceable>host</replaceable> <replaceable>activity</replaceable>
</screen>
The user, database, and (client) host items remain the same for
the life of the client connection, but the activity indicator changes.
The activity can be <literal>idle</literal> (i.e., waiting for a client command),
<literal>idle in transaction</literal> (waiting for client inside a <command>BEGIN</command> block),
or a command type name such as <literal>SELECT</literal>. Also,
<literal>waiting</literal> is appended if the server process is presently waiting
on a lock held by another session. In the above example we can infer
that process 15606 is waiting for process 15610 to complete its transaction
and thereby release some lock. (Process 15610 must be the blocker, because
there is no other active session. In more complicated cases it would be
necessary to look into the
<link linkend="view-pg-locks"><structname>pg_locks</structname></link>
system view to determine who is blocking whom.)
</para>
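<para>
For simple cases, the <function>pg_blocking_pids()</function> function can
offer a convenient shortcut. A sketch of such a query, listing each blocked
session together with the processes blocking it, might look like:
<programlisting>
SELECT pid, pg_blocking_pids(pid) AS blocked_by, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
</programlisting>
</para>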
<para>
If <xref linkend="guc-cluster-name"/> has been configured the
cluster name will also be shown in <command>ps</command> output:
<screen>
$ psql -c 'SHOW cluster_name'
cluster_name
--------------
server1
(1 row)
$ ps aux|grep server1
postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: server1: background writer
...
</screen>
</para>
<para>
If you have turned off <xref linkend="guc-update-process-title"/> then the
activity indicator is not updated; the process title is set only once
when a new process is launched. On some platforms this saves a measurable
amount of per-command overhead; on others it's insignificant.
</para>
<tip>
<para>
<productname>Solaris</productname> requires special handling. You must
use <command>/usr/ucb/ps</command>, rather than
<command>/bin/ps</command>. You also must use two <option>w</option>
flags, not just one. In addition, your original invocation of the
<command>postgres</command> command must have a shorter
<command>ps</command> status display than that provided by each
server process. If you fail to do all three things, the <command>ps</command>
output for each server process will be the original <command>postgres</command>
command line.
</para>
</tip>
</sect1>
<sect1 id="monitoring-stats">
<title>The Cumulative Statistics System</title>
<indexterm zone="monitoring-stats">
<primary>statistics</primary>
</indexterm>
<para>
<productname>PostgreSQL</productname>'s <firstterm>cumulative statistics
system</firstterm> supports collection and reporting of information about
server activity. Presently, accesses to tables and indexes in both
disk-block and individual-row terms are counted. The total number of rows
in each table, and information about vacuum and analyze actions for each
table are also counted. If enabled, calls to user-defined functions and
the total time spent in each one are counted as well.
</para>
<para>
<productname>PostgreSQL</productname> also supports reporting dynamic
information about exactly what is going on in the system right now, such as
the exact command currently being executed by other server processes, and
which other connections exist in the system. This facility is independent
of the cumulative statistics system.
</para>
<sect2 id="monitoring-stats-setup">
<title>Statistics Collection Configuration</title>
<para>
Since collection of statistics adds some overhead to query execution,
the system can be configured to collect or not collect information.
This is controlled by configuration parameters that are normally set in
<filename>postgresql.conf</filename>. (See <xref linkend="runtime-config"/> for
details about setting configuration parameters.)
</para>
<para>
The parameter <xref linkend="guc-track-activities"/> enables monitoring
of the current command being executed by any server process.
</para>
<para>
The parameter <xref linkend="guc-track-counts"/> controls whether
cumulative statistics are collected about table and index accesses.
</para>
<para>
The parameter <xref linkend="guc-track-functions"/> enables tracking of
usage of user-defined functions.
</para>
<para>
The parameter <xref linkend="guc-track-io-timing"/> enables monitoring
of block read and write times.
</para>
<para>
The parameter <xref linkend="guc-track-wal-io-timing"/> enables monitoring
of WAL write times.
</para>
<para>
Normally these parameters are set in <filename>postgresql.conf</filename> so
that they apply to all server processes, but it is possible to turn
them on or off in individual sessions using the <xref
linkend="sql-set"/> command. (To prevent
ordinary users from hiding their activity from the administrator,
only superusers are allowed to change these parameters with
<command>SET</command>.)
</para>
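<para>
For example, a superuser could enable I/O timing and function tracking in
just the current session, leaving other sessions unaffected (a minimal
sketch):
<programlisting>
-- only superusers may change these tracking parameters with SET
SET track_io_timing = on;
SET track_functions = all;
SHOW track_functions;
</programlisting>
</para>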
<para>
Cumulative statistics are collected in shared memory. Every
<productname>PostgreSQL</productname> process collects statistics locally,
then updates the shared data at appropriate intervals. When a server,
including a physical replica, shuts down cleanly, a permanent copy of the
statistics data is stored in the <filename>pg_stat</filename> subdirectory,
so that statistics can be retained across server restarts. In contrast,
when starting from an unclean shutdown (e.g., after an immediate shutdown,
a server crash, starting from a base backup, and point-in-time recovery),
all statistics counters are reset.
</para>
</sect2>
<sect2 id="monitoring-stats-views">
<title>Viewing Statistics</title>
<para>
Several predefined views, listed in <xref
linkend="monitoring-stats-dynamic-views-table"/>, are available to show
the current state of the system. There are also several other
views, listed in <xref
linkend="monitoring-stats-views-table"/>, available to show the accumulated
statistics. Alternatively, one can
build custom views using the underlying cumulative statistics functions, as
discussed in <xref linkend="monitoring-stats-functions"/>.
</para>
<para>
When using the cumulative statistics views and functions to monitor
collected data, it is important to realize that the information does not
update instantaneously. Each individual server process flushes out
accumulated statistics to shared memory just before going idle, but not
more frequently than once per <varname>PGSTAT_MIN_INTERVAL</varname>
milliseconds (1 second unless altered while building the server); so a
query or transaction still in progress does not affect the displayed totals
and the displayed information lags behind actual activity. However,
current-query information collected by <varname>track_activities</varname>
is always up-to-date.
</para>
<para>
Another important point is that when a server process is asked to display
any of the accumulated statistics, accessed values are cached until the end
of its current transaction in the default configuration. So the statistics
will show static information as long as you continue the current
transaction. Similarly, information about the current queries of all
sessions is collected when any such information is first requested within a
transaction, and the same information will be displayed throughout the
transaction. This is a feature, not a bug, because it allows you to perform
several queries on the statistics and correlate the results without
worrying that the numbers are changing underneath you.
When analyzing statistics interactively, or with expensive queries, the
time delta between accesses to individual statistics can lead to
significant skew in the cached statistics. To minimize skew,
<varname>stats_fetch_consistency</varname> can be set to
<literal>snapshot</literal>, at the price of increased memory usage for
caching not-needed statistics data. Conversely, if it's known that
statistics are only accessed once, caching accessed statistics is
unnecessary and can be avoided by setting
<varname>stats_fetch_consistency</varname> to <literal>none</literal>.
You can invoke <function>pg_stat_clear_snapshot()</function> to discard the
current transaction's statistics snapshot or cached values (if any). The
next use of statistical information will (when in snapshot mode) cause a
new snapshot to be built or (when in cache mode) accessed statistics to be
cached.
</para>
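<para>
As an illustration, a transaction can switch to snapshot mode and later
discard the snapshot so that the next statistics access rebuilds it:
<programlisting>
BEGIN;
SET stats_fetch_consistency = snapshot;
SELECT numbackends, xact_commit FROM pg_stat_database
WHERE datname = current_database();
-- discard the snapshot; the next statistics access builds a new one
SELECT pg_stat_clear_snapshot();
SELECT numbackends, xact_commit FROM pg_stat_database
WHERE datname = current_database();
COMMIT;
</programlisting>
</para>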
<para>
A transaction can also see its own statistics (not yet flushed out to the
shared memory statistics) in the views
<structname>pg_stat_xact_all_tables</structname>,
<structname>pg_stat_xact_sys_tables</structname>,
<structname>pg_stat_xact_user_tables</structname>, and
<structname>pg_stat_xact_user_functions</structname>. These numbers do not act as
stated above; instead they update continuously throughout the transaction.
</para>
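<para>
For example, the transaction-local counters can be inspected in the middle
of a transaction to see the effect of statements issued so far (the table
<literal>mytab</literal> here is hypothetical):
<programlisting>
BEGIN;
INSERT INTO mytab SELECT generate_series(1, 1000);
SELECT relname, n_tup_ins FROM pg_stat_xact_user_tables
WHERE relname = 'mytab';
COMMIT;
</programlisting>
</para>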
<para>
Some of the information in the dynamic statistics views shown in <xref
linkend="monitoring-stats-dynamic-views-table"/> is security restricted.
Ordinary users can only see all the information about their own sessions
(sessions belonging to a role that they are a member of). In rows about
other sessions, many columns will be null. Note, however, that the
existence of a session and its general properties such as its session user
and database are visible to all users. Superusers and roles with privileges of
built-in role <literal>pg_read_all_stats</literal> (see also <xref
linkend="predefined-roles"/>) can see all the information about all sessions.
</para>
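<para>
For example, a dedicated monitoring role (the role name
<literal>monitor</literal> here is only an example) can be given full
visibility into these views:
<programlisting>
GRANT pg_read_all_stats TO monitor;
</programlisting>
</para>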
<table id="monitoring-stats-dynamic-views-table">
<title>Dynamic Statistics Views</title>
<tgroup cols="2">
<thead>
<row>
<entry>View Name</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<structname>pg_stat_activity</structname>
<indexterm><primary>pg_stat_activity</primary></indexterm>
</entry>
<entry>
One row per server process, showing information related to
the current activity of that process, such as state and current query.
See <link linkend="monitoring-pg-stat-activity-view">
<structname>pg_stat_activity</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_replication</structname><indexterm><primary>pg_stat_replication</primary></indexterm></entry>
<entry>One row per WAL sender process, showing statistics about
replication to that sender's connected standby server.
See <link linkend="monitoring-pg-stat-replication-view">
<structname>pg_stat_replication</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_wal_receiver</structname><indexterm><primary>pg_stat_wal_receiver</primary></indexterm></entry>
<entry>Only one row, showing statistics about the WAL receiver from
that receiver's connected server.
See <link linkend="monitoring-pg-stat-wal-receiver-view">
<structname>pg_stat_wal_receiver</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_recovery_prefetch</structname><indexterm><primary>pg_stat_recovery_prefetch</primary></indexterm></entry>
<entry>Only one row, showing statistics about blocks prefetched during recovery.
See <link linkend="monitoring-pg-stat-recovery-prefetch">
<structname>pg_stat_recovery_prefetch</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_subscription</structname><indexterm><primary>pg_stat_subscription</primary></indexterm></entry>
<entry>At least one row per subscription, showing information about
the subscription workers.
See <link linkend="monitoring-pg-stat-subscription">
<structname>pg_stat_subscription</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_ssl</structname><indexterm><primary>pg_stat_ssl</primary></indexterm></entry>
<entry>One row per connection (regular and replication), showing information about
SSL used on this connection.
See <link linkend="monitoring-pg-stat-ssl-view">
<structname>pg_stat_ssl</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_gssapi</structname><indexterm><primary>pg_stat_gssapi</primary></indexterm></entry>
<entry>One row per connection (regular and replication), showing information about
GSSAPI authentication and encryption used on this connection.
See <link linkend="monitoring-pg-stat-gssapi-view">
<structname>pg_stat_gssapi</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_progress_analyze</structname><indexterm><primary>pg_stat_progress_analyze</primary></indexterm></entry>
<entry>One row for each backend (including autovacuum worker processes) running
<command>ANALYZE</command>, showing current progress.
See <xref linkend="analyze-progress-reporting"/>.
</entry>
</row>
<row>
<entry><structname>pg_stat_progress_create_index</structname><indexterm><primary>pg_stat_progress_create_index</primary></indexterm></entry>
<entry>One row for each backend running <command>CREATE INDEX</command> or <command>REINDEX</command>, showing
current progress.
See <xref linkend="create-index-progress-reporting"/>.
</entry>
</row>
<row>
<entry><structname>pg_stat_progress_vacuum</structname><indexterm><primary>pg_stat_progress_vacuum</primary></indexterm></entry>
<entry>One row for each backend (including autovacuum worker processes) running
<command>VACUUM</command>, showing current progress.
See <xref linkend="vacuum-progress-reporting"/>.
</entry>
</row>
<row>
<entry><structname>pg_stat_progress_cluster</structname><indexterm><primary>pg_stat_progress_cluster</primary></indexterm></entry>
<entry>One row for each backend running
<command>CLUSTER</command> or <command>VACUUM FULL</command>, showing current progress.
See <xref linkend="cluster-progress-reporting"/>.
</entry>
</row>
<row>
<entry><structname>pg_stat_progress_basebackup</structname><indexterm><primary>pg_stat_progress_basebackup</primary></indexterm></entry>
<entry>One row for each WAL sender process streaming a base backup,
showing current progress.
See <xref linkend="basebackup-progress-reporting"/>.
</entry>
</row>
<row>
<entry><structname>pg_stat_progress_copy</structname><indexterm><primary>pg_stat_progress_copy</primary></indexterm></entry>
<entry>One row for each backend running <command>COPY</command>, showing current progress.
See <xref linkend="copy-progress-reporting"/>.
</entry>
</row>
</tbody>
</tgroup>
</table>
<table id="monitoring-stats-views-table">
<title>Collected Statistics Views</title>
<tgroup cols="2">
<thead>
<row>
<entry>View Name</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<!-- everything related to global objects, alphabetically -->
<row>
<entry><structname>pg_stat_archiver</structname><indexterm><primary>pg_stat_archiver</primary></indexterm></entry>
<entry>One row only, showing statistics about the
WAL archiver process's activity. See
<link linkend="monitoring-pg-stat-archiver-view">
<structname>pg_stat_archiver</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_bgwriter</structname><indexterm><primary>pg_stat_bgwriter</primary></indexterm></entry>
<entry>One row only, showing statistics about the
background writer process's activity. See
<link linkend="monitoring-pg-stat-bgwriter-view">
<structname>pg_stat_bgwriter</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_checkpointer</structname><indexterm><primary>pg_stat_checkpointer</primary></indexterm></entry>
<entry>One row only, showing statistics about the
checkpointer process's activity. See
<link linkend="monitoring-pg-stat-checkpointer-view">
<structname>pg_stat_checkpointer</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_database</structname><indexterm><primary>pg_stat_database</primary></indexterm></entry>
<entry>One row per database, showing database-wide statistics. See
<link linkend="monitoring-pg-stat-database-view">
<structname>pg_stat_database</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_database_conflicts</structname><indexterm><primary>pg_stat_database_conflicts</primary></indexterm></entry>
<entry>
One row per database, showing database-wide statistics about
query cancels due to conflict with recovery on standby servers.
See <link linkend="monitoring-pg-stat-database-conflicts-view">
<structname>pg_stat_database_conflicts</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_io</structname><indexterm><primary>pg_stat_io</primary></indexterm></entry>
<entry>
One row for each combination of backend type, context, and target object
containing cluster-wide I/O statistics.
See <link linkend="monitoring-pg-stat-io-view">
<structname>pg_stat_io</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_replication_slots</structname><indexterm><primary>pg_stat_replication_slots</primary></indexterm></entry>
<entry>One row per replication slot, showing statistics about the
replication slot's usage. See
<link linkend="monitoring-pg-stat-replication-slots-view">
<structname>pg_stat_replication_slots</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_slru</structname><indexterm><primary>pg_stat_slru</primary></indexterm></entry>
<entry>One row per SLRU, showing statistics of operations. See
<link linkend="monitoring-pg-stat-slru-view">
<structname>pg_stat_slru</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_subscription_stats</structname><indexterm><primary>pg_stat_subscription_stats</primary></indexterm></entry>
<entry>One row per subscription, showing statistics about errors.
See <link linkend="monitoring-pg-stat-subscription-stats">
<structname>pg_stat_subscription_stats</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_wal</structname><indexterm><primary>pg_stat_wal</primary></indexterm></entry>
<entry>One row only, showing statistics about WAL activity. See
<link linkend="monitoring-pg-stat-wal-view">
<structname>pg_stat_wal</structname></link> for details.
</entry>
</row>
<!-- all "stat" for schema objects, by "importance" -->
<row>
<entry><structname>pg_stat_all_tables</structname><indexterm><primary>pg_stat_all_tables</primary></indexterm></entry>
<entry>
One row for each table in the current database, showing statistics
about accesses to that specific table.
See <link linkend="monitoring-pg-stat-all-tables-view">
<structname>pg_stat_all_tables</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_sys_tables</structname><indexterm><primary>pg_stat_sys_tables</primary></indexterm></entry>
<entry>Same as <structname>pg_stat_all_tables</structname>, except that only
system tables are shown.</entry>
</row>
<row>
<entry><structname>pg_stat_user_tables</structname><indexterm><primary>pg_stat_user_tables</primary></indexterm></entry>
<entry>Same as <structname>pg_stat_all_tables</structname>, except that only user
tables are shown.</entry>
</row>
<row>
<entry><structname>pg_stat_xact_all_tables</structname><indexterm><primary>pg_stat_xact_all_tables</primary></indexterm></entry>
<entry>Similar to <structname>pg_stat_all_tables</structname>, but counts actions
taken so far within the current transaction (which are <emphasis>not</emphasis>
yet included in <structname>pg_stat_all_tables</structname> and related views).
The columns for numbers of live and dead rows and vacuum and
analyze actions are not present in this view.</entry>
</row>
<row>
<entry><structname>pg_stat_xact_sys_tables</structname><indexterm><primary>pg_stat_xact_sys_tables</primary></indexterm></entry>
<entry>Same as <structname>pg_stat_xact_all_tables</structname>, except that only
system tables are shown.</entry>
</row>
<row>
<entry><structname>pg_stat_xact_user_tables</structname><indexterm><primary>pg_stat_xact_user_tables</primary></indexterm></entry>
<entry>Same as <structname>pg_stat_xact_all_tables</structname>, except that only
user tables are shown.</entry>
</row>
<row>
<entry><structname>pg_stat_all_indexes</structname><indexterm><primary>pg_stat_all_indexes</primary></indexterm></entry>
<entry>
One row for each index in the current database, showing statistics
about accesses to that specific index.
See <link linkend="monitoring-pg-stat-all-indexes-view">
<structname>pg_stat_all_indexes</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_sys_indexes</structname><indexterm><primary>pg_stat_sys_indexes</primary></indexterm></entry>
<entry>Same as <structname>pg_stat_all_indexes</structname>, except that only
indexes on system tables are shown.</entry>
</row>
<row>
<entry><structname>pg_stat_user_indexes</structname><indexterm><primary>pg_stat_user_indexes</primary></indexterm></entry>
<entry>Same as <structname>pg_stat_all_indexes</structname>, except that only
indexes on user tables are shown.</entry>
</row>
<row>
<entry><structname>pg_stat_user_functions</structname><indexterm><primary>pg_stat_user_functions</primary></indexterm></entry>
<entry>
One row for each tracked function, showing statistics
about executions of that function. See
<link linkend="monitoring-pg-stat-user-functions-view">
<structname>pg_stat_user_functions</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_stat_xact_user_functions</structname><indexterm><primary>pg_stat_xact_user_functions</primary></indexterm></entry>
<entry>Similar to <structname>pg_stat_user_functions</structname>, but counts only
calls during the current transaction (which are <emphasis>not</emphasis>
yet included in <structname>pg_stat_user_functions</structname>).</entry>
</row>
<!-- all "statio" for schema objects, by "importance" -->
<row>
<entry><structname>pg_statio_all_tables</structname><indexterm><primary>pg_statio_all_tables</primary></indexterm></entry>
<entry>
One row for each table in the current database, showing statistics
about I/O on that specific table.
See <link linkend="monitoring-pg-statio-all-tables-view">
<structname>pg_statio_all_tables</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_statio_sys_tables</structname><indexterm><primary>pg_statio_sys_tables</primary></indexterm></entry>
<entry>Same as <structname>pg_statio_all_tables</structname>, except that only
system tables are shown.</entry>
</row>
<row>
<entry><structname>pg_statio_user_tables</structname><indexterm><primary>pg_statio_user_tables</primary></indexterm></entry>
<entry>Same as <structname>pg_statio_all_tables</structname>, except that only
user tables are shown.</entry>
</row>
<row>
<entry><structname>pg_statio_all_indexes</structname><indexterm><primary>pg_statio_all_indexes</primary></indexterm></entry>
<entry>
One row for each index in the current database,
showing statistics about I/O on that specific index.
See <link linkend="monitoring-pg-statio-all-indexes-view">
<structname>pg_statio_all_indexes</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_statio_sys_indexes</structname><indexterm><primary>pg_statio_sys_indexes</primary></indexterm></entry>
<entry>Same as <structname>pg_statio_all_indexes</structname>, except that only
indexes on system tables are shown.</entry>
</row>
<row>
<entry><structname>pg_statio_user_indexes</structname><indexterm><primary>pg_statio_user_indexes</primary></indexterm></entry>
<entry>Same as <structname>pg_statio_all_indexes</structname>, except that only
indexes on user tables are shown.</entry>
</row>
<row>
<entry><structname>pg_statio_all_sequences</structname><indexterm><primary>pg_statio_all_sequences</primary></indexterm></entry>
<entry>
One row for each sequence in the current database,
showing statistics about I/O on that specific sequence.
See <link linkend="monitoring-pg-statio-all-sequences-view">
<structname>pg_statio_all_sequences</structname></link> for details.
</entry>
</row>
<row>
<entry><structname>pg_statio_sys_sequences</structname><indexterm><primary>pg_statio_sys_sequences</primary></indexterm></entry>
<entry>Same as <structname>pg_statio_all_sequences</structname>, except that only
system sequences are shown. (Presently, no system sequences are defined,
so this view is always empty.)</entry>
</row>
<row>
<entry><structname>pg_statio_user_sequences</structname><indexterm><primary>pg_statio_user_sequences</primary></indexterm></entry>
<entry>Same as <structname>pg_statio_all_sequences</structname>, except that only
user sequences are shown.</entry>
</row>
</tbody>
</tgroup>
</table>
<para>
The per-index statistics are particularly useful to determine which
indexes are being used and how effective they are.
</para>
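<para>
For instance, one possible query to list user indexes that have never been
scanned since statistics were last reset, which are often candidates for
closer inspection or removal, is:
<programlisting>
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;
</programlisting>
</para>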
<para>
The <structname>pg_stat_io</structname> and
<structname>pg_statio_</structname> set of views are useful for determining
the effectiveness of the buffer cache. They can be used to calculate a cache
hit ratio. Note that while <productname>PostgreSQL</productname>'s I/O
statistics capture most instances in which the kernel was invoked in order
to perform I/O, they do not differentiate between data which had to be
fetched from disk and that which already resided in the kernel page cache.
Users are advised to use the <productname>PostgreSQL</productname>
statistics views in combination with operating system utilities for a more
complete picture of their database's I/O performance.
</para>
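<para>
As a rough example, a heap buffer cache hit ratio for user tables can be
computed from <structname>pg_statio_user_tables</structname>, subject to
the kernel page cache caveat above:
<programlisting>
SELECT sum(heap_blks_hit) /
nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS heap_hit_ratio
FROM pg_statio_user_tables;
</programlisting>
</para>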
</sect2>
<sect2 id="monitoring-pg-stat-activity-view">
<title><structname>pg_stat_activity</structname></title>
<indexterm>
<primary>pg_stat_activity</primary>
</indexterm>
<para>
The <structname>pg_stat_activity</structname> view will have one row
per server process, showing information related to
the current activity of that process.
</para>
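<para>
For example, one way to list queries that have been active for more than
five minutes is:
<programlisting>
SELECT pid, usename, now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE state = 'active'
AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;
</programlisting>
</para>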
<table id="pg-stat-activity-view" xreflabel="pg_stat_activity">
<title><structname>pg_stat_activity</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datid</structfield> <type>oid</type>
</para>
<para>
OID of the database this backend is connected to
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datname</structfield> <type>name</type>
</para>
<para>
Name of the database this backend is connected to
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of this backend
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>leader_pid</structfield> <type>integer</type>
</para>
<para>
Process ID of the parallel group leader if this process is a parallel
query worker, or process ID of the leader apply worker if this process
is a parallel apply worker. <literal>NULL</literal> indicates that this
process is a parallel group leader or leader apply worker, or does not
participate in any parallel operation.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>usesysid</structfield> <type>oid</type>
</para>
<para>
OID of the user logged into this backend
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>usename</structfield> <type>name</type>
</para>
<para>
Name of the user logged into this backend
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>application_name</structfield> <type>text</type>
</para>
<para>
Name of the application that is connected
to this backend
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>client_addr</structfield> <type>inet</type>
</para>
<para>
IP address of the client connected to this backend.
If this field is null, it indicates either that the client is
connected via a Unix socket on the server machine or that this is an
internal process such as autovacuum.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>client_hostname</structfield> <type>text</type>
</para>
<para>
Host name of the connected client, as reported by a
reverse DNS lookup of <structfield>client_addr</structfield>. This field will
only be non-null for IP connections, and only when <xref linkend="guc-log-hostname"/> is enabled.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>client_port</structfield> <type>integer</type>
</para>
<para>
TCP port number that the client is using for communication
with this backend, or <literal>-1</literal> if a Unix socket is used.
If this field is null, it indicates that this is an internal server process.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>backend_start</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time when this process was started. For client backends,
this is the time the client connected to the server.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>xact_start</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time when this process' current transaction was started, or null
if no transaction is active. If the current
query is the first of its transaction, this column is equal to the
<structfield>query_start</structfield> column.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>query_start</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time when the currently active query was started, or if
<structfield>state</structfield> is not <literal>active</literal>, when the last query
was started
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>state_change</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time when the <structfield>state</structfield> was last changed
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wait_event_type</structfield> <type>text</type>
</para>
<para>
The type of event for which the backend is waiting, if any;
otherwise NULL. See <xref linkend="wait-event-table"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wait_event</structfield> <type>text</type>
</para>
<para>
Wait event name if backend is currently waiting, otherwise NULL.
See <xref linkend="wait-event-activity-table"/> through
<xref linkend="wait-event-timeout-table"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>state</structfield> <type>text</type>
</para>
<para>
Current overall state of this backend.
Possible values are:
<itemizedlist>
<listitem>
<para>
<literal>active</literal>: The backend is executing a query.
</para>
</listitem>
<listitem>
<para>
<literal>idle</literal>: The backend is waiting for a new client command.
</para>
</listitem>
<listitem>
<para>
<literal>idle in transaction</literal>: The backend is in a transaction,
but is not currently executing a query.
</para>
</listitem>
<listitem>
<para>
<literal>idle in transaction (aborted)</literal>: This state is similar to
<literal>idle in transaction</literal>, except one of the statements in
the transaction caused an error.
</para>
</listitem>
<listitem>
<para>
<literal>fastpath function call</literal>: The backend is executing a
fast-path function.
</para>
</listitem>
<listitem>
<para>
<literal>disabled</literal>: This state is reported if <xref linkend="guc-track-activities"/> is disabled in this backend.
</para>
</listitem>
</itemizedlist>
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>backend_xid</structfield> <type>xid</type>
</para>
<para>
Top-level transaction identifier of this backend, if any; see
<xref linkend="transaction-id"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>backend_xmin</structfield> <type>xid</type>
</para>
<para>
The current backend's <literal>xmin</literal> horizon.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>query_id</structfield> <type>bigint</type>
</para>
<para>
Identifier of this backend's most recent query. If
<structfield>state</structfield> is <literal>active</literal> this
field shows the identifier of the currently executing query. In
all other states, it shows the identifier of last query that was
executed. Query identifiers are not computed by default so this
field will be null unless <xref linkend="guc-compute-query-id"/>
parameter is enabled or a third-party module that computes query
identifiers is configured.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>query</structfield> <type>text</type>
</para>
<para>
Text of this backend's most recent query. If
<structfield>state</structfield> is <literal>active</literal> this field shows the
currently executing query. In all other states, it shows the last query
that was executed. By default the query text is truncated at 1024
bytes; this value can be changed via the parameter
<xref linkend="guc-track-activity-query-size"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>backend_type</structfield> <type>text</type>
</para>
<para>
Type of current backend. Possible types are
<literal>autovacuum launcher</literal>, <literal>autovacuum worker</literal>,
<literal>logical replication launcher</literal>,
<literal>logical replication worker</literal>,
<literal>parallel worker</literal>, <literal>background writer</literal>,
<literal>client backend</literal>, <literal>checkpointer</literal>,
<literal>archiver</literal>, <literal>standalone backend</literal>,
<literal>startup</literal>, <literal>walreceiver</literal>,
<literal>walsender</literal>, <literal>walwriter</literal> and
<literal>walsummarizer</literal>.
In addition, background workers registered by extensions may have
additional types.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
<note>
<para>
The <structfield>wait_event</structfield> and <structfield>state</structfield> columns are
independent. If a backend is in the <literal>active</literal> state,
it may or may not be <literal>waiting</literal> on some event. If the state
is <literal>active</literal> and <structfield>wait_event</structfield> is non-null, it
means that a query is being executed, but is being blocked somewhere
in the system.
</para>
</note>
<table id="wait-event-table">
<title>Wait Event Types</title>
<tgroup cols="2">
<thead>
<row>
<entry>Wait Event Type</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>Activity</literal></entry>
<entry>The server process is idle. This event type indicates a process
waiting for activity in its main processing loop.
<literal>wait_event</literal> will identify the specific wait point;
see <xref linkend="wait-event-activity-table"/>.
</entry>
</row>
<row>
<entry><literal>BufferPin</literal></entry>
<entry>The server process is waiting for exclusive access to
a data buffer. Buffer pin waits can be protracted if
another process holds an open cursor that last read data from the
buffer in question. See <xref linkend="wait-event-bufferpin-table"/>.
</entry>
</row>
<row>
<entry><literal>Client</literal></entry>
<entry>The server process is waiting for activity on a socket
connected to a user application. Thus, the server expects something
to happen that is independent of its internal processes.
<literal>wait_event</literal> will identify the specific wait point;
see <xref linkend="wait-event-client-table"/>.
</entry>
</row>
<row>
<entry><literal>Extension</literal></entry>
<entry>The server process is waiting for some condition defined by an
extension module.
See <xref linkend="wait-event-extension-table"/>.
</entry>
</row>
<row>
<entry><literal>IO</literal></entry>
<entry>The server process is waiting for an I/O operation to complete.
<literal>wait_event</literal> will identify the specific wait point;
see <xref linkend="wait-event-io-table"/>.
</entry>
</row>
<row>
<entry><literal>IPC</literal></entry>
<entry>The server process is waiting for some interaction with
another server process. <literal>wait_event</literal> will
identify the specific wait point;
see <xref linkend="wait-event-ipc-table"/>.
</entry>
</row>
<row>
<entry><literal>Lock</literal></entry>
<entry>The server process is waiting for a heavyweight lock.
Heavyweight locks, also known as lock manager locks or simply locks,
primarily protect SQL-visible objects such as tables. However,
they are also used to ensure mutual exclusion for certain internal
operations such as relation extension. <literal>wait_event</literal>
will identify the type of lock awaited;
see <xref linkend="wait-event-lock-table"/>.
</entry>
</row>
<row>
<entry><literal>LWLock</literal></entry>
<entry> The server process is waiting for a lightweight lock.
Most such locks protect a particular data structure in shared memory.
<literal>wait_event</literal> will contain a name identifying the purpose
of the lightweight lock. (Some locks have specific names; others
are part of a group of locks each with a similar purpose.)
See <xref linkend="wait-event-lwlock-table"/>.
</entry>
</row>
<row>
<entry><literal>Timeout</literal></entry>
<entry>The server process is waiting for a timeout
to expire. <literal>wait_event</literal> will identify the specific wait
point; see <xref linkend="wait-event-timeout-table"/>.
</entry>
</row>
</tbody>
</tgroup>
</table>
&wait_event_types;
<para>
Here are examples of how wait events can be viewed:
<programlisting>
SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event is NOT NULL;
pid | wait_event_type | wait_event
------+-----------------+------------
2540 | Lock | relation
6644 | LWLock | ProcArray
(2 rows)
</programlisting>
<programlisting>
SELECT a.pid, a.wait_event, w.description
FROM pg_stat_activity a JOIN
pg_wait_events w ON (a.wait_event_type = w.type AND
a.wait_event = w.name)
WHERE a.wait_event is NOT NULL and a.state = 'active';
-[ RECORD 1 ]------------------------------------------------------&zwsp;------------
pid | 686674
wait_event | WALInitSync
description | Waiting for a newly initialized WAL file to reach durable storage
</programlisting>
</para>
<note>
<para>
Extensions can add <literal>Extension</literal> and
<literal>LWLock</literal> events
to the lists shown in <xref linkend="wait-event-extension-table"/> and
<xref linkend="wait-event-lwlock-table"/>. In some cases, the name
of an <literal>LWLock</literal> assigned by an extension will not be
available in all server processes. It might be reported as just
<quote><literal>extension</literal></quote> rather than the
extension-assigned name.
</para>
</note>
</sect2>
<sect2 id="monitoring-pg-stat-replication-view">
<title><structname>pg_stat_replication</structname></title>
<indexterm>
<primary>pg_stat_replication</primary>
</indexterm>
<para>
The <structname>pg_stat_replication</structname> view will contain one row
per WAL sender process, showing statistics about replication to that
sender's connected standby server. Only directly connected standbys are
listed; no information is available about downstream standby servers.
</para>
<table id="pg-stat-replication-view" xreflabel="pg_stat_replication">
<title><structname>pg_stat_replication</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of a WAL sender process
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>usesysid</structfield> <type>oid</type>
</para>
<para>
OID of the user logged into this WAL sender process
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>usename</structfield> <type>name</type>
</para>
<para>
Name of the user logged into this WAL sender process
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>application_name</structfield> <type>text</type>
</para>
<para>
Name of the application that is connected
to this WAL sender
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>client_addr</structfield> <type>inet</type>
</para>
<para>
IP address of the client connected to this WAL sender.
If this field is null, it indicates that the client is
connected via a Unix socket on the server machine.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>client_hostname</structfield> <type>text</type>
</para>
<para>
Host name of the connected client, as reported by a
reverse DNS lookup of <structfield>client_addr</structfield>. This field will
only be non-null for IP connections, and only when <xref linkend="guc-log-hostname"/> is enabled.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>client_port</structfield> <type>integer</type>
</para>
<para>
TCP port number that the client is using for communication
with this WAL sender, or <literal>-1</literal> if a Unix socket is used
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>backend_start</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time when this process was started, i.e., when the
client connected to this WAL sender
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>backend_xmin</structfield> <type>xid</type>
</para>
<para>
This standby's <literal>xmin</literal> horizon reported
by <xref linkend="guc-hot-standby-feedback"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>state</structfield> <type>text</type>
</para>
<para>
Current WAL sender state.
Possible values are:
<itemizedlist>
<listitem>
<para>
<literal>startup</literal>: This WAL sender is starting up.
</para>
</listitem>
<listitem>
<para>
<literal>catchup</literal>: This WAL sender's connected standby is
catching up with the primary.
</para>
</listitem>
<listitem>
<para>
<literal>streaming</literal>: This WAL sender is streaming changes
after its connected standby server has caught up with the primary.
</para>
</listitem>
<listitem>
<para>
<literal>backup</literal>: This WAL sender is sending a backup.
</para>
</listitem>
<listitem>
<para>
<literal>stopping</literal>: This WAL sender is stopping.
</para>
</listitem>
</itemizedlist>
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sent_lsn</structfield> <type>pg_lsn</type>
</para>
<para>
Last write-ahead log location sent on this connection
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>write_lsn</structfield> <type>pg_lsn</type>
</para>
<para>
Last write-ahead log location written to disk by this standby
server
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>flush_lsn</structfield> <type>pg_lsn</type>
</para>
<para>
Last write-ahead log location flushed to disk by this standby
server
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>replay_lsn</structfield> <type>pg_lsn</type>
</para>
<para>
Last write-ahead log location replayed into the database on this
standby server
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>write_lag</structfield> <type>interval</type>
</para>
<para>
Time elapsed between flushing recent WAL locally and receiving
notification that this standby server has written it (but not yet
flushed it or applied it). This can be used to gauge the delay that
<literal>synchronous_commit</literal> level
<literal>remote_write</literal> incurred while committing if this
server was configured as a synchronous standby.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>flush_lag</structfield> <type>interval</type>
</para>
<para>
Time elapsed between flushing recent WAL locally and receiving
notification that this standby server has written and flushed it
(but not yet applied it). This can be used to gauge the delay that
<literal>synchronous_commit</literal> level
<literal>on</literal> incurred while committing if this
server was configured as a synchronous standby.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>replay_lag</structfield> <type>interval</type>
</para>
<para>
Time elapsed between flushing recent WAL locally and receiving
notification that this standby server has written, flushed and
applied it. This can be used to gauge the delay that
<literal>synchronous_commit</literal> level
<literal>remote_apply</literal> incurred while committing if this
server was configured as a synchronous standby.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sync_priority</structfield> <type>integer</type>
</para>
<para>
       Priority of this standby server for being chosen as the
       synchronous standby in priority-based synchronous replication.
       This has no effect in quorum-based synchronous replication.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sync_state</structfield> <type>text</type>
</para>
<para>
Synchronous state of this standby server.
Possible values are:
<itemizedlist>
<listitem>
<para>
<literal>async</literal>: This standby server is asynchronous.
</para>
</listitem>
<listitem>
<para>
          <literal>potential</literal>: This standby server is currently
          asynchronous, but can become synchronous if one of the current
          synchronous standbys fails.
</para>
</listitem>
<listitem>
<para>
<literal>sync</literal>: This standby server is synchronous.
</para>
</listitem>
<listitem>
<para>
          <literal>quorum</literal>: This standby server is considered a
          candidate for quorum standbys.
</para>
</listitem>
</itemizedlist>
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>reply_time</structfield> <type>timestamp with time zone</type>
</para>
<para>
Send time of last reply message received from standby server
</para></entry>
</row>
</tbody>
</tgroup>
</table>
<para>
The lag times reported in the <structname>pg_stat_replication</structname>
view are measurements of the time taken for recent WAL to be written,
flushed and replayed and for the sender to know about it. These times
represent the commit delay that was (or would have been) introduced by each
synchronous commit level, if the remote server was configured as a
synchronous standby. For an asynchronous standby, the
<structfield>replay_lag</structfield> column approximates the delay
before recent transactions became visible to queries. If the standby
server has entirely caught up with the sending server and there is no more
WAL activity, the most recently measured lag times will continue to be
displayed for a short time and then show NULL.
</para>
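  <para>
   For example, a query such as the following (an illustrative sketch that
   assumes it is run on the primary, where
   <function>pg_current_wal_lsn()</function> is available) shows how far
   each standby's replay lags behind, both in bytes and in time:
<programlisting>
SELECT application_name, state, sync_state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_bytes_behind,
       write_lag, flush_lag, replay_lag
FROM pg_stat_replication;
</programlisting>
  </para>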
<para>
Lag times work automatically for physical replication. Logical decoding
plugins may optionally emit tracking messages; if they do not, the tracking
mechanism will simply display NULL lag.
</para>
<note>
<para>
The reported lag times are not predictions of how long it will take for
the standby to catch up with the sending server assuming the current
rate of replay. Such a system would show similar times while new WAL is
being generated, but would differ when the sender becomes idle. In
particular, when the standby has caught up completely,
<structname>pg_stat_replication</structname> shows the time taken to
write, flush and replay the most recent reported WAL location rather than
zero as some users might expect. This is consistent with the goal of
measuring synchronous commit and transaction visibility delays for
recent write transactions.
To reduce confusion for users expecting a different model of lag, the
lag columns revert to NULL after a short time on a fully replayed idle
    system. Monitoring systems should choose whether to represent this
    as missing data, as zero, or to continue displaying the last known value.
</para>
</note>
</sect2>
<sect2 id="monitoring-pg-stat-replication-slots-view">
<title><structname>pg_stat_replication_slots</structname></title>
<indexterm>
<primary>pg_stat_replication_slots</primary>
</indexterm>
<para>
The <structname>pg_stat_replication_slots</structname> view will contain
one row per logical replication slot, showing statistics about its usage.
</para>
<table id="pg-stat-replication-slots-view" xreflabel="pg_stat_replication_slots">
<title><structname>pg_stat_replication_slots</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>slot_name</structfield> <type>text</type>
</para>
<para>
A unique, cluster-wide identifier for the replication slot
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>spill_txns</structfield> <type>bigint</type>
</para>
<para>
Number of transactions spilled to disk once the memory used by
logical decoding to decode changes from WAL has exceeded
<literal>logical_decoding_work_mem</literal>. The counter gets
incremented for both top-level transactions and subtransactions.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>spill_count</structfield> <type>bigint</type>
</para>
<para>
Number of times transactions were spilled to disk while decoding
changes from WAL for this slot. This counter is incremented each time
a transaction is spilled, and the same transaction may be spilled
multiple times.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>spill_bytes</structfield> <type>bigint</type>
</para>
<para>
Amount of decoded transaction data spilled to disk while performing
decoding of changes from WAL for this slot. This and other spill
counters can be used to gauge the I/O which occurred during logical
decoding and allow tuning <literal>logical_decoding_work_mem</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stream_txns</structfield> <type>bigint</type>
</para>
<para>
Number of in-progress transactions streamed to the decoding output
plugin after the memory used by logical decoding to decode changes
from WAL for this slot has exceeded
<literal>logical_decoding_work_mem</literal>. Streaming only
works with top-level transactions (subtransactions can't be streamed
independently), so the counter is not incremented for subtransactions.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
       <structfield>stream_count</structfield> <type>bigint</type>
</para>
<para>
Number of times in-progress transactions were streamed to the decoding
output plugin while decoding changes from WAL for this slot. This
counter is incremented each time a transaction is streamed, and the
same transaction may be streamed multiple times.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
       <structfield>stream_bytes</structfield> <type>bigint</type>
</para>
<para>
Amount of transaction data decoded for streaming in-progress
transactions to the decoding output plugin while decoding changes from
WAL for this slot. This and other streaming counters for this slot can
be used to tune <literal>logical_decoding_work_mem</literal>.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>total_txns</structfield> <type>bigint</type>
</para>
<para>
Number of decoded transactions sent to the decoding output plugin for
this slot. This counts top-level transactions only, and is not incremented
for subtransactions. Note that this includes the transactions that are
streamed and/or spilled.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
       <structfield>total_bytes</structfield> <type>bigint</type>
</para>
<para>
Amount of transaction data decoded for sending transactions to the
decoding output plugin while decoding changes from WAL for this slot.
Note that this includes data that is streamed and/or spilled.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which these statistics were last reset
</para></entry>
</row>
</tbody>
</tgroup>
</table>
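  <para>
   As an example, the following query (illustrative only) lists slots by the
   amount of decoded data they have had to spill to disk, which can suggest
   that <literal>logical_decoding_work_mem</literal> is too small for the
   workload:
<programlisting>
SELECT slot_name,
       spill_txns, pg_size_pretty(spill_bytes) AS spilled,
       stream_txns, pg_size_pretty(stream_bytes) AS streamed,
       total_txns, pg_size_pretty(total_bytes) AS total
FROM pg_stat_replication_slots
ORDER BY spill_bytes DESC;
</programlisting>
  </para>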
</sect2>
<sect2 id="monitoring-pg-stat-wal-receiver-view">
<title><structname>pg_stat_wal_receiver</structname></title>
<indexterm>
<primary>pg_stat_wal_receiver</primary>
</indexterm>
<para>
The <structname>pg_stat_wal_receiver</structname> view will contain only
one row, showing statistics about the WAL receiver from that receiver's
connected server.
</para>
<table id="pg-stat-wal-receiver-view" xreflabel="pg_stat_wal_receiver">
<title><structname>pg_stat_wal_receiver</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of the WAL receiver process
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>status</structfield> <type>text</type>
</para>
<para>
Activity status of the WAL receiver process
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>receive_start_lsn</structfield> <type>pg_lsn</type>
</para>
<para>
First write-ahead log location used when WAL receiver is
started
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>receive_start_tli</structfield> <type>integer</type>
</para>
<para>
First timeline number used when WAL receiver is started
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>written_lsn</structfield> <type>pg_lsn</type>
</para>
<para>
Last write-ahead log location already received and written to disk,
but not flushed. This should not be used for data integrity checks.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>flushed_lsn</structfield> <type>pg_lsn</type>
</para>
<para>
Last write-ahead log location already received and flushed to
disk, the initial value of this field being the first log location used
when WAL receiver is started
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>received_tli</structfield> <type>integer</type>
</para>
<para>
Timeline number of last write-ahead log location received and
flushed to disk, the initial value of this field being the timeline
number of the first log location used when WAL receiver is started
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_msg_send_time</structfield> <type>timestamp with time zone</type>
</para>
<para>
Send time of last message received from origin WAL sender
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_msg_receipt_time</structfield> <type>timestamp with time zone</type>
</para>
<para>
Receipt time of last message received from origin WAL sender
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>latest_end_lsn</structfield> <type>pg_lsn</type>
</para>
<para>
Last write-ahead log location reported to origin WAL sender
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>latest_end_time</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time of last write-ahead log location reported to origin WAL sender
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>slot_name</structfield> <type>text</type>
</para>
<para>
Replication slot name used by this WAL receiver
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sender_host</structfield> <type>text</type>
</para>
<para>
Host of the <productname>PostgreSQL</productname> instance
this WAL receiver is connected to. This can be a host name,
an IP address, or a directory path if the connection is via
Unix socket. (The path case can be distinguished because it
will always be an absolute path, beginning with <literal>/</literal>.)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sender_port</structfield> <type>integer</type>
</para>
<para>
Port number of the <productname>PostgreSQL</productname> instance
this WAL receiver is connected to.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>conninfo</structfield> <type>text</type>
</para>
<para>
Connection string used by this WAL receiver,
with security-sensitive fields obfuscated.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
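  <para>
   On a standby server, a quick way to check the state of the WAL receiver
   is a query such as the following (shown as an example only; the view is
   empty when no WAL receiver is running):
<programlisting>
SELECT status, sender_host, sender_port, slot_name,
       flushed_lsn, received_tli,
       now() - last_msg_receipt_time AS last_msg_age
FROM pg_stat_wal_receiver;
</programlisting>
  </para>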
</sect2>
<sect2 id="monitoring-pg-stat-recovery-prefetch">
<title><structname>pg_stat_recovery_prefetch</structname></title>
<indexterm>
<primary>pg_stat_recovery_prefetch</primary>
</indexterm>
<para>
The <structname>pg_stat_recovery_prefetch</structname> view will contain
only one row. The columns <structfield>wal_distance</structfield>,
<structfield>block_distance</structfield> and
<structfield>io_depth</structfield> show current values, and the
other columns show cumulative counters that can be reset
with the <function>pg_stat_reset_shared</function> function.
</para>
<table id="pg-stat-recovery-prefetch-view" xreflabel="pg_stat_recovery_prefetch">
<title><structname>pg_stat_recovery_prefetch</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which these statistics were last reset
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>prefetch</structfield> <type>bigint</type>
</para>
<para>
Number of blocks prefetched because they were not in the buffer pool
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>hit</structfield> <type>bigint</type>
</para>
<para>
Number of blocks not prefetched because they were already in the buffer pool
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>skip_init</structfield> <type>bigint</type>
</para>
<para>
Number of blocks not prefetched because they would be zero-initialized
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>skip_new</structfield> <type>bigint</type>
</para>
<para>
Number of blocks not prefetched because they didn't exist yet
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>skip_fpw</structfield> <type>bigint</type>
</para>
<para>
Number of blocks not prefetched because a full page image was included in the WAL
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>skip_rep</structfield> <type>bigint</type>
</para>
<para>
Number of blocks not prefetched because they were already recently prefetched
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>wal_distance</structfield> <type>int</type>
</para>
<para>
How many bytes ahead the prefetcher is looking
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>block_distance</structfield> <type>int</type>
</para>
<para>
How many blocks ahead the prefetcher is looking
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>io_depth</structfield> <type>int</type>
</para>
<para>
How many prefetches have been initiated but are not yet known to have completed
</para>
</entry>
</row>
</tbody>
</tgroup>
</table>
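  <para>
   For example, the following query (an illustrative sketch) shows the share
   of referenced blocks that actually had to be prefetched, along with the
   current look-ahead distances; the cumulative counters can be reset with
   <literal>SELECT pg_stat_reset_shared('recovery_prefetch')</literal>:
<programlisting>
SELECT prefetch, hit,
       round(prefetch * 100.0 / nullif(prefetch + hit, 0), 1) AS prefetch_pct,
       wal_distance, block_distance, io_depth
FROM pg_stat_recovery_prefetch;
</programlisting>
  </para>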
</sect2>
<sect2 id="monitoring-pg-stat-subscription">
<title><structname>pg_stat_subscription</structname></title>
<indexterm>
<primary>pg_stat_subscription</primary>
</indexterm>
<table id="pg-stat-subscription" xreflabel="pg_stat_subscription">
<title><structname>pg_stat_subscription</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>subid</structfield> <type>oid</type>
</para>
<para>
OID of the subscription
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>subname</structfield> <type>name</type>
</para>
<para>
Name of the subscription
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>worker_type</structfield> <type>text</type>
</para>
<para>
Type of the subscription worker process. Possible types are
<literal>apply</literal>, <literal>parallel apply</literal>, and
<literal>table synchronization</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of the subscription worker process
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>leader_pid</structfield> <type>integer</type>
</para>
<para>
Process ID of the leader apply worker if this process is a parallel
apply worker; NULL if this process is a leader apply worker or a table
synchronization worker
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of the relation that the worker is synchronizing; NULL for the
leader apply worker and parallel apply workers
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>received_lsn</structfield> <type>pg_lsn</type>
</para>
<para>
Last write-ahead log location received, the initial value of
this field being 0; NULL for parallel apply workers
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_msg_send_time</structfield> <type>timestamp with time zone</type>
</para>
<para>
Send time of last message received from origin WAL sender; NULL for
parallel apply workers
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_msg_receipt_time</structfield> <type>timestamp with time zone</type>
</para>
<para>
Receipt time of last message received from origin WAL sender; NULL for
parallel apply workers
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>latest_end_lsn</structfield> <type>pg_lsn</type>
</para>
<para>
Last write-ahead log location reported to origin WAL sender; NULL for
parallel apply workers
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>latest_end_time</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time of last write-ahead log location reported to origin WAL
sender; NULL for parallel apply workers
</para></entry>
</row>
</tbody>
</tgroup>
</table>
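  <para>
   For example, a query such as the following (illustrative only) shows the
   workers of each subscription together with how long ago the last message
   was received and how much WAL has been reported back to the publisher:
<programlisting>
SELECT subname, worker_type, pid, relid,
       received_lsn, latest_end_lsn,
       now() - last_msg_receipt_time AS last_msg_age
FROM pg_stat_subscription;
</programlisting>
  </para>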
</sect2>
<sect2 id="monitoring-pg-stat-subscription-stats">
<title><structname>pg_stat_subscription_stats</structname></title>
<indexterm>
<primary>pg_stat_subscription_stats</primary>
</indexterm>
<para>
The <structname>pg_stat_subscription_stats</structname> view will contain
one row per subscription.
</para>
<table id="pg-stat-subscription-stats" xreflabel="pg_stat_subscription_stats">
<title><structname>pg_stat_subscription_stats</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>subid</structfield> <type>oid</type>
</para>
<para>
OID of the subscription
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>subname</structfield> <type>name</type>
</para>
<para>
Name of the subscription
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>apply_error_count</structfield> <type>bigint</type>
</para>
<para>
Number of times an error occurred while applying changes
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sync_error_count</structfield> <type>bigint</type>
</para>
<para>
Number of times an error occurred during the initial table
synchronization
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which these statistics were last reset
</para></entry>
</row>
</tbody>
</tgroup>
</table>
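  <para>
   For example, the following query (an illustrative sketch) lists only the
   subscriptions that have encountered errors since their statistics were
   last reset:
<programlisting>
SELECT subname, apply_error_count, sync_error_count, stats_reset
FROM pg_stat_subscription_stats
WHERE apply_error_count > 0 OR sync_error_count > 0;
</programlisting>
  </para>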
</sect2>
<sect2 id="monitoring-pg-stat-ssl-view">
<title><structname>pg_stat_ssl</structname></title>
<indexterm>
<primary>pg_stat_ssl</primary>
</indexterm>
<para>
The <structname>pg_stat_ssl</structname> view will contain one row per
backend or WAL sender process, showing statistics about SSL usage on
this connection. It can be joined to <structname>pg_stat_activity</structname>
or <structname>pg_stat_replication</structname> on the
<structfield>pid</structfield> column to get more details about the
connection.
</para>
<table id="pg-stat-ssl-view" xreflabel="pg_stat_ssl">
<title><structname>pg_stat_ssl</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of a backend or WAL sender process
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>ssl</structfield> <type>boolean</type>
</para>
<para>
True if SSL is used on this connection
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>version</structfield> <type>text</type>
</para>
<para>
Version of SSL in use, or NULL if SSL is not in use
on this connection
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>cipher</structfield> <type>text</type>
</para>
<para>
Name of SSL cipher in use, or NULL if SSL is not in use
on this connection
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>bits</structfield> <type>integer</type>
</para>
<para>
Number of bits in the encryption algorithm used, or NULL
if SSL is not used on this connection
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>client_dn</structfield> <type>text</type>
</para>
<para>
Distinguished Name (DN) field from the client certificate
used, or NULL if no client certificate was supplied or if SSL
is not in use on this connection. This field is truncated if the
DN field is longer than <symbol>NAMEDATALEN</symbol> (64 characters
in a standard build).
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>client_serial</structfield> <type>numeric</type>
</para>
<para>
Serial number of the client certificate, or NULL if no client
certificate was supplied or if SSL is not in use on this connection. The
combination of certificate serial number and certificate issuer uniquely
identifies a certificate (unless the issuer erroneously reuses serial
numbers).
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>issuer_dn</structfield> <type>text</type>
</para>
<para>
DN of the issuer of the client certificate, or NULL if no client
certificate was supplied or if SSL is not in use on this connection.
This field is truncated like <structfield>client_dn</structfield>.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
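  <para>
   For example, to see which client connections use SSL and with what
   protocol and cipher, the view can be joined to
   <structname>pg_stat_activity</structname> like this (illustrative only):
<programlisting>
SELECT a.pid, a.usename, a.client_addr, s.version, s.cipher, s.bits
FROM pg_stat_activity a
     JOIN pg_stat_ssl s ON s.pid = a.pid
WHERE s.ssl;
</programlisting>
  </para>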
</sect2>
<sect2 id="monitoring-pg-stat-gssapi-view">
<title><structname>pg_stat_gssapi</structname></title>
<indexterm>
<primary>pg_stat_gssapi</primary>
</indexterm>
<para>
The <structname>pg_stat_gssapi</structname> view will contain one row per
backend, showing information about GSSAPI usage on this connection. It can
be joined to <structname>pg_stat_activity</structname> or
<structname>pg_stat_replication</structname> on the
<structfield>pid</structfield> column to get more details about the
connection.
</para>
<table id="pg-stat-gssapi-view" xreflabel="pg_stat_gssapi">
<title><structname>pg_stat_gssapi</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of a backend
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>gss_authenticated</structfield> <type>boolean</type>
</para>
<para>
True if GSSAPI authentication was used for this connection
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>principal</structfield> <type>text</type>
</para>
<para>
Principal used to authenticate this connection, or NULL
if GSSAPI was not used to authenticate this connection. This
field is truncated if the principal is longer than
<symbol>NAMEDATALEN</symbol> (64 characters in a standard build).
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>encrypted</structfield> <type>boolean</type>
</para>
<para>
True if GSSAPI encryption is in use on this connection
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>credentials_delegated</structfield> <type>boolean</type>
</para>
<para>
True if GSSAPI credentials were delegated on this connection.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
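  <para>
   For example, a query such as the following (illustrative only) joins this
   view to <structname>pg_stat_activity</structname> to show which
   connections were authenticated or encrypted with GSSAPI:
<programlisting>
SELECT a.pid, a.usename, a.client_addr,
       g.gss_authenticated, g.principal, g.encrypted
FROM pg_stat_activity a
     JOIN pg_stat_gssapi g ON g.pid = a.pid
WHERE g.gss_authenticated OR g.encrypted;
</programlisting>
  </para>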
</sect2>
<sect2 id="monitoring-pg-stat-archiver-view">
<title><structname>pg_stat_archiver</structname></title>
<indexterm>
<primary>pg_stat_archiver</primary>
</indexterm>
<para>
The <structname>pg_stat_archiver</structname> view will always have a
single row, containing data about the archiver process of the cluster.
</para>
<table id="pg-stat-archiver-view" xreflabel="pg_stat_archiver">
<title><structname>pg_stat_archiver</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>archived_count</structfield> <type>bigint</type>
</para>
<para>
Number of WAL files that have been successfully archived
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_archived_wal</structfield> <type>text</type>
</para>
<para>
Name of the WAL file most recently successfully archived
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_archived_time</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time of the most recent successful archive operation
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>failed_count</structfield> <type>bigint</type>
</para>
<para>
Number of failed attempts for archiving WAL files
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_failed_wal</structfield> <type>text</type>
</para>
<para>
Name of the WAL file of the most recent failed archival operation
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_failed_time</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time of the most recent failed archival operation
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which these statistics were last reset
</para></entry>
</row>
</tbody>
</tgroup>
</table>
<para>
Normally, WAL files are archived in order, oldest to newest, but that is
not guaranteed, and does not hold under special circumstances like when
promoting a standby or after crash recovery. Therefore it is not safe to
assume that all files older than
<structfield>last_archived_wal</structfield> have also been successfully
archived.
</para>
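  <para>
   For example, a quick health check of archiving can look at the success and
   failure counters together.  The query below is only an illustrative sketch
   using the columns documented above:
<programlisting>
SELECT archived_count,
       last_archived_wal,
       last_archived_time,
       failed_count,
       last_failed_wal,
       last_failed_time
FROM pg_stat_archiver;
</programlisting>
   A growing <structfield>failed_count</structfield>, or a
   <structfield>last_failed_time</structfield> that is newer than
   <structfield>last_archived_time</structfield>, suggests that the archiving
   configuration should be investigated.
  </para>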
</sect2>
<sect2 id="monitoring-pg-stat-io-view">
<title><structname>pg_stat_io</structname></title>
<indexterm>
<primary>pg_stat_io</primary>
</indexterm>
<para>
The <structname>pg_stat_io</structname> view will contain one row for each
combination of backend type, target I/O object, and I/O context, showing
cluster-wide I/O statistics. Combinations which do not make sense are
omitted.
</para>
<para>
   Currently, I/O on relations (e.g., tables and indexes) is tracked. However,
   relation I/O that bypasses shared buffers (e.g., when moving a table from
   one tablespace to another) is not tracked.
</para>
<table id="pg-stat-io-view" xreflabel="pg_stat_io">
<title><structname>pg_stat_io</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
Column Type
</para>
<para>
Description
</para>
</entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>backend_type</structfield> <type>text</type>
</para>
<para>
Type of backend (e.g. background worker, autovacuum worker). See <link
linkend="monitoring-pg-stat-activity-view">
<structname>pg_stat_activity</structname></link> for more information
on <varname>backend_type</varname>s. Some
<varname>backend_type</varname>s do not accumulate I/O operation
statistics and will not be included in the view.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>object</structfield> <type>text</type>
</para>
<para>
Target object of an I/O operation. Possible values are:
<itemizedlist>
<listitem>
<para>
<literal>relation</literal>: Permanent relations.
</para>
</listitem>
<listitem>
<para>
<literal>temp relation</literal>: Temporary relations.
</para>
</listitem>
</itemizedlist>
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>context</structfield> <type>text</type>
</para>
<para>
The context of an I/O operation. Possible values are:
</para>
<itemizedlist>
<listitem>
<para>
<literal>normal</literal>: The default or standard
<varname>context</varname> for a type of I/O operation. For
example, by default, relation data is read into and written out from
shared buffers. Thus, reads and writes of relation data to and from
shared buffers are tracked in <varname>context</varname>
<literal>normal</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>vacuum</literal>: I/O operations performed outside of shared
buffers while vacuuming and analyzing permanent relations. Temporary
table vacuums use the same local buffer pool as other temporary table
I/O operations and are tracked in <varname>context</varname>
<literal>normal</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>bulkread</literal>: Certain large read I/O operations
done outside of shared buffers, for example, a sequential scan of a
large table.
</para>
</listitem>
<listitem>
<para>
<literal>bulkwrite</literal>: Certain large write I/O operations
done outside of shared buffers, such as <command>COPY</command>.
</para>
</listitem>
</itemizedlist>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>reads</structfield> <type>bigint</type>
</para>
<para>
Number of read operations, each of the size specified in
<varname>op_bytes</varname>.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>read_time</structfield> <type>double precision</type>
</para>
<para>
Time spent in read operations in milliseconds (if
<xref linkend="guc-track-io-timing"/> is enabled, otherwise zero)
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>writes</structfield> <type>bigint</type>
</para>
<para>
Number of write operations, each of the size specified in
<varname>op_bytes</varname>.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>write_time</structfield> <type>double precision</type>
</para>
<para>
Time spent in write operations in milliseconds (if
<xref linkend="guc-track-io-timing"/> is enabled, otherwise zero)
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>writebacks</structfield> <type>bigint</type>
</para>
<para>
Number of units of size <varname>op_bytes</varname> which the process
requested the kernel write out to permanent storage.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>writeback_time</structfield> <type>double precision</type>
</para>
<para>
Time spent in writeback operations in milliseconds (if
<xref linkend="guc-track-io-timing"/> is enabled, otherwise zero). This
includes the time spent queueing write-out requests and, potentially,
the time spent to write out the dirty data.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>extends</structfield> <type>bigint</type>
</para>
<para>
Number of relation extend operations, each of the size specified in
<varname>op_bytes</varname>.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>extend_time</structfield> <type>double precision</type>
</para>
<para>
Time spent in extend operations in milliseconds (if
<xref linkend="guc-track-io-timing"/> is enabled, otherwise zero)
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>op_bytes</structfield> <type>bigint</type>
</para>
<para>
The number of bytes per unit of I/O read, written, or extended.
</para>
<para>
Relation data reads, writes, and extends are done in
<varname>block_size</varname> units, derived from the build-time
parameter <symbol>BLCKSZ</symbol>, which is <literal>8192</literal> by
default.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>hits</structfield> <type>bigint</type>
</para>
<para>
The number of times a desired block was found in a shared buffer.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>evictions</structfield> <type>bigint</type>
</para>
<para>
Number of times a block has been written out from a shared or local
buffer in order to make it available for another use.
</para>
<para>
In <varname>context</varname> <literal>normal</literal>, this counts
the number of times a block was evicted from a buffer and replaced with
another block. In <varname>context</varname>s
<literal>bulkwrite</literal>, <literal>bulkread</literal>, and
<literal>vacuum</literal>, this counts the number of times a block was
evicted from shared buffers in order to add the shared buffer to a
separate, size-limited ring buffer for use in a bulk I/O operation.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>reuses</structfield> <type>bigint</type>
</para>
<para>
The number of times an existing buffer in a size-limited ring buffer
outside of shared buffers was reused as part of an I/O operation in the
<literal>bulkread</literal>, <literal>bulkwrite</literal>, or
<literal>vacuum</literal> <varname>context</varname>s.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>fsyncs</structfield> <type>bigint</type>
</para>
<para>
Number of <literal>fsync</literal> calls. These are only tracked in
<varname>context</varname> <literal>normal</literal>.
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>fsync_time</structfield> <type>double precision</type>
</para>
<para>
Time spent in fsync operations in milliseconds (if
<xref linkend="guc-track-io-timing"/> is enabled, otherwise zero)
</para>
</entry>
</row>
<row>
<entry role="catalog_table_entry">
<para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which these statistics were last reset.
</para>
</entry>
</row>
</tbody>
</tgroup>
</table>
<para>
Some backend types never perform I/O operations on some I/O objects and/or
in some I/O contexts. These rows are omitted from the view. For example, the
checkpointer does not checkpoint temporary tables, so there will be no rows
for <varname>backend_type</varname> <literal>checkpointer</literal> and
<varname>object</varname> <literal>temp relation</literal>.
</para>
<para>
   In addition, some I/O operations will never be performed by certain
   backend types, on certain I/O objects, or in certain I/O contexts.
   These cells will be NULL. For example, temporary tables are not
<literal>fsync</literal>ed, so <varname>fsyncs</varname> will be NULL for
<varname>object</varname> <literal>temp relation</literal>. Also, the
background writer does not perform reads, so <varname>reads</varname> will
be NULL in rows for <varname>backend_type</varname> <literal>background
writer</literal>.
</para>
<para>
<structname>pg_stat_io</structname> can be used to inform database tuning.
For example:
<itemizedlist>
<listitem>
       <para>
        A high <varname>evictions</varname> count can indicate that shared
        buffers should be increased; see the sample query after this list.
       </para>
</listitem>
<listitem>
<para>
Client backends rely on the checkpointer to ensure data is persisted to
permanent storage. Large numbers of <varname>fsyncs</varname> by
<literal>client backend</literal>s could indicate a misconfiguration of
shared buffers or of the checkpointer. More information on configuring
the checkpointer can be found in <xref linkend="wal-configuration"/>.
</para>
</listitem>
<listitem>
<para>
Normally, client backends should be able to rely on auxiliary processes
like the checkpointer and the background writer to write out dirty data
as much as possible. Large numbers of writes by client backends could
indicate a misconfiguration of shared buffers or of the checkpointer.
More information on configuring the checkpointer can be found in <xref
linkend="wal-configuration"/>.
</para>
</listitem>
</itemizedlist>
</para>
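  <para>
   As a sketch of the first point above, the following query computes a rough
   shared-buffer hit ratio and lists evictions per backend type for permanent
   relations.  It is only an illustration built from the columns documented in
   this view, not a definitive tuning recipe:
<programlisting>
SELECT backend_type,
       context,
       hits,
       reads,
       evictions,
       round(hits * 100.0 / NULLIF(hits + reads, 0), 2) AS hit_ratio_pct
FROM pg_stat_io
WHERE object = 'relation'
ORDER BY evictions DESC NULLS LAST;
</programlisting>
   A consistently low hit ratio together with a high
   <varname>evictions</varname> count can support the case for increasing
   <xref linkend="guc-shared-buffers"/>.
  </para>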
<note>
<para>
Columns tracking I/O time will only be non-zero when
<xref linkend="guc-track-io-timing"/> is enabled. The user should be
careful when referencing these columns in combination with their
corresponding I/O operations in case <varname>track_io_timing</varname>
was not enabled for the entire time since the last stats reset.
</para>
</note>
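  <para>
   If I/O timing is wanted, <varname>track_io_timing</varname> can be enabled,
   for example cluster-wide as sketched below (note that timing collection has
   some overhead; see the parameter's documentation):
<programlisting>
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();
</programlisting>
  </para>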
</sect2>
<sect2 id="monitoring-pg-stat-bgwriter-view">
<title><structname>pg_stat_bgwriter</structname></title>
<indexterm>
<primary>pg_stat_bgwriter</primary>
</indexterm>
<para>
The <structname>pg_stat_bgwriter</structname> view will always have a
single row, containing data about the background writer of the cluster.
</para>
<table id="pg-stat-bgwriter-view" xreflabel="pg_stat_bgwriter">
<title><structname>pg_stat_bgwriter</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>buffers_clean</structfield> <type>bigint</type>
</para>
<para>
Number of buffers written by the background writer
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>maxwritten_clean</structfield> <type>bigint</type>
</para>
<para>
Number of times the background writer stopped a cleaning
scan because it had written too many buffers
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>buffers_alloc</structfield> <type>bigint</type>
</para>
<para>
Number of buffers allocated
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which these statistics were last reset
</para></entry>
</row>
</tbody>
</tgroup>
</table>
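  <para>
   For example, comparing how often cleaning scans were cut short with the
   total cleaning work done can hint at whether
   <xref linkend="guc-bgwriter-lru-maxpages"/> is limiting the background
   writer.  The following query is only an illustrative sketch:
<programlisting>
SELECT buffers_clean,
       maxwritten_clean,
       buffers_alloc,
       stats_reset
FROM pg_stat_bgwriter;
</programlisting>
   A <structfield>maxwritten_clean</structfield> value that grows steadily
   relative to <structfield>buffers_clean</structfield> may suggest raising
   that limit.
  </para>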
</sect2>
<sect2 id="monitoring-pg-stat-checkpointer-view">
<title><structname>pg_stat_checkpointer</structname></title>
<indexterm>
<primary>pg_stat_checkpointer</primary>
</indexterm>
<para>
The <structname>pg_stat_checkpointer</structname> view will always have a
single row, containing data about the checkpointer process of the cluster.
</para>
<table id="pg-stat-checkpointer-view" xreflabel="pg_stat_checkpointer">
<title><structname>pg_stat_checkpointer</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>num_timed</structfield> <type>bigint</type>
</para>
<para>
Number of scheduled checkpoints that have been performed
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>num_requested</structfield> <type>bigint</type>
</para>
<para>
Number of requested checkpoints that have been performed
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>restartpoints_timed</structfield> <type>bigint</type>
</para>
<para>
       Number of scheduled restartpoints due to a timeout or after a failed attempt to perform one
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>restartpoints_req</structfield> <type>bigint</type>
</para>
<para>
Number of requested restartpoints
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>restartpoints_done</structfield> <type>bigint</type>
</para>
<para>
Number of restartpoints that have been performed
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>write_time</structfield> <type>double precision</type>
</para>
<para>
Total amount of time that has been spent in the portion of
processing checkpoints and restartpoints where files are written to disk,
in milliseconds
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sync_time</structfield> <type>double precision</type>
</para>
<para>
Total amount of time that has been spent in the portion of
processing checkpoints and restartpoints where files are synchronized to
disk, in milliseconds
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>buffers_written</structfield> <type>bigint</type>
</para>
<para>
Number of buffers written during checkpoints and restartpoints
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which these statistics were last reset
</para></entry>
</row>
</tbody>
</tgroup>
</table>
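  <para>
   For example, the share of requested (as opposed to scheduled) checkpoints
   and the average write and sync time per checkpoint can be derived from
   these counters.  The query below is only an illustrative sketch:
<programlisting>
SELECT num_timed,
       num_requested,
       round((write_time / NULLIF(num_timed + num_requested, 0))::numeric, 2)
           AS avg_write_ms,
       round((sync_time / NULLIF(num_timed + num_requested, 0))::numeric, 2)
           AS avg_sync_ms
FROM pg_stat_checkpointer;
</programlisting>
   A large proportion of requested checkpoints may indicate that
   <xref linkend="guc-max-wal-size"/> is set too low for the write load.
  </para>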
</sect2>
<sect2 id="monitoring-pg-stat-wal-view">
<title><structname>pg_stat_wal</structname></title>
<indexterm>
<primary>pg_stat_wal</primary>
</indexterm>
<para>
The <structname>pg_stat_wal</structname> view will always have a
single row, containing data about WAL activity of the cluster.
</para>
<table id="pg-stat-wal-view" xreflabel="pg_stat_wal">
<title><structname>pg_stat_wal</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wal_records</structfield> <type>bigint</type>
</para>
<para>
Total number of WAL records generated
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wal_fpi</structfield> <type>bigint</type>
</para>
<para>
Total number of WAL full page images generated
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wal_bytes</structfield> <type>numeric</type>
</para>
<para>
Total amount of WAL generated in bytes
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wal_buffers_full</structfield> <type>bigint</type>
</para>
<para>
Number of times WAL data was written to disk because WAL buffers became full
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wal_write</structfield> <type>bigint</type>
</para>
<para>
Number of times WAL buffers were written out to disk via
        <function>XLogWrite</function> requests.
See <xref linkend="wal-configuration"/> for more information about
the internal WAL function <function>XLogWrite</function>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wal_sync</structfield> <type>bigint</type>
</para>
<para>
Number of times WAL files were synced to disk via
        <function>issue_xlog_fsync</function> requests
(if <xref linkend="guc-fsync"/> is <literal>on</literal> and
<xref linkend="guc-wal-sync-method"/> is either
<literal>fdatasync</literal>, <literal>fsync</literal> or
<literal>fsync_writethrough</literal>, otherwise zero).
See <xref linkend="wal-configuration"/> for more information about
the internal WAL function <function>issue_xlog_fsync</function>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wal_write_time</structfield> <type>double precision</type>
</para>
<para>
Total amount of time spent writing WAL buffers to disk via
        <function>XLogWrite</function> requests, in milliseconds
(if <xref linkend="guc-track-wal-io-timing"/> is enabled,
otherwise zero). This includes the sync time when
<varname>wal_sync_method</varname> is either
<literal>open_datasync</literal> or <literal>open_sync</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wal_sync_time</structfield> <type>double precision</type>
</para>
<para>
Total amount of time spent syncing WAL files to disk via
        <function>issue_xlog_fsync</function> requests, in milliseconds
(if <varname>track_wal_io_timing</varname> is enabled,
<varname>fsync</varname> is <literal>on</literal>, and
<varname>wal_sync_method</varname> is either
<literal>fdatasync</literal>, <literal>fsync</literal> or
<literal>fsync_writethrough</literal>, otherwise zero).
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which these statistics were last reset
</para></entry>
</row>
</tbody>
</tgroup>
</table>
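  <para>
   As an illustration, the ratio of <structfield>wal_buffers_full</structfield>
   to <structfield>wal_write</structfield> can serve as a rough indicator of
   whether <varname>wal_buffers</varname> may be undersized:
<programlisting>
SELECT wal_buffers_full,
       wal_write,
       wal_buffers_full::float8 / NULLIF(wal_write, 0) AS buffers_full_ratio
FROM pg_stat_wal;
</programlisting>
  </para>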
</sect2>
<sect2 id="monitoring-pg-stat-database-view">
<title><structname>pg_stat_database</structname></title>
<indexterm>
<primary>pg_stat_database</primary>
</indexterm>
<para>
The <structname>pg_stat_database</structname> view will contain one row
for each database in the cluster, plus one for shared objects, showing
database-wide statistics.
</para>
<table id="pg-stat-database-view" xreflabel="pg_stat_database">
<title><structname>pg_stat_database</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datid</structfield> <type>oid</type>
</para>
<para>
OID of this database, or 0 for objects belonging to a shared
relation
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datname</structfield> <type>name</type>
</para>
<para>
Name of this database, or <literal>NULL</literal> for shared
objects.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>numbackends</structfield> <type>integer</type>
</para>
<para>
Number of backends currently connected to this database, or
<literal>NULL</literal> for shared objects. This is the only column
in this view that returns a value reflecting current state; all other
columns return the accumulated values since the last reset.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>xact_commit</structfield> <type>bigint</type>
</para>
<para>
Number of transactions in this database that have been
committed
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>xact_rollback</structfield> <type>bigint</type>
</para>
<para>
Number of transactions in this database that have been
rolled back
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blks_read</structfield> <type>bigint</type>
</para>
<para>
Number of disk blocks read in this database
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blks_hit</structfield> <type>bigint</type>
</para>
<para>
Number of times disk blocks were found already in the buffer
cache, so that a read was not necessary (this only includes hits in the
PostgreSQL buffer cache, not the operating system's file system cache)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tup_returned</structfield> <type>bigint</type>
</para>
<para>
Number of live rows fetched by sequential scans and index entries returned by index scans in this database
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tup_fetched</structfield> <type>bigint</type>
</para>
<para>
Number of live rows fetched by index scans in this database
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tup_inserted</structfield> <type>bigint</type>
</para>
<para>
Number of rows inserted by queries in this database
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tup_updated</structfield> <type>bigint</type>
</para>
<para>
Number of rows updated by queries in this database
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tup_deleted</structfield> <type>bigint</type>
</para>
<para>
Number of rows deleted by queries in this database
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>conflicts</structfield> <type>bigint</type>
</para>
<para>
Number of queries canceled due to conflicts with recovery
in this database. (Conflicts occur only on standby servers; see
<link linkend="monitoring-pg-stat-database-conflicts-view">
<structname>pg_stat_database_conflicts</structname></link> for details.)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>temp_files</structfield> <type>bigint</type>
</para>
<para>
Number of temporary files created by queries in this database.
All temporary files are counted, regardless of why the temporary file
was created (e.g., sorting or hashing), and regardless of the
<xref linkend="guc-log-temp-files"/> setting.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>temp_bytes</structfield> <type>bigint</type>
</para>
<para>
Total amount of data written to temporary files by queries in
this database. All temporary files are counted, regardless of why
the temporary file was created, and
regardless of the <xref linkend="guc-log-temp-files"/> setting.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>deadlocks</structfield> <type>bigint</type>
</para>
<para>
Number of deadlocks detected in this database
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>checksum_failures</structfield> <type>bigint</type>
</para>
<para>
Number of data page checksum failures detected in this
database (or on a shared object), or NULL if data checksums are not
enabled.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>checksum_last_failure</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which the last data page checksum failure was detected in
this database (or on a shared object), or NULL if data checksums are not
enabled.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blk_read_time</structfield> <type>double precision</type>
</para>
<para>
Time spent reading data file blocks by backends in this database,
in milliseconds (if <xref linkend="guc-track-io-timing"/> is enabled,
otherwise zero)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blk_write_time</structfield> <type>double precision</type>
</para>
<para>
Time spent writing data file blocks by backends in this database,
in milliseconds (if <xref linkend="guc-track-io-timing"/> is enabled,
otherwise zero)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>session_time</structfield> <type>double precision</type>
</para>
<para>
Time spent by database sessions in this database, in milliseconds
(note that statistics are only updated when the state of a session
changes, so if sessions have been idle for a long time, this idle time
won't be included)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>active_time</structfield> <type>double precision</type>
</para>
<para>
Time spent executing SQL statements in this database, in milliseconds
(this corresponds to the states <literal>active</literal> and
<literal>fastpath function call</literal> in
<link linkend="monitoring-pg-stat-activity-view">
<structname>pg_stat_activity</structname></link>)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>idle_in_transaction_time</structfield> <type>double precision</type>
</para>
<para>
Time spent idling while in a transaction in this database, in milliseconds
(this corresponds to the states <literal>idle in transaction</literal> and
<literal>idle in transaction (aborted)</literal> in
<link linkend="monitoring-pg-stat-activity-view">
<structname>pg_stat_activity</structname></link>)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sessions</structfield> <type>bigint</type>
</para>
<para>
Total number of sessions established to this database
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sessions_abandoned</structfield> <type>bigint</type>
</para>
<para>
Number of database sessions to this database that were terminated
because connection to the client was lost
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sessions_fatal</structfield> <type>bigint</type>
</para>
<para>
Number of database sessions to this database that were terminated
by fatal errors
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sessions_killed</structfield> <type>bigint</type>
</para>
<para>
Number of database sessions to this database that were terminated
by operator intervention
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which these statistics were last reset
</para></entry>
</row>
</tbody>
</tgroup>
</table>
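  <para>
   For example, a rough per-database buffer cache hit percentage can be
   derived from <structfield>blks_hit</structfield> and
   <structfield>blks_read</structfield> (an illustration only; this does not
   account for the operating system's file system cache):
<programlisting>
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit * 100.0 / NULLIF(blks_hit + blks_read, 0), 2)
         AS cache_hit_pct
FROM pg_stat_database
WHERE datname IS NOT NULL;
</programlisting>
  </para>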
</sect2>
<sect2 id="monitoring-pg-stat-database-conflicts-view">
<title><structname>pg_stat_database_conflicts</structname></title>
<indexterm>
<primary>pg_stat_database_conflicts</primary>
</indexterm>
<para>
The <structname>pg_stat_database_conflicts</structname> view will contain
one row per database, showing database-wide statistics about
query cancels occurring due to conflicts with recovery on standby servers.
This view will only contain information on standby servers, since
conflicts do not occur on primary servers.
</para>
<table id="pg-stat-database-conflicts-view" xreflabel="pg_stat_database_conflicts">
<title><structname>pg_stat_database_conflicts</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datid</structfield> <type>oid</type>
</para>
<para>
OID of a database
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datname</structfield> <type>name</type>
</para>
<para>
Name of this database
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>confl_tablespace</structfield> <type>bigint</type>
</para>
<para>
Number of queries in this database that have been canceled due to
dropped tablespaces
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>confl_lock</structfield> <type>bigint</type>
</para>
<para>
Number of queries in this database that have been canceled due to
lock timeouts
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>confl_snapshot</structfield> <type>bigint</type>
</para>
<para>
Number of queries in this database that have been canceled due to
old snapshots
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>confl_bufferpin</structfield> <type>bigint</type>
</para>
<para>
Number of queries in this database that have been canceled due to
pinned buffers
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>confl_deadlock</structfield> <type>bigint</type>
</para>
<para>
Number of queries in this database that have been canceled due to
deadlocks
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>confl_active_logicalslot</structfield> <type>bigint</type>
</para>
<para>
Number of uses of logical slots in this database that have been
canceled due to old snapshots or too low a <xref linkend="guc-wal-level"/>
on the primary
</para></entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
<sect2 id="monitoring-pg-stat-all-tables-view">
<title><structname>pg_stat_all_tables</structname></title>
<indexterm>
<primary>pg_stat_all_tables</primary>
</indexterm>
<para>
The <structname>pg_stat_all_tables</structname> view will contain
one row for each table in the current database (including TOAST
tables), showing statistics about accesses to that specific table. The
<structname>pg_stat_user_tables</structname> and
<structname>pg_stat_sys_tables</structname> views
contain the same information,
but filtered to only show user and system tables respectively.
</para>
<table id="pg-stat-all-tables-view" xreflabel="pg_stat_all_tables">
<title><structname>pg_stat_all_tables</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of a table
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>schemaname</structfield> <type>name</type>
</para>
<para>
Name of the schema that this table is in
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relname</structfield> <type>name</type>
</para>
<para>
Name of this table
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>seq_scan</structfield> <type>bigint</type>
</para>
<para>
Number of sequential scans initiated on this table
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_seq_scan</structfield> <type>timestamp with time zone</type>
</para>
<para>
The time of the last sequential scan on this table, based on the
most recent transaction stop time
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>seq_tup_read</structfield> <type>bigint</type>
</para>
<para>
Number of live rows fetched by sequential scans
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>idx_scan</structfield> <type>bigint</type>
</para>
<para>
Number of index scans initiated on this table
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_idx_scan</structfield> <type>timestamp with time zone</type>
</para>
<para>
The time of the last index scan on this table, based on the
most recent transaction stop time
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>idx_tup_fetch</structfield> <type>bigint</type>
</para>
<para>
Number of live rows fetched by index scans
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>n_tup_ins</structfield> <type>bigint</type>
</para>
<para>
Total number of rows inserted
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>n_tup_upd</structfield> <type>bigint</type>
</para>
<para>
Total number of rows updated. (This includes row updates
counted in <structfield>n_tup_hot_upd</structfield> and
<structfield>n_tup_newpage_upd</structfield>, and remaining
non-<acronym>HOT</acronym> updates.)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>n_tup_del</structfield> <type>bigint</type>
</para>
<para>
Total number of rows deleted
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>n_tup_hot_upd</structfield> <type>bigint</type>
</para>
<para>
Number of rows <link linkend="storage-hot">HOT updated</link>.
These are updates where no successor versions are required in
indexes.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>n_tup_newpage_upd</structfield> <type>bigint</type>
</para>
<para>
Number of rows updated where the successor version goes onto a
<emphasis>new</emphasis> heap page, leaving behind an original
version with a
<link linkend="storage-tuple-layout"><structfield>t_ctid</structfield>
field</link> that points to a different heap page. These are
always non-<acronym>HOT</acronym> updates.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>n_live_tup</structfield> <type>bigint</type>
</para>
<para>
Estimated number of live rows
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>n_dead_tup</structfield> <type>bigint</type>
</para>
<para>
Estimated number of dead rows
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>n_mod_since_analyze</structfield> <type>bigint</type>
</para>
<para>
Estimated number of rows modified since this table was last analyzed
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>n_ins_since_vacuum</structfield> <type>bigint</type>
</para>
<para>
Estimated number of rows inserted since this table was last vacuumed
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_vacuum</structfield> <type>timestamp with time zone</type>
</para>
<para>
Last time at which this table was manually vacuumed
(not counting <command>VACUUM FULL</command>)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_autovacuum</structfield> <type>timestamp with time zone</type>
</para>
<para>
Last time at which this table was vacuumed by the autovacuum
daemon
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_analyze</structfield> <type>timestamp with time zone</type>
</para>
<para>
Last time at which this table was manually analyzed
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_autoanalyze</structfield> <type>timestamp with time zone</type>
</para>
<para>
Last time at which this table was analyzed by the autovacuum
daemon
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>vacuum_count</structfield> <type>bigint</type>
</para>
<para>
Number of times this table has been manually vacuumed
(not counting <command>VACUUM FULL</command>)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>autovacuum_count</structfield> <type>bigint</type>
</para>
<para>
Number of times this table has been vacuumed by the autovacuum
daemon
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>analyze_count</structfield> <type>bigint</type>
</para>
<para>
Number of times this table has been manually analyzed
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>autoanalyze_count</structfield> <type>bigint</type>
</para>
<para>
Number of times this table has been analyzed by the autovacuum
daemon
</para></entry>
</row>
</tbody>
</tgroup>
</table>
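  <para>
   As an example only, the following query lists the tables with the most
   dead rows, along with when they were last vacuumed, which can help when
   reviewing autovacuum behavior:
<programlisting>
SELECT schemaname,
       relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
</programlisting>
  </para>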
</sect2>
<sect2 id="monitoring-pg-stat-all-indexes-view">
<title><structname>pg_stat_all_indexes</structname></title>
<indexterm>
<primary>pg_stat_all_indexes</primary>
</indexterm>
<para>
The <structname>pg_stat_all_indexes</structname> view will contain
one row for each index in the current database,
showing statistics about accesses to that specific index. The
<structname>pg_stat_user_indexes</structname> and
<structname>pg_stat_sys_indexes</structname> views
contain the same information,
but filtered to only show user and system indexes respectively.
</para>
<table id="pg-stat-all-indexes-view" xreflabel="pg_stat_all_indexes">
<title><structname>pg_stat_all_indexes</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of the table for this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>indexrelid</structfield> <type>oid</type>
</para>
<para>
OID of this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>schemaname</structfield> <type>name</type>
</para>
<para>
Name of the schema this index is in
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relname</structfield> <type>name</type>
</para>
<para>
Name of the table for this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>indexrelname</structfield> <type>name</type>
</para>
<para>
Name of this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>idx_scan</structfield> <type>bigint</type>
</para>
<para>
Number of index scans initiated on this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>last_idx_scan</structfield> <type>timestamp with time zone</type>
</para>
<para>
The time of the last scan on this index, based on the
most recent transaction stop time
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>idx_tup_read</structfield> <type>bigint</type>
</para>
<para>
Number of index entries returned by scans on this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>idx_tup_fetch</structfield> <type>bigint</type>
</para>
<para>
Number of live table rows fetched by simple index scans using this
index
</para></entry>
</row>
</tbody>
</tgroup>
</table>
<para>
Indexes can be used by simple index scans, <quote>bitmap</quote> index scans,
and the optimizer. In a bitmap scan
the output of several indexes can be combined via AND or OR rules,
so it is difficult to associate individual heap row fetches
with specific indexes when a bitmap scan is used. Therefore, a bitmap
scan increments the
<structname>pg_stat_all_indexes</structname>.<structfield>idx_tup_read</structfield>
count(s) for the index(es) it uses, and it increments the
<structname>pg_stat_all_tables</structname>.<structfield>idx_tup_fetch</structfield>
count for the table, but it does not affect
<structname>pg_stat_all_indexes</structname>.<structfield>idx_tup_fetch</structfield>.
The optimizer also accesses indexes to check for supplied constants
whose values are outside the recorded range of the optimizer statistics
because the optimizer statistics might be stale.
</para>
<note>
<para>
The <structfield>idx_tup_read</structfield> and <structfield>idx_tup_fetch</structfield> counts
can be different even without any use of bitmap scans,
because <structfield>idx_tup_read</structfield> counts
index entries retrieved from the index while <structfield>idx_tup_fetch</structfield>
counts live rows fetched from the table. The latter will be less if any
dead or not-yet-committed rows are fetched using the index, or if any
heap fetches are avoided by means of an index-only scan.
</para>
</note>
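  <para>
   For example (an illustration, not a definitive rule), indexes that have
   not been scanned since statistics were last reset can be listed as
   follows; note that indexes backing constraints can legitimately show zero
   scans:
<programlisting>
SELECT schemaname,
       relname,
       indexrelname,
       idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;
</programlisting>
  </para>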
</sect2>
<sect2 id="monitoring-pg-statio-all-tables-view">
<title><structname>pg_statio_all_tables</structname></title>
<indexterm>
<primary>pg_statio_all_tables</primary>
</indexterm>
<para>
The <structname>pg_statio_all_tables</structname> view will contain
one row for each table in the current database (including TOAST
tables), showing statistics about I/O on that specific table. The
<structname>pg_statio_user_tables</structname> and
<structname>pg_statio_sys_tables</structname> views
contain the same information,
but filtered to only show user and system tables respectively.
</para>
<table id="pg-statio-all-tables-view" xreflabel="pg_statio_all_tables">
<title><structname>pg_statio_all_tables</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of a table
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>schemaname</structfield> <type>name</type>
</para>
<para>
Name of the schema that this table is in
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relname</structfield> <type>name</type>
</para>
<para>
Name of this table
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>heap_blks_read</structfield> <type>bigint</type>
</para>
<para>
Number of disk blocks read from this table
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>heap_blks_hit</structfield> <type>bigint</type>
</para>
<para>
Number of buffer hits in this table
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>idx_blks_read</structfield> <type>bigint</type>
</para>
<para>
Number of disk blocks read from all indexes on this table
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>idx_blks_hit</structfield> <type>bigint</type>
</para>
<para>
Number of buffer hits in all indexes on this table
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>toast_blks_read</structfield> <type>bigint</type>
</para>
<para>
Number of disk blocks read from this table's TOAST table (if any)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>toast_blks_hit</structfield> <type>bigint</type>
</para>
<para>
Number of buffer hits in this table's TOAST table (if any)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tidx_blks_read</structfield> <type>bigint</type>
</para>
<para>
Number of disk blocks read from this table's TOAST table indexes (if any)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tidx_blks_hit</structfield> <type>bigint</type>
</para>
<para>
Number of buffer hits in this table's TOAST table indexes (if any)
</para></entry>
</row>
</tbody>
</tgroup>
</table>
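  <para>
   For example, a per-table heap buffer hit percentage can be computed from
   <structfield>heap_blks_hit</structfield> and
   <structfield>heap_blks_read</structfield> (an illustration only):
<programlisting>
SELECT schemaname,
       relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit * 100.0
             / NULLIF(heap_blks_hit + heap_blks_read, 0), 2) AS heap_hit_pct
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;
</programlisting>
  </para>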
</sect2>
<sect2 id="monitoring-pg-statio-all-indexes-view">
<title><structname>pg_statio_all_indexes</structname></title>
<indexterm>
<primary>pg_statio_all_indexes</primary>
</indexterm>
<para>
The <structname>pg_statio_all_indexes</structname> view will contain
one row for each index in the current database,
showing statistics about I/O on that specific index. The
<structname>pg_statio_user_indexes</structname> and
<structname>pg_statio_sys_indexes</structname> views
contain the same information,
but filtered to only show user and system indexes respectively.
</para>
<table id="pg-statio-all-indexes-view" xreflabel="pg_statio_all_indexes">
<title><structname>pg_statio_all_indexes</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of the table for this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>indexrelid</structfield> <type>oid</type>
</para>
<para>
OID of this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>schemaname</structfield> <type>name</type>
</para>
<para>
Name of the schema this index is in
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relname</structfield> <type>name</type>
</para>
<para>
Name of the table for this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>indexrelname</structfield> <type>name</type>
</para>
<para>
Name of this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>idx_blks_read</structfield> <type>bigint</type>
</para>
<para>
Number of disk blocks read from this index
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>idx_blks_hit</structfield> <type>bigint</type>
</para>
<para>
Number of buffer hits in this index
</para></entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
<sect2 id="monitoring-pg-statio-all-sequences-view">
<title><structname>pg_statio_all_sequences</structname></title>
<indexterm>
<primary>pg_statio_all_sequences</primary>
</indexterm>
<para>
The <structname>pg_statio_all_sequences</structname> view will contain
one row for each sequence in the current database,
showing statistics about I/O on that specific sequence.
</para>
<table id="pg-statio-all-sequences-view" xreflabel="pg_statio_all_sequences">
<title><structname>pg_statio_all_sequences</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of a sequence
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>schemaname</structfield> <type>name</type>
</para>
<para>
Name of the schema this sequence is in
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relname</structfield> <type>name</type>
</para>
<para>
Name of this sequence
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blks_read</structfield> <type>bigint</type>
</para>
<para>
Number of disk blocks read from this sequence
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blks_hit</structfield> <type>bigint</type>
</para>
<para>
Number of buffer hits in this sequence
</para></entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
<sect2 id="monitoring-pg-stat-user-functions-view">
<title><structname>pg_stat_user_functions</structname></title>
<indexterm>
<primary>pg_stat_user_functions</primary>
</indexterm>
<para>
The <structname>pg_stat_user_functions</structname> view will contain
one row for each tracked function, showing statistics about executions of
that function. The <xref linkend="guc-track-functions"/> parameter
controls exactly which functions are tracked.
</para>
<table id="pg-stat-user-functions-view" xreflabel="pg_stat_user_functions">
<title><structname>pg_stat_user_functions</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>funcid</structfield> <type>oid</type>
</para>
<para>
OID of a function
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>schemaname</structfield> <type>name</type>
</para>
<para>
Name of the schema this function is in
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>funcname</structfield> <type>name</type>
</para>
<para>
Name of this function
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>calls</structfield> <type>bigint</type>
</para>
<para>
Number of times this function has been called
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>total_time</structfield> <type>double precision</type>
</para>
<para>
Total time spent in this function and all other functions
called by it, in milliseconds
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>self_time</structfield> <type>double precision</type>
</para>
<para>
Total time spent in this function itself, not including
other functions called by it, in milliseconds
</para></entry>
</row>
</tbody>
</tgroup>
</table>
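  <para>
   For example, assuming <varname>track_functions</varname> has been set to
   <literal>pl</literal> or <literal>all</literal>, the functions consuming
   the most execution time could be listed like this (illustration only):
<programlisting>
SELECT schemaname,
       funcname,
       calls,
       total_time,
       self_time
FROM pg_stat_user_functions
ORDER BY total_time DESC
LIMIT 10;
</programlisting>
  </para>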
</sect2>
<sect2 id="monitoring-pg-stat-slru-view">
<title><structname>pg_stat_slru</structname></title>
<indexterm>
<primary>SLRU</primary>
</indexterm>
<indexterm>
<primary>pg_stat_slru</primary>
</indexterm>
<para>
<productname>PostgreSQL</productname> accesses certain on-disk information
via <literal>SLRU</literal> (<firstterm>simple least-recently-used</firstterm>)
caches.
The <structname>pg_stat_slru</structname> view will contain
one row for each tracked SLRU cache, showing statistics about access
to cached pages.
</para>
<para>
For each <literal>SLRU</literal> cache that's part of the core server,
there is a configuration parameter that controls its size, with the suffix
<literal>_buffers</literal> appended.
</para>
<table id="pg-stat-slru-view" xreflabel="pg_stat_slru">
<title><structname>pg_stat_slru</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>name</structfield> <type>text</type>
</para>
<para>
Name of the SLRU
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blks_zeroed</structfield> <type>bigint</type>
</para>
<para>
Number of blocks zeroed during initializations
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blks_hit</structfield> <type>bigint</type>
</para>
<para>
Number of times disk blocks were found already in the SLRU,
so that a read was not necessary (this only includes hits in the
SLRU, not the operating system's file system cache)
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blks_read</structfield> <type>bigint</type>
</para>
<para>
Number of disk blocks read for this SLRU
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blks_written</structfield> <type>bigint</type>
</para>
<para>
Number of disk blocks written for this SLRU
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blks_exists</structfield> <type>bigint</type>
</para>
<para>
Number of blocks checked for existence for this SLRU
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>flushes</structfield> <type>bigint</type>
</para>
<para>
Number of flushes of dirty data for this SLRU
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>truncates</structfield> <type>bigint</type>
</para>
<para>
Number of truncates for this SLRU
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>stats_reset</structfield> <type>timestamp with time zone</type>
</para>
<para>
Time at which these statistics were last reset
</para></entry>
</row>
</tbody>
</tgroup>
</table>
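  <para>
   For example, the following query (an illustration only) computes a hit
   percentage for each SLRU cache; a low value for a particular cache might
   suggest increasing the corresponding <literal>_buffers</literal>
   parameter mentioned above:
<programlisting>
SELECT name,
       blks_hit,
       blks_read,
       round(blks_hit * 100.0 / NULLIF(blks_hit + blks_read, 0), 2)
         AS hit_pct
FROM pg_stat_slru
ORDER BY name;
</programlisting>
  </para>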
</sect2>
<sect2 id="monitoring-stats-functions">
<title>Statistics Functions</title>
<para>
Other ways of looking at the statistics can be set up by writing
queries that use the same underlying statistics access functions used by
the standard views shown above. For details such as the functions' names,
consult the definitions of the standard views. (For example, in
<application>psql</application> you could issue <literal>\d+ pg_stat_activity</literal>.)
The access functions for per-database statistics take a database OID as an
argument to identify which database to report on.
The per-table and per-index functions take a table or index OID.
The functions for per-function statistics take a function OID.
Note that only tables, indexes, and functions in the current database
can be seen with these functions.
</para>
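  <para>
   For example, the per-database access function backing the
   <structfield>xact_commit</structfield> column of
   <structname>pg_stat_database</structname> could be called directly like
   this (the exact function names can be confirmed by examining the view
   definitions as described above):
<programlisting>
SELECT datname,
       pg_stat_get_db_xact_commit(oid) AS xact_commit
FROM pg_database;
</programlisting>
  </para>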
<para>
Additional functions related to the cumulative statistics system are listed
in <xref linkend="monitoring-stats-funcs-table"/>.
</para>
<table id="monitoring-stats-funcs-table">
<title>Additional Statistics Functions</title>
<tgroup cols="1">
<thead>
<row>
<entry role="func_table_entry"><para role="func_signature">
Function
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<!-- See also the entry for this in func.sgml -->
<entry role="func_table_entry"><para role="func_signature">
<function>pg_backend_pid</function> ()
<returnvalue>integer</returnvalue>
</para>
<para>
Returns the process ID of the server process attached to the current
session.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_activity</primary>
</indexterm>
<function>pg_stat_get_activity</function> ( <type>integer</type> )
<returnvalue>setof record</returnvalue>
</para>
<para>
Returns a record of information about the backend with the specified
process ID, or one record for each active backend in the system
if <literal>NULL</literal> is specified. The fields returned are a
subset of those in the <structname>pg_stat_activity</structname> view.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_snapshot_timestamp</primary>
</indexterm>
<function>pg_stat_get_snapshot_timestamp</function> ()
<returnvalue>timestamp with time zone</returnvalue>
</para>
<para>
Returns the timestamp of the current statistics snapshot, or NULL if
no statistics snapshot has been taken. A snapshot is taken the first
time cumulative statistics are accessed in a transaction if
<varname>stats_fetch_consistency</varname> is set to
        <literal>snapshot</literal>.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_xact_blocks_fetched</primary>
</indexterm>
<function>pg_stat_get_xact_blocks_fetched</function> ( <type>oid</type> )
<returnvalue>bigint</returnvalue>
</para>
<para>
        Returns the number of block read requests for a table or index, in the
current transaction. This number minus
<function>pg_stat_get_xact_blocks_hit</function> gives the number of
kernel <function>read()</function> calls; the number of actual
physical reads is usually lower due to kernel-level buffering.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_xact_blocks_hit</primary>
</indexterm>
<function>pg_stat_get_xact_blocks_hit</function> ( <type>oid</type> )
<returnvalue>bigint</returnvalue>
</para>
<para>
        Returns the number of block read requests for a table or index, in the
current transaction, found in cache (not triggering kernel
<function>read()</function> calls).
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_clear_snapshot</primary>
</indexterm>
<function>pg_stat_clear_snapshot</function> ()
<returnvalue>void</returnvalue>
</para>
<para>
Discards the current statistics snapshot or cached information.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_reset</primary>
</indexterm>
<function>pg_stat_reset</function> ()
<returnvalue>void</returnvalue>
</para>
<para>
Resets all statistics counters for the current database to zero.
</para>
<para>
This function is restricted to superusers by default, but other users
can be granted EXECUTE to run the function.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_reset_shared</primary>
</indexterm>
<function>pg_stat_reset_shared</function> ( [ <parameter>target</parameter> <type>text</type> <literal>DEFAULT</literal> <literal>NULL</literal> ] )
<returnvalue>void</returnvalue>
</para>
<para>
Resets some cluster-wide statistics counters to zero, depending on the
argument. <parameter>target</parameter> can be:
<itemizedlist>
<listitem>
<para>
<literal>archiver</literal>: Reset all the counters shown in the
<structname>pg_stat_archiver</structname> view.
</para>
</listitem>
<listitem>
<para>
<literal>bgwriter</literal>: Reset all the counters shown in the
<structname>pg_stat_bgwriter</structname> view.
</para>
</listitem>
<listitem>
<para>
<literal>checkpointer</literal>: Reset all the counters shown in the
<structname>pg_stat_checkpointer</structname> view.
</para>
</listitem>
<listitem>
<para>
<literal>io</literal>: Reset all the counters shown in the
<structname>pg_stat_io</structname> view.
</para>
</listitem>
<listitem>
<para>
<literal>recovery_prefetch</literal>: Reset all the counters shown in
the <structname>pg_stat_recovery_prefetch</structname> view.
</para>
</listitem>
<listitem>
<para>
<literal>slru</literal>: Reset all the counters shown in the
<structname>pg_stat_slru</structname> view.
</para>
</listitem>
<listitem>
<para>
<literal>wal</literal>: Reset all the counters shown in the
<structname>pg_stat_wal</structname> view.
</para>
</listitem>
<listitem>
<para>
<literal>NULL</literal> or not specified: All the counters from the
views listed above are reset.
</para>
</listitem>
</itemizedlist>
</para>
<para>
This function is restricted to superusers by default, but other users
can be granted EXECUTE to run the function.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_reset_single_table_counters</primary>
</indexterm>
<function>pg_stat_reset_single_table_counters</function> ( <type>oid</type> )
<returnvalue>void</returnvalue>
</para>
<para>
Resets statistics for a single table or index in the current database
or shared across all databases in the cluster to zero.
</para>
<para>
This function is restricted to superusers by default, but other users
can be granted EXECUTE to run the function.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_reset_single_function_counters</primary>
</indexterm>
<function>pg_stat_reset_single_function_counters</function> ( <type>oid</type> )
<returnvalue>void</returnvalue>
</para>
<para>
Resets statistics for a single function in the current database to
zero.
</para>
<para>
This function is restricted to superusers by default, but other users
can be granted EXECUTE to run the function.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_reset_slru</primary>
</indexterm>
<function>pg_stat_reset_slru</function> ( [ <parameter>target</parameter> <type>text</type> <literal>DEFAULT</literal> <literal>NULL</literal> ] )
<returnvalue>void</returnvalue>
</para>
<para>
Resets statistics to zero for a single SLRU cache, or for all SLRUs in
the cluster. If <parameter>target</parameter> is
<literal>NULL</literal> or is not specified, all the counters shown in
the <structname>pg_stat_slru</structname> view for all SLRU caches are
reset. The argument can be one of
<literal>commit_timestamp</literal>,
<literal>multixact_member</literal>,
<literal>multixact_offset</literal>,
<literal>notify</literal>,
<literal>serializable</literal>,
<literal>subtransaction</literal>, or
<literal>transaction</literal>
to reset the counters for only that entry.
If the argument is <literal>other</literal> (or indeed, any
unrecognized name), then the counters for all other SLRU caches, such
as extension-defined caches, are reset.
</para>
<para>
This function is restricted to superusers by default, but other users
can be granted EXECUTE to run the function.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_reset_replication_slot</primary>
</indexterm>
<function>pg_stat_reset_replication_slot</function> ( <type>text</type> )
<returnvalue>void</returnvalue>
</para>
<para>
Resets statistics of the replication slot defined by the argument. If
the argument is <literal>NULL</literal>, resets statistics for all
the replication slots.
</para>
<para>
This function is restricted to superusers by default, but other users
can be granted EXECUTE to run the function.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_reset_subscription_stats</primary>
</indexterm>
<function>pg_stat_reset_subscription_stats</function> ( <type>oid</type> )
<returnvalue>void</returnvalue>
</para>
<para>
Resets statistics for a single subscription shown in the
<structname>pg_stat_subscription_stats</structname> view to zero. If
        the argument is <literal>NULL</literal>, resets statistics for all
subscriptions.
</para>
<para>
This function is restricted to superusers by default, but other users
can be granted EXECUTE to run the function.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
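  <para>
   For example, the more targeted reset functions described above could be
   invoked like this (<literal>my_table</literal> is a hypothetical table
   name used only for illustration):
<programlisting>
SELECT pg_stat_reset_shared('bgwriter');
SELECT pg_stat_reset_single_table_counters('my_table'::regclass);
SELECT pg_stat_reset_slru('notify');
</programlisting>
  </para>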
<warning>
<para>
Using <function>pg_stat_reset()</function> also resets counters that
autovacuum uses to determine when to trigger a vacuum or an analyze.
Resetting these counters can cause autovacuum to not perform necessary
    work, which can cause problems such as table bloat or outdated
table statistics. A database-wide <command>ANALYZE</command> is
recommended after the statistics have been reset.
</para>
</warning>
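  <para>
   For example, after resetting the current database's counters it can make
   sense to rebuild the table statistics immediately, as recommended above:
<programlisting>
SELECT pg_stat_reset();
ANALYZE;
</programlisting>
  </para>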
<para>
<function>pg_stat_get_activity</function>, the underlying function of
the <structname>pg_stat_activity</structname> view, returns a set of records
containing all the available information about each backend process.
Sometimes it may be more convenient to obtain just a subset of this
information. In such cases, another set of per-backend statistics
access functions can be used; these are shown in <xref
linkend="monitoring-stats-backend-funcs-table"/>.
These access functions use the session's backend ID number, which is a
small integer (>= 0) that is distinct from the backend ID of any
concurrent session, although a session's ID can be recycled as soon as
it exits. The backend ID is used, among other things, to identify the
session's temporary schema if it has one.
The function <function>pg_stat_get_backend_idset</function> provides a
convenient way to list all the active backends' ID numbers for
invoking these functions. For example, to show the <acronym>PID</acronym>s and
current queries of all backends:
<programlisting>
SELECT pg_stat_get_backend_pid(backendid) AS pid,
pg_stat_get_backend_activity(backendid) AS query
FROM pg_stat_get_backend_idset() AS backendid;
</programlisting>
</para>
<table id="monitoring-stats-backend-funcs-table">
<title>Per-Backend Statistics Functions</title>
<tgroup cols="1">
<thead>
<row>
<entry role="func_table_entry"><para role="func_signature">
Function
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_activity</primary>
</indexterm>
<function>pg_stat_get_backend_activity</function> ( <type>integer</type> )
<returnvalue>text</returnvalue>
</para>
<para>
Returns the text of this backend's most recent query.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_activity_start</primary>
</indexterm>
<function>pg_stat_get_backend_activity_start</function> ( <type>integer</type> )
<returnvalue>timestamp with time zone</returnvalue>
</para>
<para>
Returns the time when the backend's most recent query was started.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_client_addr</primary>
</indexterm>
<function>pg_stat_get_backend_client_addr</function> ( <type>integer</type> )
<returnvalue>inet</returnvalue>
</para>
<para>
Returns the IP address of the client connected to this backend.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_client_port</primary>
</indexterm>
<function>pg_stat_get_backend_client_port</function> ( <type>integer</type> )
<returnvalue>integer</returnvalue>
</para>
<para>
Returns the TCP port number that the client is using for communication.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_dbid</primary>
</indexterm>
<function>pg_stat_get_backend_dbid</function> ( <type>integer</type> )
<returnvalue>oid</returnvalue>
</para>
<para>
Returns the OID of the database this backend is connected to.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_idset</primary>
</indexterm>
<function>pg_stat_get_backend_idset</function> ()
<returnvalue>setof integer</returnvalue>
</para>
<para>
Returns the set of currently active backend ID numbers.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_pid</primary>
</indexterm>
<function>pg_stat_get_backend_pid</function> ( <type>integer</type> )
<returnvalue>integer</returnvalue>
</para>
<para>
Returns the process ID of this backend.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_start</primary>
</indexterm>
<function>pg_stat_get_backend_start</function> ( <type>integer</type> )
<returnvalue>timestamp with time zone</returnvalue>
</para>
<para>
Returns the time when this process was started.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_subxact</primary>
</indexterm>
<function>pg_stat_get_backend_subxact</function> ( <type>integer</type> )
<returnvalue>record</returnvalue>
</para>
<para>
Returns a record of information about the subtransactions of the
backend with the specified ID.
The fields returned are <parameter>subxact_count</parameter>, which
is the number of subtransactions in the backend's subtransaction cache,
        and <parameter>subxact_overflow</parameter>, which indicates whether
        the backend's subtransaction cache has overflowed.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_userid</primary>
</indexterm>
<function>pg_stat_get_backend_userid</function> ( <type>integer</type> )
<returnvalue>oid</returnvalue>
</para>
<para>
Returns the OID of the user logged into this backend.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_wait_event</primary>
</indexterm>
<function>pg_stat_get_backend_wait_event</function> ( <type>integer</type> )
<returnvalue>text</returnvalue>
</para>
<para>
Returns the wait event name if this backend is currently waiting,
otherwise NULL. See <xref linkend="wait-event-activity-table"/> through
<xref linkend="wait-event-timeout-table"/>.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_wait_event_type</primary>
</indexterm>
<function>pg_stat_get_backend_wait_event_type</function> ( <type>integer</type> )
<returnvalue>text</returnvalue>
</para>
<para>
Returns the wait event type name if this backend is currently waiting,
otherwise NULL. See <xref linkend="wait-event-table"/> for details.
</para></entry>
</row>
<row>
<entry role="func_table_entry"><para role="func_signature">
<indexterm>
<primary>pg_stat_get_backend_xact_start</primary>
</indexterm>
<function>pg_stat_get_backend_xact_start</function> ( <type>integer</type> )
<returnvalue>timestamp with time zone</returnvalue>
</para>
<para>
Returns the time when the backend's current transaction was started.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
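  <para>
   As a further illustration, a query along these lines lists each backend's
   process ID together with its current wait event, if any:
<programlisting>
SELECT pg_stat_get_backend_pid(backendid) AS pid,
       pg_stat_get_backend_wait_event_type(backendid) AS wait_event_type,
       pg_stat_get_backend_wait_event(backendid) AS wait_event
FROM pg_stat_get_backend_idset() AS backendid;
</programlisting>
  </para>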
</sect2>
</sect1>
<sect1 id="monitoring-locks">
<title>Viewing Locks</title>
<indexterm zone="monitoring-locks">
<primary>lock</primary>
<secondary>monitoring</secondary>
</indexterm>
<para>
Another useful tool for monitoring database activity is the
   <structname>pg_locks</structname> system view. It allows the
database administrator to view information about the outstanding
locks in the lock manager. For example, this capability can be used
to:
<itemizedlist>
<listitem>
<para>
View all the locks currently outstanding, all the locks on
relations in a particular database, all the locks on a
particular relation, or all the locks held by a particular
<productname>PostgreSQL</productname> session.
</para>
</listitem>
<listitem>
<para>
Determine the relation in the current database with the most
ungranted locks (which might be a source of contention among
database clients).
</para>
</listitem>
<listitem>
<para>
Determine the effect of lock contention on overall database
performance, as well as the extent to which contention varies
with overall database traffic.
</para>
</listitem>
</itemizedlist>
Details of the <structname>pg_locks</structname> view appear in
<xref linkend="view-pg-locks"/>.
For more information on locking and managing concurrency with
<productname>PostgreSQL</productname>, refer to <xref linkend="mvcc"/>.
</para>
</sect1>
<sect1 id="progress-reporting">
<title>Progress Reporting</title>
<para>
<productname>PostgreSQL</productname> has the ability to report the progress of
certain commands during command execution. Currently, the only commands
which support progress reporting are <command>ANALYZE</command>,
<command>CLUSTER</command>,
<command>CREATE INDEX</command>, <command>VACUUM</command>,
<command>COPY</command>,
and <xref linkend="protocol-replication-base-backup"/> (i.e., replication
command that <xref linkend="app-pgbasebackup"/> issues to take
a base backup).
This may be expanded in the future.
</para>
<sect2 id="analyze-progress-reporting">
<title>ANALYZE Progress Reporting</title>
<indexterm>
<primary>pg_stat_progress_analyze</primary>
</indexterm>
<para>
Whenever <command>ANALYZE</command> is running, the
<structname>pg_stat_progress_analyze</structname> view will contain a
row for each backend that is currently running that command. The tables
below describe the information that will be reported and provide
information about how to interpret it.
</para>
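  <para>
   For example, a simple way to watch a running <command>ANALYZE</command>
   might be:
<programlisting>
SELECT pid, relid::regclass AS relation, phase,
       sample_blks_scanned, sample_blks_total
FROM pg_stat_progress_analyze;
</programlisting>
  </para>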
<table id="pg-stat-progress-analyze-view" xreflabel="pg_stat_progress_analyze">
<title><structname>pg_stat_progress_analyze</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of backend.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datid</structfield> <type>oid</type>
</para>
<para>
OID of the database to which this backend is connected.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datname</structfield> <type>name</type>
</para>
<para>
Name of the database to which this backend is connected.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of the table being analyzed.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>phase</structfield> <type>text</type>
</para>
<para>
Current processing phase. See <xref linkend="analyze-phases"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sample_blks_total</structfield> <type>bigint</type>
</para>
<para>
Total number of heap blocks that will be sampled.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>sample_blks_scanned</structfield> <type>bigint</type>
</para>
<para>
Number of heap blocks scanned.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>ext_stats_total</structfield> <type>bigint</type>
</para>
<para>
Number of extended statistics.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>ext_stats_computed</structfield> <type>bigint</type>
</para>
<para>
Number of extended statistics computed. This counter only advances
when the phase is <literal>computing extended statistics</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>child_tables_total</structfield> <type>bigint</type>
</para>
<para>
Number of child tables.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>child_tables_done</structfield> <type>bigint</type>
</para>
<para>
Number of child tables scanned. This counter only advances when the
phase is <literal>acquiring inherited sample rows</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>current_child_table_relid</structfield> <type>oid</type>
</para>
<para>
OID of the child table currently being scanned. This field is
only valid when the phase is
<literal>acquiring inherited sample rows</literal>.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
<table id="analyze-phases">
<title>ANALYZE Phases</title>
<tgroup cols="2">
<colspec colname="col1" colwidth="1*"/>
<colspec colname="col2" colwidth="2*"/>
<thead>
<row>
<entry>Phase</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>initializing</literal></entry>
<entry>
The command is preparing to begin scanning the heap. This phase is
expected to be very brief.
</entry>
</row>
<row>
<entry><literal>acquiring sample rows</literal></entry>
<entry>
The command is currently scanning the table given by
<structfield>relid</structfield> to obtain sample rows.
</entry>
</row>
<row>
<entry><literal>acquiring inherited sample rows</literal></entry>
<entry>
The command is currently scanning child tables to obtain sample rows.
Columns <structfield>child_tables_total</structfield>,
<structfield>child_tables_done</structfield>, and
<structfield>current_child_table_relid</structfield> contain the
progress information for this phase.
</entry>
</row>
<row>
<entry><literal>computing statistics</literal></entry>
<entry>
The command is computing statistics from the sample rows obtained
during the table scan.
</entry>
</row>
<row>
<entry><literal>computing extended statistics</literal></entry>
<entry>
The command is computing extended statistics from the sample rows
obtained during the table scan.
</entry>
</row>
<row>
<entry><literal>finalizing analyze</literal></entry>
<entry>
The command is updating <structname>pg_class</structname>. When this
phase is completed, <command>ANALYZE</command> will end.
</entry>
</row>
</tbody>
</tgroup>
</table>
<note>
<para>
Note that when <command>ANALYZE</command> is run on a partitioned table,
all of its partitions are also recursively analyzed.
In that case, <command>ANALYZE</command>
progress is reported first for the parent table, whereby its inheritance
statistics are collected, followed by that for each partition.
</para>
</note>
</sect2>
<sect2 id="cluster-progress-reporting">
<title>CLUSTER Progress Reporting</title>
<indexterm>
<primary>pg_stat_progress_cluster</primary>
</indexterm>
<para>
Whenever <command>CLUSTER</command> or <command>VACUUM FULL</command> is
running, the <structname>pg_stat_progress_cluster</structname> view will
contain a row for each backend that is currently running either command.
The tables below describe the information that will be reported and
provide information about how to interpret it.
</para>
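  <para>
   For example, a query along these lines shows how far the table rewrite
   has progressed:
<programlisting>
SELECT pid, relid::regclass AS relation, command, phase,
       heap_blks_scanned, heap_blks_total
FROM pg_stat_progress_cluster;
</programlisting>
  </para>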
<table id="pg-stat-progress-cluster-view" xreflabel="pg_stat_progress_cluster">
<title><structname>pg_stat_progress_cluster</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of backend.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datid</structfield> <type>oid</type>
</para>
<para>
OID of the database to which this backend is connected.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datname</structfield> <type>name</type>
</para>
<para>
Name of the database to which this backend is connected.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of the table being clustered.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>command</structfield> <type>text</type>
</para>
<para>
The command that is running. Either <literal>CLUSTER</literal> or <literal>VACUUM FULL</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>phase</structfield> <type>text</type>
</para>
<para>
Current processing phase. See <xref linkend="cluster-phases"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>cluster_index_relid</structfield> <type>oid</type>
</para>
<para>
If the table is being scanned using an index, this is the OID of the
index being used; otherwise, it is zero.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>heap_tuples_scanned</structfield> <type>bigint</type>
</para>
<para>
Number of heap tuples scanned.
This counter only advances when the phase is
<literal>seq scanning heap</literal>,
<literal>index scanning heap</literal>
or <literal>writing new heap</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>heap_tuples_written</structfield> <type>bigint</type>
</para>
<para>
Number of heap tuples written.
This counter only advances when the phase is
<literal>seq scanning heap</literal>,
<literal>index scanning heap</literal>
or <literal>writing new heap</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>heap_blks_total</structfield> <type>bigint</type>
</para>
<para>
Total number of heap blocks in the table. This number is reported
as of the beginning of <literal>seq scanning heap</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>heap_blks_scanned</structfield> <type>bigint</type>
</para>
<para>
Number of heap blocks scanned. This counter only advances when the
phase is <literal>seq scanning heap</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>index_rebuild_count</structfield> <type>bigint</type>
</para>
<para>
Number of indexes rebuilt. This counter only advances when the phase
is <literal>rebuilding index</literal>.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
<table id="cluster-phases">
<title>CLUSTER and VACUUM FULL Phases</title>
<tgroup cols="2">
<colspec colname="col1" colwidth="1*"/>
<colspec colname="col2" colwidth="2*"/>
<thead>
<row>
<entry>Phase</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>initializing</literal></entry>
<entry>
The command is preparing to begin scanning the heap. This phase is
expected to be very brief.
</entry>
</row>
<row>
<entry><literal>seq scanning heap</literal></entry>
<entry>
The command is currently scanning the table using a sequential scan.
</entry>
</row>
<row>
<entry><literal>index scanning heap</literal></entry>
<entry>
<command>CLUSTER</command> is currently scanning the table using an index scan.
</entry>
</row>
<row>
<entry><literal>sorting tuples</literal></entry>
<entry>
<command>CLUSTER</command> is currently sorting tuples.
</entry>
</row>
<row>
<entry><literal>writing new heap</literal></entry>
<entry>
<command>CLUSTER</command> is currently writing the new heap.
</entry>
</row>
<row>
<entry><literal>swapping relation files</literal></entry>
<entry>
The command is currently swapping newly-built files into place.
</entry>
</row>
<row>
<entry><literal>rebuilding index</literal></entry>
<entry>
The command is currently rebuilding an index.
</entry>
</row>
<row>
<entry><literal>performing final cleanup</literal></entry>
<entry>
The command is performing final cleanup. When this phase is
completed, <command>CLUSTER</command>
or <command>VACUUM FULL</command> will end.
</entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
<sect2 id="copy-progress-reporting">
<title>COPY Progress Reporting</title>
<indexterm>
<primary>pg_stat_progress_copy</primary>
</indexterm>
<para>
Whenever <command>COPY</command> is running, the
<structname>pg_stat_progress_copy</structname> view will contain one row
for each backend that is currently running a <command>COPY</command> command.
The table below describes the information that will be reported and provides
information about how to interpret it.
</para>
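  <para>
   For example, a query such as the following shows how much data each
   running <command>COPY</command> has processed so far:
<programlisting>
SELECT pid, relid::regclass AS relation, command, type,
       bytes_processed, bytes_total, tuples_processed
FROM pg_stat_progress_copy;
</programlisting>
  </para>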
<table id="pg-stat-progress-copy-view" xreflabel="pg_stat_progress_copy">
<title><structname>pg_stat_progress_copy</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of backend.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datid</structfield> <type>oid</type>
</para>
<para>
OID of the database to which this backend is connected.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datname</structfield> <type>name</type>
</para>
<para>
Name of the database to which this backend is connected.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of the table on which the <command>COPY</command> command is
executed. It is set to <literal>0</literal> if copying from a
<command>SELECT</command> query.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>command</structfield> <type>text</type>
</para>
<para>
The command that is running: <literal>COPY FROM</literal>, or
<literal>COPY TO</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>type</structfield> <type>text</type>
</para>
<para>
The I/O type that the data is read from or written to:
<literal>FILE</literal>, <literal>PROGRAM</literal>,
<literal>PIPE</literal> (for <command>COPY FROM STDIN</command> and
<command>COPY TO STDOUT</command>), or <literal>CALLBACK</literal>
(used for example during the initial table synchronization in
logical replication).
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>bytes_processed</structfield> <type>bigint</type>
</para>
<para>
Number of bytes already processed by <command>COPY</command> command.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>bytes_total</structfield> <type>bigint</type>
</para>
<para>
Size of source file for <command>COPY FROM</command> command in bytes.
It is set to <literal>0</literal> if not available.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tuples_processed</structfield> <type>bigint</type>
</para>
<para>
Number of tuples already processed by <command>COPY</command> command.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tuples_excluded</structfield> <type>bigint</type>
</para>
<para>
Number of tuples not processed because they were excluded by the
<command>WHERE</command> clause of the <command>COPY</command> command.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tuples_skipped</structfield> <type>bigint</type>
</para>
<para>
Number of tuples skipped because they contain malformed data.
This counter only advances when a value other than
<literal>stop</literal> is specified to the <literal>ON_ERROR</literal>
option.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
<sect2 id="create-index-progress-reporting">
<title>CREATE INDEX Progress Reporting</title>
<indexterm>
<primary>pg_stat_progress_create_index</primary>
</indexterm>
<para>
Whenever <command>CREATE INDEX</command> or <command>REINDEX</command> is running, the
<structname>pg_stat_progress_create_index</structname> view will contain
one row for each backend that is currently creating indexes. The tables
below describe the information that will be reported and provide information
about how to interpret it.
</para>
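  <para>
   For example, a query along these lines shows the current phase and
   block-level progress of each index build:
<programlisting>
SELECT pid, relid::regclass AS relation, command, phase,
       blocks_done, blocks_total
FROM pg_stat_progress_create_index;
</programlisting>
  </para>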
<table id="pg-stat-progress-create-index-view" xreflabel="pg_stat_progress_create_index">
<title><structname>pg_stat_progress_create_index</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of the backend creating indexes.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datid</structfield> <type>oid</type>
</para>
<para>
OID of the database to which this backend is connected.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datname</structfield> <type>name</type>
</para>
<para>
Name of the database to which this backend is connected.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of the table on which the index is being created.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>index_relid</structfield> <type>oid</type>
</para>
<para>
OID of the index being created or reindexed. During a
non-concurrent <command>CREATE INDEX</command>, this is 0.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>command</structfield> <type>text</type>
</para>
<para>
Specific command type: <literal>CREATE INDEX</literal>,
<literal>CREATE INDEX CONCURRENTLY</literal>,
<literal>REINDEX</literal>, or <literal>REINDEX CONCURRENTLY</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>phase</structfield> <type>text</type>
</para>
<para>
Current processing phase of index creation. See <xref linkend="create-index-phases"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>lockers_total</structfield> <type>bigint</type>
</para>
<para>
Total number of lockers to wait for, when applicable.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>lockers_done</structfield> <type>bigint</type>
</para>
<para>
Number of lockers already waited for.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>current_locker_pid</structfield> <type>bigint</type>
</para>
<para>
Process ID of the locker currently being waited for.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blocks_total</structfield> <type>bigint</type>
</para>
<para>
Total number of blocks to be processed in the current phase.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>blocks_done</structfield> <type>bigint</type>
</para>
<para>
Number of blocks already processed in the current phase.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tuples_total</structfield> <type>bigint</type>
</para>
<para>
Total number of tuples to be processed in the current phase.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tuples_done</structfield> <type>bigint</type>
</para>
<para>
Number of tuples already processed in the current phase.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>partitions_total</structfield> <type>bigint</type>
</para>
<para>
Total number of partitions on which the index is to be created
or attached, including both direct and indirect partitions.
<literal>0</literal> during a <literal>REINDEX</literal>, or when
the index is not partitioned.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>partitions_done</structfield> <type>bigint</type>
</para>
<para>
Number of partitions on which the index has already been created
or attached, including both direct and indirect partitions.
<literal>0</literal> during a <literal>REINDEX</literal>, or when
the index is not partitioned.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
<table id="create-index-phases">
<title>CREATE INDEX Phases</title>
<tgroup cols="2">
<colspec colname="col1" colwidth="1*"/>
<colspec colname="col2" colwidth="2*"/>
<thead>
<row>
<entry>Phase</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>initializing</literal></entry>
<entry>
<command>CREATE INDEX</command> or <command>REINDEX</command> is preparing to create the index. This
phase is expected to be very brief.
</entry>
</row>
<row>
<entry><literal>waiting for writers before build</literal></entry>
<entry>
<command>CREATE INDEX CONCURRENTLY</command> or <command>REINDEX CONCURRENTLY</command> is waiting for transactions
with write locks that can potentially see the table to finish.
This phase is skipped when not in concurrent mode.
Columns <structname>lockers_total</structname>, <structname>lockers_done</structname>
and <structname>current_locker_pid</structname> contain the progress
information for this phase.
</entry>
</row>
<row>
<entry><literal>building index</literal></entry>
<entry>
The index is being built by the access method-specific code. In this phase,
access methods that support progress reporting fill in their own progress data,
and the subphase is indicated in this column. Typically,
<structname>blocks_total</structname> and <structname>blocks_done</structname>
will contain progress data, as well as potentially
<structname>tuples_total</structname> and <structname>tuples_done</structname>.
</entry>
</row>
<row>
<entry><literal>waiting for writers before validation</literal></entry>
<entry>
<command>CREATE INDEX CONCURRENTLY</command> or <command>REINDEX CONCURRENTLY</command> is waiting for transactions
with write locks that can potentially write into the table to finish.
This phase is skipped when not in concurrent mode.
Columns <structname>lockers_total</structname>, <structname>lockers_done</structname>
and <structname>current_locker_pid</structname> contain the progress
information for this phase.
</entry>
</row>
<row>
<entry><literal>index validation: scanning index</literal></entry>
<entry>
<command>CREATE INDEX CONCURRENTLY</command> is scanning the index searching
for tuples that need to be validated.
This phase is skipped when not in concurrent mode.
Columns <structname>blocks_total</structname> (set to the total size of the index)
and <structname>blocks_done</structname> contain the progress information for this phase.
</entry>
</row>
<row>
<entry><literal>index validation: sorting tuples</literal></entry>
<entry>
<command>CREATE INDEX CONCURRENTLY</command> is sorting the output of the
index scanning phase.
</entry>
</row>
<row>
<entry><literal>index validation: scanning table</literal></entry>
<entry>
<command>CREATE INDEX CONCURRENTLY</command> is scanning the table
to validate the index tuples collected in the previous two phases.
This phase is skipped when not in concurrent mode.
Columns <structname>blocks_total</structname> (set to the total size of the table)
and <structname>blocks_done</structname> contain the progress information for this phase.
</entry>
</row>
<row>
<entry><literal>waiting for old snapshots</literal></entry>
<entry>
<command>CREATE INDEX CONCURRENTLY</command> or <command>REINDEX CONCURRENTLY</command> is waiting for transactions
that can potentially see the table to release their snapshots. This
phase is skipped when not in concurrent mode.
Columns <structname>lockers_total</structname>, <structname>lockers_done</structname>
and <structname>current_locker_pid</structname> contain the progress
information for this phase.
</entry>
</row>
<row>
<entry><literal>waiting for readers before marking dead</literal></entry>
<entry>
<command>REINDEX CONCURRENTLY</command> is waiting for transactions
with read locks on the table to finish, before marking the old index dead.
This phase is skipped when not in concurrent mode.
Columns <structname>lockers_total</structname>, <structname>lockers_done</structname>
and <structname>current_locker_pid</structname> contain the progress
information for this phase.
</entry>
</row>
<row>
<entry><literal>waiting for readers before dropping</literal></entry>
<entry>
<command>REINDEX CONCURRENTLY</command> is waiting for transactions
with read locks on the table to finish, before dropping the old index.
This phase is skipped when not in concurrent mode.
Columns <structname>lockers_total</structname>, <structname>lockers_done</structname>
and <structname>current_locker_pid</structname> contain the progress
information for this phase.
</entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
<sect2 id="vacuum-progress-reporting">
<title>VACUUM Progress Reporting</title>
<indexterm>
<primary>pg_stat_progress_vacuum</primary>
</indexterm>
<para>
Whenever <command>VACUUM</command> is running, the
<structname>pg_stat_progress_vacuum</structname> view will contain
one row for each backend (including autovacuum worker processes) that is
currently vacuuming. The tables below describe the information
that will be reported and provide information about how to interpret it.
Progress for <command>VACUUM FULL</command> commands is reported via
<structname>pg_stat_progress_cluster</structname>
because both <command>VACUUM FULL</command> and <command>CLUSTER</command>
rewrite the table, while regular <command>VACUUM</command> only modifies it
in place. See <xref linkend="cluster-progress-reporting"/>.
</para>
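  <para>
   For example, a query such as the following reports the fraction of the
   table already scanned by each running <command>VACUUM</command>:
<programlisting>
SELECT pid, relid::regclass AS relation, phase,
       round(100.0 * heap_blks_scanned / nullif(heap_blks_total, 0), 1)
         AS percent_scanned
FROM pg_stat_progress_vacuum;
</programlisting>
  </para>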
<table id="pg-stat-progress-vacuum-view" xreflabel="pg_stat_progress_vacuum">
<title><structname>pg_stat_progress_vacuum</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of backend.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datid</structfield> <type>oid</type>
</para>
<para>
OID of the database to which this backend is connected.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>datname</structfield> <type>name</type>
</para>
<para>
Name of the database to which this backend is connected.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>relid</structfield> <type>oid</type>
</para>
<para>
OID of the table being vacuumed.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>phase</structfield> <type>text</type>
</para>
<para>
Current processing phase of vacuum. See <xref linkend="vacuum-phases"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>heap_blks_total</structfield> <type>bigint</type>
</para>
<para>
Total number of heap blocks in the table. This number is reported
as of the beginning of the scan; blocks added later will not be (and
need not be) visited by this <command>VACUUM</command>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>heap_blks_scanned</structfield> <type>bigint</type>
</para>
<para>
Number of heap blocks scanned. Because the
<link linkend="storage-vm">visibility map</link> is used to optimize scans,
some blocks will be skipped without inspection; skipped blocks are
included in this total, so that this number will eventually become
equal to <structfield>heap_blks_total</structfield> when the vacuum is complete.
This counter only advances when the phase is <literal>scanning heap</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>heap_blks_vacuumed</structfield> <type>bigint</type>
</para>
<para>
Number of heap blocks vacuumed. Unless the table has no indexes, this
counter only advances when the phase is <literal>vacuuming heap</literal>.
Blocks that contain no dead tuples are skipped, so the counter may
sometimes skip forward in large increments.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>index_vacuum_count</structfield> <type>bigint</type>
</para>
<para>
Number of completed index vacuum cycles.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>max_dead_tuples</structfield> <type>bigint</type>
</para>
<para>
Number of dead tuples that we can store before needing to perform
an index vacuum cycle, based on
<xref linkend="guc-maintenance-work-mem"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>num_dead_tuples</structfield> <type>bigint</type>
</para>
<para>
Number of dead tuples collected since the last index vacuum cycle.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>indexes_total</structfield> <type>bigint</type>
</para>
<para>
Total number of indexes that will be vacuumed or cleaned up. This
number is reported at the beginning of the
<literal>vacuuming indexes</literal> phase or the
<literal>cleaning up indexes</literal> phase.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>indexes_processed</structfield> <type>bigint</type>
</para>
<para>
Number of indexes processed. This counter only advances when the
phase is <literal>vacuuming indexes</literal> or
<literal>cleaning up indexes</literal>.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
<table id="vacuum-phases">
<title>VACUUM Phases</title>
<tgroup cols="2">
<colspec colname="col1" colwidth="1*"/>
<colspec colname="col2" colwidth="2*"/>
<thead>
<row>
<entry>Phase</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>initializing</literal></entry>
<entry>
<command>VACUUM</command> is preparing to begin scanning the heap. This
phase is expected to be very brief.
</entry>
</row>
<row>
<entry><literal>scanning heap</literal></entry>
<entry>
<command>VACUUM</command> is currently scanning the heap. It will prune and
defragment each page if required, and possibly perform freezing
activity. The <structfield>heap_blks_scanned</structfield> column can be used
to monitor the progress of the scan.
</entry>
</row>
<row>
<entry><literal>vacuuming indexes</literal></entry>
<entry>
<command>VACUUM</command> is currently vacuuming the indexes. If a table has
any indexes, this will happen at least once per vacuum, after the heap
has been completely scanned. It may happen multiple times per vacuum
if <xref linkend="guc-maintenance-work-mem"/> (or, in the case of autovacuum,
<xref linkend="guc-autovacuum-work-mem"/> if set) is insufficient to store
the number of dead tuples found.
</entry>
</row>
<row>
<entry><literal>vacuuming heap</literal></entry>
<entry>
<command>VACUUM</command> is currently vacuuming the heap. Vacuuming the heap
is distinct from scanning the heap, and occurs after each instance of
vacuuming indexes. If <structfield>heap_blks_scanned</structfield> is less than
<structfield>heap_blks_total</structfield>, the system will return to scanning
the heap after this phase is completed; otherwise, it will begin
cleaning up indexes after this phase is completed.
</entry>
</row>
<row>
<entry><literal>cleaning up indexes</literal></entry>
<entry>
<command>VACUUM</command> is currently cleaning up indexes. This occurs after
the heap has been completely scanned and all vacuuming of the indexes
and the heap has been completed.
</entry>
</row>
<row>
<entry><literal>truncating heap</literal></entry>
<entry>
<command>VACUUM</command> is currently truncating the heap so as to return
empty pages at the end of the relation to the operating system. This
occurs after cleaning up indexes.
</entry>
</row>
<row>
<entry><literal>performing final cleanup</literal></entry>
<entry>
<command>VACUUM</command> is performing final cleanup. During this phase,
<command>VACUUM</command> will vacuum the free space map, update statistics
in <literal>pg_class</literal>, and report statistics to the cumulative
statistics system. When this phase is completed, <command>VACUUM</command> will end.
</entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
<sect2 id="basebackup-progress-reporting">
<title>Base Backup Progress Reporting</title>
<indexterm>
<primary>pg_stat_progress_basebackup</primary>
</indexterm>
<para>
Whenever an application like <application>pg_basebackup</application>
is taking a base backup, the
<structname>pg_stat_progress_basebackup</structname>
view will contain a row for each WAL sender process that is currently
running the <command>BASE_BACKUP</command> replication command
and streaming the backup. The tables below describe the information
that will be reported and provide information about how to interpret it.
</para>
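  <para>
   For example, a query such as the following shows how much of the
   estimated backup size has been streamed so far:
<programlisting>
SELECT pid, phase, backup_streamed, backup_total,
       tablespaces_streamed, tablespaces_total
FROM pg_stat_progress_basebackup;
</programlisting>
  </para>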
<table id="pg-stat-progress-basebackup-view" xreflabel="pg_stat_progress_basebackup">
<title><structname>pg_stat_progress_basebackup</structname> View</title>
<tgroup cols="1">
<thead>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
Column Type
</para>
<para>
Description
</para></entry>
</row>
</thead>
<tbody>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>pid</structfield> <type>integer</type>
</para>
<para>
Process ID of a WAL sender process.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>phase</structfield> <type>text</type>
</para>
<para>
Current processing phase. See <xref linkend="basebackup-phases"/>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>backup_total</structfield> <type>bigint</type>
</para>
<para>
Total amount of data that will be streamed. This is estimated and
reported as of the beginning of
<literal>streaming database files</literal> phase. Note that
this is only an approximation since the database
may change during <literal>streaming database files</literal> phase
and WAL log may be included in the backup later. This is always
the same value as <structfield>backup_streamed</structfield>
once the amount of data streamed exceeds the estimated
total size. If the estimation is disabled in
<application>pg_basebackup</application>
(i.e., <literal>--no-estimate-size</literal> option is specified),
this is <literal>NULL</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>backup_streamed</structfield> <type>bigint</type>
</para>
<para>
Amount of data streamed. This counter only advances
when the phase is <literal>streaming database files</literal> or
<literal>transferring wal files</literal>.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tablespaces_total</structfield> <type>bigint</type>
</para>
<para>
Total number of tablespaces that will be streamed.
</para></entry>
</row>
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>tablespaces_streamed</structfield> <type>bigint</type>
</para>
<para>
Number of tablespaces streamed. This counter only
advances when the phase is <literal>streaming database files</literal>.
</para></entry>
</row>
</tbody>
</tgroup>
</table>
<table id="basebackup-phases">
<title>Base Backup Phases</title>
<tgroup cols="2">
<colspec colname="col1" colwidth="1*"/>
<colspec colname="col2" colwidth="2*"/>
<thead>
<row>
<entry>Phase</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>initializing</literal></entry>
<entry>
The WAL sender process is preparing to begin the backup.
This phase is expected to be very brief.
</entry>
</row>
<row>
<entry><literal>waiting for checkpoint to finish</literal></entry>
<entry>
The WAL sender process is currently performing
<function>pg_backup_start</function> to prepare to
take a base backup, and waiting for the start-of-backup
checkpoint to finish.
</entry>
</row>
<row>
<entry><literal>estimating backup size</literal></entry>
<entry>
The WAL sender process is currently estimating the total amount
of database files that will be streamed as a base backup.
</entry>
</row>
<row>
<entry><literal>streaming database files</literal></entry>
<entry>
The WAL sender process is currently streaming database files
as a base backup.
</entry>
</row>
<row>
<entry><literal>waiting for wal archiving to finish</literal></entry>
<entry>
The WAL sender process is currently performing
<function>pg_backup_stop</function> to finish the backup,
and waiting for all the WAL files required for the base backup
to be successfully archived.
If either <literal>--wal-method=none</literal> or
<literal>--wal-method=stream</literal> is specified in
<application>pg_basebackup</application>, the backup will end
when this phase is completed.
</entry>
</row>
<row>
<entry><literal>transferring wal files</literal></entry>
<entry>
The WAL sender process is currently transferring all WAL logs
generated during the backup. This phase occurs after
<literal>waiting for wal archiving to finish</literal> phase if
<literal>--wal-method=fetch</literal> is specified in
<application>pg_basebackup</application>. The backup will end
when this phase is completed.
</entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
</sect1>
<sect1 id="dynamic-trace">
<title>Dynamic Tracing</title>
<indexterm zone="dynamic-trace">
<primary>DTrace</primary>
</indexterm>
<para>
<productname>PostgreSQL</productname> provides facilities to support
dynamic tracing of the database server. This allows an external
utility to be called at specific points in the code and thereby trace
execution.
</para>
<para>
A number of probes or trace points are already inserted into the source
code. These probes are intended to be used by database developers and
administrators. By default the probes are not compiled into
<productname>PostgreSQL</productname>; the user needs to explicitly tell
the configure script to make the probes available.
</para>
<para>
Currently, the
<ulink url="https://en.wikipedia.org/wiki/DTrace">DTrace</ulink>
utility is supported, which, at the time of this writing, is available
on Solaris, macOS, FreeBSD, NetBSD, and Oracle Linux. The
<ulink url="https://sourceware.org/systemtap/">SystemTap</ulink> project
for Linux provides a DTrace equivalent and can also be used. Supporting other dynamic
tracing utilities is theoretically possible by changing the definitions for
the macros in <filename>src/include/utils/probes.h</filename>.
</para>
<sect2 id="compiling-for-trace">
<title>Compiling for Dynamic Tracing</title>
<para>
By default, probes are not available, so you will need to
explicitly tell the configure script to make the probes available
in <productname>PostgreSQL</productname>. To include DTrace support
specify <option>--enable-dtrace</option> to configure. See <xref
linkend="configure-options-devel"/> for further information.
</para>
</sect2>
<sect2 id="trace-points">
<title>Built-in Probes</title>
<para>
A number of standard probes are provided in the source code,
as shown in <xref linkend="dtrace-probe-point-table"/>;
<xref linkend="typedefs-table"/>
shows the types used in the probes. More probes can certainly be
added to enhance <productname>PostgreSQL</productname>'s observability.
</para>
<table id="dtrace-probe-point-table">
<title>Built-in DTrace Probes</title>
<tgroup cols="3">
<colspec colname="col1" colwidth="2*"/>
<colspec colname="col2" colwidth="3*"/>
<colspec colname="col3" colwidth="3*"/>
<thead>
<row>
<entry>Name</entry>
<entry>Parameters</entry>
<entry>Description</entry>
</row>
</thead>
<tbody>
<row>
<entry><literal>transaction-start</literal></entry>
<entry><literal>(LocalTransactionId)</literal></entry>
<entry>Probe that fires at the start of a new transaction.
arg0 is the transaction ID.</entry>
</row>
<row>
<entry><literal>transaction-commit</literal></entry>
<entry><literal>(LocalTransactionId)</literal></entry>
<entry>Probe that fires when a transaction completes successfully.
arg0 is the transaction ID.</entry>
</row>
<row>
<entry><literal>transaction-abort</literal></entry>
<entry><literal>(LocalTransactionId)</literal></entry>
<entry>Probe that fires when a transaction completes unsuccessfully.
arg0 is the transaction ID.</entry>
</row>
<row>
<entry><literal>query-start</literal></entry>
<entry><literal>(const char *)</literal></entry>
<entry>Probe that fires when the processing of a query is started.
arg0 is the query string.</entry>
</row>
<row>
<entry><literal>query-done</literal></entry>
<entry><literal>(const char *)</literal></entry>
<entry>Probe that fires when the processing of a query is complete.
arg0 is the query string.</entry>
</row>
<row>
<entry><literal>query-parse-start</literal></entry>
<entry><literal>(const char *)</literal></entry>
<entry>Probe that fires when the parsing of a query is started.
arg0 is the query string.</entry>
</row>
<row>
<entry><literal>query-parse-done</literal></entry>
<entry><literal>(const char *)</literal></entry>
<entry>Probe that fires when the parsing of a query is complete.
arg0 is the query string.</entry>
</row>
<row>
<entry><literal>query-rewrite-start</literal></entry>
<entry><literal>(const char *)</literal></entry>
<entry>Probe that fires when the rewriting of a query is started.
arg0 is the query string.</entry>
</row>
<row>
<entry><literal>query-rewrite-done</literal></entry>
<entry><literal>(const char *)</literal></entry>
<entry>Probe that fires when the rewriting of a query is complete.
arg0 is the query string.</entry>
</row>
<row>
<entry><literal>query-plan-start</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when the planning of a query is started.</entry>
</row>
<row>
<entry><literal>query-plan-done</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when the planning of a query is complete.</entry>
</row>
<row>
<entry><literal>query-execute-start</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when the execution of a query is started.</entry>
</row>
<row>
<entry><literal>query-execute-done</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when the execution of a query is complete.</entry>
</row>
<row>
<entry><literal>statement-status</literal></entry>
<entry><literal>(const char *)</literal></entry>
<entry>Probe that fires anytime the server process updates its
<structname>pg_stat_activity</structname>.<structfield>status</structfield>.
arg0 is the new status string.</entry>
</row>
<row>
<entry><literal>checkpoint-start</literal></entry>
<entry><literal>(int)</literal></entry>
<entry>Probe that fires when a checkpoint is started.
arg0 holds the bitwise flags used to distinguish different checkpoint
types, such as shutdown, immediate or force.</entry>
</row>
<row>
<entry><literal>checkpoint-done</literal></entry>
<entry><literal>(int, int, int, int, int)</literal></entry>
<entry>Probe that fires when a checkpoint is complete.
(The probes listed next fire in sequence during checkpoint processing.)
arg0 is the number of buffers written. arg1 is the total number of
buffers. arg2, arg3 and arg4 contain the number of WAL files added,
removed and recycled respectively.</entry>
</row>
<row>
<entry><literal>clog-checkpoint-start</literal></entry>
<entry><literal>(bool)</literal></entry>
<entry>Probe that fires when the CLOG portion of a checkpoint is started.
arg0 is true for normal checkpoint, false for shutdown
checkpoint.</entry>
</row>
<row>
<entry><literal>clog-checkpoint-done</literal></entry>
<entry><literal>(bool)</literal></entry>
<entry>Probe that fires when the CLOG portion of a checkpoint is
complete. arg0 has the same meaning as for <literal>clog-checkpoint-start</literal>.</entry>
</row>
<row>
<entry><literal>subtrans-checkpoint-start</literal></entry>
<entry><literal>(bool)</literal></entry>
<entry>Probe that fires when the SUBTRANS portion of a checkpoint is
started.
arg0 is true for normal checkpoint, false for shutdown
checkpoint.</entry>
</row>
<row>
<entry><literal>subtrans-checkpoint-done</literal></entry>
<entry><literal>(bool)</literal></entry>
<entry>Probe that fires when the SUBTRANS portion of a checkpoint is
complete. arg0 has the same meaning as for
<literal>subtrans-checkpoint-start</literal>.</entry>
</row>
<row>
<entry><literal>multixact-checkpoint-start</literal></entry>
<entry><literal>(bool)</literal></entry>
<entry>Probe that fires when the MultiXact portion of a checkpoint is
started.
arg0 is true for normal checkpoint, false for shutdown
checkpoint.</entry>
</row>
<row>
<entry><literal>multixact-checkpoint-done</literal></entry>
<entry><literal>(bool)</literal></entry>
<entry>Probe that fires when the MultiXact portion of a checkpoint is
complete. arg0 has the same meaning as for
<literal>multixact-checkpoint-start</literal>.</entry>
</row>
<row>
<entry><literal>buffer-checkpoint-start</literal></entry>
<entry><literal>(int)</literal></entry>
<entry>Probe that fires when the buffer-writing portion of a checkpoint
is started.
arg0 holds the bitwise flags used to distinguish different checkpoint
types, such as shutdown, immediate or force.</entry>
</row>
<row>
<entry><literal>buffer-sync-start</literal></entry>
<entry><literal>(int, int)</literal></entry>
<entry>Probe that fires when we begin to write dirty buffers during
checkpoint (after identifying which buffers must be written).
arg0 is the total number of buffers.
arg1 is the number that are currently dirty and need to be written.</entry>
</row>
<row>
<entry><literal>buffer-sync-written</literal></entry>
<entry><literal>(int)</literal></entry>
<entry>Probe that fires after each buffer is written during checkpoint.
arg0 is the ID number of the buffer.</entry>
</row>
<row>
<entry><literal>buffer-sync-done</literal></entry>
<entry><literal>(int, int, int)</literal></entry>
<entry>Probe that fires when all dirty buffers have been written.
arg0 is the total number of buffers.
arg1 is the number of buffers actually written by the checkpoint process.
arg2 is the number that were expected to be written (arg1 of
<literal>buffer-sync-start</literal>); any difference reflects other processes flushing
buffers during the checkpoint.</entry>
</row>
<row>
<entry><literal>buffer-checkpoint-sync-start</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires after dirty buffers have been written to the
kernel, and before starting to issue fsync requests.</entry>
</row>
<row>
<entry><literal>buffer-checkpoint-done</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when syncing of buffers to disk is
complete.</entry>
</row>
<row>
<entry><literal>twophase-checkpoint-start</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when the two-phase portion of a checkpoint is
started.</entry>
</row>
<row>
<entry><literal>twophase-checkpoint-done</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when the two-phase portion of a checkpoint is
complete.</entry>
</row>
<row>
<entry><literal>buffer-extend-start</literal></entry>
<entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int, unsigned int)</literal></entry>
<entry>Probe that fires when a relation extension starts.
arg0 contains the fork to be extended. arg1, arg2, and arg3 contain the
tablespace, database, and relation OIDs identifying the relation. arg4
is the ID of the backend which created the temporary relation for a
local buffer, or <symbol>INVALID_PROC_NUMBER</symbol> (-1) for a shared
buffer. arg5 is the number of blocks the caller would like to extend
by.</entry>
</row>
<row>
<entry><literal>buffer-extend-done</literal></entry>
<entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int, unsigned int, BlockNumber)</literal></entry>
<entry>Probe that fires when a relation extension is complete.
arg0 contains the fork to be extended. arg1, arg2, and arg3 contain the
tablespace, database, and relation OIDs identifying the relation. arg4
is the ID of the backend which created the temporary relation for a
local buffer, or <symbol>INVALID_PROC_NUMBER</symbol> (-1) for a shared
       buffer. arg5 is the number of blocks the relation was extended by;
       this can be less than the number requested in the
       <literal>buffer-extend-start</literal> probe due to resource
       constraints. arg6 contains the BlockNumber of the first new
block.</entry>
</row>
<row>
<entry><literal>buffer-read-start</literal></entry>
<entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int)</literal></entry>
<entry>Probe that fires when a buffer read is started.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or <symbol>INVALID_PROC_NUMBER</symbol> (-1) for a shared buffer.
</entry>
</row>
<row>
<entry><literal>buffer-read-done</literal></entry>
<entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool)</literal></entry>
<entry>Probe that fires when a buffer read is complete.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or <symbol>INVALID_PROC_NUMBER</symbol> (-1) for a shared buffer.
arg6 is true if the buffer was found in the pool, false if not.</entry>
</row>
<row>
<entry><literal>buffer-flush-start</literal></entry>
<entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid)</literal></entry>
<entry>Probe that fires before issuing any write request for a shared
buffer.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.</entry>
</row>
<row>
<entry><literal>buffer-flush-done</literal></entry>
<entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid)</literal></entry>
<entry>Probe that fires when a write request is complete. (Note
that this just reflects the time to pass the data to the kernel;
       the data has typically not actually been written to disk yet.)
The arguments are the same as for <literal>buffer-flush-start</literal>.</entry>
</row>
<row>
<entry><literal>wal-buffer-write-dirty-start</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when a server process begins to write a
dirty WAL buffer because no more WAL buffer space is available.
(If this happens often, it implies that
<xref linkend="guc-wal-buffers"/> is too small.)</entry>
</row>
<row>
<entry><literal>wal-buffer-write-dirty-done</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when a dirty WAL buffer write is complete.</entry>
</row>
<row>
<entry><literal>wal-insert</literal></entry>
<entry><literal>(unsigned char, unsigned char)</literal></entry>
<entry>Probe that fires when a WAL record is inserted.
arg0 is the resource manager (rmid) for the record.
arg1 contains the info flags.</entry>
</row>
<row>
<entry><literal>wal-switch</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when a WAL segment switch is requested.</entry>
</row>
<row>
<entry><literal>smgr-md-read-start</literal></entry>
<entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int)</literal></entry>
<entry>Probe that fires when beginning to read a block from a relation.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or <symbol>INVALID_PROC_NUMBER</symbol> (-1) for a shared buffer.</entry>
</row>
<row>
<entry><literal>smgr-md-read-done</literal></entry>
<entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)</literal></entry>
<entry>Probe that fires when a block read is complete.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or <symbol>INVALID_PROC_NUMBER</symbol> (-1) for a shared buffer.
arg6 is the number of bytes actually read, while arg7 is the number
requested (if these are different it indicates a short read).</entry>
</row>
<row>
<entry><literal>smgr-md-write-start</literal></entry>
<entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int)</literal></entry>
<entry>Probe that fires when beginning to write a block to a relation.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or <symbol>INVALID_PROC_NUMBER</symbol> (-1) for a shared buffer.</entry>
</row>
<row>
<entry><literal>smgr-md-write-done</literal></entry>
<entry><literal>(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)</literal></entry>
<entry>Probe that fires when a block write is complete.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or <symbol>INVALID_PROC_NUMBER</symbol> (-1) for a shared buffer.
arg6 is the number of bytes actually written, while arg7 is the number
requested (if these are different it indicates a short write).</entry>
</row>
<row>
<entry><literal>sort-start</literal></entry>
<entry><literal>(int, bool, int, int, bool, int)</literal></entry>
<entry>Probe that fires when a sort operation is started.
arg0 indicates heap, index or datum sort.
arg1 is true for unique-value enforcement.
arg2 is the number of key columns.
arg3 is the number of kilobytes of work memory allowed.
arg4 is true if random access to the sort result is required.
arg5 indicates serial when <literal>0</literal>, parallel worker when
<literal>1</literal>, or parallel leader when <literal>2</literal>.</entry>
</row>
<row>
<entry><literal>sort-done</literal></entry>
<entry><literal>(bool, long)</literal></entry>
<entry>Probe that fires when a sort is complete.
arg0 is true for external sort, false for internal sort.
arg1 is the number of disk blocks used for an external sort,
or kilobytes of memory used for an internal sort.</entry>
</row>
<row>
<entry><literal>lwlock-acquire</literal></entry>
<entry><literal>(char *, LWLockMode)</literal></entry>
<entry>Probe that fires when an LWLock has been acquired.
arg0 is the LWLock's tranche.
arg1 is the requested lock mode, either exclusive or shared.</entry>
</row>
<row>
<entry><literal>lwlock-release</literal></entry>
<entry><literal>(char *)</literal></entry>
<entry>Probe that fires when an LWLock has been released (but note
that any released waiters have not yet been awakened).
arg0 is the LWLock's tranche.</entry>
</row>
<row>
<entry><literal>lwlock-wait-start</literal></entry>
<entry><literal>(char *, LWLockMode)</literal></entry>
<entry>Probe that fires when an LWLock was not immediately available and
a server process has begun to wait for the lock to become available.
arg0 is the LWLock's tranche.
arg1 is the requested lock mode, either exclusive or shared.</entry>
</row>
<row>
<entry><literal>lwlock-wait-done</literal></entry>
<entry><literal>(char *, LWLockMode)</literal></entry>
<entry>Probe that fires when a server process has been released from its
wait for an LWLock (it does not actually have the lock yet).
arg0 is the LWLock's tranche.
arg1 is the requested lock mode, either exclusive or shared.</entry>
</row>
<row>
<entry><literal>lwlock-condacquire</literal></entry>
<entry><literal>(char *, LWLockMode)</literal></entry>
<entry>Probe that fires when an LWLock was successfully acquired when the
caller specified no waiting.
arg0 is the LWLock's tranche.
arg1 is the requested lock mode, either exclusive or shared.</entry>
</row>
<row>
<entry><literal>lwlock-condacquire-fail</literal></entry>
<entry><literal>(char *, LWLockMode)</literal></entry>
<entry>Probe that fires when an LWLock was not successfully acquired when
the caller specified no waiting.
arg0 is the LWLock's tranche.
Simplify LWLock tranche machinery by removing array_base/array_stride. array_base and array_stride were added so that we could identify the offset of an LWLock within a tranche, but this facility is only very marginally used apart from the main tranche. So, give every lock in the main tranche its own tranche ID and get rid of array_base, array_stride, and all that's attached. For debugging facilities (Trace_lwlocks and LWLOCK_STATS) print the pointer address of the LWLock using %p instead of the offset. This is arguably more useful, and certainly a lot cheaper. Drop the offset-within-tranche from the information reported to dtrace and from one can't-happen message inside lwlock.c. The main user-visible impact of this change is that pg_stat_activity will now report all waits for LWLocks as "LWLock" rather than reporting some as "LWLockTranche" and others as "LWLockNamed". The main motivation for this change is that the need to specify an array_base and an array_stride is awkward for parallel query. There is only a very limited supply of tranche IDs so we can't just keep allocating new ones, and if we try to use the same tranche IDs every time then we run into trouble when multiple parallel contexts are use simultaneously. So if we didn't get rid of this mechanism we'd have to make it even more complicated. By simplifying it in this way, we instead reduce the size of the generated code for lwlock.c by about 5%. Discussion: http://postgr.es/m/CA+TgmoYsFn6NUW1x0AZtupJGUAs1UDY4dJtCN47_Q6D0sP80PA@mail.gmail.com
2016-12-16 17:29:23 +01:00
arg1 is the requested lock mode, either exclusive or shared.</entry>
</row>
<row>
2015-12-12 17:31:28 +01:00
<entry><literal>lock-wait-start</literal></entry>
<entry><literal>(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)</literal></entry>
<entry>Probe that fires when a request for a heavyweight lock (lmgr lock)
has begun to wait because the lock is not available.
arg0 through arg3 are the tag fields identifying the object being
locked. arg4 indicates the type of object being locked.
arg5 indicates the lock type being requested.</entry>
</row>
<row>
<entry><literal>lock-wait-done</literal></entry>
<entry><literal>(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)</literal></entry>
<entry>Probe that fires when a request for a heavyweight lock (lmgr lock)
has finished waiting (i.e., has acquired the lock).
The arguments are the same as for <literal>lock-wait-start</literal>.</entry>
</row>
<row>
2015-12-12 17:31:28 +01:00
<entry><literal>deadlock-found</literal></entry>
<entry><literal>()</literal></entry>
<entry>Probe that fires when a deadlock is found by the deadlock
detector.</entry>
</row>
</tbody>
</tgroup>
</table>
<table id="typedefs-table">
<title>Defined Types Used in Probe Parameters</title>
<tgroup cols="2">
<thead>
<row>
<entry>Type</entry>
<entry>Definition</entry>
</row>
</thead>
<tbody>
<row>
<entry><type>LocalTransactionId</type></entry>
<entry><type>unsigned int</type></entry>
</row>
<row>
<entry><type>LWLockMode</type></entry>
<entry><type>int</type></entry>
</row>
<row>
<entry><type>LOCKMODE</type></entry>
<entry><type>int</type></entry>
</row>
<row>
<entry><type>BlockNumber</type></entry>
<entry><type>unsigned int</type></entry>
</row>
<row>
<entry><type>Oid</type></entry>
<entry><type>unsigned int</type></entry>
</row>
<row>
<entry><type>ForkNumber</type></entry>
<entry><type>int</type></entry>
</row>
<row>
<entry><type>bool</type></entry>
<entry><type>unsigned char</type></entry>
</row>
</tbody>
</tgroup>
</table>
</sect2>
<sect2 id="using-trace-points">
<title>Using Probes</title>
<para>
The example below shows a DTrace script for analyzing transaction
counts in the system, as an alternative to snapshotting
<structname>pg_stat_database</structname> before and after a performance test:
<programlisting>
#!/usr/sbin/dtrace -qs
postgresql$1:::transaction-start
{
@start["Start"] = count();
self->ts = timestamp;
}
postgresql$1:::transaction-abort
{
@abort["Abort"] = count();
}
postgresql$1:::transaction-commit
/self->ts/
{
@commit["Commit"] = count();
@time["Total time (ns)"] = sum(timestamp - self->ts);
self->ts=0;
}
</programlisting>
When executed, the example D script gives output such as:
<screen>
# ./txn_count.d `pgrep -n postgres` or ./txn_count.d &lt;PID&gt;
^C
Start 71
Commit 70
Total time (ns) 2312105013
</screen>
</para>
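<para>
Along the same lines, the heavyweight-lock probes can be used to
summarize lock contention.  The script below is a minimal sketch, not
part of the standard distribution, that uses
<literal>lock-wait-start</literal> and <literal>lock-wait-done</literal>
to count waits and total wait time per requested lock mode
(arg5 of those probes):
<programlisting>
#!/usr/sbin/dtrace -qs

postgresql$1:::lock-wait-start
{
    self->ts = timestamp;
}

postgresql$1:::lock-wait-done
/self->ts/
{
    @waits[arg5] = count();
    @wait_time[arg5] = sum(timestamp - self->ts);
    self->ts = 0;
}
</programlisting>
The aggregation keys are the numeric lock modes; mapping them back to
names such as <literal>AccessShareLock</literal> is left to the reader.
</para>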
<note>
<para>
SystemTap uses a different notation for trace scripts than DTrace does,
even though the underlying trace points are compatible. One point worth
noting is that at this writing, SystemTap scripts must reference probe
names using double underscores in place of hyphens. This is expected to
be fixed in future SystemTap releases.
</para>
</note>
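<para>
As an illustration of that difference, a rough SystemTap counterpart of
the transaction-counting script might look like the sketch below.  The
path to the <filename>postgres</filename> binary is an assumption and
must be adjusted to match your installation:
<programlisting>
# Count transaction starts and commits; note the double underscores
# in the probe names.  (Adjust the binary path for your installation.)
global starts, commits

probe process("/usr/local/pgsql/bin/postgres").mark("transaction__start")
{
    starts++
}

probe process("/usr/local/pgsql/bin/postgres").mark("transaction__commit")
{
    commits++
}

probe end
{
    printf("Start  %d\nCommit %d\n", starts, commits)
}
</programlisting>
</para>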
<para>
You should remember that DTrace scripts need to be carefully written and
debugged, otherwise the trace information collected might
be meaningless. In most cases where problems are found it is the
instrumentation that is at fault, not the underlying system. When
discussing information found using dynamic tracing, be sure to include
the script that was used, so that it too can be checked and discussed.
</para>
</sect2>
<sect2 id="defining-trace-points">
<title>Defining New Probes</title>
<para>
New probes can be defined within the code wherever the developer
desires, though this will require a recompilation. Below are the steps
for inserting new probes:
</para>
<procedure>
<step>
<para>
Decide on probe names and data to be made available through the probes
</para>
</step>
<step>
<para>
Add the probe definitions to <filename>src/backend/utils/probes.d</filename>
</para>
</step>
<step>
<para>
Include <filename>pg_trace.h</filename> if it is not already present in the
module(s) containing the probe points, and insert
<literal>TRACE_POSTGRESQL</literal> probe macros at the desired locations
in the source code
</para>
</step>
<step>
<para>
Recompile and verify that the new probes are available
</para>
</step>
</procedure>
<formalpara>
<title>Example:</title>
<para>
Here is an example of how you would add a probe to trace all new
transactions by transaction ID.
</para>
</formalpara>
<procedure>
<step>
<para>
Decide that the probe will be named <literal>transaction-start</literal> and
requires a parameter of type <type>LocalTransactionId</type>
</para>
</step>
<step>
<para>
Add the probe definition to <filename>src/backend/utils/probes.d</filename>:
<programlisting>
probe transaction__start(LocalTransactionId);
</programlisting>
Note the use of the double underscore in the probe name. In a DTrace
script using the probe, the double underscore needs to be replaced with a
hyphen, so <literal>transaction-start</literal> is the name to document for
users.
</para>
</step>
<step>
<para>
At compile time, <literal>transaction__start</literal> is converted to a macro
called <literal>TRACE_POSTGRESQL_TRANSACTION_START</literal> (notice the
underscores are single here), which is available by including
<filename>pg_trace.h</filename>. Add the macro call to the appropriate location
in the source code. In this case, it looks like the following:
<programlisting>
TRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId);
</programlisting>
</para>
</step>
<step>
<para>
After recompiling and running the new binary, check that your newly added
probe is available by executing the following DTrace command. You
should see similar output:
<screen>
# dtrace -ln transaction-start
ID PROVIDER MODULE FUNCTION NAME
18705 postgresql49878 postgres StartTransactionCommand transaction-start
18755 postgresql49877 postgres StartTransactionCommand transaction-start
18805 postgresql49876 postgres StartTransactionCommand transaction-start
18855 postgresql49875 postgres StartTransactionCommand transaction-start
18986 postgresql49873 postgres StartTransactionCommand transaction-start
</screen>
</para>
</step>
</procedure>
<para>
There are a few things to be careful about when adding trace macros
to the C code:
<itemizedlist>
<listitem>
<para>
You should take care that the data types specified for a probe's
parameters match the data types of the variables used in the macro.
Otherwise, you will get compilation errors; see the sketch following
this list.
</para>
</listitem>
<listitem>
<para>
On most platforms, if <productname>PostgreSQL</productname> is
built with <option>--enable-dtrace</option>, the arguments to a trace
macro will be evaluated whenever control passes through the
macro, <emphasis>even if no tracing is being done</emphasis>. This is
usually not worth worrying about if you are just reporting the
values of a few local variables. But beware of putting expensive
function calls into the arguments. If you need to do that,
consider protecting the macro with a check to see if the trace
is actually enabled:
<programlisting>
if (TRACE_POSTGRESQL_TRANSACTION_START_ENABLED())
TRACE_POSTGRESQL_TRANSACTION_START(some_function(...));
</programlisting>
Each trace macro has a corresponding <literal>ENABLED</literal> macro.
</para>
</listitem>
</itemizedlist>
</para>
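<para>
To illustrate the first point, suppose a hypothetical probe (not one
that exists in the source tree) were declared in
<filename>probes.d</filename> as taking an <type>Oid</type> and a
<type>bool</type>.  The corresponding macro call should then pass
variables of exactly those types:
<programlisting>
/* in src/backend/utils/probes.d (hypothetical probe): */
probe example__probe(Oid, bool);

/* in the C code; relid must be an Oid and found a bool: */
TRACE_POSTGRESQL_EXAMPLE_PROBE(relid, found);
</programlisting>
</para>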
</sect2>
</sect1>
</chapter>