Commit Graph

16021 Commits

Author SHA1 Message Date
Tomas Vondra 2c7ea57e56 Revert "Logical decoding of sequences"
This reverts a sequence of commits, implementing features related to
logical decoding and replication of sequences:

 - 0da92dc530
 - 80901b3291
 - b779d7d8fd
 - d5ed9da41d
 - a180c2b34d
 - 75b1521dae
 - 2d2232933b
 - 002c9dd97a
 - 05843b1aa4

The implementation has issues, mostly due to combining transactional and
non-transactional behavior of sequences. It's not clear how this could
be fixed, but it'll require reworking a significant part of the patch.

Discussion: https://postgr.es/m/95345a19-d508-63d1-860a-f5c2f41e8d40@enterprisedb.com
2022-04-07 20:06:36 +02:00
Peter Eisentraut d7ab2a9a3c doc: Fix man page whitespace issues
Whitespace between tags is significant, and in some cases it creates
extra vertical space in man pages.  The fix is to remove some newlines
in the markup.
2022-04-07 18:25:12 +02:00
Peter Eisentraut 344d62fb9a Unlogged sequences
Add support for unlogged sequences.  Unlike for unlogged tables, this
is not a performance feature.  It allows sequences associated with
unlogged tables to be excluded from replication.

A new subcommand ALTER SEQUENCE ... SET LOGGED/UNLOGGED is added.

An identity/serial sequence now automatically gets and follows the
persistence level (logged/unlogged) of its owning table.  (The
sequences owned by temporary tables were already temporary through the
separate mechanism in RangeVarAdjustRelationPersistence().)  But you
can still change the persistence of an owned sequence separately.
Also, pg_dump and pg_upgrade preserve the persistence of existing
sequences.
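
For illustration, a minimal sketch of the new syntax (object names are hypothetical):

CREATE UNLOGGED SEQUENCE staging_seq;
ALTER SEQUENCE staging_seq SET LOGGED;    -- the new subcommand
ALTER SEQUENCE staging_seq SET UNLOGGED;

CREATE UNLOGGED TABLE staging (id bigint GENERATED ALWAYS AS IDENTITY, payload text);
-- the identity sequence is created unlogged to match its owning table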

Discussion: https://www.postgresql.org/message-id/flat/04e12818-2f98-257c-b926-2845d74ed04f%402ndquadrant.com
2022-04-07 16:18:00 +02:00
Peter Eisentraut d772b18d44 Avoid <substeps> element in man pages
The upstream DocBook manpages stylesheet apparently does not handle
the <substeps> element at all, and so the content comes out
unformatted, which is not useful.

As a workaround, replace <substeps> with a nested <procedure>, which
ends up effectively the same in output.
2022-04-07 13:45:25 +02:00
Thomas Munro 5dc0418fab Prefetch data referenced by the WAL, take II.
Introduce a new GUC recovery_prefetch.  When enabled, look ahead in the
WAL and try to initiate asynchronous reading of referenced data blocks
that are not yet cached in our buffer pool.  For now, this is done with
posix_fadvise(), which has several caveats.  Since not all OSes have
that system call, "try" is provided so that it can be enabled where
available.  Better mechanisms for asynchronous I/O are possible in later
work.

Set to "try" for now for test coverage.  Default setting to be finalized
before release.

The GUC wal_decode_buffer_size limits the distance we can look ahead in
bytes of decoded data.

The existing GUC maintenance_io_concurrency is used to limit the number
of concurrent I/Os allowed, based on pessimistic heuristics used to
infer that I/Os have begun and completed.  We'll also not look more than
maintenance_io_concurrency * 4 block references ahead.
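
A rough sketch of the knobs involved (values are arbitrary; each setting takes
effect per its usual GUC context, which may require a reload or restart):

ALTER SYSTEM SET recovery_prefetch = 'try';           -- off / on / try
ALTER SYSTEM SET wal_decode_buffer_size = '512kB';    -- lookahead distance in decoded WAL
ALTER SYSTEM SET maintenance_io_concurrency = 10;     -- caps concurrent prefetch I/Os
SHOW recovery_prefetch;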

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> (earlier version)
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier version)
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com> (earlier version)
Tested-by: Tomas Vondra <tomas.vondra@2ndquadrant.com> (earlier version)
Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com> (earlier version)
Tested-by: Dmitry Dolgov <9erthalion6@gmail.com> (earlier version)
Tested-by: Sait Talha Nisanci <Sait.Nisanci@microsoft.com> (earlier version)
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
2022-04-07 19:42:14 +12:00
Jeff Davis 5c279a6d35 Custom WAL Resource Managers.
Allow extensions to specify a new custom resource manager (rmgr),
which allows writing specialized WAL records. This is meant to be used
by a Table Access Method or Index Access Method.

Prior to this commit, only Generic WAL was available, which offers
support for recovery and physical replication but not logical
replication.

Reviewed-by: Julien Rouhaud, Bharath Rupireddy, Andres Freund
Discussion: https://postgr.es/m/ed1fb2e22d15d3563ae0eb610f7b61bb15999c0a.camel%40j-davis.com
2022-04-06 23:06:46 -07:00
Andres Freund 5891c7a8ed pgstat: store statistics in shared memory.
Previously the statistics collector received statistics updates via UDP and
shared statistics data by writing them out to temporary files regularly. These
files can reach tens of megabytes and are written out up to twice a
second. This has repeatedly prevented us from adding additional useful
statistics.

Now statistics are stored in shared memory. Statistics for variable-numbered
objects are stored in a dshash hashtable (backed by dynamic shared
memory). Fixed-numbered stats are stored in plain shared memory.

The header for pgstat.c contains an overview of the architecture.

The stats collector is not needed anymore, remove it.

By utilizing the transactional statistics drop infrastructure introduced in a
prior commit, statistics entries cannot "leak" anymore. Previously leaked
statistics were dropped by pgstat_vacuum_stat(), called from [auto-]vacuum. On
systems with many small relations pgstat_vacuum_stat() could be quite
expensive.

Now that replicas drop statistics entries for dropped objects, it is not
necessary anymore to reset stats when starting from a cleanly shut down
replica.

Subsequent commits will perform some further code cleanup, adapt docs and add
tests.

Bumps PGSTAT_FILE_FORMAT_ID.

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Reviewed-By: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-By: "David G. Johnston" <david.g.johnston@gmail.com>
Reviewed-By: Tomas Vondra <tomas.vondra@2ndquadrant.com> (in a much earlier version)
Reviewed-By: Arthur Zakirov <a.zakirov@postgrespro.ru> (in a much earlier version)
Reviewed-By: Antonin Houska <ah@cybertec.at> (in a much earlier version)
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Discussion: https://postgr.es/m/20220308205351.2xcn6k4x5yivcxyd@alap3.anarazel.de
Discussion: https://postgr.es/m/20210319235115.y3wz7hpnnrshdyv6@alap3.anarazel.de
2022-04-06 21:29:46 -07:00
Amit Kapila 79b716cfb7 Reorder subskiplsn in pg_subscription to avoid alignment issues.
The column 'subskiplsn' uses TYPALIGN_DOUBLE (which has 4-byte alignment
on AIX) for storage. But the C struct (Form_pg_subscription) has 8-byte
alignment for this field, so retrieving it from storage causes an
unaligned read.

To fix this, we rearranged the 'subskiplsn' column in the catalog so that
it naturally comes at an 8-byte boundary.

We have fixed a similar problem in commit f3b421da5f. This patch adds a
test to avoid a similar mistake in the future.

Reported-by: Noah Misch
Diagnosed-by: Noah Misch, Masahiko Sawada, Amit Kapila
Author: Masahiko Sawada
Reviewed-by: Noah Misch, Amit Kapila
Discussion: https://postgr.es/m/20220401074423.GC3682158@rfd.leadboat.com
	    https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
2022-04-07 09:39:25 +05:30
Michael Paquier 0d5c387573 Add option --config-file to pg_rewind
This option is useful to do a rewind with the server configuration file
(aka postgresql.conf) located outside the data directory, which is
something that some Linux distributions and some HA tools like to rely
on.  As a result, this can simplify the logic around a rewind by
avoiding the copy of such files before running pg_rewind.

This option affects pg_rewind when it internally starts the target
cluster with some "postgres" commands, adding -c config_file=FILE to the
command strings generated, when:
- retrieving a restore_command using a "postgres -C" command for
-c/--restore-target-wal.
- forcing crash recovery once to get the cluster into a clean shutdown
state.

Author: Gunnar "Nick" Bluth
Reviewed-by: Michael Banck, Alexander Kukushkin, Michael Paquier,
Alexander Alekseev
Discussion: https://postgr.es/m/7c59265d-ac50-b0aa-ca1e-65e8bd27642a@pro-open.de
2022-04-07 08:51:49 +09:00
Stephen Frost 39969e2a1e Remove exclusive backup mode
Exclusive-mode backups have been deprecated since 9.6 (when
non-exclusive backups were introduced) due to the issues they can cause
should the system crash while one is running, and generally because
non-exclusive backups provide a much better interface.
Further, exclusive backup mode wasn't really being tested (nor was most
of the related code, like being able to log in just to stop an exclusive
backup and the bits of the state machine related to that), and having to
possibly deal with an exclusive backup and the backup_label file
existing during pg_basebackup, pg_rewind, etc., added other complexities
that we are better off without.

This patch removes the exclusive backup mode, the various special cases
for dealing with it, and greatly simplifies the online backup code and
documentation.

Authors: David Steele, Nathan Bossart
Reviewed-by: Chapman Flack
Discussion: https://postgr.es/m/ac7339ca-3718-3c93-929f-99e725d1172c@pgmasters.net
https://postgr.es/m/CAHg+QDfiM+WU61tF6=nPZocMZvHDzCK47Kneyb0ZRULYzV5sKQ@mail.gmail.com
2022-04-06 14:41:03 -04:00
Tom Lane a0ffa885e4 Allow granting SET and ALTER SYSTEM privileges on GUC parameters.
This patch allows "PGC_SUSET" parameters to be set by non-superusers
if they have been explicitly granted the privilege to do so.
The privilege to perform ALTER SYSTEM SET/RESET on a specific parameter
can also be granted.
Such privileges are cluster-wide, not per database.  They are tracked
in a new shared catalog, pg_parameter_acl.

Granting and revoking these new privileges works as one would expect.
One caveat is that PGC_USERSET GUCs are unaffected by the SET privilege
--- one could wish that those were handled by a revocable grant to
PUBLIC, but they are not, because we couldn't make it robust enough
for GUCs defined by extensions.
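
A minimal sketch of the new grammar (the role name is hypothetical):

GRANT SET ON PARAMETER session_replication_role TO app_admin;
GRANT ALTER SYSTEM ON PARAMETER autovacuum_naptime TO app_admin;
REVOKE SET ON PARAMETER session_replication_role FROM app_admin;
-- the grants are recorded cluster-wide in the new shared catalog:
SELECT * FROM pg_parameter_acl;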

Mark Dilger, reviewed at various times by Andrew Dunstan, Robert Haas,
Joshua Brindle, and myself

Discussion: https://postgr.es/m/3D691E20-C1D5-4B80-8BA5-6BEB63AF3029@enterprisedb.com
2022-04-06 13:24:33 -04:00
Tatsuo Ishii 17a856d08b Change aggregated log format of pgbench.
Commit 4a39f87acd changed the aggregated log format. The problem is
that the explanatory paragraph for the log line in the documentation is
now too long. Also, the log format included more optional columns,
making the log lines harder to parse.  This commit tries to solve these
problems.

- There are no optional log columns anymore. If a column is not
  meaningful with the provided pgbench options, it will be presented as 0.

- Reorder the log columns so that it's easier to parse them.

- Adjust explanatory paragraph for the log line in the doc.

Discussion: https://postgr.es/m/flat/202203280757.3tu4ovs3petm%40alvherre.pgsql
2022-04-06 09:55:58 +09:00
Peter Eisentraut 7844c9918a psql: Show all query results by default
Previously, psql printed only the last result if a command string
returned multiple result sets.  Now it prints all of them.  The
previous behavior can be obtained by setting the psql variable
SHOW_ALL_RESULTS to off.

This is a significantly enhanced version of
3a51306722 (that was later reverted).
There is also much more test coverage for various psql features now.
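
For example, in an interactive psql session (\; sends both statements as a
single command string):

SELECT 1 AS first \; SELECT 2 AS second ;
-- with SHOW_ALL_RESULTS on (the new default), both result sets are printed
\set SHOW_ALL_RESULTS off
-- now only the last result set is printed, as in previous releases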

Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: "Iwata, Aya" <iwata.aya@jp.fujitsu.com> (earlier version)
Reviewed-by: Daniel Verite <daniel@manitou-mail.org> (earlier version)
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com> (earlier version)
Reviewed-by: vignesh C <vignesh21@gmail.com> (earlier version)
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
2022-04-04 20:00:33 +02:00
Peter Geoghegan 0b018fabaa Set relfrozenxid to oldest extant XID seen by VACUUM.
When VACUUM set relfrozenxid before now, it set it to whatever value was
used to determine which tuples to freeze -- the FreezeLimit cutoff.
This approach was very naive.  The relfrozenxid invariant only requires
that new relfrozenxid values be <= the oldest extant XID remaining in
the table (at the point that the VACUUM operation ends), which in
general might be much more recent than FreezeLimit.

VACUUM now carefully tracks the oldest remaining XID/MultiXactId as it
goes (the oldest remaining values _after_ lazy_scan_prune processing).
The final values are set as the table's new relfrozenxid and new
relminmxid in pg_class at the end of each VACUUM.  The oldest XID might
come from a tuple's xmin, xmax, or xvac fields.  It might even come from
one of the table's remaining MultiXacts.

Final relfrozenxid values must still be >= FreezeLimit in an aggressive
VACUUM (FreezeLimit still acts as a lower bound on the final value that
aggressive VACUUM can set relfrozenxid to).  Since standard VACUUMs
still make no guarantees about advancing relfrozenxid, they might as
well set relfrozenxid to a value from well before FreezeLimit when the
opportunity presents itself.  In general standard VACUUMs may now set
relfrozenxid to any value > the original relfrozenxid and <= OldestXmin.

Credit for the general idea of using the oldest extant XID to set
pg_class.relfrozenxid at the end of VACUUM goes to Andres Freund.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzkymFbz6D_vL+jmqSn_5q1wsFvFrE+37yLgL_Rkfd6Gzg@mail.gmail.com
2022-04-03 09:57:21 -07:00
Peter Geoghegan 05023a237c Doc: Add relfrozenxid Tip to XID wraparound section.
VACUUM VERBOSE and autovacuum log reports were taught to report the
details of how VACUUM advanced relfrozenxid (and relminmxid) by commit
872770fd.  Highlight this by adding a "Tip" to the documentation, next
to related discussion of age(relfrozenxid) monitoring.
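
A typical monitoring query of the kind discussed there (the LIMIT is arbitrary):

SELECT c.oid::regclass AS relation,
       age(c.relfrozenxid) AS xid_age,
       mxid_age(c.relminmxid) AS mxid_age
FROM pg_class AS c
WHERE c.relkind IN ('r', 'm', 't')
ORDER BY age(c.relfrozenxid) DESC
LIMIT 10;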

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzk0C1O-MKkOrj4YAfsGRru2=cA2VQpqM-9R1HNuG3nFaQ@mail.gmail.com
2022-04-02 13:34:57 -07:00
Peter Geoghegan f7e4d5c64f Doc: Clarify the role of aggressive VACUUMs.
Adjust the documentation's coverage of aggressive VACUUMs to make it
clearer that aggressive vacuuming isn't always strictly necessary.  It's
possible for non-aggressive VACUUMs to advance relfrozenxid/relminmxid
without fail, given the right workload conditions.

While this has always been true, it matters more with recent and pending
improvements to VACUUM.  These improvements make non-aggressive
vacuuming more likely to advance relfrozenxid/relminmxid in practice.
While this is an unrelated improvement to the docs, formally speaking,
it still doesn't seem worth backpatching.  So don't backpatch.
2022-04-02 12:29:35 -07:00
Joe Conway 9752436f04 Use has_privs_for_roles for predefined role checks: round 2
Similar to commit 6198420ad, replace is_member_of_role with
has_privs_for_role for predefined role access checks in recently
committed basebackup code. In passing fix a double-word error
in a nearby comment.

Discussion: https://postgr.es/m/flat/CAGB+Vh4Zv_TvKt2tv3QNS6tUM_F_9icmuj0zjywwcgVi4PAhFA@mail.gmail.com
2022-04-02 13:24:38 -04:00
Alvaro Herrera cfdd03f45e
Allow CLUSTER on partitioned tables
This is essentially the same as applying VACUUM FULL to a partitioned
table, which has been supported since commit 3c3bb99330 (March 2017).
While there's no great use case for applying CLUSTER to partitioned
tables, we don't have any strong reason not to allow it either.

For now, partitioned indexes cannot be marked clustered, so an index
must always be specified.
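
For illustration (table and index names are hypothetical):

-- clusters every leaf partition using the corresponding partition of the index
CLUSTER measurements USING measurements_logdate_idx;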

While at it, rename some variables that were RangeVars during the
development that led to 8bc717cb88 but never made it into the source
tree that way; there's no need to perpetuate names that have always been
more confusing than helpful.

Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/20201028003312.GU9241@telsasoft.com
Discussion: https://postgr.es/m/20200611153502.GT14879@telsasoft.com
2022-04-02 19:08:34 +02:00
Peter Geoghegan b7c485fb93 Doc: Remove MultiXact wraparound section link.
Remove circular "25.1.5.1. Multixacts And Wraparound" link that
references the section that the link itself appears in.  An explanation
of MultiXactId age appears only a few sentences before the link, so
there's no question that the link is superfluous at best.

Oversight in commit d5409295.

Author: Peter Geoghegan <pg@bowt.ie>
Backpatch: 14-
2022-04-02 09:49:08 -07:00
Alvaro Herrera 0af504733c
Remove excess semicolon in MERGE docs
Author: Euler Taveira <euler@eulerto.com>
Discussion: https://postgr.es/m/01dea9ef-85ad-4e09-a783-a1eadeae7bbe@www.fastmail.com
2022-04-02 17:18:14 +02:00
Michael Paquier d2a2ce4184 Make upgradecheck a no-op in MSVC's vcregress.pl
322becb has changed upgradecheck to use the TAP tests, discarding
pg_upgrade's tests in bincheck.  However, this is proving to be a bad
idea for the Windows buildfarm clients that use MSVC when TAP tests are
disabled as this causes a hard failure at the pg_upgrade step.

This commit disables upgradecheck, moving the execution of the tests of
pg_upgrade to bincheck, as per an initial suggestion from Andres
Freund, so that the buildfarm is able to live happily with those changes.

While at it, remove the routine that was used by upgradecheck to
create databases whose names are generated with a range of ASCII
characters as it is not used since 322becb.  upgradecheck is removed
from the CI script for Windows, as bincheck takes care of that now.

Per report from buildfarm member hamerkop (MSVC 2017 without a TAP
setup).

Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/YkbnpriYEAagZ2wH@paquier.xyz
2022-04-02 12:06:11 +09:00
Tom Lane 479b69a4a5 Set minimum required version of zstd as 1.4.0.
It emerges that our new zstd code depends on features that were not
present (or at least not enabled by default) in zstd before 1.4.0.
That's already four years old, so there's little reason to try to
make our code work with something older.  Instead make configure
check that zstd is at least 1.4.0, and document this requirement.

Justin Pryzby (doc changes by me)

Discussion: https://postgr.es/m/efbd94cd1102f99fd1300e44905ee4a994ee2ef6.camel@gunduz.org
2022-04-01 11:05:52 -04:00
Peter Eisentraut c1932e5428 libpq: Allow IP address SANs in server certificates
The current implementation supports exactly one IP address in a server
certificate's Common Name, which is brittle (the strings must match
exactly).  This patch adds support for IPv4 and IPv6 addresses in a
server's Subject Alternative Names.

Per discussion on-list:

- If the client's expected host is an IP address, we allow fallback to
  the Subject Common Name if an iPAddress SAN is not present, even if
  a dNSName is present.  This matches the behavior of NSS, in
  violation of the relevant RFCs.

- We also, counter-intuitively, match IP addresses embedded in dNSName
  SANs.  From inspection this appears to have been the behavior since
  the SAN matching feature was introduced in acd08d76.

- Unlike NSS, we don't map IPv4 to IPv6 addresses, or vice-versa.

Author: Jacob Champion <pchampion@vmware.com>
Co-authored-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/9f5f20974cd3a4091a788cf7f00ab663d5fcdffe.camel@vmware.com
2022-04-01 15:51:23 +02:00
Robert Haas fa25bebb82 docs: Changing column type doesn't always require an index rebuild.
James Coleman and Robert Haas, reviewed by Matthias van de Meent.

Discussion: https://postgr.es/m/CAAaqYe90Ea3RG=A7H-ONvTcx549-oQhp07BrHErwM=AyH2ximg@mail.gmail.com
2022-04-01 08:48:44 -04:00
Daniel Gustafsson 4e31c46e1e doc: Fix typo in ANALYZE documentation
Commit 61fa6ca79b accidentally wrote constrast instead of contrast.

Backpatch-through: 10
Discussion: https://postgr.es/m/88903179-5ce2-3d4d-af43-7830372bdcb6@enterprisedb.com
2022-03-31 12:03:33 +02:00
Amit Kapila 8f2e2bbf14 Raise a WARNING for missing publications.
When we create or alter a subscription to add publications, raise a warning
for non-existent publications. We don't want to give an error here because
it is possible that users can later create the missing publications.

Author: Vignesh C
Reviewed-by: Bharath Rupireddy, Japin Li, Dilip Kumar, Euler Taveira, Ashutosh Sharma, Amit Kapila
Discussion: https://postgr.es/m/CALDaNm0f4YujGW+q-Di0CbZpnQKFFrXntikaQQKuEmGG0=Zw=Q@mail.gmail.com
2022-03-31 08:25:50 +05:30
Tomas Vondra db0d67db24 Optimize order of GROUP BY keys
When evaluating a query with a multi-column GROUP BY clause using sort,
the cost may be heavily dependent on the order in which the keys are
compared when building the groups. Grouping does not imply any ordering,
so we're allowed to compare the keys in arbitrary order, and a Hash Agg
leverages this. But for Group Agg, we simply compared keys in the order
as specified in the query. This commit explores alternative ordering of
the keys, trying to find a cheaper one.

In principle, we might generate grouping paths for all permutations of
the keys, and leave the rest to the optimizer. But that might get very
expensive, so we try to pick only a couple interesting orderings based
on both local and global information.

When planning the grouping path, we explore statistics (number of
distinct values, cost of the comparison function) for the keys and
reorder them to minimize comparison costs. Intuitively, it may be better
to perform more expensive comparisons (for complex data types etc.)
last, because maybe the cheaper comparisons will be enough. Similarly,
the higher the cardinality of a key, the lower the probability we’ll
need to compare more keys. The patch generates and costs various
orderings, picking the cheapest ones.

The ordering of group keys may interact with other parts of the query,
some of which may not be known while planning the grouping. E.g. there
may be an explicit ORDER BY clause, or some other ordering-dependent
operation, higher up in the query, and using the same ordering may allow
using either incremental sort or even eliminate the sort entirely.

The patch generates orderings and picks those minimizing the comparison
cost (for various pathkeys), and then adds orderings that might be
useful for operations higher up in the plan (ORDER BY, etc.). Finally,
it always keeps the ordering specified in the query, on the assumption
the user might have additional insights.

This introduces a new GUC enable_group_by_reordering, so that the
optimization may be disabled if needed.
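
A sketch of how the optimization can be toggled (table and column names are
hypothetical):

SET enable_group_by_reordering = on;    -- the default
EXPLAIN (COSTS OFF)
SELECT count(*) FROM sales GROUP BY long_description, region_id;
-- the planner may now sort on the cheap, high-cardinality region_id first
SET enable_group_by_reordering = off;   -- keep the GROUP BY order as written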

The original patch was proposed by Teodor Sigaev, and later improved and
reworked by Dmitry Dolgov. Reviews by a number of people, including me,
Andrey Lepikhov, Claudio Freire, Ibrar Ahmed and Zhihong Yu.

Author: Dmitry Dolgov, Teodor Sigaev, Tomas Vondra
Reviewed-by: Tomas Vondra, Andrey Lepikhov, Claudio Freire, Ibrar Ahmed, Zhihong Yu
Discussion: https://postgr.es/m/7c79e6a5-8597-74e8-0671-1c39d124c9d6%40sigaev.ru
Discussion: https://postgr.es/m/CA%2Bq6zcW_4o2NC0zutLkOJPsFt80megSpX_dVRo6GK9PC-Jx_Ag%40mail.gmail.com
2022-03-31 01:13:33 +02:00
Andrew Dunstan 606948b058 SQL JSON functions
This patch introduces three SQL standard JSON functions:

JSON() (incorrectly mentioned in my commit message for f4fb45d15c)
JSON_SCALAR()
JSON_SERIALIZE()

JSON() produces json values from text, bytea, json or jsonb values, and
has facilities for handling duplicate keys.
JSON_SCALAR() produces a json value from any scalar SQL value, including
json and jsonb.
JSON_SERIALIZE() produces text or bytea from input which contains or
represents json or jsonb.

For the most part these functions don't add any significant new
capabilities, but they will be of use to users wanting standard-compliant
JSON handling.
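
A few illustrative calls (a sketch only; see the documentation for the full
grammar of these clauses):

SELECT JSON('{"a": 1, "b": 2}');                    -- json from text
SELECT JSON_SCALAR(42), JSON_SCALAR('hello');       -- json scalars from SQL scalars
SELECT JSON_SERIALIZE('{"a": 1}' RETURNING bytea);  -- bytea from json input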

Nikita Glukhov

Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
2022-03-30 16:30:37 -04:00
Robert Haas 26a0c025e2 Document basebackup_to_shell.required_role.
Omission noted by Joe Conway.

Discussion: https://postgr.es/m/CA+Tgmoat+zbzzZQJ7poXyUwiqxQxTaUid=auB4FejZ15VvDh4Q@mail.gmail.com
Discussion: https://postgr.es/m/744cf762-47d3-050f-5fa1-d4f9e8dbae2e@joeconway.com
2022-03-30 15:40:04 -04:00
Peter Eisentraut 7ae1619bc5 Add range_agg with multirange inputs
range_agg for normal ranges already existed.  A lot of code can be
shared.
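
For example:

SELECT range_agg(r)
FROM (VALUES ('{[1,4),[7,9)}'::int4multirange),
             ('{[3,5)}'::int4multirange)) AS t(r);
-- expected result: {[1,5),[7,9)}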

Author: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Chapman Flack <chap@anastigmatix.net>
Discussion: https://www.postgresql.org/message-id/flat/007ef255-35ef-fd26-679c-f97e7a7f30c2@illuminatedcomputing.com
2022-03-30 20:16:23 +02:00
Peter Eisentraut b21c4cf951 doc: Document range_intersect_agg(anymultirange)
It already existed but was not mentioned in the documentation.  (Only
the anyrange variant was listed.)

Author: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Chapman Flack <chap@anastigmatix.net>
Discussion: https://www.postgresql.org/message-id/flat/007ef255-35ef-fd26-679c-f97e7a7f30c2@illuminatedcomputing.com
2022-03-30 17:10:58 +02:00
Robert Haas 51c0d186d9 Allow parallel zstd compression when taking a base backup.
libzstd allows transparent parallel compression just by setting
an option when creating the compression context, so permit that
for both client and server-side backup compression. To use this,
use something like pg_basebackup --compress WHERE-zstd:workers=N
where WHERE is "client" or "server" and N is an integer.

When compression is performed on the server side, this will spawn
threads inside the PostgreSQL backend. While there is almost no
PostgreSQL server code which is thread-safe, the threads here are used
internally by libzstd and touch only data structures controlled by
libzstd.

Patch by me, based in part on earlier work by Dipesh Pandit
and Jeevan Ladhe. Reviewed by Justin Pryzby.

Discussion: http://postgr.es/m/CA+Tgmobj6u-nWF-j=FemygUhobhryLxf9h-wJN7W-2rSsseHNA@mail.gmail.com
2022-03-30 09:41:26 -04:00
Daniel Gustafsson 860ea46ba7 doc: Clarify when SSL actually means TLS
SSL has become the de facto term to mean an end-to-end encrypted channel
regardless of protocol used, even though the SSL protocol is deprecated.
Clarify what we mean with SSL in our documentation, especially for new
users who might be looking for TLS.

Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/D4ABB281-6CFD-46C6-A4E0-8EC23A2977BC@yesql.se
2022-03-30 13:07:30 +02:00
Peter Eisentraut 072132f04e Add header matching mode to COPY FROM
COPY FROM supports the HEADER option to silently discard the header
line from a CSV or text file.  It is possible to mistakenly load a
file that matches the expected format, for example, if two text
columns have been swapped, resulting in garbage in the database.

This adds a new option value HEADER MATCH that checks the column names
in the header line against the actual column names and errors out if
they do not match.
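
For example (table and file names are hypothetical):

COPY people (id, name, email)
FROM '/tmp/people.csv'
WITH (FORMAT csv, HEADER MATCH);
-- errors out if the file's header columns do not match id, name, email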

Author: Rémi Lapeyre <remi.lapeyre@lenstra.fr>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com
2022-03-30 09:02:31 +02:00
Robert Haas 9c08aea6a3 Add new block-by-block strategy for CREATE DATABASE.
Because this strategy logs changes on a block-by-block basis, it
avoids the need to checkpoint before and after the operation.
However, because it logs each changed block individually, it might
generate a lot of extra write-ahead logging if the template database
is large. Therefore, the older strategy remains available via a new
STRATEGY parameter to CREATE DATABASE, and a corresponding --strategy
option to createdb.
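
For illustration (database names are hypothetical):

CREATE DATABASE app_db STRATEGY = WAL_LOG;                       -- new block-by-block strategy
CREATE DATABASE app_clone TEMPLATE app_db STRATEGY = FILE_COPY;  -- older checkpoint-based strategy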

Somewhat controversially, this patch assembles the list of relations
to be copied to the new database by reading the pg_class relation of
the template database. Cross-database access like this isn't normally
possible, but it can be made to work here because there can't be any
connections to the database being copied, nor can it contain any
in-doubt transactions. Even so, we have to use lower-level interfaces
than normal, since the table scan and relcache interfaces will not
work for a database to which we're not connected. The advantage of
this approach is that we do not need to rely on the filesystem to
determine what ought to be copied, but instead on PostgreSQL's own
knowledge of the database structure. This avoids, for example,
copying stray files that happen to be located in the source database
directory.

Dilip Kumar, with a fairly large number of cosmetic changes by me.
Reviewed and tested by Ashutosh Sharma, Andres Freund, John Naylor,
Greg Nancarrow, Neha Sharma. Additional feedback from Bruce Momjian,
Heikki Linnakangas, Julien Rouhaud, Adam Brusselback, Kyotaro
Horiguchi, Tomas Vondra, Andrew Dunstan, Álvaro Herrera, and others.

Discussion: http://postgr.es/m/CA+TgmoYtcdxBjLh31DLxUXHxFVMPGzrU5_T=CYCvRyFHywSBUQ@mail.gmail.com
2022-03-29 11:48:36 -04:00
Daniel Gustafsson 3785d8e98b doc: Make UPDATE FROM examples consistent
The original first half of the example used an employees table and an
accounts.sales_person foreign key column, while the second half (added
in commit 8f889b1083) used a salesmen table and accounts.sales_id
for the foreign key.  This makes everything use the original names.

Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/87o81vqjw0.fsf@wibble.ilmari.org
2022-03-29 14:53:20 +02:00
Daniel Gustafsson ebc8b7d441 Enable SSL library detection via PQsslAttribute()
Currently, libpq client code must have a connection handle
before it can query the "library" SSL attribute.  This poses
problems if the client needs to know what SSL library is in
use before constructing a connection string.

Allow PQsslAttribute(NULL, "library") to return the library
in use -- currently, just "OpenSSL" or NULL. The new behavior
is announced with the LIBPQ_HAS_SSL_LIBRARY_DETECTION feature
macro, allowing clients to differentiate between a libpq that
was compiled without SSL support and a libpq that's just too
old to tell.

Author: Jacob Champion <pchampion@vmware.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/4c8b76ef434a96627170a31c3acd33cbfd6e41f1.camel@vmware.com
2022-03-29 14:02:45 +02:00
Michael Paquier a2c84990be Add system view pg_ident_file_mappings
This view is similar to the pg_hba_file_rules view, except that it is
associated with the parsing of pg_ident.conf.  Similarly to its cousin,
this view is useful to check via SQL if changes planned in pg_ident.conf
would work upon reload or restart, or to diagnose a previous failure.
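
For example, to check an edited pg_ident.conf before reloading:

-- summarizes the current contents of the file, reporting any parse problems
SELECT * FROM pg_ident_file_mappings;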

Bumps catalog version.

Author: Julien Rouhaud
Reviewed-by: Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/20220223045959.35ipdsvbxcstrhya@jrouhaud
2022-03-29 10:15:48 +09:00
Joe Conway 6198420ad8 Use has_privs_for_roles for predefined role checks
Generally, if a role is granted membership in another role with NOINHERIT,
it must use SET ROLE to access the privileges of that role; however,
with predefined roles the membership and privilege are conflated. Fix that
by replacing is_member_of_role with has_privs_for_role for predefined
roles. Patch does not remove is_member_of_role from acl.h, but it does
add a warning not to use that function for privilege checking. Not
backpatched based on hackers list discussion.

Author: Joshua Brindle
Reviewed-by: Stephen Frost, Nathan Bossart, Joe Conway
Discussion: https://postgr.es/m/flat/CAGB+Vh4Zv_TvKt2tv3QNS6tUM_F_9icmuj0zjywwcgVi4PAhFA@mail.gmail.com
2022-03-28 15:10:04 -04:00
Robert Haas 79de9842ab Remove the ability of a role to administer itself.
Commit f9fd176461 effectively gave
every role ADMIN OPTION on itself. However, this appears to be
something that happened accidentally as a result of refactoring
work rather than an intentional decision. Almost a decade later,
it was discovered that this was a security vulnerability. As a
result, commit fea164a72a restricted
this implicit ADMIN OPTION privilege to be exercisable only when
the role being administered is the same as the session user and
when no security-restricted operation is in progress. That
commit also documented the existence of this implicit privilege
for what seems to be the first time.

The effect of the privilege is to allow a login role to grant
the privileges of that role, and optionally ADMIN OPTION on it,
to some other role. That's an unusual thing to do, because generally
membership is granted in roles used as groups, rather than roles
used as users. Therefore, it does not seem likely that removing
the privilege will break things for many PostgreSQL users.

However, it will make it easier to reason about the permissions
system. This is the only case where a user who has not been given any
special permission (superuser, or ADMIN OPTION on some role) can
modify role membership, so removing it makes things more consistent.
For example, if a superuser sets up role A and B and grants A to B
but no other privileges to anyone, she can now be sure that no one
else will be able to revoke that grant. Without this change, that
would have been true only if A was a non-login role.

Patch by me. Reviewed by Tom Lane and Stephen Frost.

Discussion: http://postgr.es/m/CA+Tgmoawdt03kbA+dNyBcNWJpRxu0f4X=69Y3+DkXXZqmwMDLg@mail.gmail.com
2022-03-28 13:38:13 -04:00
Alvaro Herrera 7103ebb7aa
Add support for MERGE SQL command
MERGE performs actions that modify rows in the target table using a
source table or query. MERGE provides a single SQL statement that can
conditionally INSERT/UPDATE/DELETE rows -- a task that would otherwise
require multiple PL statements.  For example,

MERGE INTO target AS t
USING source AS s
ON t.tid = s.sid
WHEN MATCHED AND t.balance > s.delta THEN
  UPDATE SET balance = t.balance - s.delta
WHEN MATCHED THEN
  DELETE
WHEN NOT MATCHED AND s.delta > 0 THEN
  INSERT VALUES (s.sid, s.delta)
WHEN NOT MATCHED THEN
  DO NOTHING;

MERGE works with regular tables, partitioned tables and inheritance
hierarchies, including column and row security enforcement, as well as
support for row and statement triggers and transition tables therein.

MERGE is optimized for OLTP and is parameterizable, though also useful
for large scale ETL/ELT. MERGE is not intended to be used in preference
to existing single SQL commands for INSERT, UPDATE or DELETE since there
is some overhead.  MERGE can be used from PL/pgSQL.

MERGE does not support targeting updatable views or foreign tables, and
RETURNING clauses are not allowed either.  These limitations are likely
fixable with sufficient effort.  Rewrite rules are also not supported,
but it's not clear that we'd want to support them.

Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Simon Riggs <simon.riggs@enterprisedb.com>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier versions)
Reviewed-by: Peter Geoghegan <pg@bowt.ie> (earlier versions)
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CANP8+jKitBSrB7oTgT9CY2i1ObfOt36z0XMraQc+Xrz8QB0nXA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkJdBuxj9PO=2QaO9-3h3xGbQPZ34kJH=HukRekwM-GZg@mail.gmail.com
Discussion: https://postgr.es/m/20201231134736.GA25392@alvherre.pgsql
2022-03-28 16:47:48 +02:00
Tomas Vondra 61fa6ca79b Document autoanalyze limitations for partitioned tables
When dealing with partitioned tables, counters for partitioned tables
are not updated when modifying child tables. This means autoanalyze may
not update optimizer statistics for the parent relations, which can
result in poor plans for some queries.

It's worth documenting this limitation, so that people are aware of it
and can take steps to mitigate it (e.g. by setting up a script executing
ANALYZE regularly).

Backpatch to v10. Older branches are affected too, of course, but we no
longer maintain those.

Author: Justin Pryzby
Reviewed-by: Zhihong Yu, Tomas Vondra
Backpatch-through: 10
Discussion: https://postgr.es/m/20210913035409.GA10647%40telsasoft.com
2022-03-28 14:30:00 +02:00
Thomas Munro 8af36427d4 Fix pg_waldump docs.
Before 52b5568, the recently added -l option was short for --relation.
We changed it to -R, but we forgot to update one place in the
documentation.

Author: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/MEYP282MB1669435CFBE57CBBA5116C66B61D9%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2022-03-28 16:26:32 +13:00
Tomas Vondra 41b00f8e60 Move prattrs to the pg_publication_rel section in docs
Commit 923def9a53 documented the prattrs column under the
pg_publication_namespace catalog, probably due to a rebase mistake. Move
it to the section for the pg_publication_rel catalog.

Author: Noriyoshi Shinoda
Discussion: https://postgr.es/m/PH7PR84MB18850A74D275F39762059E6CEE1B9@PH7PR84MB1885.NAMPRD84.PROD.OUTLOOK.COM
2022-03-26 19:13:27 +01:00
Tomas Vondra 923def9a53 Allow specifying column lists for logical replication
This allows specifying an optional column list when adding a table to
logical replication. The column list may be specified after the table
name, enclosed in parentheses. Columns not included in this list are not
sent to the subscriber, allowing the schema on the subscriber to be a
subset of the publisher schema.
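
For example (table and column names are hypothetical):

CREATE PUBLICATION users_pub FOR TABLE users (user_id, email, created_at);
-- other columns of "users" are not sent to subscribers of this publication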

For UPDATE/DELETE publications, the column list needs to cover all
REPLICA IDENTITY columns. For INSERT publications, the column list is
arbitrary and may omit some REPLICA IDENTITY columns. Furthermore, if
the table uses REPLICA IDENTITY FULL, a column list is not allowed.

The column list can contain only simple column references. Complex
expressions, function calls etc. are not allowed. This restriction could
be relaxed in the future.

During the initial table synchronization, only columns included in the
column list are copied to the subscriber. If the subscription has
several publications, containing the same table with different column
lists, columns specified in any of the lists will be copied.

This means all columns are replicated if the table has no column list
at all (which is treated as a column list with all columns), or when one
of the publications is defined as FOR ALL TABLES (possibly FOR ALL TABLES
IN SCHEMA, where the schema matches that of the table).

For partitioned tables, publish_via_partition_root determines whether
the column list for the root or the leaf relation will be used. If the
parameter is 'false' (the default), the list defined for the leaf
relation is used. Otherwise, the column list for the root partition
will be used.

Psql commands \dRp+ and \d <table-name> now display any column lists.

Author: Tomas Vondra, Alvaro Herrera, Rahila Syed
Reviewed-by: Peter Eisentraut, Alvaro Herrera, Vignesh C, Ibrar Ahmed,
Amit Kapila, Hou zj, Peter Smith, Wang wei, Tang, Shi yu
Discussion: https://postgr.es/m/CAH2L28vddB_NFdRVpuyRBJEBWjz4BSyTB=_ektNRH8NJ1jf95g@mail.gmail.com
2022-03-26 01:01:27 +01:00
Tomas Vondra 05843b1aa4 Minor improvements in sequence decoding code and docs
A couple minor comment improvements and code cleanups, based on
post-commit feedback to the sequence decoding patch.

Author: Amit Kapila, vignesh C
Discussion: https://postgr.es/m/aeb2ba8d-e6f4-5486-cc4c-0d4982c291cb@enterprisedb.com
2022-03-25 21:07:17 +01:00
Thomas Munro 52b5568432 Improve command line options for pg_waldump.
Follow-up improvements for commit 127aea2a based on discussion:

* use fork name for --fork, not number
* use -R, -B as short switches for --relation, --block
* re-alphabetize the list of switches (code, --help and docs)

Suggested-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com> (fork name part)
Reviewed-by: David Christensen <david.christensen@crunchydata.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/3a4c2e93-7976-2320-fc0a-32097fe148a7%40enterprisedb.com
2022-03-25 13:52:27 +13:00
Tomas Vondra 75b1521dae Add decoding of sequences to built-in replication
This commit adds support for decoding of sequences to the built-in
replication (the infrastructure was added by commit 0da92dc530).

The syntax and behavior mostly mimic the handling of tables, i.e. a
publication may be defined as FOR ALL SEQUENCES (replicating all
sequences in a database), FOR ALL SEQUENCES IN SCHEMA (replicating
all sequences in a particular schema) or individual sequences.

To publish sequence modifications, the publication has to include
'sequence' action. The protocol is extended with a new message,
describing sequence increments.

A new system view pg_publication_sequences lists all the sequences
added to a publication, both directly and indirectly. Various psql
commands (\d and \dRp) are improved to also display publications
including a given sequence, or sequences included in a publication.

Author: Tomas Vondra, Cary Huang
Reviewed-by: Peter Eisentraut, Amit Kapila, Hannu Krosing, Andres
             Freund, Petr Jelinek
Discussion: https://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com
Discussion: https://postgr.es/m/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
2022-03-24 18:49:27 +01:00
Tom Lane 0adb3dc68b Doc: add some documentation about serialization failure handling.
We weren't very explicit about when to retry such errors.

Simon Riggs

Discussion: https://postgr.es/m/CANbhV-E+u+Z4VBNyJ6GzeO1fd2wP_5S+f6+kmxnN+ALQE6iG9Q@mail.gmail.com
2022-03-24 13:34:36 -04:00
Tom Lane 0bd7af082a Invent recursive_worktable_factor GUC to replace hard-wired constant.
Up to now, the planner estimated the size of a recursive query's
worktable as 10 times the size of the non-recursive term.  It's hard
to see how to do significantly better than that automatically, but
we can give users control over the multiplier to allow tuning for
specific use-cases.  The default behavior remains the same.
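
For example:

SET recursive_worktable_factor = 1.5;  -- assume the worktable is ~1.5x the non-recursive term
WITH RECURSIVE t(n) AS (
    SELECT 1
  UNION ALL
    SELECT n + 1 FROM t WHERE n < 100
)
SELECT sum(n) FROM t;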

Simon Riggs

Discussion: https://postgr.es/m/CANbhV-EuaLm4H3g0+BSTYHEGxJj3Kht0R+rJ8vT57Dejnh=_nA@mail.gmail.com
2022-03-24 11:47:41 -04:00
Michael Paquier bbd4951b73 doc: Improve postgres command for shared_memory_size_in_huge_pages
The command used in the documentation to retrieve the value of the
runtime-computed GUC shared_memory_size_in_huge_pages would also show to
the user all the log messages generated by the postmaster before and
after printing the wanted value.  This can be confusing, as the wanted
result could be masked with a lot of noise.

One way to avoid those log messages is to use something like "-c
log_min_messages=fatal" in the command (my idea, but that's not common
knowledge).  Rather than mentioning this option, suffix the command with
a redirection of stderr to /dev/null, which is the stream location where
the logs show up.  This is enough to show only the GUC value to the
user when copy-pasting the command.

Reported-by: Magnus Hagander
Author: Nathan Bossart
Discussion: https://postgr.es/m/20220314173417.GA1020555@nathanxps13
2022-03-24 20:56:31 +09:00
Thomas Munro 127aea2a65 Add additional filtering options to pg_waldump.
Allow filtering by RelFileNode, BlockNumber, ForkNum and FPW.

Author: David Christensen <david.christensen@crunchydata.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Cary Huang <cary.huang@highgo.ca>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/lzzgmgm6e5.fsf%40veeddrois.attlocal.net
2022-03-24 21:42:28 +13:00
Robert Haas ffd53659c4 Replace BASE_BACKUP COMPRESSION_LEVEL option with COMPRESSION_DETAIL.
There are more compression parameters that can be specified than just
an integer compression level, so rename the new COMPRESSION_LEVEL
option to COMPRESSION_DETAIL before it gets released. Introduce a
flexible syntax for that option to allow arbitrary options to be
specified without needing to adjust the main replication grammar,
and common code to parse it that is shared between the client and
the server.

This commit doesn't actually add any new compression parameters,
so the only user-visible change is that you can now type something
like pg_basebackup --compress gzip:level=5 instead of writing just
pg_basebackup --compress gzip:5. However, it should make it easy to
add new options. If for example gzip starts offering fries, we can
support pg_basebackup --compress gzip:level=5,fries=true for the
benefit of users who want fries with that.

Along the way, this fixes a few things in pg_basebackup so that
pg_basebackup can be used with a server-side compression algorithm
that pg_basebackup itself does not understand. For example,
pg_basebackup --compress server-lz4 could still succeed even if
only the server and not the client has LZ4 support, provided that
the other options to pg_basebackup don't require the client to
decompress the archive.

Patch by me. Reviewed by Justin Pryzby and Dagfinn Ilmari Mannsåker.

Discussion: http://postgr.es/m/CA+TgmoYvpetyRAbbg1M8b3-iHsaN4nsgmWPjOENu5-doHuJ7fA@mail.gmail.com
2022-03-23 09:19:14 -04:00
Tatsuo Ishii 4a39f87acd Allow pgbench to retry in some cases.
When serialization or deadlock errors are reported by the backend, allow
the client to retry and continue the benchmarking. For this purpose the new
options "--max-tries", "--failures-detailed" and "--verbose-errors" are added.

Transactions with serialization errors or deadlock errors will be
repeated after rollbacks until they complete successfully or reach the
maximum number of tries (specified by the --max-tries option), or the
maximum time of tries (specified by the --latency-limit option).
These options can be specified at the same time. It is not possible to
use an unlimited number of tries (--max-tries=0) without the
--latency-limit option or the --time option. By default the option
--max-tries is set to 1, which means transactions with
serialization/deadlock errors are not retried. If the last try fails,
this transaction will be reported as failed, and the client variables
will be set as they were before the first run of this transaction.

Statistics on retries and failures are printed in the progress reports,
in the transaction / aggregation logs, and at the end together with the
other results (overall and for each script). Also, retries and failures
are printed per-command with average latency when using the option
--report-per-command (-r).

Option --failures-detailed reports failures grouped by basic type
(serialization failures / deadlock failures).

Option --verbose-errors prints distinct reports on errors and failures
(errors without retrying) by type with detailed information like which
limit for retries was violated and how far it was exceeded for the
serialization/deadlock failures.

Patch originally written by Marina Polyakova; Yugo Nagata then
inherited the discussion and heavily modified the patch to make it
committable.

Authors: Yugo Nagata, Marina Polyakova
Reviewed-by: Fabien Coelho, Tatsuo Ishii, Alvaro Herrera, Kevin Grittner, Andres Freund, Arthur Zakirov, Alexander Korotkov, Teodor Sigaev, Ildus Kurbangaliev
Discussion: https://postgr.es/m/flat/72a0d590d6ba06f242d75c2e641820ec%40postgrespro.ru
2022-03-23 19:05:45 +09:00
Dean Rasheed 7faa5fc84b Add support for security invoker views.
A security invoker view checks permissions for accessing its
underlying base relations using the privileges of the user of the
view, rather than the privileges of the view owner. Additionally, if
any of the base relations are tables with RLS enabled, the policies of
the user of the view are applied, rather than those of the view owner.

This allows views to be defined without giving away additional
privileges on the underlying base relations, and matches a similar
feature available in other database systems.

It also allows views to operate more naturally with RLS, without
affecting the assignments of policies to users.
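
For illustration (object names are hypothetical):

CREATE VIEW my_orders WITH (security_invoker = true) AS
    SELECT order_id, total FROM orders;
-- privileges on "orders" (and any RLS policies on it) are checked as the
-- user querying the view, not as the view owner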

Christoph Heiss, with some additional hacking by me. Reviewed by
Laurenz Albe and Wolfgang Walther.

Discussion: https://postgr.es/m/b66dd6d6-ad3e-c6f2-8b90-47be773da240%40cybertec.at
2022-03-22 10:28:10 +00:00
Amit Kapila 208c5d65bb Add ALTER SUBSCRIPTION ... SKIP.
This feature allows skipping the transaction on subscriber nodes.

If an incoming change violates any constraint, logical replication stops
until it's resolved. Currently, users need to either manually resolve the
conflict by updating a subscriber-side database or by using function
pg_replication_origin_advance() to skip the conflicting transaction. This
commit introduces a simpler way to skip the conflicting transactions.

The user can specify the LSN with ALTER SUBSCRIPTION ... SKIP (lsn = XXX),
which allows the apply worker to skip the transaction finished at the
specified LSN. The apply worker skips all data modification changes within
the transaction.
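
For example (the subscription name and LSN are hypothetical; the LSN comes from
the apply worker's error message):

ALTER SUBSCRIPTION mysub SKIP (lsn = '0/14C0378');
-- to clear a previously set value:
ALTER SUBSCRIPTION mysub SKIP (lsn = NONE);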

Author: Masahiko Sawada
Reviewed-by: Takamichi Osumi, Hou Zhijie, Peter Eisentraut, Amit Kapila, Shi Yu, Vignesh C, Greg Nancarrow, Haiying Tang, Euler Taveira
Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
2022-03-22 07:11:19 +05:30
Alvaro Herrera ba9a7e3921
Enforce foreign key correctly during cross-partition updates
When an update on a partitioned table referenced in foreign key
constraints causes a row to move from one partition to another,
the fact that the move is implemented as a delete followed by an insert
on the target partition causes the foreign key triggers to have
surprising behavior.  For example, a given foreign key's delete trigger
which implements the ON DELETE CASCADE clause of that key will delete
any referencing rows when triggered for that internal DELETE, although
it should not, because the referenced row is simply being moved from one
partition of the referenced root partitioned table into another, not
being deleted from it.

This commit teaches trigger.c to skip queuing such delete trigger events
on the leaf partitions in favor of an UPDATE event fired on the root
target relation.  Doing so is sensible because both the old and the new
tuple "logically" belong to the root relation.

The after trigger event queuing interface now allows passing the source
and the target partitions of a particular cross-partition update when
registering the update event for the root partitioned table.  Along with
the two ctids of the old and the new tuple, the after trigger event now
also stores the OIDs of those partitions. The tuples fetched from the
source and the target partitions are converted into the root table
format, if necessary, before they are passed to the trigger function.

The implementation currently has a limitation that only the foreign keys
pointing into the query's target relation are considered, not those of
its sub-partitioned partitions.  That seems like a reasonable
limitation, because it sounds rare to have distinct foreign keys
pointing to sub-partitioned partitions instead of to the root table.

This misbehavior stems from commit f56f8f8da6 (which added support for
foreign keys to reference partitioned tables) not paying sufficient
attention to commit 2f17844104 (which had introduced cross-partition
updates a year earlier).  Even though the former commit goes back to
Postgres 12, we're not backpatching this fix at this time for fear of
destabilizing things too much, and because there are a few ABI breaks in
it that we'd have to work around in older branches.  It also depends on
commit f4566345cf, which had its own share of backpatchability issues
as well.

Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Eduard Català <eduard.catala@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqFvkBCmfwkQX_yBqv2Wz8ugUGiBDxum8=WvVbfU1TXaNg@mail.gmail.com
Discussion: https://postgr.es/m/CAL54xNZsLwEM1XCk5yW9EqaRzsZYHuWsHQkA2L5MOSKXAwviCQ@mail.gmail.com
2022-03-20 18:43:40 +01:00
Tom Lane 3f513ac793 Doc: fix our example systemd script.
The example used "TimeoutSec=0", but systemd's documented way to get
the desired effect is "TimeoutSec=infinity".

Discussion: https://postgr.es/m/164770078557.670.5467111518383664377@wrigleys.postgresql.org
2022-03-20 12:39:40 -04:00
Michael Paquier eb8399cf1f Improve handling of SET ACCESS METHOD for ALTER MATERIALIZED VIEW
b048326 has added support for SET ACCESS METHOD in ALTER TABLE, but it
has missed a few things for materialized views:
- No documentation for this clause on the ALTER MATERIALIZED VIEW page.
- psql tab completion missing.
- No regression tests.

This commit closes the gap on all the points listed above.
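
For reference, a sketch of the clause in question (the object name is
hypothetical):

ALTER MATERIALIZED VIEW mat_view SET ACCESS METHOD heap;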

Author: Yugo Nagata
Discussion: https://postgr.es/m/20220316133337.5dc9740abfa24c25ec9f67f5@sraoss.co.jp
2022-03-19 19:13:52 +09:00
Michael Paquier 9616da3ddb doc: Mention SET TABLESPACE clause for ALTER MATERIALIZED VIEW
This command flavor is supported, but there was nothing in the
documentation about it.

Author: Yugo Nagata
Discussion: https://postgr.es/m/20220316133337.5dc9740abfa24c25ec9f67f5@sraoss.co.jp
Backpatch-through: 10
2022-03-19 16:37:31 +09:00
Michael Paquier 7a7cd84893 doc: Remove mention to in-place tablespaces for pg_tablespace_location()
This paragraph has been added in the documentation by f6f0db4, but after
more discussion we found that this just makes things more confusing,
adding some cross-references between a general feature and something
only aimed at being used by developers.  The original documentation is
not wrong either, and this commit brings back this part of the docs to
the same state as before f6f0db4.

Per discussion with Kyotaro Horiguchi and Thomas Munro.

Discussion: https://postgr.es/m/CA+hUKGL2uaRKu=3+bMBpejHh4k7wqzWC05aiasTsSsHGRCWa8g@mail.gmail.com
2022-03-18 10:46:36 +09:00
Peter Eisentraut a9b7e92084 doc: Add documentation for new field pg_database.daticulocale
forgotten in f2553d4306

Author: Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>
2022-03-17 14:12:39 +01:00
Peter Eisentraut f2553d4306 Add option to use ICU as global locale provider
This adds the option to use ICU as the default locale provider for
either the whole cluster or a database.  New options for initdb,
createdb, and CREATE DATABASE are used to select this.

Since some (legacy) code still uses the libc locale facilities
directly, we still need to set the libc global locale settings even if
ICU is otherwise selected.  So pg_database now has three
locale-related fields: the existing datcollate and datctype, which are
always set, and a new daticulocale, which is only set if ICU is
selected.  A similar change is made in pg_collation for consistency,
but in that case, only the libc-related fields or the ICU-related
field is set, never both.
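
A sketch of the new options (locale names are examples and depend on the OS and
ICU installation):

CREATE DATABASE intl_db
    TEMPLATE template0
    ENCODING 'UTF8'
    LOCALE_PROVIDER icu
    ICU_LOCALE 'en-US'
    LOCALE 'en_US.utf8';    -- the libc fields datcollate/datctype are still set from this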

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com
2022-03-17 11:13:16 +01:00
Michael Paquier f6f0db4d62 Fix pg_tablespace_location() with in-place tablespaces
Using this system function with an in-place tablespace (created when
allow_in_place_tablespaces is enabled by specifying an empty string as
location) caused a failure when using readlink(), as the tablespace is,
in this case, not a symbolic link in pg_tblspc/ but a directory.

Rather than getting a failure, the commit changes
pg_tablespace_location() so that a relative path to the data directory is
returned for in-place tablespaces, distinguishing tablespaces created
with allow_in_place_tablespaces enabled from those created without it.
Getting a path rather than an empty string that would match the CREATE
TABLESPACE command in this case is more useful for tests that would like
to rely on this function.

While at it, a regression test is added for this case.  This is simple
to add in the main regression test suite thanks to regexp_replace() to
mask the part of the tablespace location dependent on its OID.

Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi, Thomas Munro
Discussion: https://postgr.es/m/YiG1RleON1WBcLnX@paquier.xyz
2022-03-17 11:25:02 +09:00
Daniel Gustafsson 3ac577b912 doc: Ensure intermediate path creation with mkdir
The mkdir command in the Installation from Source Short Version docs
didn't use the -p parameter for intermediate path creation, which would
likely cause the command to fail.  At the time of writing, -p wasn't
universally available, but it can now be relied upon to exist.  The -p
parameter is defined by POSIX, at least since POSIX.1-2004.

Reported-by: Daniel Westermann <daniel.westermann@dbi-services.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/ZR0P278MB0920263E7F2D546A33E50079D20E9@ZR0P278MB0920.CHEP278.PROD.OUTLOOK.COM
2022-03-16 11:38:29 +01:00
Thomas Munro 501c66c81b Fix documentation typo in commit 5e6368b4.
Back-patch to 14.
2022-03-16 15:55:15 +13:00
Thomas Munro 3390ef1b7b Fix waiting in RegisterSyncRequest().
If we run out of space in the checkpointer sync request queue (which is
hopefully rare on real systems, but common with a very small buffer pool),
we wait for it to drain.  While waiting, we should report that as a wait
event so that users know what is going on, and also handle postmaster
death, since otherwise the loop might never terminate if the
checkpointer has exited.

Back-patch to 12.  Although the problem exists in earlier releases too,
the code is structured differently before 12 so I haven't gone any
further for now, in the absence of field complaints.

Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220226213942.nb7uvb2pamyu26dj%40alap3.anarazel.de
2022-03-16 15:35:16 +13:00
Thomas Munro 5e6368b42e Wake up for latches in CheckpointWriteDelay().
The checkpointer shouldn't ignore its latch.  Other backends may be
waiting for it to drain the request queue.  Hopefully real systems don't
have a full queue often, but the condition is reached easily when
shared_buffers is small.

This involves defining a new wait event, which will appear in the
pg_stat_activity view often due to spread checkpoints.

Back-patch only to 14.  Even though the problem exists in earlier
branches too, it's hard to hit there.  In 14 we stopped using signal
handlers for latches on Linux, *BSD and macOS, which were previously
hiding this problem by interrupting the sleep (though not reliably, as
the signal could arrive before the sleep begins; precisely the problem
latches address).

Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220226213942.nb7uvb2pamyu26dj%40alap3.anarazel.de
2022-03-16 13:57:59 +13:00
Robert Haas c6306db24b Add 'basebackup_to_shell' contrib module.
As a demonstration of the sort of thing that can be done by adding a
custom backup target, this defines a 'shell' target which executes a
command defined by the system administrator. The command is executed
once for each tar archive generated by the backup and once for the
backup manifest, if any. Each time the command is executed, it
receives the contents of the file for which it is executed via standard
input.

The configured command can use %f to refer to the name of the archive
(e.g. base.tar, $TABLESPACE_OID.tar, backup_manifest) and %d to refer
to the target detail (pg_basebackup --target shell:DETAIL). A target
detail is required if %d appears in the configured command and
forbidden if it does not.
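
As a rough configuration sketch (the GUC name basebackup_to_shell.command
and the shell command itself are assumptions here, not spelled out above):

    -- hypothetical sketch; GUC name and command are assumptions
    ALTER SYSTEM SET basebackup_to_shell.command = 'cat > /var/backups/%d/%f';
    SELECT pg_reload_conf();

A backup would then be taken with something like
pg_basebackup --target shell:DETAIL, with DETAIL substituted for %d.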

Patch by me, reviewed by Abhijit Menon-Sen.

Discussion: http://postgr.es/m/CA+TgmoaqvdT-u3nt+_kkZ7bgDAyqDB0i-+XOMmr5JN2Rd37hxw@mail.gmail.com
2022-03-15 13:24:23 -04:00
Michael Paquier 6bdf1a1400 Fix collection of typos in the code and the documentation
Some words were duplicated while other places were grammatically
incorrect, including one variable name in the code.

Author: Otto Kekalainen, Justin Pryzby
Discussion: https://postgr.es/m/7DDBEFC5-09B6-4325-B942-B563D1A24BDC@amazon.com
2022-03-15 11:29:35 +09:00
Amit Kapila 705e20f855 Optionally disable subscriptions on error.
Logical replication apply workers for a subscription can easily get stuck
in an infinite loop of attempting to apply a change, triggering an error
(such as a constraint violation), exiting with the error written to the
subscription server log, and restarting.

To partially remedy the situation, this patch adds a new subscription
option named 'disable_on_error'. To be consistent with old behavior, this
option defaults to false. When true, both the tablesync worker and apply
worker catch any errors thrown and disable the subscription in order to
break the loop. The error is still also written in the logs.
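
For example, a subscription using the new option might be created like
this (object names and the connection string are placeholders):

    -- sketch; names and connection string are hypothetical
    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=postgres'
        PUBLICATION mypub
        WITH (disable_on_error = true);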

Once the subscription is disabled, users can either manually resolve the
conflict/error or skip the conflicting transaction by using
pg_replication_origin_advance() function. After resolving the conflict,
users need to enable the subscription to allow the apply process to proceed.

Author: Osumi Takamichi and Mark Dilger
Reviewed-by: Greg Nancarrow, Vignesh C, Amit Kapila, Wang wei, Tang Haiying, Peter Smith, Masahiko Sawada, Shi Yu
Discussion : https://postgr.es/m/DB35438F-9356-4841-89A0-412709EBD3AB%40enterprisedb.com
2022-03-14 09:32:40 +05:30
Michael Paquier 8e375ea4a0 Bump XLOG_PAGE_MAGIC due to the addition of wal_compression=zstd
While at it, fix a thinko in the docs introduced by the same commit.

Oversights in e953732.

Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20220311214900.GN28503@telsasoft.com
2022-03-12 09:39:13 +09:00
Michael Paquier 9198e63996 doc: Standardize capitalization of term "hot standby"/"Hot Standby"
"Hot Standby" was capitalized in a couple of places in the docs, as the
style primarily used when it was introduced, but this has not been much
respected across the years.  Per discussion, it is more natural for the
reader to use "hot standby" (aka lower-case only) when in the middle of
a sentence, and "Hot standby" (aka capitalized) in a title.  This commit
adjusts all the places in the docs to be consistent with this choice,
rather than applying one style or the other midway.

Author: Daniel Westermann
Reviewed-by: Kyotaro Horiguchi, Aleksander Alekseev, Robert Treat
Discussion: https://postgr.es/m/GVAP278MB093160025A779A1A5788D0EAD2039@GVAP278MB0931.CHEP278.PROD.OUTLOOK.COM
2022-03-11 15:16:21 +09:00
Michael Paquier e9537321a7 Add support for zstd with compression of full-page writes in WAL
wal_compression gains a new value, "zstd", to allow the compression of
full-page images using the compression method of the same name.
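
As a minimal sketch, the new value is enabled like any other
wal_compression setting:

    -- requires superuser; takes effect after a configuration reload
    ALTER SYSTEM SET wal_compression = 'zstd';
    SELECT pg_reload_conf();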

Compression is done using the default level recommended by the library,
which is ZSTD_CLEVEL_DEFAULT = 3.  Some benchmarking has shown that it
could make sense to use a lower level for FPI compression, like 1 or 2,
as the compression ratio did not change much while consuming a bit less
CPU, but any tests done would only cover a few scenarios, so it is hard
to come to a clear conclusion.  Anyway, there is no reason not to use
the default level, which is the one recommended by the library, so it
should be fine for most cases.

zstd easily outclasses pglz, and is better than LZ4 where one wants more
compression at the cost of extra CPU, but both are good enough in their
own scenarios, so the choice between one or the other mainly comes down
to a study of the workload patterns and the schema involved.

This commit relies heavily on 4035cd5, that reshaped the code creating
and restoring full-page writes to be aware of the compression type,
making this integration straightforward.

This patch borrows some early work from Andrey Borodin, though the patch
got a complete rewrite.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220222231948.GJ9008@telsasoft.com
2022-03-11 12:18:53 +09:00
Michael Paquier e3df32bbc3 doc: Add ALTER/DROP ROUTINE to the event trigger matrix
ALTER ROUTINE triggers the events ddl_command_start and ddl_command_end,
and DROP ROUTINE triggers sql_drop, ddl_command_start and
ddl_command_end, but this was not mentioned in the matrix table.

Reported-by: Leslie Lemaire
Discussion: https://postgr.es/m/164647533363.646.5802968483136493025@wrigleys.postgresql.org
Backpatch-through: 11
2022-03-09 14:59:08 +09:00
Michael Paquier 7687ca996e doc: Improve references to term "FSM" in pageinspect and pgfreespacemap
Author: Dong Wook Lee
Reviewed-by: Laurenz Albe
Discussion: https://postgr.es/m/CAAcBya+=F=HaHxJ7tGjAM1r=A=+bDbimpsex8Vqrb4GjqFDYsQ@mail.gmail.com
2022-03-09 10:43:25 +09:00
Robert Haas 7cf085f077 Add support for zstd base backup compression.
Both client-side compression and server-side compression are now
supported for zstd. In addition, a backup compressed by the server
using zstd can now be decompressed by the client in order to
accommodate the use of -Fp.

Jeevan Ladhe, with some edits by me.

Discussion: http://postgr.es/m/CA+Tgmobyzfbz=gyze2_LL1ZumZunmaEKbHQxjrFkOR7APZGu-g@mail.gmail.com
2022-03-08 09:52:43 -05:00
Amit Kapila d3e8368c4b Add the additional information to the logical replication worker errcontext.
This commit adds both the finish LSN (commit_lsn in case the transaction got
committed, prepare_lsn in case of a prepared transaction, etc.) and
replication origin name to the existing error context message.

This will help users in specifying the origin name and transaction finish
LSN to the pg_replication_origin_advance() SQL function to skip a particular
transaction.
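
For example, with the origin name and finish LSN taken from the error
context message, the problematic transaction could be skipped roughly
like this (both values below are placeholders):

    -- sketch only; origin name and LSN are placeholders
    SELECT pg_replication_origin_advance('pg_16395', '0/14151A60'::pg_lsn);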

Author: Masahiko Sawada
Reviewed-by: Takamichi Osumi, Euler Taveira, and Amit Kapila
Discussion: https://postgr.es/m/CAD21AoBarBf2oTF71ig2g_o=3Z_Dt6_sOpMQma1kFgbnA5OZ_w@mail.gmail.com
2022-03-08 08:08:32 +05:30
Andres Freund 4228cabb72 plpython: Adjust docs after removal of Python 2 support.
Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20211031184548.g4sxfe47n2kyi55r@alap3.anarazel.de
2022-03-07 18:30:57 -08:00
Michael Paquier b3c8aae008 doc: Fix description of pg_stop_backup()
The function was still documented as returning a set of records,
something not true as of 62ce0c7.

Reported-by: Tom Lane
Discussion: https://postgr.es/m/3159823.1646320180@sss.pgh.pa.us
2022-03-04 09:51:12 +09:00
Tom Lane 50f03473ed Doc: update libpq.sgml for root-owned SSL private keys.
My oversight in a59c79564.

Discussion: https://postgr.es/m/f4b7bc55-97ac-9e69-7398-335e212f7743@pgmasters.net
2022-03-02 11:29:11 -05:00
Peter Eisentraut e58791c6ad Add ids to various elements in protocol.sgml
For easier direct linking.

Author: Brar Piening <brar@gmx.de>
Discussion: https://www.postgresql.org/message-id/flat/dbad4f77-4dce-1b05-2b65-831acb5d5b66@gmx.de
2022-03-02 10:33:12 +01:00
Amit Kapila 7a85073290 Reconsider pg_stat_subscription_workers view.
It was decided (refer to the Discussion link below) that the stats
collector is not an appropriate place to store the error information of
subscription workers.

This patch changes the pg_stat_subscription_workers view (introduced by
commit 8d74fc96db) so that it stores only statistics counters:
apply_error_count and sync_error_count, and has one entry for
each subscription. The removed error information such as error-XID and
the error message would be stored in another way in the future which is
more reliable and persistent.

After removing these error details, there is no longer any relation
information, so the subscription statistics are now cluster-wide
statistics.

The patch also changes the view name to pg_stat_subscription_stats,
since the word "worker" is an implementation detail: we use one worker
for each tablesync and one for applying changes.

Author: Masahiko Sawada, based on suggestions by Andres Freund
Reviewed-by: Peter Smith, Haiying Tang, Takamichi Osumi, Amit Kapila
Discussion: https://postgr.es/m/20220125063131.4cmvsxbz2tdg6g65@alap3.anarazel.de
2022-03-01 06:17:52 +05:30
Tom Lane 2e517818f4 Fix SPI's handling of errors during transaction commit.
SPI_commit previously left it up to the caller to recover from any error
occurring during commit.  Since that's complicated and requires use of
low-level xact.c facilities, it's not too surprising that no caller got
it right.  Let's move the responsibility for cleanup into spi.c.  Doing
that requires redefining SPI_commit as starting a new transaction, so
that it becomes equivalent to SPI_commit_and_chain except that you get
default transaction characteristics instead of preserving the prior
transaction's characteristics.  We can make this pretty transparent
API-wise by redefining SPI_start_transaction() as a no-op.  Callers
that expect to do something in between might be surprised, but
available evidence is that no callers do so.

Having made that API redefinition, we can fix this mess by having
SPI_commit[_and_chain] trap errors and start a new, clean transaction
before re-throwing the error.  Likewise for SPI_rollback[_and_chain].
Some cleanup is also needed in AtEOXact_SPI, which was nowhere near
smart enough to deal with SPI contexts nested inside a committing
context.

While plperl and pltcl need no changes beyond removing their now-useless
SPI_start_transaction() calls, plpython needs some more work because it
hadn't gotten the memo about catching commit/rollback errors in the
first place.  Such an error resulted in longjmp'ing out of the Python
interpreter, which leaks Python stack entries at present and is reported
to crash Python 3.11 altogether.  Add the missing logic to catch such
errors and convert them into Python exceptions.

We are probably going to have to back-patch this once Python 3.11 ships,
but it's a sufficiently basic change that I'm a bit nervous about doing
so immediately.  Let's let it bake awhile in HEAD first.

Peter Eisentraut and Tom Lane

Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org
2022-02-28 12:45:36 -05:00
Etsuro Fujita 04e706d423 postgres_fdw: Add support for parallel commit.
postgres_fdw commits remote (sub)transactions opened on remote server(s)
in a local (sub)transaction one by one when the local (sub)transaction
commits.  This patch allows it to commit the remote (sub)transactions in
parallel to improve performance.  This is enabled by the server option
"parallel_commit".  The default is false.

Etsuro Fujita, reviewed by Fujii Masao and David Zhang.

Discussion: http://postgr.es/m/CAPmGK17dAZCXvwnfpr1eTfknTGdt%3DhYTV9405Gt5SqPOX8K84w%40mail.gmail.com
2022-02-24 14:30:00 +09:00
Amit Kapila 52e4f0cd47 Allow specifying row filters for logical replication of tables.
This feature adds row filtering for publication tables. When a publication
is defined or modified, an optional WHERE clause can be specified. Rows
that don't satisfy this WHERE clause will be filtered out. This allows a
set of tables to be partially replicated. The row filter is per table. A
new row filter can be added simply by specifying a WHERE clause after the
table name. The WHERE clause must be enclosed by parentheses.
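
For example (publication, table and column names are placeholders):

    -- sketch: replicate only the rows satisfying the filter
    CREATE PUBLICATION active_customers
        FOR TABLE customers WHERE (active IS TRUE);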

The row filter WHERE clause for a table added to a publication that
publishes UPDATE and/or DELETE operations must contain only columns that
are covered by REPLICA IDENTITY. The row filter WHERE clause for a table
added to a publication that publishes INSERT can use any column. If the
row filter evaluates to NULL, it is regarded as "false". The WHERE clause
only allows simple expressions that don't have user-defined functions,
user-defined operators, user-defined types, user-defined collations,
non-immutable built-in functions, or references to system columns. These
restrictions could be addressed in the future.

If you choose to do the initial table synchronization, only data that
satisfies the row filters is copied to the subscriber. If the subscription
has several publications in which a table has been published with
different WHERE clauses, rows that satisfy ANY of the expressions will be
copied. If a subscriber is a pre-15 version, the initial table
synchronization won't use row filters even if they are defined in the
publisher.

The row filters are applied before publishing the changes. If the
subscription has several publications in which the same table has been
published with different filters (for the same publish operation), those
expressions get OR'ed together so that rows satisfying any of the
expressions will be replicated.

This means all the other filters become redundant if (a) one of the
publications has no filter at all, (b) one of the publications was
created using FOR ALL TABLES, (c) one of the publications was created
using FOR ALL TABLES IN SCHEMA and the table belongs to that same schema.

If your publication contains a partitioned table, the publication
parameter publish_via_partition_root determines if it uses the partition's
row filter (if the parameter is false, the default) or the root
partitioned table's row filter.

Psql commands \dRp+ and \d <table-name> will display any row filters.

Author: Hou Zhijie, Euler Taveira, Peter Smith, Ajin Cherian
Reviewed-by: Greg Nancarrow, Haiying Tang, Amit Kapila, Tomas Vondra, Dilip Kumar, Vignesh C, Alvaro Herrera, Andres Freund, Wei Wang
Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com
2022-02-22 08:11:50 +05:30
Michael Paquier ebf6c5249b Add compute_query_id = regress
"regress" is a new mode added to compute_query_id aimed at facilitating
regression testing when a module computing query IDs is loaded into the
backend, like pg_stat_statements.  It works the same way as "auto",
meaning that query IDs are computed if a module enables it, except that
query IDs are hidden in EXPLAIN outputs to ensure regression output
stability.
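
A minimal usage sketch of the new mode:

    -- query IDs are still computed (if a module enables that), but hidden
    SET compute_query_id = regress;
    EXPLAIN (VERBOSE) SELECT 1;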

Like any GUC of this kind (force_parallel_mode, etc.), this new
configuration can be added to an instance's postgresql.conf, or just
passed down with PGOPTIONS at command level.  compute_query_id uses an
enum for its set of option values, meaning that this addition ensures
ABI compatibility.

Using this new configuration mode allows installcheck-world to pass when
running the tests on an instance with pg_stat_statements enabled,
stabilizing the test output while checking the paths doing query ID
computations.

Reported-by: Anton Melnikov
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/1634283396.372373993@f75.i.mail.ru
Discussion: https://postgr.es/m/YgHlxgc/OimuPYhH@paquier.xyz
Backpatch-through: 14
2022-02-22 10:22:15 +09:00
Michael Paquier bf4ed12b58 doc: Mention environment variable ZSTD in the TAP tests for MSVC
6c417bb added the build infrastructure to support ZSTD, but forgot
to update this section of the docs to mention the variable ZSTD, as per
the change done in vcregress.pl.

While at it, reword this section of the docs to describe what happens in
the default case, as per a suggestion from Robert Haas.

Discussion: https://postgr.es/m/YhCL0fKnDv/Zvtuo@paquier.xyz
2022-02-21 09:55:55 +09:00
Michael Paquier d7a978601d doc: Simplify description of --with-lz4
LZ4 is now used in many more areas of the system than just WAL and table
data.  This commit simplifies the installation documentation for Windows
and *nix by removing the details about the areas extended when building
with LZ4.

Author: Jeevan Ladhe
Discussion: https://postgr.es/m/CANm22Cgny8AF76pitomXp603NagwKXbA4dyN2Fac4yHPebqdqg@mail.gmail.com
2022-02-19 15:06:53 +09:00
Robert Haas 6c417bbcc8 Add support for building with ZSTD.
This commit doesn't actually add anything that uses ZSTD; that will be
done separately. It just puts the basic infrastructure into place.

Jeevan Ladhe, Robert Haas, and Michael Paquier. Reviewed by Justin
Pryzby and Andres Freund.

Discussion: http://postgr.es/m/CA+TgmoatQKGd+8SjcV+bzvw4XaoEwminHjU83yG12+NXtQzTTQ@mail.gmail.com
2022-02-18 13:40:31 -05:00
Tom Lane 2e372869aa Don't let libpq PGEVT_CONNRESET callbacks break a PGconn.
As currently implemented, failure of a PGEVT_CONNRESET callback
forces the PGconn into the CONNECTION_BAD state (without closing
the socket, which is inconsistent with other failure paths), and
prevents later callbacks from being called.  This seems highly
questionable, and indeed is questioned by comments in the source.

Instead, let's just ignore the result value of PGEVT_CONNRESET
calls.  Like the preceding commit, this converts event callbacks
into "pure observers" that cannot affect libpq's processing logic.

Discussion: https://postgr.es/m/3185105.1644960083@sss.pgh.pa.us
2022-02-18 11:43:04 -05:00
Tom Lane ce1e7a2f71 Don't let libpq "event" procs break the state of PGresult objects.
As currently implemented, failure of a PGEVT_RESULTCREATE callback
causes the PGresult to be converted to an error result.  This is
intellectually inconsistent (shouldn't a failing callback likewise
prevent creation of the error result? what about side-effects on the
behavior seen by other event procs? why does PQfireResultCreateEvents
act differently from PQgetResult?), but more importantly it destroys
any promises we might wish to make about the behavior of libpq in
nontrivial operating modes, such as pipeline mode.  For example,
it's not possible to promise that PGRES_PIPELINE_SYNC results will
be returned if an event callback fails on those.  With this
definition, expecting applications to behave sanely in the face of
possibly-failing callbacks seems like a very big lift.

Hence, redefine the result of a callback failure as being simply
that that event procedure won't be called any more for this PGresult
(which was true already).  Event procedures can still signal failure
back to the application through out-of-band mechanisms, for example
via their passthrough arguments.

Similarly, don't let failure of a PGEVT_RESULTCOPY callback prevent
PQcopyResult from succeeding.  That definition allowed a misbehaving
event proc to break single-row mode (our sole internal use of
PQcopyResult), and it probably had equally deleterious effects for
outside uses.

Discussion: https://postgr.es/m/3185105.1644960083@sss.pgh.pa.us
2022-02-18 11:37:27 -05:00
Fujii Masao 94c49d5340 postgres_fdw: Make postgres_fdw.application_name support more escape sequences.
Commit 6e0cb3dec1 allowed postgres_fdw.application_name to include
escape sequences %a (application name), %d (database name), %u (user name)
and %p (pid). In addition to them, this commit makes it support
the escape sequences for session ID (%c) and cluster name (%C).
These are helpful for investigating where each remote transaction came from.

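For example (the resulting name format is arbitrary):

    -- sketch: tag remote sessions with the local cluster name and session ID
    SET postgres_fdw.application_name = 'fdw_%C_%c';
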
Author: Fujii Masao
Reviewed-by: Ryohei Takahashi, Kyotaro Horiguchi
Discussion: https://postgr.es/m/1041dc9a-c976-049f-9f14-e7d94c29c4b2@oss.nttdata.com
2022-02-18 11:38:12 +09:00
Andres Freund 19252e8ec9 plpython: Reject Python 2 during build configuration.
Python 2.7 went EOL 2020-01-01 and the support for Python 2 requires a fair
bit of infrastructure. Therefore we are removing Python 2 support in plpython.

This patch just rejects Python 2 during configure / mkvcbuild.pl. Future
commits will remove the code and infrastructure for Python 2 support and
adjust more of the documentation. This way we can see the buildfarm state
after the removal sooner and we can be sure that failures are due to
desupporting Python 2, rather than caused by infrastructure cleanup.

Reviewed-By: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20211031184548.g4sxfe47n2kyi55r@alap3.anarazel.de
2022-02-16 22:47:35 -08:00
Peter Geoghegan 8f388f6f55 Increase hash_mem_multiplier default to 2.0.
Double the default setting for hash_mem_multiplier, from 1.0 to 2.0.
This setting makes hash-based executor nodes use twice the usual
work_mem limit.

The PostgreSQL 15 release notes should have a compatibility note about
this change.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzndc_ROk6CY-bC6p9O53q974Y0Ey4WX8jcPbuTZYM4Q3A@mail.gmail.com
2022-02-16 18:41:52 -08:00
Etsuro Fujita 27d195a578 Doc: Update documentation for modifying postgres_fdw foreign tables.
Document that they can be modified using COPY as well.

Back-patch to v11 where commit 3d956d956 added support for COPY in
postgres_fdw.
2022-02-16 15:15:00 +09:00
Heikki Linnakangas 853c6400bf Fix race condition in 028_pitr_timelines.pl test, add note to docs.
The 028_pitr_timelines.pl test would sometimes hang, waiting for a WAL
segment that was just filled up to be archived. It was because the
test used 'pg_stat_archiver.last_archived_wal' to check if a file was
archived, but the order that WAL files are archived when a standby is
promoted is not fully deterministic, and 'last_archived_wal' tracks
the last segment that was archived, not the highest-numbered WAL
segment. Because of that, if the archiver archived segment 3, and then
2, 'last_archived_wal' says 2, and the test query would think that 3
has not been archived yet.

Normally, WAL files are marked ready for archival in order, and the
archiver process will process them in order, so that issue doesn't
arise.  We have used the same query on 'last_archived_wal' in a few
other tests with no problem. But when a standby is promoted, things
are a bit chaotic. After promotion, the server will try to archive all
the WAL segments from the old timeline that are in pg_wal, as well as
the history file and any new WAL segments on the new timeline. The
end-of-recovery checkpoint will create the .ready files for all the
WAL files on the old timeline, but at the same time, the new timeline
is opened up for business. A file from the new timeline can therefore
be archived before the files from the old timeline have been marked as
ready for archival.

It turns out that we don't really need to wait for the archival in
this particular test, because the standby server is about to be
stopped, and stopping a server will wait for the end-of-recovery
checkpoint and all WAL archivals to finish, anyway. So we can just
remove it from the test.

Add a note to the docs on 'pg_stat_archiver' view that files can be
archived out of order.

Reviewed-by: Tom Lane
Discussion: https://www.postgresql.org/message-id/3186114.1644960507@sss.pgh.pa.us
2022-02-16 01:37:48 +02:00
Andres Freund 1f6e0ce3be docs: Work around bug in the docbook xsl stylesheets.
docbook-xsl's index generation stylesheet (autoidx.xsl) has a small bug: It
doesn't include xlink in exclude-result-prefixes. Normally this just leads to
a single xmlns:xlink in the <div> containing the index, but because our
customization emits that, xmlns:xlink instead gets added to every element
output by autoidx.xsl below the <div>, totalling around 100kB.

Adding the spurious xmlns:xlink to the <div> ourselves isn't great, but avoids
the duplication.

Reviewed-By: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20220213201618.qz6p6noon3wagr3f%40alap3.anarazel.de
2022-02-15 13:52:40 -08:00
Peter Eisentraut 6538be9e1e Fix XML namespace declarations
The XSL stylesheets used a mix of incorrect or outdated namespace
declarations for XHTML, probably based on ancient advice and examples.
Clean all this up.

Besides improving correctness (although probably no impact in
practice, other than possible validation failures), this removes a
bunch of useless namespace declarations in the HTML output.

Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/20220213201618.qz6p6noon3wagr3f%40alap3.anarazel.de
2022-02-15 11:13:49 +01:00
John Naylor a59135a81a Spell "startup process" with lower case in the documentation
Most uses were already lower case, so this just makes all user-visible
spellings consistent.

Bharath Rupireddy

The proposed patch also had analogous changes for the code comments,
but I decided that wasn't worth the churn.

Discussion:
https://www.postgresql.org/message-id/flat/CALj2ACW7%2Bv_0QBPoWB%3DqKr67JKC019Htm%3DX8sKewS17bOquefg%40mail.gmail.com
2022-02-15 14:30:57 +07:00
Peter Eisentraut 37851a8b83 Database-level collation version tracking
This adds to database objects the same version tracking that collation
objects have.  There is a new pg_database column datcollversion that
stores the version, a new function
pg_database_collation_actual_version() to get the version from the
operating system, and a new subcommand ALTER DATABASE ... REFRESH
COLLATION VERSION.
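
A minimal sketch of the new pieces (the database name is a placeholder):

    -- compare the recorded version with what the operating system reports
    SELECT datcollversion,
           pg_database_collation_actual_version(oid) AS actual_version
      FROM pg_database WHERE datname = 'mydb';

    -- record the current version once any needed reindexing has been done
    ALTER DATABASE mydb REFRESH COLLATION VERSION;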

This was not originally added together with pg_collation.collversion,
since originally version tracking was only supported for ICU, and ICU
on a database-level is not currently supported.  But we now have
version tracking for glibc (since PG13), FreeBSD (since PG14), and
Windows (since PG13), so this is useful to have now.

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f0ff3190-29a3-5b39-a179-fa32eee57db6%40enterprisedb.com
2022-02-14 08:27:26 +01:00
Thomas Munro cba5b994c9 Use WL_SOCKET_CLOSED for client_connection_check_interval.
Previously we used poll() directly to check for a POLLRDHUP event.
Instead, use the WaitEventSet API to poll the socket for
WL_SOCKET_CLOSED, which knows how to detect this condition on many more
operating systems.

Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: Maksim Milyutin <milyutinma@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/77def86b27e41f0efcba411460e929ae%40postgrespro.ru
2022-02-14 16:52:23 +13:00
Amit Kapila 5e01001ffb WAL log unchanged toasted replica identity key attributes.
Currently, during UPDATE, the unchanged replica identity key attributes
are not logged separately because they are getting logged as part of the
new tuple. But if they are stored externally then the untoasted values are
not getting logged as part of the new tuple and logical replication won't
be able to replicate such UPDATEs. So we need to log such attributes as
part of the old_key_tuple during UPDATE.

Reported-by: Haiying Tang
Author: Dilip Kumar and Amit Kapila
Reviewed-by: Alvaro Herrera, Haiying Tang, Andres Freund
Backpatch-through: 10
Discussion: https://postgr.es/m/OS0PR01MB611342D0A92D4F4BF26C0F47FB229@OS0PR01MB6113.jpnprd01.prod.outlook.com
2022-02-14 08:55:58 +05:30
Robert Haas 751b8d23b7 pg_basebackup: Allow client-side LZ4 (de)compression.
LZ4 compression can now be performed on the client using
pg_basebackup -Ft --compress client-lz4, and LZ4 decompression of
a backup compressed on the server can be performed on the client
using pg_basebackup -Fp --compress server-lz4.

Dipesh Pandit, reviewed and tested by Jeevan Ladhe and Tushar Ahuja,
with a few corrections - and some documentation - by me.

Discussion: http://postgr.es/m/CAN1g5_FeDmiA9D8wdG2W6Lkq5CpubxOAqTmd2et9hsinTJtsMQ@mail.gmail.com
2022-02-11 09:41:42 -05:00
Robert Haas dab298471f Add support for server-side LZ4 base backup compression.
LZ4 compression can be a lot faster than gzip compression, so users
may prefer it even if the compression ratio is not as good. We will
want pg_basebackup to support LZ4 compression and decompression on the
client side as well, and there is a pending patch for that, but it's
by a different author, so I am committing this part separately for
that reason.

Jeevan Ladhe, reviewed by Tushar Ahuja and by me.

Discussion: http://postgr.es/m/CANm22Cg9cArXEaYgHVZhCnzPLfqXCZLAzjwTq7Fc0quXRPfbxA@mail.gmail.com
2022-02-11 08:29:38 -05:00
Tomas Vondra 0da92dc530 Logical decoding of sequences
This extends the logical decoding to also decode sequence increments.
We differentiate between sequences created in the current (in-progress)
transaction, and sequences created earlier. This mixed behavior is
necessary because while sequences are not transactional (increments are
not subject to ROLLBACK), relfilenode changes are. So we do this:

* Changes for sequences created in the same top-level transaction are
  treated as transactional, i.e. just like any other change from that
  transaction, and discarded in case of a rollback.

* Changes for sequences created earlier are applied immediately, as if
  performed outside any transaction. This applies also after ALTER
  SEQUENCE, which may create a new relfilenode.

Moreover, if we ever get support for DDL replication, the sequence
won't exist until the transaction gets applied.

Sequences created in the current transaction are tracked in a simple
hash table, identified by a relfilenode. That means a sequence may
already exist, but if a transaction does ALTER SEQUENCE then the
increments for the new relfilenode will be treated as transactional.

For each relfilenode we track the XID of (sub)transaction that created
it, which is needed for cleanup at transaction end. We don't need to
check the XID to decide if an increment is transactional - if we find a
match in the hash table, it has to be the same transaction.

This requires two minor changes to WAL-logging. Firstly, we need to
ensure the sequence record has a valid XID - until now the increment
might have XID 0 if it was the first change in a subxact. But the
sequence might have been created in the same top-level transaction. So
we ensure the XID is assigned when WAL-logging increments.

The other change is addition of "created" flag, marking increments for
newly created relfilenodes. This makes it easier to maintain the hash
table of sequences that need transactional handling.
Note: This is needed because of subxacts. A XID 0 might still have the
sequence created in a different subxact of the same top-level xact.

This does not include any changes to test_decoding and/or the built-in
replication - those will be committed in separate patches.

A patch adding decoding of sequences was originally submitted by Cary
Huang. This commit reworks various important aspects (e.g. the WAL
logging and transactional/non-transactional handling). However, the
original patch and reviews were very useful.

Author: Tomas Vondra, Cary Huang
Reviewed-by: Peter Eisentraut, Hannu Krosing, Andres Freund
Discussion: https://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com
Discussion: https://postgr.es/m/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
2022-02-10 18:43:51 +01:00
Robert Haas 0d4513b613 Remove server support for the previous base backup protocol.
Commit cc333f3233 added a new COPY
sub-protocol for taking base backups, but retained support for the
previous protocol. For the same reasons articulated in the message
for commit 9cd28c2e5f, remove support
for the previous protocol from the server.

Discussion: http://postgr.es/m/CA+TgmoazKcKUWtqVa0xZqSzbKgTH+X-aw4V7GyLD68EpDLMh8A@mail.gmail.com
2022-02-10 12:12:43 -05:00
Robert Haas 9cd28c2e5f Remove server support for old BASE_BACKUP command syntax.
Commit 0ba281cb4b added a new syntax
for the BASE_BACKUP command, with extensible options, but maintained
support for the legacy syntax. This isn't important for PostgreSQL,
where pg_basebackup works with older server versions but not newer
ones, but it could in theory matter for out-of-core users of the
replication protocol.

Discussion on pgsql-hackers, however, suggests that no one is aware
of any out-of-core use of the BASE_BACKUP command, and the consensus
is in favor of removing support for the old syntax to simplify the
code, so do that.

Discussion: http://postgr.es/m/CA+TgmoazKcKUWtqVa0xZqSzbKgTH+X-aw4V7GyLD68EpDLMh8A@mail.gmail.com
2022-02-10 10:48:33 -05:00
Fujii Masao 400fc6b648 Add min() and max() aggregates for xid8.
Bump catalog version.
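
A trivial usage sketch (assuming xid8 values can be written as casts
from text literals):

    SELECT min(x), max(x)
      FROM (VALUES ('5'::xid8), ('7'::xid8), ('11'::xid8)) AS t(x);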

Author: Ken Kato
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/47d77b18c44f87f8222c4c7a3e2dee6b@oss.nttdata.com
2022-02-10 12:33:41 +09:00
Daniel Gustafsson f48385c132 Fix typo in archive modules docs
Discussion: https://postgr.es/m/4F8E8D8F-45CA-4833-AB19-CC6105326583@yesql.se
2022-02-09 15:36:46 +01:00
Michael Paquier 38bfae3652 pg_upgrade: Move all the files generated internally to a subdirectory
Historically, the location of any files generated by pg_upgrade, such as
the per-database logs and internal dumps, has been the current working
directory, leaving all those files behind when using --retain or on a
failure.

Putting all those contents in a dedicated subdirectory makes the whole
thing easier to debug, and simplifies the code in charge of cleaning up the
logs.  Note that another reason is that this facilitates the move of
pg_upgrade to TAP with a fixed location for all the logs to grab if the
test fails repeatedly.

Initially, we thought about being able to specify the output directory
with a new option, but we have settled on using a subdirectory located
at the root of the new cluster's data folder, "pg_upgrade_output.d",
instead, since in the end the new data directory is the location of all
the data generated by pg_upgrade.  There is a catch with group permissions
here, though: if the new data folder has been initialized with this
option, we need to create all the files and paths with the correct
permissions or a base backup taken after a pg_upgrade --retain would
fail, meaning that GetDataDirectoryCreatePerm() has to be called before
creating the log paths, before a couple of sanity checks on the clusters
and before getting the socket directory for the cluster's host settings.
The idea of the new location is based on a suggestion from Peter
Eisentraut.

Also thanks to Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson, Tom
Lane and Bruce Momjian for the discussion (in alphabetical order).

Author: Justin Pryzby
Discussion: https://postgr.es/m/20211212025017.GN17618@telsasoft.com
2022-02-06 12:27:29 +09:00
Tom Lane cbadfc1f8a Doc: be clearer that foreign-table partitions need user-added constraints.
A very well-informed user might deduce this from what we said already,
but I'd bet against it.  Lay it out explicitly.

While here, rewrite the comment about tuple routing to be more
intelligible to an average SQL user.

Per bug #17395 from Alexander Lakhin.  Back-patch to v11.  (The text
in this area is different in v10 and I'm not sufficiently excited
about this point to adapt the patch.)

Discussion: https://postgr.es/m/17395-8c326292078d1a57@postgresql.org
2022-02-05 12:55:44 -05:00
Robert Haas 5ef1eefd76 Allow archiving via loadable modules.
Running a shell command for each file to be archived has a lot of
overhead and may not offer as much error checking as you want, or the
exact semantics that you want. So, offer the option to call a loadable
module for each file to be archived, rather than running a shell command.

Also, add a 'basic_archive' contrib module as an example implementation
that archives to a local directory.
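
As a configuration sketch (the GUC names archive_library and
basic_archive.archive_directory are assumptions here, not spelled out
above):

    -- hypothetical sketch; GUC names are assumptions
    ALTER SYSTEM SET archive_mode = on;  -- requires a server restart
    ALTER SYSTEM SET archive_library = 'basic_archive';
    ALTER SYSTEM SET basic_archive.archive_directory = '/mnt/server/archive';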

Nathan Bossart, with a little bit of kibitzing by me.

Discussion: http://postgr.es/m/20220202224433.GA1036711@nathanxps13
2022-02-03 14:05:02 -05:00
Peter Eisentraut 94aa7cc5f7 Add UNIQUE null treatment option
The SQL standard has been ambiguous about whether null values in
unique constraints should be considered equal or not.  Different
implementations have different behaviors.  In the SQL:202x draft, this
has been formalized by making this implementation-defined and adding
an option on unique constraint definitions UNIQUE [ NULLS [NOT]
DISTINCT ] to choose a behavior explicitly.
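
For example, a minimal sketch of the new column-constraint syntax:

    -- NULLs are treated as equal for uniqueness purposes
    CREATE TABLE t (x int UNIQUE NULLS NOT DISTINCT);
    INSERT INTO t VALUES (NULL);
    INSERT INTO t VALUES (NULL);  -- fails with a unique violation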

This patch adds this option to PostgreSQL.  The default behavior
remains UNIQUE NULLS DISTINCT.  Making this happen in the btree code
is pretty easy; most of the patch is just to carry the flag around to
all the places that need it.

The CREATE UNIQUE INDEX syntax extension is not from the standard,
it's my own invention.

I named all the internal flags, catalog columns, etc. in the negative
("nulls not distinct") so that the default PostgreSQL behavior is the
default if the flag is false.

Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/84e5ee1b-387e-9a54-c326-9082674bde78@enterprisedb.com
2022-02-03 11:48:21 +01:00
Bruce Momjian 9d179d9c23 doc: clarify syntax notation, particularly parentheses
Also move TCL syntax to the PL/tcl section.

Reported-by: davs2rt@gmail.com

Discussion: https://postgr.es/m/164308146320.12460.3590769444508751574@wrigleys.postgresql.org

Backpatch-through: 10
2022-02-02 21:53:52 -05:00
Peter Eisentraut 87669de72c Some cleanup for change of collate and ctype fields to type text
Some cleanup for commit 54637508f87bd5f07fb9406bac6b08240283be3b:
Reformat pg_database.dat to reflect the new field order.  Also update
the corresponding example in bki.sgml.  Reorder the way the fields are
filled in dbcommands.c to correspond to the new order.
2022-02-02 11:58:55 +01:00
Peter Eisentraut cb2bab14ff doc: Fix mistake in PL/Python documentation
Small thinko introduced by 94aceed317

Reported-by: nassehk@gmail.com
2022-02-02 09:14:26 +01:00
Tom Lane a5a9d77b8b Doc: modernize documentation for lo_create()/lo_creat().
At this point lo_creat() is a legacy function with little if any
real use-case, so describing it first doesn't make much sense.
Describe lo_create() first, and then explain lo_creat() as a
backwards-compatibility alternative.

Discussion: https://postgr.es/m/164353261519.713.8748040527537500758@wrigleys.postgresql.org
2022-02-01 10:57:38 -05:00
Michael Paquier d10e41d423 Introduce pg_settings_get_flags() to find the flags associated with a GUC
The most meaningful flags are shown, which are the ones useful for the
user and for automating and extending the set of tests supported
currently by check_guc.
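
For example, a quick way to inspect a parameter's flags:

    -- returns the set of flags exposed for the given parameter
    SELECT pg_settings_get_flags('work_mem');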

This script may actually be removed in the future, but we are not
completely sure yet if and how we want to support the remaining sanity
checks performed there, that are now integrated in the main regression
test suite as of this commit.

Thanks also to Peter Eisentraut and Kyotaro Horiguchi for the
discussion.

Bump catalog version.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20211129030833.GJ17618@telsasoft.com
2022-01-31 08:56:41 +09:00
Tom Lane 02b8048ba5 psql: improve tab-complete's handling of variant SQL names.
This patch improves tab completion's ability to deal with
valid variant spellings of SQL identifiers.  Notably:

* Unquoted upper-case identifiers are now downcased as the backend
would do, allowing them to be completed correctly.

* Tab completion can now match identifiers that are quoted even
though they don't need to be; for example "f<TAB> now completes
to "foo" if that's the only available name.  Previously, only
names that require quotes would be offered.

* Schema-qualified identifiers are now supported where SQL syntax
allows it; many lesser-used completion rules neglected this.

* Completion operations that refer back to some previously-typed
name (for example, to complete names of columns belonging to a
previously-mentioned table) now allow variant spellings of the
previous name too.

In addition, performance of tab completion queries has been
improved for databases containing many objects, although
you'd only be likely to notice with a heavily-loaded server.

Authors of future tab-completion patches should note that this
commit changes many details about how tab completion queries
must be written:

* Tab completion queries now deal in raw object names; do not
use quote_ident().

* The name-matching restriction in a query must now be written
as "outputcol LIKE '%s'", not "substring(outputcol,1,%d)='%s'".

* The SchemaQuery mechanism has been extended so that it can
handle queries that refer back to a previous name.  Most completion
queries that do that should be converted to SchemaQuery form.
Only consider using a literal query if the previous name can
never be schema-qualified.  Don't use a literal query if the
name-to-be-completed can validly be schema-qualified, either.

* Use set_completion_reference() to specify which word is the previous
name to consider, for either a SchemaQuery or a literal query.

* If you want to offer some keywords in addition to a query result
(for example, offer COLUMN in addition to column names after
"ALTER TABLE t RENAME"), do not use the old hack of tacking the
keywords on with UNION.  Instead use the new QUERY_PLUS macros
to write such keywords separately from the query proper.  The
"addon" macro arguments that used to be used for this purpose
are gone.

* If your query returns something that's not a SQL identifier
(such as an attribute number or enum label), use the new
QUERY_VERBATIM macros to prevent the result from incorrectly
getting double-quoted.  You may still need to use quote_literal
in such a query, too.

Tom Lane and Haiying Tang

Discussion: https://postgr.es/m/a63cbd45e3884cf9b3961c2a6a95dcb7@G08CNEXMBPEKD05.g08.fujitsu.local
2022-01-30 13:33:23 -05:00
Robert Haas 7f6772317b Adjust server-side backup to depend on pg_write_server_files.
I had made it depend on superuser, but that seems clearly inferior.
Also document the permissions requirement in the streaming replication
protocol section of the documentation, rather than only in the
section having to do with pg_basebackup.

Idea and patch from Dagfinn Ilmari Mannsåker.

Discussion: http://postgr.es/m/87bkzw160u.fsf@wibble.ilmari.org
2022-01-28 12:31:40 -05:00
Robert Haas d45099425e Allow server-side compression to be used with -Fp.
If you have a low-bandwidth connection between the client and the
server, it's reasonable to want to compress on the server side but
then decompress and extract the backup on the client side. This
commit allows you to do just that.

Dipesh Pandit, with minor and mostly cosmetic changes by me.

Discussion: http://postgr.es/m/CAN1g5_HiSh8ajUMd4ePtGyCXo89iKZTzaNyzP_qv1eJbi4YHXA@mail.gmail.com
2022-01-28 08:41:25 -05:00
Peter Eisentraut 43f33dc018 Add HEADER support to COPY text format
The COPY CSV format supports the HEADER option to output a header
line.  This patch adds the same option to the default text format.  On
input, the HEADER option causes the first line to be skipped, same as
with CSV.
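
For example, a minimal sketch with the new option (the table name is a
placeholder):

    -- emit, then skip, a header line in text format
    COPY mytable TO STDOUT WITH (FORMAT text, HEADER);
    COPY mytable FROM STDIN WITH (FORMAT text, HEADER);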

Author: Rémi Lapeyre <remi.lapeyre@lenstra.fr>
Discussion: https://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com
2022-01-28 09:44:47 +01:00
Peter Eisentraut 9a50f2e51c doc: Update ALTER COLLATION wording
The original text on refreshing collation versions was written
specifically for ICU, and then information for other providers was
incrementally tacked on at the end.  Reword this to be a bit more
general and less reflective of how it was added.
2022-01-28 08:21:36 +01:00
Peter Eisentraut c9cfc861fc Remove some trailing whitespace in documentation files 2022-01-27 18:31:01 +01:00
Peter Eisentraut 54637508f8 Change collate and ctype fields to type text
This changes the data type of the catalog fields datcollate, datctype,
collcollate, and collctype from name to text.  There wasn't ever a
really good reason for them to be of type name; presumably this was
just carried over from when they were fixed-size fields in pg_control,
first into the corresponding pg_database fields, and then to
pg_collation.  The values are not identifiers or object names, and we
don't ever look them up that way.

Changing to type text saves space in the typical case, since locale
names are typically only a few bytes long.  But it is also possible
that an ICU locale name with several customization options appended
could be longer than 63 bytes, so this also enables that case, which
was previously probably broken.

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a@2ndquadrant.com
2022-01-27 08:54:25 +01:00
Tom Lane bd233bdd8d Replace use of deprecated Python module distutils.sysconfig, take 2.
With Python 3.10, configure spits out warnings about the module
distutils.sysconfig being deprecated and scheduled for removal in
Python 3.12.  Change the uses in configure to use the module sysconfig
instead.  The logic stays largely the same, although we have to
rely on INCLUDEPY instead of the deprecated get_python_inc function.

Note that sysconfig exists since Python 2.7, so this moves the minimum
required version up from Python 2.6.  Also, sysconfig didn't exist in
Python 3.1, so the minimum 3.x version is now 3.2.

We should consider back-patching this if it gives no further trouble,
as the no-longer-supported versions are old enough to probably not
be interesting to anyone.

Peter Eisentraut, Tom Lane, Andres Freund

Discussion: https://postgr.es/m/c74add3c-09c4-a9dd-1a03-a846e5b2fc52@enterprisedb.com
2022-01-25 18:52:44 -05:00
Robert Haas e1f860f134 Tidy up a few cosmetic issues related to pg_basebackup.
Commit 0ad8032910 failed to update
the pg_basebackup documentation to mention that "client-" or
"server-" can now be prepended to the compression method name. Fix
it there, and also in the --help output that you get from running
the binary.

Also in the documentation, there's an old issue that the arguments to
--checkpoint shouldn't be marked as parameters, because "fast" and
"spread" are literal strings. Fix that too.

Dagfinn Ilmari Mannsåker, partly as per a report from
Shinoda Noriyoshi.

Discussion: http://postgr.es/m/PH7PR84MB1885C1CF433057807551172BEE5F9@PH7PR84MB1885.NAMPRD84.PROD.OUTLOOK.COM
2022-01-25 14:59:37 -05:00
Michael Paquier 410aa248e5 Fix various typos, grammar and code style in comments and docs
This fixes a set of issues that have accumulated over the past months
(or years) in various code areas.  Most fixes are related to some recent
additions, as of the development of v15.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220124030001.GQ23027@telsasoft.com
2022-01-25 09:40:04 +09:00
Robert Haas 0ad8032910 Server-side gzip compression.
pg_basebackup's --compression option now lets you write either
"client-gzip" or "server-gzip" instead of just "gzip" to specify
where the compression should be performed. If you write simply
"gzip" it's taken to mean "client-gzip" unless you also use
--target, in which case it is interpreted to mean "server-gzip",
because that's the only thing that makes any sense in that case.

To make this work, the BASE_BACKUP command now takes new
COMPRESSION and COMPRESSION_LEVEL options.

At present, pg_basebackup cannot decompress .gz files, so
server-side compression will cause a failure if (1) -Ft is not
used or (2) -R is used or (3) -D- is used without --no-manifest.

Along the way, I removed the information message added by commit
5c649fe153 which occurred if you
specified no compression level and told you that the default level
had been used instead. That seemed like more output than most
people would want.

Also along the way, this adds a check to the server for
unrecognized base backup options. This repairs a bug introduced
by commit 0ba281cb4b.

This commit also adds some new test cases for pg_verifybackup.
They take a server-side backup with and without compression, and
then extract the backup if we have the OS facilities available
to do so, and then run pg_verifybackup on the extracted
directory. That is a good test of the functionality added by
this commit and also improves test coverage for the backup target
patch (commit 3500ccc39b) and for
pg_verifybackup itself.

Patch by me, with a bug fix by Jeevan Ladhe.  The patch set of which
this is a part has also had review and/or testing from Tushar Ahuja,
Suraj Kharage, Dipesh Pandit, and Mark Dilger.

Discussion: http://postgr.es/m/CA+Tgmoa-ST7fMLsVJduOB7Eub=2WjfpHS+QxHVEpUoinf4bOSg@mail.gmail.com
2022-01-24 15:13:18 -05:00
Robert Haas aa01051418 pg_upgrade: Preserve database OIDs.
Commit 9a974cbcba arranged to preserve
relfilenodes and tablespace OIDs. For similar reasons, also arrange
to preserve database OIDs.

One problem is that, up until now, the OIDs assigned to the template0
and postgres databases have not been fixed. This could be a problem
when upgrading, because pg_upgrade might try to migrate a database
from the old cluster to the new cluster while keeping the OID and find
a different database with that OID, resulting in a failure. If it finds
a database with the same name and the same OID that's OK: it will be
dropped and recreated. But the same OID and a different name is a
problem.

To prevent that, fix the OIDs for postgres and template0 to specific
values less than 16384. To avoid running afoul of this rule, these
values should not be changed in future releases. It's not a problem
that these OIDs aren't fixed in existing releases, because the OIDs
that we're assigning here weren't used for either of these databases
in any previous release. Thus, there's no chance that an upgrade of
a cluster from any previous release will collide with the OIDs we're
assigning here. And going forward, the OIDs will always be fixed, so
the only potential collision is with a system database having the
same name and the same OID, which is OK.

This patch lets users assign a specific OID to a database as well,
provided however that it can't be less than 16384. I (rhaas) thought
it might be better not to expose this capability to users, but the
consensus was otherwise, so the syntax is documented. Letting users
assign OIDs below 16384 would not be OK, though, because a
user-created database with a low-numbered OID might collide with a
system-created database in a future release. We therefore prohibit
that.
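
The documented syntax is assumed here to look roughly like this sketch
(database name and OID are placeholders):

    -- sketch; a user-assigned OID must be 16384 or above
    CREATE DATABASE newdb OID 100000;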

Shruthi KC, based on an earlier patch from Antonin Houska, reviewed
and with some adjustments by me.

Discussion: http://postgr.es/m/CA+TgmoYgTwYcUmB=e8+hRHOFA0kkS6Kde85+UNdon6q7bt1niQ@mail.gmail.com
Discussion: http://postgr.es/m/CAASxf_Mnwm1Dh2vd5FAhVX6S1nwNSZUB1z12VddYtM++H2+p7w@mail.gmail.com
2022-01-24 14:23:43 -05:00
Tom Lane 4f02cbcb68 Doc: add cross-references between array_to_string and string_to_array.
These functions aren't exact inverses, but they're closely related;
yet we document them in two different sections.  Add cross-ref
<link>s to improve that situation.

While here, move the strpos and substr entries to re-alphabetize
Table 9.10.  Also, drop an ancient compatibility note about
string_to_array, which wasn't even on the right page, so probably
few people ever saw it.

Discussion: https://postgr.es/m/164287017550.704.5840412183184961677@wrigleys.postgresql.org
2022-01-22 15:44:51 -05:00
Michael Paquier 5c649fe153 Extend the options of pg_basebackup to control compression
The option --compress is extended to accept a compression method and an
optional compression level, using the grammar METHOD[:LEVEL].  The
methods currently supported are "none" and "gzip", for client-side
compression.  Any of those methods use only an integer value for the
compression level, but any method implemented in the future could use
more specific keywords if necessary.

This commit keeps the logic backward-compatible.  Hence, the following
compatibility rules apply for the new format of the option --compress:
* -z/--gzip is a synonym of --compress=gzip.
* --compress=NUM implies:
** --compress=none if NUM = 0.
** --compress=gzip:NUM if NUM > 0.

Note that there are also plans to extend more this grammar with
server-side compression.

Reviewed-by: Robert Haas, Magnus Hagander, Álvaro Herrera, David
G. Johnston, Georgios Kokolatos
Discussion: https://postgr.es/m/Yb3GEgWwcu4wZDuA@paquier.xyz
2022-01-21 11:08:43 +09:00
Tom Lane 512fc2dd79 Revert "Make configure prefer python3 to plain python."
This reverts commit f201da39ed.
The buildfarm is not ready for python3, evidently, so we'll
give the owners some more time to get set up.

Discussion: https://postgr.es/m/2872c9a0-4b0a-1354-d5f6-94d6f85ba354@enterprisedb.com
2022-01-20 17:32:21 -05:00
Robert Haas 3500ccc39b Support base backup targets.
pg_basebackup now has a --target=TARGET[:DETAIL] option. If specified,
it is sent to the server as the value of the TARGET option to the
BASE_BACKUP command. If DETAIL is included, it is sent as the value of
the new TARGET_DETAIL option to the BASE_BACKUP command.  If the
target is anything other than 'client', pg_basebackup assumes that it
will now be the server's job to write the backup in a location somehow
defined by the target, and that it therefore needs to write nothing
locally. However, the server will still send messages to the client
for progress reporting purposes.

On the server side, we now support two additional types of backup
targets.  There is a 'blackhole' target, which just throws away the
backup data without doing anything at all with it. Naturally, this
should only be used for testing and debugging purposes, since you will
not actually have a backup when it finishes running. More usefully,
there is also a 'server' target, so you can now use something like
'pg_basebackup -Xnone -t server:/SOME/PATH' to write a backup to some
location on the server. We can extend this to more types of targets
in the future, and might even want to create an extensibility
mechanism for adding new target types.

Since WAL fetching is handled with separate client-side logic, it's
not part of this mechanism; thus, backups with non-default targets
must use -Xnone or -Xfetch.

Patch by me, with a bug fix by Jeevan Ladhe.  The patch set of which
this is a part has also had review and/or testing from Tushar Ahuja,
Suraj Kharage, Dipesh Pandit, and Mark Dilger.

Discussion: http://postgr.es/m/CA+TgmoaYZbz0=Yk797aOJwkGJC-LK3iXn+wzzMx7KdwNpZhS5g@mail.gmail.com
2022-01-20 10:46:33 -05:00
Robert Haas ab4fd4f868 Remove 'datlastsysoid'.
It hasn't been used for anything for a long time. Up until recently,
we still queried it when dumping very old servers, but since
commit 30e7c175b8, there's no longer any
code at all that cares about it.

Discussion: http://postgr.es/m/CA+Tgmoa14=BRq0WEd0eevjEMn9EkghDB1FZEkBw7+UAb7tF49A@mail.gmail.com
2022-01-20 09:01:12 -05:00
Michael Paquier b2a76bb7d0 doc: Mention the level of locks taken on objects in COMMENT
This information was nowhere to be found.  This adds one note on the
COMMENT reference page, and one note in the section dedicated to explicit
locking, both stating that a SHARE UPDATE EXCLUSIVE lock is taken on the
object being commented on.
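
A quick way to observe the lock (object name is arbitrary):

    BEGIN;
    COMMENT ON TABLE some_table IS 'demo';
    SELECT mode FROM pg_locks
     WHERE relation = 'some_table'::regclass;    -- ShareUpdateExclusiveLock
    ROLLBACK;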

Author: Nikolai Berkoff
Reviewed-by: Laurenz Albe
Discussion: https://postgr.es/m/_0HDHIGcCdCsUyXn22QwI2FEuNR6Fs71rtgGX6hfyBlUh5rrnE2qMmvIFu9EY4Pijr2gUmJEAXCjuNU2Oxku9TryLp9CdHllpsCfN3gD0-Y=@pm.me
Backpatch-through: 10
2022-01-20 16:54:47 +09:00
Tom Lane f201da39ed Make configure prefer python3 to plain python.
This avoids possibly selecting Python 2.x on systems that have
both Python 2 and Python 3.  We used to feel that what "python"
links to is a user choice that we should honor.  However, we're
about to cease support for Python 2, so users will no longer have
any choice of that sort.  This small change is being made ahead
of the big Python-2-ectomy so that we can see how much of the
buildfarm is not yet prepared for that.  Systems with only
Python 2 will continue to build that way, for now.

Discussion: https://postgr.es/m/2872c9a0-4b0a-1354-d5f6-94d6f85ba354@enterprisedb.com
2022-01-19 15:38:58 -05:00
Michael Paquier 44129c0432 doc: Fix description of pg_replication_origin_oid() in error case
This function returns NULL if the replication origin given as the input
argument does not exist, contrary to what the docs previously
described.
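
For example:

    SELECT pg_replication_origin_oid('no_such_origin');  -- NULL rather than an error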

Author: Ian Barwick
Discussion: https://postgr.es/m/CAB8KJ=htJjBL=103URqjOxV2mqb4rjphDpMeKdyKq_QXt6h05w@mail.gmail.com
Backpatch-through: 10
2022-01-19 10:37:40 +09:00
Tom Lane 5987feb70b Make PQcancel use the PGconn's tcp_user_timeout and keepalives settings.
If connectivity to the server has been lost or become flaky, the
user might well try to send a query cancel.  It's highly annoying
if PQcancel hangs up in such a case, but that's exactly what's likely
to happen.  To ameliorate this problem, apply the PGconn's
tcp_user_timeout and keepalives settings to the TCP connection used
to send the cancel.  This should be safe on Unix machines, since POSIX
specifies that setsockopt() is async-signal-safe.  We are guessing
that WSAIoctl(SIO_KEEPALIVE_VALS) is similarly safe on Windows.
(Note that at least in psql and our other frontend programs, there's
no safety issue involved anyway, since we run PQcancel in its own
thread rather than in a signal handler.)

Most of the value here comes from the expectation that tcp_user_timeout
will be applied as a connection timeout.  That appears to happen on
Linux, even though its tcp(7) man page claims differently.  The
keepalive options probably won't help much, but as long as we can
apply them for not much code, we might as well.
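
As a hedged illustration of the user-visible effect (host and values
are arbitrary):

    psql "host=db.example.com dbname=postgres tcp_user_timeout=5000 keepalives_idle=30"
    # a query cancel (e.g. Ctrl-C in psql) over this connection now honors the same
    # tcp_user_timeout and keepalives settings instead of potentially hanging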

Jelte Fennema, reviewed by Fujii Masao and myself

Discussion: https://postgr.es/m/AM5PR83MB017870DE81FC84D5E21E9D1EF7AA9@AM5PR83MB0178.EURPRD83.prod.outlook.com
2022-01-18 14:13:13 -05:00
Robert Haas cc333f3233 Modify pg_basebackup to use a new COPY subprotocol for base backups.
In the new approach, all files across all tablespaces are sent in a
single COPY OUT operation. The CopyData messages are no longer raw
archive content; rather, each message is prefixed with a type byte
that describes its purpose, e.g. 'n' signifies the start of a new
archive and 'd' signifies archive or manifest data. This protocol
is significantly more extensible than the old approach, since we can
later create more message types, though not without concern for
backward compatibility.

The new protocol sends a few things to the client that the old one
did not. First, it sends the name of each archive explicitly, instead
of letting the client compute it. This is intended to make it easier
to write future patches that might send archives in a format other
than tar (e.g. cpio, pax, tar.gz). Second, it sends explicit progress
messages rather than allowing the client to assume that progress is
defined by the number of bytes received. This will help with future
features where the server compresses the data, or sends it someplace
directly rather than transmitting it to the client.

The old protocol is still supported for compatibility with previous
releases. The new protocol is selected by means of a new
TARGET option to the BASE_BACKUP command. Currently, the
only supported target is 'client'. Support for additional
targets will be added in a later commit.

Patch by me. The patch set of which this is a part has had review
and/or testing from Jeevan Ladhe, Tushar Ahuja, Suraj Kharage,
Dipesh Pandit, and Mark Dilger.

Discussion: http://postgr.es/m/CA+TgmoaYZbz0=Yk797aOJwkGJC-LK3iXn+wzzMx7KdwNpZhS5g@mail.gmail.com
2022-01-18 13:47:49 -05:00
Peter Eisentraut dda42ff8e6 Revert "Replace use of deprecated Python module distutils.sysconfig"
This reverts commit e0e567a106.

On various platforms, the new approach using the sysconfig module
reported incorrect values for the include directory, and so any
Python-related compilations failed.  Revert for now and revisit later.
2022-01-18 17:42:29 +01:00
Peter Eisentraut e0e567a106 Replace use of deprecated Python module distutils.sysconfig
With Python 3.10, configure spits out warnings about the module
distutils.sysconfig being deprecated and scheduled for removal in
Python 3.12.  Change the uses in configure to use the module sysconfig
instead.  The logic stays the same.

Note that sysconfig has existed since Python 2.7, so this moves the
minimum required version up from Python 2.6.

Discussion: https://www.postgresql.org/message-id/flat/c74add3c-09c4-a9dd-1a03-a846e5b2fc52%40enterprisedb.com
2022-01-18 06:37:02 +01:00
Michael Paquier 2158628864 Add support for --no-table-access-method in pg_{dump,dumpall,restore}
The logic is similar to default_tablespace in some ways: when this new
option is specified, no SET queries on default_table_access_method are
generated before dumping or restoring an object (tables and
materialized views support table AMs).

This option is useful for enforcing the use of the default access method
even if some tables included in a dump use an AM different from the
system's default.
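
For example (database and file names are arbitrary):

    pg_dump --no-table-access-method -Fc -f dump.custom mydb
    pg_restore --no-table-access-method -d mydb dump.custom
    # neither emits SET default_table_access_method, so the target's default AM is used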

There are already two cases in the TAP tests of pg_dump with a table and
a materialized view that use a non-default table AM, and these are
extended to check that the new option does not generate SET clauses on
default_table_access_method.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20211207153930.GR17618@telsasoft.com
2022-01-17 14:51:46 +09:00
Thomas Munro f47ed79cc8 Test replay of regression tests, attempt II.
See commit message for 123828a7fa.  The
only change this time is the order of the arguments passed to
pg_regress.  The previous version broke in the build farm environment
due to the contents of EXTRA_REGRESS_OPTS (see also commit 8cade04c
which had to do something similar).

Discussion: https://postgr.es/m/CA%2BhUKGKpRWQ9SxdxxDmTBCJoR0YnFpMBe7kyzY8SUQk%2BHeskxg%40mail.gmail.com
2022-01-17 16:34:55 +13:00
Michael Paquier dc686681e0 Introduce log_destination=jsonlog
"jsonlog" is a new value that can be added to log_destination to provide
logs in the JSON format, with its output written to a file, making it
the third type of destination of this kind, after "stderr" and
"csvlog".  The format is convenient to feed logs to other applications.
There is also a plugin external to core that provided this feature using
the hook in elog.c, but it had to overwrite the output of "stderr" to
work, so using both at the same time was not possible.  The
files generated by this log format are suffixed with ".json", and use
the same rotation policies as the other two formats depending on the
backend configuration.
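
A minimal sketch of enabling it (settings are illustrative):

    ALTER SYSTEM SET logging_collector = on;              -- needs a server restart
    ALTER SYSTEM SET log_destination = 'stderr,jsonlog';
    SELECT pg_reload_conf();                              -- log_destination is reloadable
    SELECT pg_current_logfile('jsonlog');                 -- path of the current .json file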

This takes advantage of the refactoring work done previously in ac7c807,
bed6ed3, 8b76f89 and 2d77d83 for the backend parts, and 72b76f7 for the
TAP tests, making the addition of any new file-based format rather
straightforward.

The documentation is updated to list all the keys and the values that
can exist in this new format.  pg_current_logfile() also required a
refresh for the new option.

Author: Sehrope Sarkuni, Michael Paquier
Reviewed-by: Nathan Bossart, Justin Pryzby
Discussion: https://postgr.es/m/CAH7T-aqswBM6JWe4pDehi1uOiufqe06DJWaU5=X7dDLyqUExHg@mail.gmail.com
2022-01-17 10:16:53 +09:00
Tomas Vondra 269b532aef Add stxdinherit flag to pg_statistic_ext_data
Add pg_statistic_ext_data.stxdinherit flag, so that for each extended
statistics definition we can store two versions of data - one for the
relation alone, one for the whole inheritance tree. This is analogous to
pg_statistic.stainherit, but we failed to include such a flag in the
catalogs for extended statistics, and we had to work around it (see commits
859b3003de, 36c4bc6e72 and 20b9fa308e).

This changes the relationship between the two catalogs storing extended
statistics objects (pg_statistic_ext and pg_statistic_ext_data). Until
now, there was a simple 1:1 mapping - for each definition there was one
pg_statistic_ext_data row, and this row was inserted while creating the
statistics (and then updated during ANALYZE). With the stxdinherit flag,
we don't know how many rows there will be (child relations may be added
after the statistics object is defined), so there may be up to two rows.

We could make CREATE STATISTICS always create both rows, but that
seems wasteful - without partitioning we only need stxdinherit=false
rows, and declaratively partitioned tables need only stxdinherit=true.
So we no longer initialize pg_statistic_ext_data in CREATE STATISTICS,
and instead make that a responsibility of ANALYZE. Which is what we do
for regular statistics too.
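
A sketch of what this looks like in the catalogs (object names are
arbitrary):

    CREATE STATISTICS part_stats (dependencies) ON a, b FROM part_tab;
    ANALYZE part_tab;
    SELECT d.stxdinherit
      FROM pg_statistic_ext e
      JOIN pg_statistic_ext_data d ON d.stxoid = e.oid
     WHERE e.stxname = 'part_stats';
    -- a declaratively partitioned table ends up with a single stxdinherit = true
    -- row, while an inheritance parent may get both a false and a true row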

Patch by me, with extensive improvements and fixes by Justin Pryzby.

Author: Tomas Vondra, Justin Pryzby
Reviewed-by: Tomas Vondra, Justin Pryzby
Discussion: https://postgr.es/m/20210923212624.GI831%40telsasoft.com
2022-01-16 13:38:01 +01:00
Tom Lane 4483b2cf29 Remove standby_schedule and associated test files.
Since this test schedule is not run by default, it's next door to
unused.  Moreover, its test coverage is very thin, and what there is
is just about entirely superseded by the src/test/recovery tests.
Let's drop it instead of carrying obsolete tests.

Discussion: https://postgr.es/m/3911012.1641246643@sss.pgh.pa.us
2022-01-15 15:54:10 -05:00
Peter Geoghegan 49c9d9fcfa Unify VACUUM VERBOSE and autovacuum logging.
The log_autovacuum_min_duration instrumentation used its own dedicated
code for logging, which was not reused by VACUUM VERBOSE.  This was
highly duplicative, and sometimes led to each code path using slightly
different accounting for essentially the same information.

Clean things up by making VACUUM VERBOSE reuse the same instrumentation
code.  This restructuring changes the VACUUM VERBOSE output itself,
but that seems like an overall improvement.  The
most noticeable change in VACUUM VERBOSE output is that it no longer
outputs a distinct message per index per round of index vacuuming.  Most
of the same information (about each index) is now shown in its new
per-operation summary message.  This is far more legible.

A few details are no longer displayed by VACUUM VERBOSE, but that's no
real loss in practice, especially in the common case where we don't need
multiple index scans/rounds of vacuuming.  This super fine-grained
information is still available via DEBUG2 messages, which might still be
useful in debugging scenarios.

VACUUM VERBOSE now shows new instrumentation, which is typically very
useful: all of the log_autovacuum_min_duration instrumentation that it
missed out on before now.  This includes information about WAL overhead,
buffers hit/missed/dirtied information, and I/O timing information.
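
For example (the output shape is summarized above rather than
reproduced here):

    VACUUM (VERBOSE) some_table;
    -- the INFO summary now mirrors the log_autovacuum_min_duration report,
    -- including WAL usage, buffer hit/miss/dirty counts and I/O timings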

VACUUM VERBOSE still retains a few INFO messages of its own.  This is
limited to output concerning the progress of heap rel truncation, as
well as some basic information about parallel workers.  These details
are still potentially quite useful.  They aren't a good fit for the log
output, which must summarize the whole operation.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-WzmW4Me7_qR4X4ka7pxP-jGmn7=Npma_-Z-9Y1eD0MQRLw@mail.gmail.com
2022-01-14 16:50:34 -08:00
Thomas Munro dccee0f2b7 Revert "Test replay of regression tests."
This reverts commit 123828a7fa.

Discussion: https://postgr.es/m/CA%2BhUKG%2BGBC-6QhOKt6Y7ccrXSjbRHB7Di295%3D0rAGhE7a7hSrQ%40mail.gmail.com
2022-01-15 00:44:32 +13:00