Commit Graph

15849 Commits

Michael Paquier 9198e63996 doc: Standardize capitalization of term "hot standby"/"Hot Standby"
"Hot Standby" was capitalized in a couple of places in the docs, as the
style primarily used when it was introduced, but this has not been much
respected across the years.  Per discussion, it is more natural for the
reader to use "hot standby" (aka lower-case only) when in the middle of
a sentence, and "Hot standby" (aka capitalized) in a title.  This commit
adjusts all the places in the docs to be consistent with this choice,
rather than applying one style or the other midway.

Author: Daniel Westermann
Reviewed-by: Kyotaro Horiguchi, Aleksander Alekseev, Robert Treat
Discussion: https://postgr.es/m/GVAP278MB093160025A779A1A5788D0EAD2039@GVAP278MB0931.CHEP278.PROD.OUTLOOK.COM
2022-03-11 15:16:21 +09:00
Michael Paquier e9537321a7 Add support for zstd with compression of full-page writes in WAL
wal_compression gains a new value, "zstd", to allow the compression of
full-page images using the compression method of the same name.
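
For example, a minimal way to enable it (a sketch, assuming a server
built with --with-zstd):

    ALTER SYSTEM SET wal_compression = 'zstd';
    SELECT pg_reload_conf();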

Compression is done using the default level recommended by the library,
ZSTD_CLEVEL_DEFAULT = 3.  Some benchmarking has shown that a lower level
could make sense for FPI compression, like 1 or 2, as the compression
rate did not change much while consuming a bit less CPU, but the tests
done covered only a few scenarios, so it is hard to come to a clear
conclusion.  In any case, there is no reason not to use the default
level, which is the one recommended by the library and should be fine
for most cases.

zstd easily outclasses pglz, and is better than LZ4 where one wants more
compression at the cost of extra CPU, but both are good enough in their
own scenarios, so the choice between them mainly comes down to a study
of the workload patterns and the schema involved.

This commit relies heavily on 4035cd5, which reshaped the code creating
and restoring full-page writes to be aware of the compression type,
making this integration straightforward.

This patch borrows some early work from Andrey Borodin, though the patch
got a complete rewrite.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20220222231948.GJ9008@telsasoft.com
2022-03-11 12:18:53 +09:00
Michael Paquier e3df32bbc3 doc: Add ALTER/DROP ROUTINE to the event trigger matrix
ALTER ROUTINE triggers the events ddl_command_start and ddl_command_end,
and DROP ROUTINE triggers sql_drop, ddl_command_start and
ddl_command_end, but this was not mentioned in the matrix table.

Reported-by: Leslie Lemaire
Discussion: https://postgr.es/m/164647533363.646.5802968483136493025@wrigleys.postgresql.org
Backpatch-through: 11
2022-03-09 14:59:08 +09:00
Michael Paquier 7687ca996e doc: Improve references to term "FSM" in pageinspect and pgfreespacemap
Author: Dong Wook Lee
Reviewed-by: Laurenz Albe
Discussion: https://postgr.es/m/CAAcBya+=F=HaHxJ7tGjAM1r=A=+bDbimpsex8Vqrb4GjqFDYsQ@mail.gmail.com
2022-03-09 10:43:25 +09:00
Robert Haas 7cf085f077 Add support for zstd base backup compression.
Both client-side compression and server-side compression are now
supported for zstd. In addition, a backup compressed by the server
using zstd can now be decompressed by the client in order to
accommodate the use of -Fp.

Jeevan Ladhe, with some edits by me.

Discussion: http://postgr.es/m/CA+Tgmobyzfbz=gyze2_LL1ZumZunmaEKbHQxjrFkOR7APZGu-g@mail.gmail.com
2022-03-08 09:52:43 -05:00
Amit Kapila d3e8368c4b Add the additional information to the logical replication worker errcontext.
This commit adds both the finish LSN (commit_lsn if the transaction was
committed, prepare_lsn for a prepared transaction, etc.) and the
replication origin name to the existing error context message.

This will help users specify the origin name and transaction finish LSN
to the pg_replication_origin_advance() SQL function in order to skip a
particular transaction.
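
For example, a sketch with made-up values, where the origin name and
finish LSN are taken from the apply worker's error context message:

    SELECT pg_replication_origin_advance('pg_16395', '0/1D59A68'::pg_lsn);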

Author: Masahiko Sawada
Reviewed-by: Takamichi Osumi, Euler Taveira, and Amit Kapila
Discussion: https://postgr.es/m/CAD21AoBarBf2oTF71ig2g_o=3Z_Dt6_sOpMQma1kFgbnA5OZ_w@mail.gmail.com
2022-03-08 08:08:32 +05:30
Andres Freund 4228cabb72 plpython: Adjust docs after removal of Python 2 support.
Reviewed-By: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20211031184548.g4sxfe47n2kyi55r@alap3.anarazel.de
2022-03-07 18:30:57 -08:00
Michael Paquier b3c8aae008 doc: Fix description of pg_stop_backup()
The function was still documented as returning a set of records,
something not true as of 62ce0c7.

Reported-by: Tom Lane
Discussion: https://postgr.es/m/3159823.1646320180@sss.pgh.pa.us
2022-03-04 09:51:12 +09:00
Tom Lane 50f03473ed Doc: update libpq.sgml for root-owned SSL private keys.
My oversight in a59c79564.

Discussion: https://postgr.es/m/f4b7bc55-97ac-9e69-7398-335e212f7743@pgmasters.net
2022-03-02 11:29:11 -05:00
Peter Eisentraut e58791c6ad Add id's to various elements in protocol.sgml
For easier direct linking.

Author: Brar Piening <brar@gmx.de>
Discussion: https://www.postgresql.org/message-id/flat/dbad4f77-4dce-1b05-2b65-831acb5d5b66@gmx.de
2022-03-02 10:33:12 +01:00
Amit Kapila 7a85073290 Reconsider pg_stat_subscription_workers view.
It was decided (refer to the Discussion link below) that the stats
collector is not an appropriate place to store the error information of
subscription workers.

This patch changes the pg_stat_subscription_workers view (introduced by
commit 8d74fc96db) so that it stores only statistics counters:
apply_error_count and sync_error_count, and has one entry for
each subscription. The removed error information such as error-XID and
the error message would be stored in another way in the future which is
more reliable and persistent.

After removing these error details, there is no longer any relation
information, so the subscription statistics are now cluster-wide
statistics.

The patch also changes the view name to pg_stat_subscription_stats,
since the word "worker" exposes the implementation detail that we use
one worker for each tablesync and one for apply.
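
A sketch of querying the renamed view:

    SELECT subname, apply_error_count, sync_error_count
      FROM pg_stat_subscription_stats;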

Author: Masahiko Sawada, based on suggestions by Andres Freund
Reviewed-by: Peter Smith, Haiying Tang, Takamichi Osumi, Amit Kapila
Discussion: https://postgr.es/m/20220125063131.4cmvsxbz2tdg6g65@alap3.anarazel.de
2022-03-01 06:17:52 +05:30
Tom Lane 2e517818f4 Fix SPI's handling of errors during transaction commit.
SPI_commit previously left it up to the caller to recover from any error
occurring during commit.  Since that's complicated and requires use of
low-level xact.c facilities, it's not too surprising that no caller got
it right.  Let's move the responsibility for cleanup into spi.c.  Doing
that requires redefining SPI_commit as starting a new transaction, so
that it becomes equivalent to SPI_commit_and_chain except that you get
default transaction characteristics instead of preserving the prior
transaction's characteristics.  We can make this pretty transparent
API-wise by redefining SPI_start_transaction() as a no-op.  Callers
that expect to do something in between might be surprised, but
available evidence is that no callers do so.

Having made that API redefinition, we can fix this mess by having
SPI_commit[_and_chain] trap errors and start a new, clean transaction
before re-throwing the error.  Likewise for SPI_rollback[_and_chain].
Some cleanup is also needed in AtEOXact_SPI, which was nowhere near
smart enough to deal with SPI contexts nested inside a committing
context.

While plperl and pltcl need no changes beyond removing their now-useless
SPI_start_transaction() calls, plpython needs some more work because it
hadn't gotten the memo about catching commit/rollback errors in the
first place.  Such an error resulted in longjmp'ing out of the Python
interpreter, which leaks Python stack entries at present and is reported
to crash Python 3.11 altogether.  Add the missing logic to catch such
errors and convert them into Python exceptions.
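
A rough sketch of what that looks like from PL/Python (hypothetical
table name; a failure during commit now surfaces as an exception):

    DO LANGUAGE plpython3u $$
    try:
        plpy.execute("INSERT INTO demo_tbl VALUES (1)")
        plpy.commit()
    except Exception as err:
        plpy.warning("commit failed: %s" % err)
    $$;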

We are probably going to have to back-patch this once Python 3.11 ships,
but it's a sufficiently basic change that I'm a bit nervous about doing
so immediately.  Let's let it bake awhile in HEAD first.

Peter Eisentraut and Tom Lane

Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org
2022-02-28 12:45:36 -05:00
Etsuro Fujita 04e706d423 postgres_fdw: Add support for parallel commit.
postgres_fdw commits remote (sub)transactions opened on remote server(s)
in a local (sub)transaction one by one when the local (sub)transaction
commits.  This patch allows it to commit the remote (sub)transactions in
parallel to improve performance.  This is enabled by the server option
"parallel_commit".  The default is false.

Etsuro Fujita, reviewed by Fujii Masao and David Zhang.

Discussion: http://postgr.es/m/CAPmGK17dAZCXvwnfpr1eTfknTGdt%3DhYTV9405Gt5SqPOX8K84w%40mail.gmail.com
2022-02-24 14:30:00 +09:00
Amit Kapila 52e4f0cd47 Allow specifying row filters for logical replication of tables.
This feature adds row filtering for publication tables. When a publication
is defined or modified, an optional WHERE clause can be specified. Rows
that don't satisfy this WHERE clause will be filtered out. This allows a
set of tables to be partially replicated. The row filter is per table. A
new row filter can be added simply by specifying a WHERE clause after the
table name. The WHERE clause must be enclosed by parentheses.
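
A sketch with a hypothetical table and column:

    CREATE PUBLICATION active_customers FOR TABLE customers
        WHERE (active IS TRUE);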

The row filter WHERE clause for a table added to a publication that
publishes UPDATE and/or DELETE operations must contain only columns that
are covered by REPLICA IDENTITY. The row filter WHERE clause for a table
added to a publication that publishes INSERT can use any column. If the
row filter evaluates to NULL, it is regarded as "false". The WHERE clause
only allows simple expressions that don't have user-defined functions,
user-defined operators, user-defined types, user-defined collations,
non-immutable built-in functions, or references to system columns. These
restrictions could be addressed in the future.

If you choose to do the initial table synchronization, only data that
satisfies the row filters is copied to the subscriber. If the subscription
has several publications in which a table has been published with
different WHERE clauses, rows that satisfy ANY of the expressions will be
copied. If a subscriber is a pre-15 version, the initial table
synchronization won't use row filters even if they are defined in the
publisher.

The row filters are applied before publishing the changes. If the
subscription has several publications in which the same table has been
published with different filters (for the same publish operation), those
expressions get OR'ed together so that rows satisfying any of the
expressions will be replicated.

This means all the other filters become redundant if (a) one of the
publications has no filter at all, (b) one of the publications was
created using FOR ALL TABLES, or (c) one of the publications was created
using FOR ALL TABLES IN SCHEMA and the table belongs to that same schema.

If your publication contains a partitioned table, the publication
parameter publish_via_partition_root determines if it uses the partition's
row filter (if the parameter is false, the default) or the root
partitioned table's row filter.

Psql commands \dRp+ and \d <table-name> will display any row filters.

Author: Hou Zhijie, Euler Taveira, Peter Smith, Ajin Cherian
Reviewed-by: Greg Nancarrow, Haiying Tang, Amit Kapila, Tomas Vondra, Dilip Kumar, Vignesh C, Alvaro Herrera, Andres Freund, Wei Wang
Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com
2022-02-22 08:11:50 +05:30
Michael Paquier ebf6c5249b Add compute_query_id = regress
"regress" is a new mode added to compute_query_id aimed at facilitating
regression testing when a module computing query IDs is loaded into the
backend, like pg_stat_statements.  It works the same way as "auto",
meaning that query IDs are computed if a module enables it, except that
query IDs are hidden in EXPLAIN outputs to ensure regression output
stability.
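
For instance, a sketch of turning it on for a session (a
superuser-settable GUC):

    SET compute_query_id = regress;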

Like other GUCs of this kind (force_parallel_mode, etc.), this new
setting can be added to an instance's postgresql.conf, or just passed
down with PGOPTIONS on the command line.  compute_query_id uses an enum
for its set of option values, meaning that this addition preserves ABI
compatibility.

Using this new configuration mode allows installcheck-world to pass when
running the tests on an instance with pg_stat_statements enabled,
stabilizing the test output while checking the paths doing query ID
computations.

Reported-by: Anton Melnikov
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/1634283396.372373993@f75.i.mail.ru
Discussion: https://postgr.es/m/YgHlxgc/OimuPYhH@paquier.xyz
Backpatch-through: 14
2022-02-22 10:22:15 +09:00
Michael Paquier bf4ed12b58 doc: Mention environment variable ZSTD in the TAP tests for MSVC
6c417bb has added the build infrastructure to support ZSTD, but forgot
to update this section of the docs to mention the variable ZSTD, as per
the change done in vcregress.pl.

While at it, reword this section of the docs to describe what happens in
the default case, as per a suggestion from Robert Haas.

Discussion: https://postgr.es/m/YhCL0fKnDv/Zvtuo@paquier.xyz
2022-02-21 09:55:55 +09:00
Michael Paquier d7a978601d doc: Simplify description of --with-lz4
LZ4 is now used in many more areas of the system than just WAL and table
data.  This commit simplifies the installation documentation for Windows
and *nix by removing the details of which areas are extended when
building with LZ4.

Author: Jeevan Ladhe
Discussion: https://postgr.es/m/CANm22Cgny8AF76pitomXp603NagwKXbA4dyN2Fac4yHPebqdqg@mail.gmail.com
2022-02-19 15:06:53 +09:00
Robert Haas 6c417bbcc8 Add support for building with ZSTD.
This commit doesn't actually add anything that uses ZSTD; that will be
done separately. It just puts the basic infrastructure into place.

Jeevan Ladhe, Robert Haas, and Michael Paquier. Reviewed by Justin
Pryzby and Andres Freund.

Discussion: http://postgr.es/m/CA+TgmoatQKGd+8SjcV+bzvw4XaoEwminHjU83yG12+NXtQzTTQ@mail.gmail.com
2022-02-18 13:40:31 -05:00
Tom Lane 2e372869aa Don't let libpq PGEVT_CONNRESET callbacks break a PGconn.
As currently implemented, failure of a PGEVT_CONNRESET callback
forces the PGconn into the CONNECTION_BAD state (without closing
the socket, which is inconsistent with other failure paths), and
prevents later callbacks from being called.  This seems highly
questionable, and indeed is questioned by comments in the source.

Instead, let's just ignore the result value of PGEVT_CONNRESET
calls.  Like the preceding commit, this converts event callbacks
into "pure observers" that cannot affect libpq's processing logic.

Discussion: https://postgr.es/m/3185105.1644960083@sss.pgh.pa.us
2022-02-18 11:43:04 -05:00
Tom Lane ce1e7a2f71 Don't let libpq "event" procs break the state of PGresult objects.
As currently implemented, failure of a PGEVT_RESULTCREATE callback
causes the PGresult to be converted to an error result.  This is
intellectually inconsistent (shouldn't a failing callback likewise
prevent creation of the error result? what about side-effects on the
behavior seen by other event procs? why does PQfireResultCreateEvents
act differently from PQgetResult?), but more importantly it destroys
any promises we might wish to make about the behavior of libpq in
nontrivial operating modes, such as pipeline mode.  For example,
it's not possible to promise that PGRES_PIPELINE_SYNC results will
be returned if an event callback fails on those.  With this
definition, expecting applications to behave sanely in the face of
possibly-failing callbacks seems like a very big lift.

Hence, redefine the result of a callback failure as being simply
that that event procedure won't be called any more for this PGresult
(which was true already).  Event procedures can still signal failure
back to the application through out-of-band mechanisms, for example
via their passthrough arguments.

Similarly, don't let failure of a PGEVT_RESULTCOPY callback prevent
PQcopyResult from succeeding.  That definition allowed a misbehaving
event proc to break single-row mode (our sole internal use of
PQcopyResult), and it probably had equally deleterious effects for
outside uses.

Discussion: https://postgr.es/m/3185105.1644960083@sss.pgh.pa.us
2022-02-18 11:37:27 -05:00
Fujii Masao 94c49d5340 postgres_fdw: Make postgres_fdw.application_name support more escape sequences.
Commit 6e0cb3dec1 allowed postgres_fdw.application_name to include
escape sequences %a (application name), %d (database name), %u (user name)
and %p (pid). In addition to them, this commit makes it support
the escape sequences for session ID (%c) and cluster name (%C).
These are helpful for investigating where each remote transaction came from.
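
A sketch combining the old and new escape sequences:

    SET postgres_fdw.application_name = 'app=%a db=%d user=%u pid=%p sid=%c cluster=%C';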

Author: Fujii Masao
Reviewed-by: Ryohei Takahashi, Kyotaro Horiguchi
Discussion: https://postgr.es/m/1041dc9a-c976-049f-9f14-e7d94c29c4b2@oss.nttdata.com
2022-02-18 11:38:12 +09:00
Andres Freund 19252e8ec9 plpython: Reject Python 2 during build configuration.
Python 2.7 went EOL on 2020-01-01, and supporting Python 2 requires a
fair bit of infrastructure.  Therefore we are removing Python 2 support
in plpython.

This patch just rejects Python 2 during configure / mkvcbuild.pl. Future
commits will remove the code and infrastructure for Python 2 support and
adjust more of the documentation. This way we can see the buildfarm state
after the removal sooner and we can be sure that failures are due to
desupporting Python 2, rather than caused by infrastructure cleanup.

Reviewed-By: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20211031184548.g4sxfe47n2kyi55r@alap3.anarazel.de
2022-02-16 22:47:35 -08:00
Peter Geoghegan 8f388f6f55 Increase hash_mem_multiplier default to 2.0.
Double the default setting for hash_mem_multiplier, from 1.0 to 2.0.
This setting makes hash-based executor nodes use twice the usual
work_mem limit.
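
As a sketch of the resulting behavior:

    SHOW hash_mem_multiplier;   -- 2.0 is now the default
    SET work_mem = '4MB';       -- hash nodes may now use up to roughly 8MB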

The PostgreSQL 15 release notes should have a compatibility note about
this change.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzndc_ROk6CY-bC6p9O53q974Y0Ey4WX8jcPbuTZYM4Q3A@mail.gmail.com
2022-02-16 18:41:52 -08:00
Etsuro Fujita 27d195a578 Doc: Update documentation for modifying postgres_fdw foreign tables.
Document that they can be modified using COPY as well.
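
A sketch with a hypothetical postgres_fdw foreign table and file path:

    COPY remote_measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv);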

Back-patch to v11 where commit 3d956d956 added support for COPY in
postgres_fdw.
2022-02-16 15:15:00 +09:00
Heikki Linnakangas 853c6400bf Fix race condition in 028_pitr_timelines.pl test, add note to docs.
The 028_pitr_timelines.pl test would sometimes hang, waiting for a WAL
segment that was just filled up to be archived. It was because the
test used 'pg_stat_archiver.last_archived_wal' to check if a file was
archived, but the order that WAL files are archived when a standby is
promoted is not fully deterministic, and 'last_archived_wal' tracks
the last segment that was archived, not the highest-numbered WAL
segment. Because of that, if the archiver archived segment 3 and then
segment 2, 'last_archived_wal' says 2, and the test query would think
that 3 has not been archived yet.
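
The check in question boils down to a query of this shape (a sketch):

    SELECT last_archived_wal FROM pg_stat_archiver;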

Normally, WAL files are marked ready for archival in order, and the
archiver process will process them in order, so that issue doesn't
arise.  We have used the same query on 'last_archived_wal' in a few
other tests with no problem. But when a standby is promoted, things
are a bit chaotic. After promotion, the server will try to archive all
the WAL segments from the old timeline that are in pg_wal, as well as
the history file and any new WAL segments on the new timeline. The
end-of-recovery checkpoint will create the .ready files for all the
WAL files on the old timeline, but at the same time, the new timeline
is opened up for business. A file from the new timeline can therefore
be archived before the files from the old timeline have been marked as
ready for archival.

It turns out that we don't really need to wait for the archival in
this particular test, because the standby server is about to be
stopped, and stopping a server will wait for the end-of-recovery
checkpoint and all WAL archivals to finish, anyway. So we can just
remove it from the test.

Add a note to the docs on 'pg_stat_archiver' view that files can be
archived out of order.

Reviewed-by: Tom Lane
Discussion: https://www.postgresql.org/message-id/3186114.1644960507@sss.pgh.pa.us
2022-02-16 01:37:48 +02:00
Andres Freund 1f6e0ce3be docs: Work around bug in the docbook xsl stylesheets.
docbook-xsl's index generation stylesheet (autoidx.xsl) has a small bug: It
doesn't include xlink in exclude-result-prefixes. Normally this just leads
to a single xmlns:xlink in the <div> containing the index, but because our
customization emits that, xmlns:xlink instead gets added to every element
output by autoidx.xsl below the <div>, totalling around 100kB.

Adding the spurious xmlns:xlink to the <div> ourselves isn't great, but avoids
the duplication.

Reviewed-By: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/20220213201618.qz6p6noon3wagr3f%40alap3.anarazel.de
2022-02-15 13:52:40 -08:00
Peter Eisentraut 6538be9e1e Fix XML namespace declarations
The XSL stylesheets used a mix of incorrect or outdated namespace
declarations for XHTML, probably based on ancient advice and examples.
Clean all this up.

Besides improving correctness (although probably no impact in
practice, other than possible validation failures), this removes a
bunch of useless namespace declarations in the HTML output.

Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/20220213201618.qz6p6noon3wagr3f%40alap3.anarazel.de
2022-02-15 11:13:49 +01:00
John Naylor a59135a81a Spell "startup process" with lower case in the documentation
Most uses were already lower case, so this just makes all user-visible
spellings consistent.

Bharath Rupireddy

The proposed patch also had analogous changes for the code comments,
but I decided that wasn't worth the churn.

Discussion:
https://www.postgresql.org/message-id/flat/CALj2ACW7%2Bv_0QBPoWB%3DqKr67JKC019Htm%3DX8sKewS17bOquefg%40mail.gmail.com
2022-02-15 14:30:57 +07:00
Peter Eisentraut 37851a8b83 Database-level collation version tracking
This adds to database objects the same version tracking that collation
objects have.  There is a new pg_database column datcollversion that
stores the version, a new function
pg_database_collation_actual_version() to get the version from the
operating system, and a new subcommand ALTER DATABASE ... REFRESH
COLLATION VERSION.
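
A sketch with a hypothetical database name:

    SELECT datcollversion,
           pg_database_collation_actual_version(oid) AS os_version
      FROM pg_database WHERE datname = 'mydb';

    ALTER DATABASE mydb REFRESH COLLATION VERSION;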

This was not added together with pg_collation.collversion originally,
since at the time version tracking was only supported for ICU, and ICU
at the database level is not currently supported.  But we now have
version tracking for glibc (since PG13), FreeBSD (since PG14), and
Windows (since PG13), so this is useful to have now.

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f0ff3190-29a3-5b39-a179-fa32eee57db6%40enterprisedb.com
2022-02-14 08:27:26 +01:00
Thomas Munro cba5b994c9 Use WL_SOCKET_CLOSED for client_connection_check_interval.
Previously we used poll() directly to check for a POLLRDHUP event.
Instead, use the WaitEventSet API to poll the socket for
WL_SOCKET_CLOSED, which knows how to detect this condition on many more
operating systems.
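
The user-facing setting is unchanged; a sketch of enabling the check:

    SET client_connection_check_interval = '10s';  -- 0 (disabled) is the default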

Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: Maksim Milyutin <milyutinma@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/77def86b27e41f0efcba411460e929ae%40postgrespro.ru
2022-02-14 16:52:23 +13:00
Amit Kapila 5e01001ffb WAL log unchanged toasted replica identity key attributes.
Currently, during UPDATE, the unchanged replica identity key attributes
are not logged separately because they are logged as part of the new
tuple. But if they are stored externally, then the untoasted values are
not logged as part of the new tuple and logical replication won't be
able to replicate such UPDATEs. So we need to log such attributes as
part of the old_key_tuple during UPDATE.

Reported-by: Haiying Tang
Author: Dilip Kumar and Amit Kapila
Reviewed-by: Alvaro Herrera, Haiying Tang, Andres Freund
Backpatch-through: 10
Discussion: https://postgr.es/m/OS0PR01MB611342D0A92D4F4BF26C0F47FB229@OS0PR01MB6113.jpnprd01.prod.outlook.com
2022-02-14 08:55:58 +05:30
Robert Haas 751b8d23b7 pg_basebackup: Allow client-side LZ4 (de)compression.
LZ4 compression can now be performed on the client using
pg_basebackup -Ft --compress client-lz4, and LZ4 decompression of
a backup compressed on the server can be performed on the client
using pg_basebackup -Fp --compress server-lz4.

Dipesh Pandit, reviewed and tested by Jeevan Ladhe and Tushar Ahuja,
with a few corrections - and some documentation - by me.

Discussion: http://postgr.es/m/CAN1g5_FeDmiA9D8wdG2W6Lkq5CpubxOAqTmd2et9hsinTJtsMQ@mail.gmail.com
2022-02-11 09:41:42 -05:00
Robert Haas dab298471f Add support for server-side LZ4 base backup compression.
LZ4 compression can be a lot faster than gzip compression, so users
may prefer it even if the compression ratio is not as good. We will
want pg_basebackup to support LZ4 compression and decompression on the
client side as well, and there is a pending patch for that, but it's
by a different author, so I am committing this part separately for
that reason.

Jeevan Ladhe, reviewed by Tushar Ahuja and by me.

Discussion: http://postgr.es/m/CANm22Cg9cArXEaYgHVZhCnzPLfqXCZLAzjwTq7Fc0quXRPfbxA@mail.gmail.com
2022-02-11 08:29:38 -05:00
Tomas Vondra 0da92dc530 Logical decoding of sequences
This extends the logical decoding to also decode sequence increments.
We differentiate between sequences created in the current (in-progress)
transaction, and sequences created earlier. This mixed behavior is
necessary because while sequences are not transactional (increments are
not subject to ROLLBACK), relfilenode changes are. So we do this:

* Changes for sequences created in the same top-level transaction are
  treated as transactional, i.e. just like any other change from that
  transaction, and discarded in case of a rollback.

* Changes for sequences created earlier are applied immediately, as if
  performed outside any transaction. This applies also after ALTER
  SEQUENCE, which may create a new relfilenode.

Moreover, if we ever get support for DDL replication, the sequence
won't exist until the transaction gets applied.

Sequences created in the current transaction are tracked in a simple
hash table, identified by a relfilenode. That means a sequence may
already exist, but if a transaction does ALTER SEQUENCE then the
increments for the new relfilenode will be treated as transactional.

For each relfilenode we track the XID of (sub)transaction that created
it, which is needed for cleanup at transaction end. We don't need to
check the XID to decide if an increment is transactional - if we find a
match in the hash table, it has to be the same transaction.

This requires two minor changes to WAL-logging. Firstly, we need to
ensure the sequence record has a valid XID - until now the increment
might have had XID 0 if it was the first change in a subxact. But the
sequence might have been created in the same top-level transaction. So
we ensure the XID is assigned when WAL-logging increments.

The other change is the addition of a "created" flag, marking increments
for newly created relfilenodes. This makes it easier to maintain the
hash table of sequences that need transactional handling.
Note: This is needed because of subxacts. An increment with XID 0 might
still belong to a sequence created in a different subxact of the same
top-level xact.

This does not include any changes to test_decoding and/or the built-in
replication - those will be committed in separate patches.

A patch adding decoding of sequences was originally submitted by Cary
Huang. This commit reworks various important aspects (e.g. the WAL
logging and transactional/non-transactional handling). However, the
original patch and reviews were very useful.

Author: Tomas Vondra, Cary Huang
Reviewed-by: Peter Eisentraut, Hannu Krosing, Andres Freund
Discussion: https://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com
Discussion: https://postgr.es/m/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
2022-02-10 18:43:51 +01:00
Robert Haas 0d4513b613 Remove server support for the previous base backup protocol.
Commit cc333f3233 added a new COPY
sub-protocol for taking base backups, but retained support for the
previous protocol. For the same reasons articulated in the message
for commit 9cd28c2e5f, remove support
for the previous protocol from the server.

Discussion: http://postgr.es/m/CA+TgmoazKcKUWtqVa0xZqSzbKgTH+X-aw4V7GyLD68EpDLMh8A@mail.gmail.com
2022-02-10 12:12:43 -05:00
Robert Haas 9cd28c2e5f Remove server support for old BASE_BACKUP command syntax.
Commit 0ba281cb4b added a new syntax
for the BASE_BACKUP command, with extensible options, but maintained
support for the legacy syntax. This isn't important for PostgreSQL,
where pg_basebackup works with older server versions but not newer
ones, but it could in theory matter for out-of-core users of the
replication protocol.

Discussion on pgsql-hackers, however, suggests that no one is aware
of any out-of-core use of the BASE_BACKUP command, and the consensus
is in favor of removing support for the old syntax to simplify the
code, so do that.

Discussion: http://postgr.es/m/CA+TgmoazKcKUWtqVa0xZqSzbKgTH+X-aw4V7GyLD68EpDLMh8A@mail.gmail.com
2022-02-10 10:48:33 -05:00
Fujii Masao 400fc6b648 Add min() and max() aggregates for xid8.
Bump catalog version.
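
A sketch using xid8 values from pg_current_xact_id():

    CREATE TABLE xact_log (xid xid8);
    INSERT INTO xact_log SELECT pg_current_xact_id();
    SELECT min(xid), max(xid) FROM xact_log;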

Author: Ken Kato
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/47d77b18c44f87f8222c4c7a3e2dee6b@oss.nttdata.com
2022-02-10 12:33:41 +09:00
Daniel Gustafsson f48385c132 Fix typo in archive modules docs
Discussion: https://postgr.es/m/4F8E8D8F-45CA-4833-AB19-CC6105326583@yesql.se
2022-02-09 15:36:46 +01:00
Michael Paquier 38bfae3652 pg_upgrade: Move all the files generated internally to a subdirectory
Historically, the location of any files generated by pg_upgrade, such as
the per-database logs and internal dumps, has been the current working
directory, leaving all those files behind when using --retain or on a
failure.

Putting all those contents in a targeted subdirectory makes the whole
thing easier to debug, and simplifies the code in charge of cleaning up
the logs.  Another reason is that this facilitates the move of
pg_upgrade to TAP, with a fixed location for all the logs to grab if the
test fails repeatedly.

Initially, we thought about being able to specify the output directory
with a new option, but we have settled on using a subdirectory located
at the root of the new cluster's data folder, "pg_upgrade_output.d",
instead, since in the end the new data directory is the location of all
the data generated by pg_upgrade.  There is a caveat with group
permissions here, though: if the new data folder has been initialized
with that option, we need to create all the files and paths with the
correct permissions, or a base backup taken after pg_upgrade --retain
would fail.  This means that GetDataDirectoryCreatePerm() has to be
called before creating the log paths, before a couple of sanity checks
on the clusters,
The idea of the new location is based on a suggestion from Peter
Eisentraut.

Also thanks to Andrew Dunstan, Peter Eisentraut, Daniel Gustafsson, Tom
Lane and Bruce Momjian for the discussion (in alphabetical order).

Author: Justin Pryzby
Discussion: https://postgr.es/m/20211212025017.GN17618@telsasoft.com
2022-02-06 12:27:29 +09:00
Tom Lane cbadfc1f8a Doc: be clearer that foreign-table partitions need user-added constraints.
A very well-informed user might deduce this from what we said already,
but I'd bet against it.  Lay it out explicitly.
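
A sketch of the case in question (hypothetical names; a CHECK constraint
matching the partition bounds has to be created by the user on the
remote table, it is not enforced remotely just by creating the
partition):

    CREATE FOREIGN TABLE orders_2022 PARTITION OF orders
        FOR VALUES FROM ('2022-01-01') TO ('2023-01-01')
        SERVER remote_srv;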

While here, rewrite the comment about tuple routing to be more
intelligible to an average SQL user.

Per bug #17395 from Alexander Lakhin.  Back-patch to v11.  (The text
in this area is different in v10 and I'm not sufficiently excited
about this point to adapt the patch.)

Discussion: https://postgr.es/m/17395-8c326292078d1a57@postgresql.org
2022-02-05 12:55:44 -05:00
Robert Haas 5ef1eefd76 Allow archiving via loadable modules.
Running a shell command for each file to be archived has a lot of
overhead and may not offer as much error checking as you want, or the
exact semantics that you want. So, offer the option to call a loadable
module for each file to be archived, rather than running a shell command.

Also, add a 'basic_archive' contrib module as an example implementation
that archives to a local directory.
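
A sketch of wiring it up (the archive directory path is made up):

    ALTER SYSTEM SET archive_mode = 'on';             -- needs a restart
    ALTER SYSTEM SET archive_library = 'basic_archive';
    ALTER SYSTEM SET basic_archive.archive_directory = '/mnt/wal_archive';
    SELECT pg_reload_conf();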

Nathan Bossart, with a little bit of kibitzing by me.

Discussion: http://postgr.es/m/20220202224433.GA1036711@nathanxps13
2022-02-03 14:05:02 -05:00
Peter Eisentraut 94aa7cc5f7 Add UNIQUE null treatment option
The SQL standard has been ambiguous about whether null values in
unique constraints should be considered equal or not.  Different
implementations have different behaviors.  In the SQL:202x draft, this
has been formalized by making this implementation-defined and adding
an option on unique constraint definitions UNIQUE [ NULLS [NOT]
DISTINCT ] to choose a behavior explicitly.
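
A sketch of the new option and its effect:

    CREATE TABLE t (a int, b int, UNIQUE NULLS NOT DISTINCT (a, b));
    INSERT INTO t VALUES (1, NULL);
    INSERT INTO t VALUES (1, NULL);  -- now rejected as a duplicate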

This patch adds this option to PostgreSQL.  The default behavior
remains UNIQUE NULLS DISTINCT.  Making this happen in the btree code
is pretty easy; most of the patch is just to carry the flag around to
all the places that need it.

The CREATE UNIQUE INDEX syntax extension is not from the standard,
it's my own invention.

I named all the internal flags, catalog columns, etc. in the negative
("nulls not distinct") so that the default PostgreSQL behavior is the
default if the flag is false.

Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/84e5ee1b-387e-9a54-c326-9082674bde78@enterprisedb.com
2022-02-03 11:48:21 +01:00
Bruce Momjian 9d179d9c23 doc: clarify syntax notation, particularly parentheses
Also move TCL syntax to the PL/tcl section.

Reported-by: davs2rt@gmail.com

Discussion: https://postgr.es/m/164308146320.12460.3590769444508751574@wrigleys.postgresql.org

Backpatch-through: 10
2022-02-02 21:53:52 -05:00
Peter Eisentraut 87669de72c Some cleanup for change of collate and ctype fields to type text
Some cleanup for commit 54637508f87bd5f07fb9406bac6b08240283be3b:
Reformat pg_database.dat to reflect the new field order.  Also update
the corresponding example in bki.sgml.  Reorder the way the fields are
filled in dbcommands.c to correspond to the new order.
2022-02-02 11:58:55 +01:00
Peter Eisentraut cb2bab14ff doc: Fix mistake in PL/Python documentation
Small thinko introduced by 94aceed317

Reported-by: nassehk@gmail.com
2022-02-02 09:14:26 +01:00
Tom Lane a5a9d77b8b Doc: modernize documentation for lo_create()/lo_creat().
At this point lo_creat() is a legacy function with little if any
real use-case, so describing it first doesn't make much sense.
Describe lo_create() first, and then explain lo_creat() as a
backwards-compatibility alternative.
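
A sketch of the two server-side calls:

    SELECT lo_create(0);     -- server picks an unused OID
    SELECT lo_creat(-1);     -- legacy form; the argument is only historical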

Discussion: https://postgr.es/m/164353261519.713.8748040527537500758@wrigleys.postgresql.org
2022-02-01 10:57:38 -05:00
Michael Paquier d10e41d423 Introduce pg_settings_get_flags() to find flags associated to a GUC
The most meaningful flags are shown: the ones useful for the user and
for automating and extending the set of tests currently supported by
check_guc.
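
A sketch of calling the new function:

    SELECT pg_settings_get_flags('work_mem');
    -- returns a text[] of flags, or NULL for an unknown setting name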

This script may actually be removed in the future, but we are not
completely sure yet if and how we want to support the remaining sanity
checks performed there, that are now integrated in the main regression
test suite as of this commit.

Thanks also to Peter Eisentraut and Kyotaro Horiguchi for the
discussion.

Bump catalog version.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20211129030833.GJ17618@telsasoft.com
2022-01-31 08:56:41 +09:00
Tom Lane 02b8048ba5 psql: improve tab-complete's handling of variant SQL names.
This patch improves tab completion's ability to deal with
valid variant spellings of SQL identifiers.  Notably:

* Unquoted upper-case identifiers are now downcased as the backend
would do, allowing them to be completed correctly.

* Tab completion can now match identifiers that are quoted even
though they don't need to be; for example "f<TAB> now completes
to "foo" if that's the only available name.  Previously, only
names that require quotes would be offered.

* Schema-qualified identifiers are now supported where SQL syntax
allows it; many lesser-used completion rules neglected this.

* Completion operations that refer back to some previously-typed
name (for example, to complete names of columns belonging to a
previously-mentioned table) now allow variant spellings of the
previous name too.

In addition, performance of tab completion queries has been
improved for databases containing many objects, although
you'd only be likely to notice with a heavily-loaded server.

Authors of future tab-completion patches should note that this
commit changes many details about how tab completion queries
must be written:

* Tab completion queries now deal in raw object names; do not
use quote_ident().

* The name-matching restriction in a query must now be written
as "outputcol LIKE '%s'", not "substring(outputcol,1,%d)='%s'".

* The SchemaQuery mechanism has been extended so that it can
handle queries that refer back to a previous name.  Most completion
queries that do that should be converted to SchemaQuery form.
Only consider using a literal query if the previous name can
never be schema-qualified.  Don't use a literal query if the
name-to-be-completed can validly be schema-qualified, either.

* Use set_completion_reference() to specify which word is the previous
name to consider, for either a SchemaQuery or a literal query.

* If you want to offer some keywords in addition to a query result
(for example, offer COLUMN in addition to column names after
"ALTER TABLE t RENAME"), do not use the old hack of tacking the
keywords on with UNION.  Instead use the new QUERY_PLUS macros
to write such keywords separately from the query proper.  The
"addon" macro arguments that used to be used for this purpose
are gone.

* If your query returns something that's not a SQL identifier
(such as an attribute number or enum label), use the new
QUERY_VERBATIM macros to prevent the result from incorrectly
getting double-quoted.  You may still need to use quote_literal
in such a query, too.

Tom Lane and Haiying Tang

Discussion: https://postgr.es/m/a63cbd45e3884cf9b3961c2a6a95dcb7@G08CNEXMBPEKD05.g08.fujitsu.local
2022-01-30 13:33:23 -05:00
Robert Haas 7f6772317b Adjust server-side backup to depend on pg_write_server_files.
I had made it depend on superuser, but that seems clearly inferior.
Also document the permissions requirement in the streaming replication
protocol section of the documentation, rather than only in the
section having to do with pg_basebackup.
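
A sketch of granting the needed role membership (hypothetical role name):

    GRANT pg_write_server_files TO backup_operator;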

Idea and patch from Dagfinn Ilmari Mannsåker.

Discussion: http://postgr.es/m/87bkzw160u.fsf@wibble.ilmari.org
2022-01-28 12:31:40 -05:00
Robert Haas d45099425e Allow server-side compression to be used with -Fp.
If you have a low-bandwidth connection between the client and the
server, it's reasonable to want to compress on the server side but
then decompress and extract the backup on the client side. This
commit allows you to do just that.

Dipesh Pandit, with minor and mostly cosmetic changes by me.

Discussion: http://postgr.es/m/CAN1g5_HiSh8ajUMd4ePtGyCXo89iKZTzaNyzP_qv1eJbi4YHXA@mail.gmail.com
2022-01-28 08:41:25 -05:00