Commit Graph

38714 Commits

Author SHA1 Message Date
Michael Paquier 62ce0c758d Fix catalog data of pg_stop_backup(), labelled v2
This function has been incorrectly marked as a set-returning function
with prorows (estimated number of rows) set to 1 since its creation in
7117685, which introduced non-exclusive backups.  There is no need for
that, as the function is designed to return only one tuple.

This commit fixes the catalog definition of pg_stop_backup_v2() so that it
is no longer marked as proretset, with prorows set to 0.  This
simplifies its internals by removing one tuplestore (used for one single
record anyway) and by removing all the checks related to a set-returning
function.

Issue found during my quest to simplify some of the logic used in
in-core system functions.

Bump catalog version.

Reviewed-by: Aleksander Alekseev, Kyotaro Horiguchi
Discussion: https://postgr.es/m/Yh8guT78f1Ercfzw@paquier.xyz
2022-03-03 10:51:57 +09:00
Tatsuo Ishii 506035b0b8 Fix typo in pgbench messages.
Author: KAWAMOTO Masaya
Reviewed-by: Fabien COELHO
Discussion: https://postgr.es/m/20220224115622.41e671e3449ebd8c270e9103%40sraoss.co.jp
2022-03-02 08:28:12 +09:00
Michael Paquier dc57366c58 Fix check for PGHOST[ADDR] in pg_upgrade with Windows and temporary paths
The checks currently done at the startup of pg_upgrade on PGHOST and
PGHOSTADDR, meant to avoid any attempt to access an external cluster,
fail when those parameters are set to Windows paths or even to temporary
paths prefixed by an '@', because only strings beginning with a slash
were considered valid paths.

As mentioned by Andres, is_unixsock_path() is designed to detect such
cases, so, like the other code paths dealing with the same problem (psql
and libpq), use it rather than assuming that all valid paths are
prefixed with just a slash.

This issue has been found while testing the TAP tests of pg_upgrade
through the CI on Windows.  This is a bug, but nobody has complained
about it in all the time pg_upgrade has existed, so no backpatch is done,
at least for now.

Analyzed-by: Andres Freund, Michael Paquier
Discussion: https://postgr.es/m/YeYj4DU5qY/rtKXT@paquier.xyz
2022-03-02 07:37:07 +09:00
Peter Eisentraut 9028cce426 psql: Additional tests
Add a few TAP tests for things that happen while a user query is being
sent:

- \timing
- client encoding handling
- notifications

Discussion: https://www.postgresql.org/message-id/3199e176-424e-1bef-f180-c1548466c2da@enterprisedb.com
2022-03-01 11:23:28 +01:00
Michael Paquier a33e17f210 Rework internal command generation of pg_rewind
pg_rewind generates and executes internally up to two commands to work
on the target cluster, depending on the options given by its caller:
- postgres -C to retrieve the value of restore_command, when using
-c/--restore-target-wal.
- postgres --single to enforce recovery once and get the target cluster
in a clean shutdown state.

Both commands have been applying incorrect quoting rules, which could
lead to failures when for example using a target data directory with
unexpected characters like CRLFs.  Those commands are now generated with
PQExpBuffer, making use of string_utils.h to quote them as they should
be.  We may extend those commands in the future with more
options, so this makes any upcoming additions easier.
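
As a hedged illustration of the quoting approach (variable names are
stand-ins, not the exact pg_rewind code), the single-user-mode command can
be assembled like this; appendShellString() applies shell-safe quoting and
reports a clean error for characters such as CR/LF that cannot be quoted:

    #include "fe_utils/string_utils.h"
    #include "pqexpbuffer.h"

    PQExpBufferData postgres_cmd;

    initPQExpBuffer(&postgres_cmd);
    appendPQExpBufferStr(&postgres_cmd, exec_path);         /* path to postgres */
    appendPQExpBufferStr(&postgres_cmd, " --single -F -D ");
    appendShellString(&postgres_cmd, datadir_target);       /* shell-safe quoting */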

This is arguably a bug fix, but nobody has complained about the existing
code being a problem either, so no backpatch is done.

Extracted from a larger patch by the same author.

Author: Gunnar "Nick" Bluth
Discussion: https://postgr.es/m/7c59265d-ac50-b0aa-ca1e-65e8bd27642a@pro-open.de
2022-03-01 12:52:25 +09:00
Amit Kapila 7a85073290 Reconsider pg_stat_subscription_workers view.
It was decided (refer to the Discussion link below) that the stats
collector is not an appropriate place to store the error information of
subscription workers.

This patch changes the pg_stat_subscription_workers view (introduced by
commit 8d74fc96db) so that it stores only statistics counters:
apply_error_count and sync_error_count, and has one entry for
each subscription. The removed error information such as error-XID and
the error message would be stored in another way in the future which is
more reliable and persistent.

After removing these error details, there is no longer any relation
information, so the subscription statistics are now cluster-wide
statistics.

The patch also changes the view name to pg_stat_subscription_stats, since
the word "worker" refers to an implementation detail, namely that we use
one worker for each tablesync and one for apply.

Author: Masahiko Sawada, based on suggestions by Andres Freund
Reviewed-by: Peter Smith, Haiying Tang, Takamichi Osumi, Amit Kapila
Discussion: https://postgr.es/m/20220125063131.4cmvsxbz2tdg6g65@alap3.anarazel.de
2022-03-01 06:17:52 +05:30
Tom Lane 54bd1e43ca Handle integer overflow in interval justification functions.
justify_interval, justify_hours, and justify_days didn't check for
overflow when promoting hours to days or days to months; but that's
possible when the upper field's value is already large.  Detect and
report any such overflow.
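
As a hedged sketch (simplified, not the literal committed code), the
overflow-checked promotion can be written with pg_add_s32_overflow() from
common/int.h; "span" stands for the Interval being justified:

    #include "common/int.h"

    int64       wholedays = span->time / USECS_PER_DAY;    /* time is in usecs */
    int32       newday;

    if (wholedays < PG_INT32_MIN || wholedays > PG_INT32_MAX ||
        pg_add_s32_overflow(span->day, (int32) wholedays, &newday))
        ereport(ERROR,
                (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),
                 errmsg("interval out of range")));

    span->day = newday;
    span->time -= wholedays * USECS_PER_DAY;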

Also, we can avoid unnecessary overflow in some cases in justify_interval
by pre-justifying the days field.  (Thanks to Nathan Bossart for this
idea.)

Joe Koshakow

Discussion: https://postgr.es/m/CAAvxfHeNqsJ2xYFbPUf_8nNQUiJqkag04NW6aBQQ0dbZsxfWHA@mail.gmail.com
2022-02-28 15:36:54 -05:00
Tom Lane a59c79564b Allow root-owned SSL private keys in libpq, not only the backend.
This change makes libpq apply the same private-key-file ownership
and permissions checks that we have used in the backend since commit
9a83564c5.  Namely, that the private key can be owned by either the
current user or root (with different file permissions allowed in the
two cases).  This allows system-wide management of key files, which
is just as sensible on the client side as the server, particularly
when the client is itself some application daemon.
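
The rule being synced can be sketched as follows (a simplified illustration
of the check, with error reporting omitted; "keyfile" is a stand-in name):

    #include <sys/stat.h>
    #include <unistd.h>

    struct stat buf;

    if (stat(keyfile, &buf) != 0)
        return -1;                      /* cannot stat the private key file */

    /* The key must be owned by the current user or by root. */
    if (buf.st_uid != geteuid() && buf.st_uid != 0)
        return -1;

    /*
     * Owned by us: permissions must be 0600 or less.  Owned by root:
     * 0640 or less, so a dedicated group can manage keys system-wide.
     */
    if (buf.st_uid == geteuid() && (buf.st_mode & (S_IRWXG | S_IRWXO)))
        return -1;
    if (buf.st_uid == 0 && (buf.st_mode & S_IRWXO))
        return -1;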

Sync the comments about this between libpq and the backend, too.

David Steele

Discussion: https://postgr.es/m/f4b7bc55-97ac-9e69-7398-335e212f7743@pgmasters.net
2022-02-28 14:12:52 -05:00
Tom Lane 12d768e704 Don't use static storage for SaveTransactionCharacteristics().
This is pretty queasy-making on general principles, and the more so
once you notice that CommitTransactionCommand() is actually stomping
on the values saved by _SPI_commit().  It's okay as long as the
active values didn't change during HoldPinnedPortals(); but that's
a larger assumption than I think we want to make, especially since
the fix is so simple.

Discussion: https://postgr.es/m/1533956.1645731245@sss.pgh.pa.us
2022-02-28 12:54:12 -05:00
Tom Lane 2e517818f4 Fix SPI's handling of errors during transaction commit.
SPI_commit previously left it up to the caller to recover from any error
occurring during commit.  Since that's complicated and requires use of
low-level xact.c facilities, it's not too surprising that no caller got
it right.  Let's move the responsibility for cleanup into spi.c.  Doing
that requires redefining SPI_commit as starting a new transaction, so
that it becomes equivalent to SPI_commit_and_chain except that you get
default transaction characteristics instead of preserving the prior
transaction's characteristics.  We can make this pretty transparent
API-wise by redefining SPI_start_transaction() as a no-op.  Callers
that expect to do something in between might be surprised, but
available evidence is that no callers do so.

Having made that API redefinition, we can fix this mess by having
SPI_commit[_and_chain] trap errors and start a new, clean transaction
before re-throwing the error.  Likewise for SPI_rollback[_and_chain].
Some cleanup is also needed in AtEOXact_SPI, which was nowhere near
smart enough to deal with SPI contexts nested inside a committing
context.
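
Schematically, the new commit path in spi.c behaves roughly like this (a
simplified sketch; "oldcontext" is the caller's memory context saved
beforehand):

    PG_TRY();
    {
        HoldPinnedPortals();
        CommitTransactionCommand();         /* the actual commit */
    }
    PG_CATCH();
    {
        ErrorData  *edata;

        /* Copy the error data out of the doomed transaction's context. */
        MemoryContextSwitchTo(oldcontext);
        edata = CopyErrorData();
        FlushErrorState();

        /* Clean up, then hand the caller a fresh transaction. */
        AbortCurrentTransaction();
        StartTransactionCommand();

        ReThrowError(edata);
    }
    PG_END_TRY();

    /* On success, likewise start the new transaction SPI_commit now promises. */
    StartTransactionCommand();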

While plperl and pltcl need no changes beyond removing their now-useless
SPI_start_transaction() calls, plpython needs some more work because it
hadn't gotten the memo about catching commit/rollback errors in the
first place.  Such an error resulted in longjmp'ing out of the Python
interpreter, which leaks Python stack entries at present and is reported
to crash Python 3.11 altogether.  Add the missing logic to catch such
errors and convert them into Python exceptions.

We are probably going to have to back-patch this once Python 3.11 ships,
but it's a sufficiently basic change that I'm a bit nervous about doing
so immediately.  Let's let it bake awhile in HEAD first.

Peter Eisentraut and Tom Lane

Discussion: https://postgr.es/m/3375ffd8-d71c-2565-e348-a597d6e739e3@enterprisedb.com
Discussion: https://postgr.es/m/17416-ed8fe5d7213d6c25@postgresql.org
2022-02-28 12:45:36 -05:00
Tom Lane b15f254466 Adjust interaction of libpq pipeline mode with errorMessage resets.
Since commit ffa2e4670, libpq resets conn->errorMessage only when
starting a new query.  However, the later introduction of pipelining
requires a further refinement: the "start of query" isn't necessarily
when it's submitted to PQsendQueryStart.  If we clear at that point
then we risk dropping text for an error that the application has not
noticed yet.  Instead, when queuing a query while a previous query is
still in flight, leave errorMessage alone; reset it when we begin
to process the next query in pqPipelineProcessQueue.

Perhaps this should be back-patched to v14 where ffa2e4670 came in.
However I'm uncertain about whether it interacts with 618c16707.
In the absence of user complaints, leave v14 alone.

Discussion: https://postgr.es/m/1421785.1645723238@sss.pgh.pa.us
2022-02-28 11:31:30 -05:00
Peter Eisentraut fbee60f6a4 Improve some psql test code
Split psql_like() into two functions psql_like() and psql_fails_like()
and make them mirror the existing command_like() and
command_fails_like() more closely.  In particular, follow the
universal convention that the test name is the last argument.

Discussion: https://www.postgresql.org/message-id/3199e176-424e-1bef-f180-c1548466c2da@enterprisedb.com
2022-02-28 14:06:25 +01:00
Dean Rasheed d1b307eef2 Optimise numeric division for one and two base-NBASE digit divisors.
Formerly div_var() had "fast path" short division code that was
significantly faster when the divisor was just one base-NBASE digit,
but otherwise used long division.

This commit adds a new function div_var_int() that divides by an
arbitrary 32-bit integer, using the fast short division algorithm, and
updates both div_var() and div_var_fast() to use it for one and two
digit divisors. In the case of div_var(), this is slightly faster in
the one-digit case, because it avoids some digit array copying, and is
much faster in the two-digit case where it replaces long division. For
div_var_fast(), it is much faster in both cases because the main
div_var_fast() algorithm is optimised for larger inputs.
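
The short division used by div_var_int() can be sketched as follows (an
illustrative fragment; "digits", "quotient" and "divisor" are stand-ins for
the surrounding NumericVar state):

    /*
     * One left-to-right pass over the base-NBASE digits, dividing by a
     * divisor smaller than NBASE^2.  The running carry always stays below
     * divisor * NBASE, so it fits comfortably in an int64, and the final
     * carry is the remainder.
     */
    int64       carry = 0;

    for (i = 0; i < ndigits; i++)
    {
        carry = carry * NBASE + digits[i];
        quotient[i] = (NumericDigit) (carry / divisor);
        carry %= divisor;
    }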

Additionally, optimise exp() and ln() by using div_var_int(), allowing
a NumericVar to be replaced by an int in a couple of places, most
notably in the Taylor series code. This produces a significant speedup
of exp(), ln() and the numeric_big regression test.

Dean Rasheed, reviewed by Tom Lane.

Discussion: https://postgr.es/m/CAEZATCVwsBi-ND-t82Cuuh1=8ee6jdOpzsmGN+CUZB6yjLg9jw@mail.gmail.com
2022-02-27 11:12:30 +00:00
Dean Rasheed d996d648f3 Simplify the inner loop of numeric division in div_var().
In the standard numeric division algorithm, the inner loop multiplies
the divisor by the next quotient digit and subtracts that from the
working dividend. As suggested by the original code comment, the
separate "carry" and "borrow" variables (from the multiplication and
subtraction steps respectively) can be folded together into a single
variable. Doing so significantly improves performance, as well as
simplifying the code.
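
A rough sketch of the folded inner loop (illustrative only; the committed
code differs in its exact indexing and correction steps):

    /*
     * Subtract qdigit * divisor from the working dividend in one pass.  The
     * multiplication carry and the subtraction borrow travel together in a
     * single "borrow" accumulator instead of two separate variables.
     */
    int         borrow = 0;

    for (i = ndiv - 1; i >= 0; i--)
    {
        int         tmp = dividend[qi + i + 1] - borrow - divisor[i] * qdigit;

        borrow = (NBASE - 1 - tmp) / NBASE;             /* deficit carried left */
        dividend[qi + i + 1] = tmp + borrow * NBASE;    /* back into [0, NBASE) */
    }
    dividend[qi] -= borrow;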

Dean Rasheed, reviewed by Tom Lane.

Discussion: https://postgr.es/m/CAEZATCVwsBi-ND-t82Cuuh1=8ee6jdOpzsmGN+CUZB6yjLg9jw@mail.gmail.com
2022-02-27 10:41:12 +00:00
Dean Rasheed e3d41d08a1 Apply auto-vectorization to the inner loop of div_var_fast().
This loop is basically the same as the inner loop of mul_var(), which
was auto-vectorized in commit 8870917623, but the compiler will only
consider auto-vectorizing the div_var_fast() loop if the assignment
target div[qi + i] is replaced by div_qi[i], where div_qi = &div[qi].

Additionally, since the compiler doesn't know that qdigit is
guaranteed to fit in a 16-bit NumericDigit, cast it to NumericDigit
before multiplying to make the resulting auto-vectorized code more
efficient (avoiding unnecessary multiplication of the high 16 bits).
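
A hedged sketch of the shape that lets the compiler auto-vectorize the loop
(names simplified relative to the real div_var_fast() code):

    int        *div_qi = &div[qi];                  /* base stores on a local pointer */
    NumericDigit qdigit16 = (NumericDigit) qdigit;  /* provably fits in 16 bits */

    for (i = 0; i < istop; i++)
        div_qi[i] -= qdigit16 * var2digits[i];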

While at it, per suggestion from Tom Lane, change var1digit in
mul_var() to be a NumericDigit rather than an int for the same
reason. This actually makes no difference with modern gcc, but it
might help other compilers generate more efficient assembly.

Dean Rasheed, reviewed by Tom Lane.

Discussion: https://postgr.es/m/CAEZATCVwsBi-ND-t82Cuuh1=8ee6jdOpzsmGN+CUZB6yjLg9jw@mail.gmail.com
2022-02-27 10:15:46 +00:00
Andres Freund 6b04abdfc5 Run tap tests in src/interfaces/libpq.
To be able to run binaries in the test/ directory, prove_[install]check need
to be executable in a single shell invocation, so that test/ can be added to
PATH.

Discussion: https://postgr.es/m/20220223203031.ezrd73ohvjgfksow@alap3.anarazel.de
2022-02-26 16:51:47 -08:00
Andres Freund ac25173cdb Convert src/interfaces/libpq/test to a tap test.
The old form of the test needed a bunch of custom infrastructure. These days
tap tests provide the necessary infrastructure to do better.

We discussed whether to move this test to src/test/modules, alongside
libpq_pipeline, but concluded that the opposite direction would be
better. libpq_pipeline will be moved at a later date, once the buildfarm and
msvc build infrastructure is ready for it.

The invocation of the tap test will be added in the next commit. It involves
just enough buildsystem changes to be worth committing separately. It can't
happen the other way round because prove errors out when invoked without tests.

Discussion: https://postgr.es/m/20220223203031.ezrd73ohvjgfksow@alap3.anarazel.de
2022-02-26 16:51:47 -08:00
Andres Freund 1155d8b8d5 Fix use of wrong variable in pg_receivewal's get_destination_dir().
The global variable wrongly used is always the one passed to
get_destination_dir(), so there currently are no negative consequences.

Author: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CALj2ACUT0C2LQwhyLXTQdj8T9SxZa5j7cmu-UOz0cZ8_D5edjg@mail.gmail.com
2022-02-26 16:43:54 -08:00
Andres Freund d33aeefd9b Fix warning on mingw due to pid_t width, introduced in fe0972ee5e. 2022-02-26 16:07:07 -08:00
Amit Kapila a89850a57e Fix typo in logicalfuncs.c.
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACX1mVtw8LWEnZgnpPdk2bPFR1xX2ZN+8GfXCffyip_9=Q@mail.gmail.com
2022-02-26 10:38:37 +05:30
Andres Freund fe0972ee5e Add further debug info to help debug 019_replslot_limit.pl failures.
See also afdeff1052. Failures after that commit provided a few more hints,
but not yet enough to understand what's going on.

In 019_replslot_limit.pl shut down nodes with fast instead of immediate mode
if we observe the failure mode. That should tell us whether the failures we're
observing are just a timing issue under high load. PGCTLTIMEOUT should prevent
buildfarm animals from hanging endlessly.

Also adds a bit more logging to replication slot drop and ShutdownPostgres().

Discussion: https://postgr.es/m/20220225192941.hqnvefgdzaro6gzg@alap3.anarazel.de
2022-02-25 17:04:39 -08:00
Tom Lane 638300fef5 Disallow execution of SPI functions during plperl function compilation.
Perl can be convinced to execute user-defined code during compilation
of a plperl function (or at least a plperlu function).  That's not
such a big problem as long as the activity is confined within the
Perl interpreter, and it's not clear we could do anything about that
anyway.  However, if such code tries to use plperl's SPI functions,
we have a bigger problem.  In the first place, those functions are
likely to crash because current_call_data->prodesc isn't set up yet.
In the second place, because it isn't set up, we lack critical info
such as whether the function is supposed to be read-only.  And in
the third place, this path allows code execution during function
validation, which is strongly discouraged because of the potential
for security exploits.  Hence, reject execution of the SPI functions
until compilation is finished.
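
Conceptually, the guard in plperl looks like this (a sketch only: the real
code raises the error through Perl's croak(), and the message wording may
differ):

    static void
    check_spi_usage_allowed(void)
    {
        /* existing rule: no SPI use from Perl END blocks */
        if (plperl_ending)
            elog(ERROR, "SPI functions can not be used in END blocks");

        /*
         * New rule: current_call_data is only set up once compilation has
         * finished, so reject SPI use triggered at compile/validation time.
         */
        if (current_call_data == NULL)
            elog(ERROR, "SPI functions can not be used during function compilation");
    }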

While here, add check_spi_usage_allowed() calls to various functions
that hadn't gotten the memo about checking that.  I think that perhaps
plperl_sv_to_literal may have been intentionally omitted on the grounds
that it was safe at the time; but if so, the addition of transforms
functionality changed that.  The others are more recently added and
seem to be flat-out oversights.

Per report from Mark Murawski.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/9acdf918-7fff-4f40-f750-2ffa84f083d2@intellasoft.net
2022-02-25 17:40:43 -05:00
Andres Freund cd83cb9536 pg_waldump: Fix error message for WAL files smaller than XLOG_BLCKSZ.
When opening a WAL file smaller than XLOG_BLCKSZ (e.g. 0 bytes long) while
determining the wal_segment_size, pg_waldump checked errno, despite errno not
being set by the short read.  This resulted in a bogus error message.
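
The usual frontend pattern for this, sketched below; report_fatal() is a
hypothetical stand-in for the tool's own fatal-error routine:

    errno = 0;
    r = read(fd, buf, XLOG_BLCKSZ);
    if (r == XLOG_BLCKSZ)
    {
        /* ... inspect the long page header to derive wal_segment_size ... */
    }
    else if (r < 0)
        report_fatal("could not read file \"%s\": %m", fname);
    else
        report_fatal("could not read file \"%s\": read %d of %d",
                     fname, r, XLOG_BLCKSZ);    /* short read: errno not set */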

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220214.181847.775024684568733277.horikyota.ntt@gmail.com
Backpatch: 11-, the bug was introduced in fc49e24fa
2022-02-25 10:30:05 -08:00
Peter Geoghegan 73c61a50a1 vacuumlazy.c: Remove obsolete num_tuples field.
Commit 49c9d9fc unified VACUUM VERBOSE and autovacuum logging.  It
neglected to remove an old vacrel field that was only used by the old
VACUUM VERBOSE, so remove it now.

The previous num_tuples approach doesn't seem to have any real advantage
over the approach VACUUM VERBOSE takes now (also the approach used by
the autovacuum logging code), which is to show new_rel_tuples.
new_rel_tuples is the possibly-estimated total number of tuples left in
the table, whereas num_tuples meant the number of tuples encountered
during the VACUUM operation, after pruning, without regard for tuples
from pages skipped via the visibility map.

In passing, reorder a related vacrel field for consistency.
2022-02-24 19:01:54 -08:00
Amit Kapila 22eb12cfff Fix few values in pg_proc for pg_stat_get_replication_slot.
The function pg_stat_get_replication_slot() is not an SRF, but it is
incorrectly marked as one in pg_proc.

Reported-by: Michael Paquier
Discussion: https://postgr.es/m/YhMk4RjoMK3CCXy2@paquier.xyz
2022-02-25 07:51:21 +05:30
Peter Geoghegan cf879d3069 Remove unnecessary heap_tuple_needs_freeze argument.
The buffer argument hasn't been used since the function was first added
by commit bbb6e559c4.  The sibling heap_prepare_freeze_tuple function
doesn't have such an argument either.  Remove it.
2022-02-24 18:31:07 -08:00
Daniel Gustafsson 31d8d4740f Guard against reallocation failure in pg_regress
realloc() will return NULL on a failed reallocation, so the destination
pointer must be inspected to avoid a null pointer dereference.  Further,
assigning the return value to the source pointer leaks the allocation in
the case of reallocation failure.  Fix by using pg_realloc() instead, which
has full error handling.
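
In other words (a sketch of the before/after pattern; "buf" and "newlen"
are illustrative names):

    /* Problematic: on failure realloc() returns NULL, the assignment clobbers
     * the only pointer to the old block, and NULL is then dereferenced. */
    buf = realloc(buf, newlen);

    /* Fixed: pg_realloc() (src/common/fe_memutils.c) reports the failure and
     * exits, so the result can be used unconditionally. */
    buf = pg_realloc(buf, newlen);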

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/9FC7E603-9246-4C62-B466-A39CFAF454AE@yesql.se
2022-02-24 20:58:18 +01:00
Heikki Linnakangas 6c46e8a5df Fix data loss on crash after sorted GiST index build.
If a checkpoint happens during sorted GiST index build, and the system
crashes after the checkpoint and after the index build has finished,
the data written to the index before the checkpoint started could be
lost. The checkpoint won't fsync it, and it won't be replayed at crash
recovery either. Fix by calling smgrimmedsync() after the index build,
just like in B-tree index build.
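
A hedged sketch of the fix at the end of the sorted build, mirroring what
the B-tree sorted build does:

    /*
     * The sorted build wrote pages with smgrwrite()/smgrextend(), bypassing
     * shared buffers, so a checkpoint that is already underway will not
     * fsync them for us.  Force them to disk before the build is done.
     */
    smgrimmedsync(RelationGetSmgr(index), MAIN_FORKNUM);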

Backpatch to v14 where the sorted GiST index build was introduced.

Reported-by: Melanie Plageman
Discussion: https://www.postgresql.org/message-id/CAAKRu_ZJJynimxKj5xYBSziL62-iEtPE+fx-B=JzR=jUtP92mw@mail.gmail.com
2022-02-24 16:15:12 +02:00
Michael Paquier e77216fcb0 Simplify more checks related to set-returning functions
This makes the SRF-related checks more consistent across PL/pgSQL,
PL/Perl, PL/Tcl, pageinspect and some of the JSON worker functions,
making it easier to grep for the same error patterns through the code
and reducing the translation work a bit.

It is worth noting that each_worker_jsonb()/each_worker() in jsonfuncs.c
and pageinspect's brin_page_items() were doing a check on expectedDesc
that is not required as they fetch their tuple descriptor directly from
get_call_result_type().  This looks like a set of copy-paste errors that
have spread over the years.

This commit is a continuation of the changes begun in 07daca5, for any
remaining code paths in sight.  Like fcc2817, this makes the code more
consistent, easing the integration of a larger patch that will refactor
the way tuplestores are created and checked in a good portion of the
set-returning functions present in core.

I have worked my way through the changes of this patch by myself, and
Ranier has proposed the same changes in a different thread in parallel,
though there were some inconsistencies related to expectedDesc in what
he proposed.

Author: Michael Paquier, Ranier Vilela
Discussion: https://postgr.es/m/CAAKRu_azyd1Z3W_r7Ou4sorTjRCs+PxeHw1CWJeXKofkE6TuZg@mail.gmail.com
Discussion: https://postgr.es/m/CAEudQApm=AFuJjEHLBjBcJbxcw4pBMwg2sHwXyCXYcbBOj3hpg@mail.gmail.com
2022-02-24 16:54:59 +09:00
Michael Paquier fcc28178c6 Clean up and simplify code in a couple of set-returning functions
The following set-returning functions have their logic simplified, to be
more consistent with other in-core areas:
- pg_prepared_statement()'s tuple descriptor is now created with
get_call_result_type() instead of being created from scratch, saving
some duplication with pg_proc.dat.
- show_all_file_settings(), similarly, now uses get_call_result_type()
to build its tuple descriptor instead of creating it from scratch.
- pg_options_to_table() made use of a static routine called only once.
This commit removes this internal routine to make the function easier to
follow.
- pg_config() was using a unique logic style, doing checks on the tuple
descriptor passed down in expectedDesc, but it has no need to do so.
This switches the function to use a tuplestore with a tuple descriptor
retrieved from get_call_result_type(), instead.

This simplifies an upcoming patch aimed at refactoring the way
tuplestores are created and checked in set-returning functions, this
change making sense as its own independent cleanup by shaving some
code.

Author: Melanie Plageman, Michael Paquier
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAAKRu_azyd1Z3W_r7Ou4sorTjRCs+PxeHw1CWJeXKofkE6TuZg@mail.gmail.com
2022-02-24 16:11:34 +09:00
Amit Kapila cfb4e209ec Fix one of the tests introduced in commit 52e4f0cd47.
In the Publisher-Subscriber setup, after performing a DML operation on the
publisher, we need to wait for it to be replayed on the subscriber before
querying the same data on the subscriber. One of the tests missed the wait
step.

As per buildfarm.

Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Pv=e9Qd1TSYo8Og6x6Abfz3b9_htwinLp4ENPgV45DACQ@mail.gmail.com
2022-02-24 08:54:39 +05:30
Tom Lane bd74c4037c Re-allow underscore as first character of custom GUC names.
Commit 3db826bd5 intended that valid_custom_variable_name's
rules for valid identifiers match those of scan.l.  However,
I (tgl) had some kind of brain fade and put "_" in the wrong
list.
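
For reference, scan.l's identifier-start rule that
valid_custom_variable_name is meant to mirror, sketched as a predicate
(simplified):

    /* ident_start in scan.l is [A-Za-z\200-\377_]: an underscore is a valid
     * first character, just as it is for later characters. */
    static bool
    is_ident_start(unsigned char c)
    {
        return (c == '_' ||
                (c >= 'a' && c <= 'z') ||
                (c >= 'A' && c <= 'Z') ||
                c >= 0200);             /* any high-bit-set (multibyte) byte */
    }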

Fix by Japin Li, per bug #17415 from Daniel Polski.

Discussion: https://postgr.es/m/17415-ebdb683d7e09a51c@postgresql.org
2022-02-23 11:10:46 -05:00
Daniel Gustafsson 0475a97f74 Quick exit on log stream child exit in pg_basebackup
If the log streaming child process (thread on Windows) dies during
the backup, the whole backup is only aborted at the end of the
backup.  Instead, trap ungraceful termination of the log streaming
child and exit early.  This also adds a TAP test for simulating this
by terminating the responsible backend.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Reviewed-by: Magnus Hagander <magnus@hagander.net>
Discussion: https://postgr.es/m/0F69E282-97F9-4DB7-8D6D-F927AA6340C8@yesql.se
Discussion: https://postgr.es/m/VI1PR83MB0189818B82C19059CB62E26199A89@VI1PR83MB0189.EURPRD83.prod.outlook.com
2022-02-23 14:24:43 +01:00
Daniel Gustafsson c7d7e12039 Remove duplicated word in comment
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/B7C15416-BD61-4926-9843-5C557BCD7007@yesql.se
2022-02-23 14:23:50 +01:00
Daniel Gustafsson 6da65a3f9a Add function to pump IPC process until string match
Refactor the recovery tests to not carry a local duplicated copy of
the pump_until function which pumps a process until a defined string
is seen on a stream. This reduces duplication, and is in preparation
for another patch which will also use this functionality.

Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/YgynUafCyIu3jIhC@paquier.xyz
2022-02-23 14:22:16 +01:00
Daniel Gustafsson 91d3580535 Use test functions in pg_rewind test module
Commit 61081e75c introduced pg_rewind along with its test suite, which
ensured that subroutines didn't add more than one test to the plan.  Now
that we no longer explicitly plan tests (since 549ec201d), we can use
the usual Test::More functions.

Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/AA527525-F0CC-4AA2-AF98-543CABFDAF59@yesql.se
2022-02-23 11:22:46 +01:00
Daniel Gustafsson 2313a3ee22 Fix statenames in mergejoin comments
The state names used in a few comments were not consistent with the
documented states.

Author: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CALNJ-vQVthfQXVqmrHR8BKHtC4fMGbhM1xbvJNJAPexTq_dH=w@mail.gmail.com
2022-02-23 10:54:03 +01:00
Andres Freund afdeff1052 Add temporary debug info to help debug 019_replslot_limit.pl failures.
I have not been able to reproduce the occasional failures of
019_replslot_limit.pl we are seeing in the buildfarm and not for lack of
trying. The additional logging and increased log level will hopefully help.

Will be reverted once the cause is identified.

Discussion: https://postgr.es/m/20220218231415.c4plkp4i3reqcwip@alap3.anarazel.de
2022-02-22 18:02:34 -08:00
Peter Eisentraut 9467321649 Put typtype letters back into consistent order 2022-02-22 10:11:38 +01:00
Amit Kapila 52e4f0cd47 Allow specifying row filters for logical replication of tables.
This feature adds row filtering for publication tables. When a publication
is defined or modified, an optional WHERE clause can be specified. Rows
that don't satisfy this WHERE clause will be filtered out. This allows a
set of tables to be partially replicated. The row filter is per table. A
new row filter can be added simply by specifying a WHERE clause after the
table name. The WHERE clause must be enclosed by parentheses.

The row filter WHERE clause for a table added to a publication that
publishes UPDATE and/or DELETE operations must contain only columns that
are covered by REPLICA IDENTITY. The row filter WHERE clause for a table
added to a publication that publishes INSERT can use any column. If the
row filter evaluates to NULL, it is regarded as "false". The WHERE clause
only allows simple expressions that don't have user-defined functions,
user-defined operators, user-defined types, user-defined collations,
non-immutable built-in functions, or references to system columns. These
restrictions could be addressed in the future.

If you choose to do the initial table synchronization, only data that
satisfies the row filters is copied to the subscriber. If the subscription
has several publications in which a table has been published with
different WHERE clauses, rows that satisfy ANY of the expressions will be
copied. If a subscriber is a pre-15 version, the initial table
synchronization won't use row filters even if they are defined in the
publisher.

The row filters are applied before publishing the changes. If the
subscription has several publications in which the same table has been
published with different filters (for the same publish operation), those
expressions get OR'ed together so that rows satisfying any of the
expressions will be replicated.

This means all the other filters become redundant if (a) one of the
publications has no filter at all, (b) one of the publications was
created using FOR ALL TABLES, (c) one of the publications was created
using FOR ALL TABLES IN SCHEMA and the table belongs to that same schema.

If your publication contains a partitioned table, the publication
parameter publish_via_partition_root determines if it uses the partition's
row filter (if the parameter is false, the default) or the root
partitioned table's row filter.

Psql commands \dRp+ and \d <table-name> will display any row filters.

Author: Hou Zhijie, Euler Taveira, Peter Smith, Ajin Cherian
Reviewed-by: Greg Nancarrow, Haiying Tang, Amit Kapila, Tomas Vondra, Dilip Kumar, Vignesh C, Alvaro Herrera, Andres Freund, Wei Wang
Discussion: https://www.postgresql.org/message-id/flat/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com
2022-02-22 08:11:50 +05:30
Michael Paquier ebf6c5249b Add compute_query_id = regress
"regress" is a new mode added to compute_query_id aimed at facilitating
regression testing when a module computing query IDs is loaded into the
backend, like pg_stat_statements.  It works the same way as "auto",
meaning that query IDs are computed if a module enables it, except that
query IDs are hidden in EXPLAIN outputs to ensure regression output
stability.

Like any GUC of this kind (force_parallel_mode, etc.), this new
configuration can be added to an instance's postgresql.conf, or just
passed down with PGOPTIONS at command level.  compute_query_id uses an
enum for its set of option values, meaning that this addition ensures
ABI compatibility.

Using this new configuration mode allows installcheck-world to pass when
running the tests on an instance with pg_stat_statements enabled,
stabilizing the test output while checking the paths doing query ID
computations.

Reported-by: Anton Melnikov
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/1634283396.372373993@f75.i.mail.ru
Discussion: https://postgr.es/m/YgHlxgc/OimuPYhH@paquier.xyz
Backpatch-through: 14
2022-02-22 10:22:15 +09:00
Tom Lane 88103567cb Disallow setting bogus GUCs within an extension's reserved namespace.
Commit 75d22069e tried to throw a warning for setting a custom GUC whose
prefix belongs to a previously-loaded extension, if there is no such GUC
defined by the extension.  But that caused unstable behavior with
parallel workers, because workers don't necessarily load extensions and
GUCs in the same order their leader did.  To make that work safely, we
have to completely disallow the case.  We now actually remove any such
GUCs at the time of initial extension load, and then throw an error not
just a warning if you try to add one later.  While this might create a
compatibility issue for a few people, the improvement in error-detection
capability seems worth it; it's hard to believe that there's any good
use-case for choosing such GUC names.
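
For extension authors the intended pattern is unchanged; a minimal sketch
of an extension's _PG_init() (the "myext" names are placeholders):

    static bool myext_enabled = true;

    void
    _PG_init(void)
    {
        DefineCustomBoolVariable("myext.enabled",
                                 "Enables myext processing.",
                                 NULL,
                                 &myext_enabled,
                                 true,
                                 PGC_USERSET,
                                 0,
                                 NULL, NULL, NULL);

        /*
         * Reserve the prefix: any later SET of an undefined "myext.*" name
         * now raises an error instead of a warning, and matching placeholder
         * GUCs left over from before the library was loaded are removed.
         */
        MarkGUCPrefixReserved("myext");
    }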

This also un-reverts 5609cc01c (Rename EmitWarningsOnPlaceholders() to
MarkGUCPrefixReserved()), since that function's old name is now even
more of a misnomer.

Florin Irion and Tom Lane

Discussion: https://postgr.es/m/1902182.1640711215@sss.pgh.pa.us
2022-02-21 14:10:43 -05:00
Andres Freund 2776922201 Assert in init_toast_snapshot() that some snapshot registered or active.
Commit <FIXME> fixed the bug that RemoveTempRelationsCallback() did not
push/register a snapshot. That only went unnoticed because often a valid
catalog snapshot exists and is returned by GetOldestSnapshot(). But due to
invalidation processing that is not reliable.

Thus assert in init_toast_snapshot() that there is a registered or active
snapshot, using the new HaveRegisteredOrActiveSnapshot().

Author: Andres Freund
Discussion: https://postgr.es/m/20220219180002.6tubjq7iw7m52bgd@alap3.anarazel.de
2022-02-21 08:58:29 -08:00
Andres Freund 7c38ef2a5d Fix temporary object cleanup failing due to toast access without snapshot.
When cleaning up temporary objects during process exit, the cleanup could fail
with:
  FATAL: cannot fetch toast data without an active snapshot

The bug is caused by RemoveTempRelationsCallback() not setting up a
snapshot. If an object with toasted catalog data needs to be cleaned up,
init_toast_snapshot() could fail with the above error.

Most of the time, however, the problem is masked due to cached catalog
snapshots being returned by GetOldestSnapshot(). But dropping an object can
cause catalog invalidations to be emitted. If no further catalog accesses are
necessary between the invalidation processing and the next toast datum
deletion, the bug becomes visible.

It's easy to miss this bug because it typically happens after clients
disconnect and the FATAL error just ends up in the log.

Luckily temporary table cleanup at the next use of the same temporary schema
or during DISCARD ALL does not have the same problem.

Fix the bug by pushing a snapshot in RemoveTempRelationsCallback(). Also add
isolation tests for temporary object cleanup, including objects with toasted
catalog data.
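
Schematically, the callback now does something like this (a simplified
sketch):

    static void
    RemoveTempRelationsCallback(int code, Datum arg)
    {
        if (OidIsValid(myTempNamespace))
        {
            /*
             * We need a usable transaction and, now, an active snapshot:
             * dropping objects with toasted catalog data reads toast values.
             */
            AbortOutOfAnyTransaction();
            StartTransactionCommand();
            PushActiveSnapshot(GetTransactionSnapshot());

            RemoveTempRelations(myTempNamespace);

            PopActiveSnapshot();
            CommitTransactionCommand();
        }
    }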

A future HEAD only commit will add an assertion trying to make this more
visible.

Reported-By: Miles Delahunty
Author: Andres Freund
Discussion: https://postgr.es/m/CAOFAq3BU5Mf2TTvu8D9n_ZOoFAeQswuzk7yziAb7xuw_qyw5gw@mail.gmail.com
Backpatch: 10-
2022-02-21 08:57:34 -08:00
Andres Freund 27b02e070f pg_upgrade: Don't print progress status when output is not a tty.
Until this change, pg_upgrade with output redirected to a file or pipe would
end up printing all files in the cluster. This made check-world output
exceedingly verbose.
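
A minimal sketch of the idea, with hypothetical helper names (the committed
code is structured differently):

    #include <stdio.h>
    #include <unistd.h>

    static bool output_isatty;

    static void
    init_progress(void)
    {
        output_isatty = isatty(fileno(stdout));
    }

    static void
    report_progress(const char *filename)
    {
        if (!output_isatty)
            return;             /* stay quiet when redirected to a file or pipe */
        printf("\r  %-62.62s", filename);
        fflush(stdout);
    }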

Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-By: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CA+hUKGKjrV61ZVJ8OSag+3rKRmCZXPc03bDyWMqhXg3rdZ=fOw@mail.gmail.com
2022-02-21 08:34:59 -08:00
Peter Eisentraut 5c868c92ca Fix possible null pointer reference
Per Coverity.  Introduced in 37851a8b83.
2022-02-21 09:42:46 +01:00
Andres Freund fbabdf8f9a Fix meaning-changing typo introduced in fa0e03c15a. 2022-02-20 13:51:36 -08:00
Tom Lane 83a7637e2c Reset conn->errorReported when PQrequestCancel sets errorMessage.
Oversight in commit 618c16707.  This is mainly neatnik-ism, since
if PQrequestCancel is used per its API contract, we should perform
pqClearConnErrorState before reaching any place that would consult
errorReported.  But still, it seems like a bad idea to potentially
leave errorReported pointing past errorMessage.len.
2022-02-20 15:02:41 -05:00
Andrew Dunstan 1c6d462939 Remove most msys special processing in TAP tests
Following migration of Windows buildfarm members running TAP tests to
use of ucrt64 perl for those tests, special processing for msys perl is
no longer necessary and so is removed.

Backpatch to release 10

Discussion: https://postgr.es/m/c65a8781-77ac-ea95-d185-6db291e1baeb@dunslane.net
2022-02-20 11:51:45 -05:00
Andrew Dunstan 95d981338b Remove PostgreSQL::Test::Utils::perl2host completely
Commit f1ac4a74de disabled this processing, and as nothing has broken (as
expected), here we proceed to remove the routine and adjust all the call
sites.

Backpatch to release 10

Discussion: https://postgr.es/m/0ba775a2-8aa0-0d56-d780-69427cf6f33d@dunslane.net
Discussion: https://postgr.es/m/20220125023609.5ohu3nslxgoygihl@alap3.anarazel.de
2022-02-20 11:51:45 -05:00