Commit Graph

47668 Commits

Michael Paquier
403d82dd54 Remove recovery test 011_crash_recovery.pl
This test was added in 857ee8e, which introduced the SQL
function txid_status(), with the purpose of checking that a transaction
ID still in-progress during a crash is correctly marked as aborted after
recovery finishes.

This test is unstable, and some configuration scenarios may make that easier
to reproduce (wal_level=minimal, wal_compression=on) because the WAL
holding the information about the in-progress transaction ID may not
have made it to disk yet, hence a post-crash recovery may cause the same
XID to be reused, triggering a test failure.

We have discussed a few approaches, like making this function force a
WAL flush to make it reliable across crashes, but we don't want to pay a
performance penalty in some scenarios, either.  The test could have
been tweaked to enforce a checkpoint but that actually breaks the
promise of the test to rely on a stable result of txid_status() after
a crash.

This issue has been reported a few times across the past years, with an
original report from Kyotaro Horiguchi.  The buildfarm machines tanager,
hachi and gokiburi enable wal_compression, and fail on this test
periodically.

Discussion: https://postgr.es/m/3163112.1674762209@sss.pgh.pa.us
Discussion: https://postgr.es/m/20210305.115011.558061052471425531.horikyota.ntt@gmail.com
Backpatch-through: 11
2023-01-31 12:47:20 +09:00
Thomas Munro
d95dcc9ab5 Fix rare sharedtuplestore.c corruption.
If the final chunk of an oversized tuple being written out to disk was
exactly 32760 bytes, it would be corrupted due to a fencepost bug.

Bug #17619.  Back-patch to 11 where the code arrived.

While testing that (see test module in archives), I (tmunro) noticed
that the per-participant page counter was not initialized to zero as it
should have been; that wasn't a live bug when the code was written, since
DSM memory was originally always zeroed, but since version 14
min_dynamic_shared_memory might be configured and it supplies non-zeroed
memory.  That is also fixed here.
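
As an illustration of that second point, here is a minimal standalone sketch
(the struct and names are invented, not the sharedtuplestore.c code) of why
per-participant counters need explicit initialization once the backing shared
memory can come from a non-zeroed pool:

    #include <string.h>

    /* Invented stand-in for a per-participant bookkeeping struct. */
    typedef struct ParticipantSketch
    {
        long    npages_written;
        long    ntuples_written;
    } ParticipantSketch;

    static void
    participant_init(ParticipantSketch *p)
    {
        /* Memory handed out from a pre-reserved shared pool (as with
         * min_dynamic_shared_memory) is not guaranteed to be zeroed,
         * so clear the counters explicitly. */
        memset(p, 0, sizeof(*p));
    }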

Author: Dmitry Astapov <dastapov@gmail.com>
Discussion: https://postgr.es/m/17619-0de62ceda812b8b5%40postgresql.org
2023-01-26 14:55:37 +13:00
Andres Freund
243373159f Fix error handling in libpqrcv_connect()
When libpqrcv_connect (also known as walrcv_connect()) failed, it leaked the
libpq connection. In most paths that's fairly harmless, as the calling process
will exit soon after. But e.g. CREATE SUBSCRIPTION could lead to a somewhat
longer lived leak.

Fix by releasing resources, including the libpq connection, on error.

Add a test exercising the error code path. To make it reliable and safe, the
test tries to connect to port=-1, which happens to fail during connection
establishment, rather than during connection string parsing.
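
For illustration, a minimal sketch of the general error-path pattern using
public libpq calls (this is not the libpqrcv_connect() code itself):

    #include <stdio.h>
    #include <libpq-fe.h>

    static PGconn *
    connect_or_cleanup(const char *conninfo)
    {
        PGconn *conn = PQconnectdb(conninfo);

        if (conn == NULL || PQstatus(conn) != CONNECTION_OK)
        {
            if (conn != NULL)
            {
                fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
                PQfinish(conn);    /* free the PGconn on error, don't leak it */
            }
            return NULL;
        }
        return conn;
    }

    int
    main(void)
    {
        /* port=-1 fails during connection establishment, as in the new test. */
        PGconn *conn = connect_or_cleanup("host=localhost port=-1");

        if (conn)
            PQfinish(conn);
        return 0;
    }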

Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20230121011237.q52apbvlarfv6jm6@awork3.anarazel.de
Backpatch: 11-
2023-01-23 18:27:58 -08:00
Tom Lane
6c122eddec Allow REPLICA IDENTITY to be set on an index that's not (yet) valid.
The motivation for this change is that when pg_dump dumps a
partitioned index that's marked REPLICA IDENTITY, it generates a
command sequence that applies REPLICA IDENTITY before the partitioned
index has been marked valid, causing restore to fail.  We could
perhaps change pg_dump to not do it like that, but that would be
difficult and would not fix existing dump files with the problem.
There seems to be very little reason for the backend to disallow
this anyway --- the code ignores indisreplident when the index
isn't valid --- so instead let's fix it by allowing the case.

Commit 9511fb37a previously expressed a concern that allowing
indisreplident to be set on invalid indexes might allow us to
wind up in a situation where a table could have indisreplident
set on multiple indexes.  I'm not sure I follow that concern
exactly, but in any case the only way that could happen is because
relation_mark_replica_identity is too trusting about the existing set
of markings being valid.  Let's just rip out its early-exit code path
(which sure looks like premature optimization anyway; what are we
doing expending code to make redundant ALTER TABLE ... REPLICA
IDENTITY commands marginally faster and not-redundant ones marginally
slower?) and fix it to positively guarantee that no more than one
index is marked indisreplident.

The pg_dump failure can be demonstrated in all supported branches,
so back-patch all the way.  I chose to back-patch 9511fb37a as well,
just to keep indisreplident handling the same in all branches.

Per bug #17756 from Sergey Belyashov.

Discussion: https://postgr.es/m/17756-dd50e8e0c8dd4a40@postgresql.org
2023-01-21 13:10:30 -05:00
Noah Misch
8f70de7e01 Reject CancelRequestPacket having unexpected length.
When the length was too short, the server read outside the allocation.
That yielded the same log noise as sending the correct length with
(backendPID,cancelAuthCode) matching nothing.  Change to a message about
the unexpected length.  Given the attacker's lack of control over the
memory layout and the general lack of diversity in memory layouts at the
code in question, we doubt a would-be attacker could cause a segfault.
Hence, while the report arrived via security@postgresql.org, this is not
a vulnerability.  Back-patch to v11 (all supported versions).
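
A sketch of the kind of check involved; the struct layout below is
illustrative only, not the wire-protocol definition:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative fixed-size packet; the field layout is made up. */
    typedef struct CancelPacketSketch
    {
        uint32_t    backend_pid;
        uint32_t    cancel_auth_code;
    } CancelPacketSketch;

    static bool
    process_cancel_request(const char *buf, size_t len)
    {
        CancelPacketSketch pkt;

        if (len != sizeof(pkt))
            return false;   /* unexpected length: refuse to read past buf */

        memcpy(&pkt, buf, sizeof(pkt));
        /* ... match (backend_pid, cancel_auth_code) against a backend ... */
        return true;
    }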

Andrey Borodin, reviewed by Tom Lane.  Reported by Andrey Borodin.
2023-01-21 06:08:05 -08:00
Tom Lane
b69e9dfab1 Make our back branches build under -fkeep-inline-functions.
Add "#ifndef FRONTEND" where necessary to make pg_waldump build
on compilers that don't elide unused static-inline functions.

This back-patches relevant parts of commit 3e9ca5260, fixing build
breakage from dc7420c2c and back-patching of f10f0ae42.

Per recently-resurrected buildfarm member castoroides.  We aren't
expecting castoroides to build anything newer than v11, but we
might as well clean up the intermediate branches while at it.
2023-01-20 11:58:12 -05:00
Tom Lane
0a269527f6 Log the correct ending timestamp in recovery_target_xid mode.
When ending recovery based on recovery_target_xid matching with
recovery_target_inclusive = off, we printed an incorrect timestamp
(always 2000-01-01) in the "recovery stopping before ... transaction"
log message.  This is a consequence of sloppy refactoring in
c945af80c: the code to fetch recordXtime out of the commit/abort
record used to be executed unconditionally, but it was changed
to get called only in the RECOVERY_TARGET_TIME case.  We need only
flip the order of operations to restore the intended behavior.

Per report from Torsten Förtsch.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/CAKkG4_kUevPqbmyOfLajx7opAQk6Cvwkvx0HRcFjSPfRPTXanA@mail.gmail.com
2023-01-19 12:23:20 -05:00
Michael Paquier
0c2f34af7e Add missing assign hook for GUC checkpoint_completion_target
This has been wrong since 88e9823, which switched the WAL sizing
configuration from checkpoint_segments to min_wal_size and max_wal_size.
That change missed recalculating, on reload for example, the internal
value "CheckPointSegments", which works as a mapping of the old GUC
checkpoint_segments and controls the timing of checkpoints depending on
the volume of WAL generated.

Most users tend to leave checkpoint_completion_target at 0.9 to smooth
the I/O workload, which is why, I guess, this has gone unnoticed for so
long; still, it can be useful to tweak and reload the value dynamically
in some cases to control the timing of checkpoints.
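
To show the shape of the fix in isolation, here is a standalone sketch of a
derived value being recomputed by an assign hook; the names, the GUC
machinery, and the exact formula are simplified stand-ins, not the committed
code:

    #include <stdio.h>

    static double completion_target = 0.9;     /* the reloadable setting */
    static int    max_wal_size_segs = 64;      /* WAL sizing, in segments */
    static int    checkpoint_segments;         /* internal derived value */

    static void
    recalculate_checkpoint_segments(void)
    {
        checkpoint_segments = (int) (max_wal_size_segs /
                                     (2.0 + completion_target));
        if (checkpoint_segments < 1)
            checkpoint_segments = 1;
    }

    /* The assign hook: without it, a reload changes the setting but the
     * derived value keeps its stale value. */
    static void
    assign_completion_target(double newval)
    {
        completion_target = newval;
        recalculate_checkpoint_segments();
    }

    int
    main(void)
    {
        recalculate_checkpoint_segments();
        printf("initial:  %d\n", checkpoint_segments);
        assign_completion_target(0.5);          /* e.g. after SIGHUP */
        printf("reloaded: %d\n", checkpoint_segments);
        return 0;
    }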

Author: Bharath Rupireddy
Discussion: https://postgr.es/m/CALj2ACXgPPAm28mruojSBno+F_=9cTOOxHAywu_dfZPeBdybQw@mail.gmail.com
Backpatch-through: 11
2023-01-19 13:13:34 +09:00
Michael Paquier
145bc5debf Fix failure with perlcritic in psql's create_help.pl
No buildfarm members have reported that yet, but a recently-refreshed
Debian host did.

Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/Y8ey5z4Nav62g4/K@paquier.xyz
Backpatch-through: 11
2023-01-19 10:02:15 +09:00
Tom Lane
c94d684bf4 AdjustUpgrade.pm should zap test_ext_cine, too.
test_extensions' test_ext_cine extension has the same upgrade hazard
as test_ext7: the regression test leaves it in an updated state
from which no downgrade path to default is provided.  This causes
the update_extensions.sql script helpfully provided by pg_upgrade
to fail.  So drop it in cross-version-upgrade testing.

Not entirely sure how come I didn't hit this in testing yesterday;
possibly I'd built the upgrade reference databases with
testmodules-install-check disabled.

Backpatch to v10 where this module was introduced.
2023-01-17 16:01:11 -05:00
Tom Lane
f02a752228 Create common infrastructure for cross-version upgrade testing.
To test pg_upgrade across major PG versions, we have to be able to
modify or drop any old objects with no-longer-supported properties,
and we have to be able to deal with cosmetic changes in pg_dump output.
Up to now, the buildfarm and pg_upgrade's own test infrastructure had
separate implementations of the former, and we had nothing but very
ad-hoc rules for the latter (including an arbitrary threshold on how
many lines of unchecked diff were okay!).  This patch creates a Perl
module that can be shared by both those use-cases, and adds logic
that deals with pg_dump output diffs in a much more tightly defined
fashion.

This largely supersedes previous efforts in commits 0df9641d3,
9814ff550, and 62be9e4cd, which developed a SQL-script-based solution
for the task of dropping old objects.  There was nothing fundamentally
wrong with that work in itself, but it had no basis for solving the
output-formatting problem.  The most plausible way to deal with
formatting is to build a Perl module that can perform editing on the
dump files; and once we commit to that, it makes more sense for the
same module to also embed the knowledge of what has to be done for
dropping old objects.

Back-patch versions of the helper module as far as 9.2, to
support buildfarm animals that still test that far back.
It's also necessary to back-patch PostgreSQL/Version.pm,
because the new code depends on that.  I fixed up pg_upgrade's
002_pg_upgrade.pl in v15, but did not look into back-patching
it further than that.

Tom Lane and Andrew Dunstan

Discussion: https://postgr.es/m/891521.1673657296@sss.pgh.pa.us
2023-01-16 20:35:53 -05:00
Thomas Munro
1b40710a8c Fix WaitEventSetWait() buffer overrun.
The WAIT_USE_EPOLL and WAIT_USE_KQUEUE implementations of
WaitEventSetWaitBlock() confused the size of their internal buffer with
the size of the caller's output buffer, and could ask the kernel for too
many events.  In fact the set of events retrieved from the kernel needs
to be able to fit in both buffers, so take the smaller of the two.
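
The principle can be shown with a small epoll-based sketch (illustrative
types and names, not WaitEventSetWaitBlock() itself):

    #include <sys/epoll.h>

    #define Min(a,b)    ((a) < (b) ? (a) : (b))

    typedef struct WaitSetSketch
    {
        int                 epoll_fd;
        int                 nevents_space;      /* size of internal buffer */
        struct epoll_event *epoll_ret_events;   /* internal buffer */
    } WaitSetSketch;

    static int
    wait_block(WaitSetSketch *set, int caller_nevents, int timeout_ms)
    {
        /* The retrieved events must fit in both the internal buffer and
         * the caller's output buffer, so ask for the smaller of the two. */
        int         maxevents = Min(set->nevents_space, caller_nevents);

        return epoll_wait(set->epoll_fd, set->epoll_ret_events,
                          maxevents, timeout_ms);
    }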

The WAIT_USE_POLL and WAIT_USE_WIN32 implementations didn't have this
confusion.

This probably didn't come up before because we always used the same
number in both places, but commit 7389aad6 calculates a dynamic size at
construction time, while using MAXLISTEN for its output event buffer on
the stack.  That seems like a reasonable thing to want to do, so
consider this to be a pre-existing bug worth fixing.

As discovered by valgrind on skink.

Back-patch to all supported releases for epoll, and to release 13 for
the kqueue part, which copied the incorrect epoll code.

Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/901504.1673504836%40sss.pgh.pa.us
2023-01-13 10:54:20 +13:00
Dean Rasheed
c54b888700 Fix tab completion of ALTER FUNCTION/PROCEDURE/ROUTINE ... SET SCHEMA.
The ALTER DATABASE|FUNCTION|PROCEDURE|ROLE|ROUTINE|USER ... SET <name>
case in psql tab completion failed to exclude <name> = "SCHEMA", which
caused ALTER FUNCTION|PROCEDURE|ROUTINE ... SET SCHEMA to complete
with "FROM CURRENT" and "TO", which won't work.

Fix that, so that those cases now complete with the list of schemas,
like other ALTER ... SET SCHEMA commands.

Noticed while testing the recent patch to improve tab completion for
ALTER FUNCTION/PROCEDURE/ROUTINE, but this is not directly related to
that patch. Rather, this is a long-standing bug, so back-patch to all
supported branches.

Discussion: https://postgr.es/m/CALDaNm0s7GQmkLP_mx5Cvk=UzYMnjhPmXBxU8DsHEunFbC5sTg@mail.gmail.com
2023-01-06 11:09:56 +00:00
Robert Haas
0b496bc988 Improve documentation of the CREATEROLE attribute.
In user-manag.sgml, document precisely what privileges are conveyed
by CREATEROLE. Make particular note of the fact that it allows
changing passwords and granting access to high-privilege roles.
Also remove the suggestion of using a user with CREATEROLE and
CREATEDB instead of a superuser, as there is no real security
advantage to this approach.

Elsewhere in the documentation, adjust text that suggests that
<literal>CREATEROLE</literal> only allows for role creation, and
refer to the documentation in user-manag.sgml as appropriate.

Patch by me, reviewed by Álvaro Herrera

Discussion: http://postgr.es/m/CA+TgmoZBsPL8nPhvYecx7iGo5qpDRqa9k_AcaW1SbOjugAY1Ag@mail.gmail.com
2023-01-03 15:08:08 -05:00
Michael Paquier
a80740a7c9 Fix typos in comments, code and documentation
While on it, newlines are removed from the end of two elog() strings.
The others are simple grammar mistakes.  One comment in pg_upgrade
referred incorrectly to sequences since a7e5457.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20221230231257.GI1153@telsasoft.com
Backpatch-through: 11
2023-01-03 16:26:38 +09:00
Andres Freund
99f8bc335c perl: Hide warnings inside perl.h when using gcc compatible compiler
New versions of perl trigger warnings within perl.h with our compiler
flags.  At least -Wdeclaration-after-statement and -Wshadow=compatible-local
are known to be problematic.

To avoid these warnings, conditionally use #pragma GCC system_header before
including plperl.h.

Alternatively, we could add the include paths for problematic headers with
-isystem, but that is a larger hammer and is harder to search for.

A more granular alternative would be to use #pragma GCC diagnostic
push/ignored/pop, but gcc warns about unknown warnings being ignored, so every
to-be-ignored-temporarily compiler warning would require its own pg_config.h
symbol and #ifdef.

As the warnings are voluminous, it makes sense to backpatch this change. But
don't do so yet; we first want to gather buildfarm coverage - it's e.g. possible
that some compiler claiming to be gcc compatible has issues with the pragma.

Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20221228182455.hfdwd22zztvkojy2@awork3.anarazel.de
2023-01-02 15:51:05 -08:00
Tom Lane
982b9b1eba Avoid reference to nonexistent array element in ExecInitAgg().
When considering an empty grouping set, we fetched
phasedata->eqfunctions[-1].  Because the eqfunctions array is
palloc'd, that would always be an aset pointer in released versions,
and thus the code accidentally failed to malfunction (since it would
do nothing unless it found a null pointer).  Nonetheless this seems
like trouble waiting to happen, so add a check for length == 0.
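
A minimal sketch of the guard (invented names, not the executor code):

    /* Return the comparator for the last grouping column, or NULL for an
     * empty grouping set, instead of ever reading eqfunctions[-1]. */
    typedef int EqFuncSketch;       /* stand-in for the real function info */

    static EqFuncSketch *
    last_eqfunction(EqFuncSketch *eqfunctions, int numcols)
    {
        if (numcols == 0)
            return NULL;
        return &eqfunctions[numcols - 1];
    }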

It's depressing that our valgrind testing did not catch this.
Maybe we should reconsider the choice to not mark that word NOACCESS?

Richard Guo

Discussion: https://postgr.es/m/CAMbWs4-vZuuPOZsKOYnSAaPYGKhmacxhki+vpOKk0O7rymccXQ@mail.gmail.com
2023-01-02 16:17:00 -05:00
Bruce Momjian
ef3de55576 Update copyright for 2023
Backpatch-through: 11
2023-01-02 15:00:36 -05:00
Michael Paquier
df6fea51f0 Fix some incorrect elog() messages in aclchk.c
Three error strings used with cache lookup failures were referring to
incorrect object types for ACL checks:
- Schemas
- Types
- Foreign Servers
These errors should never be triggered, but if they were, incorrect
information would be reported.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20221222153041.GN1153@telsasoft.com
Backpatch-through: 11
2022-12-23 10:04:37 +09:00
Tom Lane
8cd700cc5a Add some recursion and looping defenses in prepjointree.c.
Andrey Lepikhov demonstrated a case where we spend an unreasonable
amount of time in pull_up_subqueries().  Not only is that recursing
with no explicit check for stack overrun, but the code seems not
interruptible by control-C.  Let's stick a CHECK_FOR_INTERRUPTS
there, along with sprinkling some stack depth checks.

An actual fix for the excessive time consumption seems a bit
risky to back-patch; but this isn't, so let's do so.

Discussion: https://postgr.es/m/703c09a2-08f3-d2ec-b33d-dbecd62428b8@postgrespro.ru
2022-12-22 10:35:03 -05:00
Tom Lane
0ff4056b8c Fix contrib/seg to be more wary of long input numbers.
seg stores the number of significant digits in an input number
in a "char" field.  If char is signed, and the input is more than
127 digits long, the count can read out as negative causing
seg_out() to print garbage (or, if you're really unlucky,
even crash).

To fix, clamp the digit count to be not more than FLT_DIG.
(In theory this loses some information about what the original
input was, but it doesn't seem like useful information; it would
not survive dump/restore in any case.)

Also, in case there are stored values of the seg type containing
bad data, add a clamp in seg_out's restore() subroutine.
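
A standalone sketch of such a clamp (FLT_DIG comes from <float.h>; the
function name is invented):

    #include <float.h>

    /* Keep a digit count storable in a signed char: a very long input must
     * not wrap negative, and FLT_DIG digits is all a float preserves. */
    static signed char
    clamp_sig_digits(int ndigits)
    {
        if (ndigits > FLT_DIG)
            ndigits = FLT_DIG;
        if (ndigits < 0)        /* also defends against bad stored values */
            ndigits = 0;
        return (signed char) ndigits;
    }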

Per bug #17725 from Robins Tharakan.  It's been like this
forever, so back-patch to all supported branches.

Discussion: https://postgr.es/m/17725-0a09313b67fbe86e@postgresql.org
2022-12-21 17:51:50 -05:00
Tom Lane
f48aa5df4e Rethink handling of [Prevent|Is]InTransactionBlock in pipeline mode.
Commits f92944137 et al. made IsInTransactionBlock() set the
XACT_FLAGS_NEEDIMMEDIATECOMMIT flag before returning "false",
on the grounds that that kept its API promises equivalent to those of
PreventInTransactionBlock().  This turns out to be a bad idea though,
because it allows an ANALYZE in a pipelined series of commands to
cause an immediate commit, which is unexpected.

Furthermore, if we return "false" then we have another issue,
which is that ANALYZE will decide it's allowed to do internal
commit-and-start-transaction sequences, thus possibly unexpectedly
committing the effects of previous commands in the pipeline.

To fix the latter situation, invent another transaction state flag
XACT_FLAGS_PIPELINING, which explicitly records the fact that we
have executed some extended-protocol command and not yet seen a
commit for it.  Then, require that flag to not be set before allowing
IsInTransactionBlock() to return "false".
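
A rough sketch of the flag logic described above (bit values and names are
illustrative, not the xact.c definitions):

    #include <stdbool.h>
    #include <stdint.h>

    #define XFLAG_NEEDIMMEDIATECOMMIT   (1U << 0)
    #define XFLAG_PIPELINING            (1U << 1)

    static uint32_t xact_flags;

    /* Set when an extended-protocol command has run and no commit for it
     * has been seen yet. */
    static void
    note_pipelined_command(void)
    {
        xact_flags |= XFLAG_PIPELINING;
    }

    /* Only report "not in a transaction block" when we are also sure no
     * pipelined command is still pending. */
    static bool
    is_in_transaction_block_sketch(bool in_explicit_block)
    {
        if (in_explicit_block)
            return true;
        if (xact_flags & XFLAG_PIPELINING)
            return true;
        return false;
    }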

Having done that, we can remove its setting of NEEDIMMEDIATECOMMIT
without fear of causing problems.  This means that the API guarantees
of IsInTransactionBlock now diverge from PreventInTransactionBlock,
which is mildly annoying, but it seems OK given the very limited usage
of IsInTransactionBlock.  (In any case, a caller preferring the old
behavior could always set NEEDIMMEDIATECOMMIT for itself.)

For consistency also require XACT_FLAGS_PIPELINING to not be set
in PreventInTransactionBlock.  This too is meant to prevent commands
such as CREATE DATABASE from silently committing previous commands
in a pipeline.

Per report from Peter Eisentraut.  As before, back-patch to all
supported branches (which sadly no longer includes v10).

Discussion: https://postgr.es/m/65a899dd-aebc-f667-1d0a-abb89ff3abf8@enterprisedb.com
2022-12-13 14:23:59 -05:00
Michael Paquier
990a773ab6 doc: Add missing <varlistentry> markups for developer GUCs
Missing these markups makes it impossible to create links back to these
GUCs, while all the other parameters already have one.

Author: Ian Lawrence Barwick
Discussion: https://postgr.es/m/CAB8KJ=jx=6dFB_EN3j0UkuvG3cPu5OmQiM-ZKRAz+fKvS+u8Ng@mail.gmail.com
Backpatch-through: 11
2022-12-05 11:23:36 +09:00
Tom Lane
2df0733139 Fix generate_partitionwise_join_paths() to tolerate failure.
We might fail to generate a partitionwise join, because
reparameterize_path_by_child() does not support all path types.
This should not be a hard failure condition: we should just fall back
to a non-partitioned join.  However, generate_partitionwise_join_paths
did not consider this possibility and would emit the (misleading)
error "could not devise a query plan for the given query" if we'd
failed to make any paths for a child join.  Fix it to give up on
partitionwise joining if so.  (The accepted technique for giving up
appears to be to set rel->nparts = 0, which I find pretty bizarre,
but there you have it.)

I have not added a test case because there'd be little point:
any omissions of this sort that we identify would soon get fixed
by extending reparameterize_path_by_child(), so the test would stop
proving anything.  However, right now there is a known test case based
on failure to cover MaterialPath, and with that I've found that this
is broken in all supported versions.  Hence, patch all the way back.

Original report and patch by me; thanks to Richard Guo for
identifying a test case that works against committed versions.

Discussion: https://postgr.es/m/1854233.1669949723@sss.pgh.pa.us
2022-12-04 13:17:18 -05:00
Dean Rasheed
30f9b03a08 Fix DEFAULT handling for multi-row INSERT rules.
When updating a relation with a rule whose action performed an INSERT
from a multi-row VALUES list, the rewriter might skip processing the
VALUES list, and therefore fail to replace any DEFAULTs in it. This
would lead to an "unrecognized node type" error.

The reason was that RewriteQuery() assumed that a query doing an
INSERT from a multi-row VALUES list would necessarily only have one
item in its fromlist, pointing to the VALUES RTE to read from. That
assumption is correct for the original query, but not for product
queries produced for rule actions. In such cases, there may be
multiple items in the fromlist, possibly including multiple VALUES
RTEs.

What is required instead is for RewriteQuery() to skip any RTEs from
the product query's originating query, which might include one or more
already-processed VALUES RTEs. What's left should then include at most
one VALUES RTE (from the rule action) to be processed.

Patch by me. Thanks to Tom Lane for reviewing.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAEZATCV39OOW7LAR_Xq4i%2BLc1Byux%3DeK3Q%3DHD_pF1o9LBt%3DphA%40mail.gmail.com
2022-12-03 12:20:02 +00:00
Andres Freund
af3517c15c Prevent pgstats from getting confused when relkind of a relation changes
When the relkind of a relcache entry changes, because a table is converted into
a view, pgstats can get confused in 15+, leading to crashes or assertion
failures.

For HEAD, Tom fixed this in b23cd185fd, by removing support for converting a
table to a view, removing the source of the inconsistency. This commit just
adds an assertion that a relcache entry's relkind does not change, just in
case we end up with another case of that in the future. As there are no cases
of changing relkind anymore, we can't add a test verifying that it is handled
correctly.

For 15, fix the problem by not maintaining the association with the old pgstat
entry when the relkind changes during a relcache invalidation processing. In
that case the pgstat entry needs to be unlinked first, to avoid
PgStat_TableStatus->relation getting out of sync. Also add a test reproducing
the issues.

No known problem exists in 11-14, so just add the test there.

Reported-by: vignesh C <vignesh21@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm3oZA-8Wbps2Jd1g5_Gjrr-x3YWrJPek-mF5Asrrvz2Dg@mail.gmail.com
Backpatch: 15-
2022-12-02 18:17:54 -08:00
Bruce Momjian
01e248f796 revert: add transaction processing chapter with internals info
This doc patch (master hash 66bc9d2d3e) was deemed too significant for
backpatching, so it is reverted everywhere but master.  Also fix the
SGML file header comment in master.

Reported-by: Peter Eisentraut

Discussion: https://postgr.es/m/c6304b19-6ff7-f3af-0148-cf7aa7e2fbfd@enterprisedb.com

Backpatch-through: 11
2022-12-01 10:45:07 -05:00
Tom Lane
04f3acc97e Reject missing database name in pg_regress and cohorts.
Writing "pg_regress --dbname= ..." led to a crash, because
we weren't expecting there to be no database name supplied.
It doesn't seem like a great idea to run regression tests
in whatever is the user's default database; so rather than
supporting this case let's explicitly reject it.

Per report from Xing Guo.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/CACpMh+A8cRvtvtOWVAZsCM1DU81GK4DL26R83y6ugZ1osV=ifA@mail.gmail.com
2022-11-30 13:01:41 -05:00
Bruce Momjian
80e591676b doc: add transaction processing chapter with internals info
This also adds references to this new chapter at relevant sections of
our documentation.  Previously much of these internal details were
exposed to users, but not explained.  This also updates RELEASE
SAVEPOINT.

Discussion: https://postgr.es/m/CANbhV-E_iy9fmrErxrCh8TZTyenpfo72Hf_XD2HLDppva4dUNA@mail.gmail.com

Author: Simon Riggs, Laurenz Albe

Reviewed-by: Bruce Momjian

Backpatch-through: 11
2022-11-29 20:49:51 -05:00
Michael Paquier
a282d5f034 Fix comment in fe-auth-scram.c
The frontend-side routine in charge of building a SCRAM verifier
mentioned that the restrictions applying to SASLprep on the password
with the encoding are described at the top of fe-auth-scram.c, but this
information is in auth-scram.c.

This is wrong since 8f8b9be, so backpatch all the way down as this is an
important documentation bit.

Spotted while reviewing a different patch.

Backpatch-through: 11
2022-11-30 08:38:40 +09:00
Tom Lane
a6c9e1db2b Improve heuristics for compressing the KnownAssignedXids array.
Previously, we'd compress only when the active range of array entries
reached Max(4 * PROCARRAY_MAXPROCS, 2 * pArray->numKnownAssignedXids).
If max_connections is large, the first term could result in not
compressing for a long time, resulting in much wastage of cycles in
hot-standby backends scanning the array to take snapshots.  Get rid
of that term, and just bound it to 2 * pArray->numKnownAssignedXids.

That however creates the opposite risk, that we might spend too much
effort compressing.  Hence, consider compressing only once every 128
commit records.  (This frequency was chosen by benchmarking.  While
we only tried one benchmark scenario, the results seem stable over
a fairly wide range of frequencies.)

Also, force compression when processing RecoveryInfo WAL records
(which should be infrequent); the old code could perform compression
then, but would do so only after the same array-range check as for
the transaction-commit path.

Also, opportunistically run compression if the startup process is about
to wait for WAL, though not oftener than once a second.  This should
prevent cases where we waste lots of time by leaving the array
not-compressed for long intervals due to low WAL traffic.

Lastly, add a simple check to keep us from uselessly compressing
when the array storage is already compact.
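
Putting those rules together, the gate might look roughly like this
standalone sketch (the thresholds are the ones named above; everything else
is invented):

    #include <stdbool.h>

    static bool
    should_compress(int head, int nkept, int commits_since_last, bool forced)
    {
        if (head == nkept)
            return false;           /* already compact: nothing to gain */
        if (forced)
            return true;            /* e.g. about to wait for WAL */
        if (commits_since_last < 128)
            return false;           /* amortize the O(n) pass over commits */
        return head > 2 * nkept;    /* enough wasted slots to be worth it */
    }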

Back-patch, as the performance problem is worse in pre-v14 branches
than in HEAD.

Simon Riggs and Michail Nikolaev, with help from Tom Lane and
Andres Freund.

Discussion: https://postgr.es/m/CALdSSPgahNUD_=pB_j=1zSnDBaiOtqVfzo8Ejt5J_k7qZiU1Tw@mail.gmail.com
2022-11-29 15:43:17 -05:00
Andrew Dunstan
724dd56490 Fix binary mismatch for MSVC plperl vs gcc built perl libs
When loading plperl built against Strawberry perl or the msys2 ucrt perl
that have been built with gcc, a binary mismatch has been encountered
which looks like this:

loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)

To cure this we bring the handshake keys into sync by adding
NO_THREAD_SAFE_LOCALE to the defines used to build plperl.

Discussion: https://postgr.es/m/20211005004334.tgjmro4kuachwiuc@alap3.anarazel.de
Discussion: https://postgr.es/m/c2da86a0-2906-744c-923d-16da6047875e@dunslane.net

Backpatch to all live branches.
2022-11-27 09:18:46 -05:00
Tom Lane
b85fd73852 Remove temporary portlock directory during make [dist]clean.
Another oversight in 9b4eafcaf.
2022-11-26 10:31:08 -05:00
Andrew Dunstan
9d3f29d990 Add portlock directory to .gitignore
Commit 9b4eafcaf4 added creation of a directory to reserve TAP test
ports at the top of the build tree. In a non-vpath build this means at
the top of the source tree, so it needs to be added to .gitignore.

As suggested by Michael Paquier

Backpatch to all live branches.
2022-11-26 07:47:27 -05:00
Andrew Dunstan
ae7c512130 Allow building with MSVC and Strawberry perl
Strawberry uses __builtin_expect which Visual C doesn't have. For this
case define it as a noop. Solution taken from vim sources.

Backpatch to all live branches
2022-11-25 15:37:34 -05:00
Amit Kapila
9b788aafdc Fix uninitialized access to InitialRunningXacts during decoding.
In commit 272248a0c, we introduced an InitialRunningXacts array to
remember transactions and subtransactions that were running when the
xl_running_xacts record that we decoded was written. This array was
allocated in the snapshot builder memory context after restoring a
serialized snapshot, but we forgot to reset the array while freeing the
builder memory context. So, the next time we started decoding in the
same session without restoring any serialized snapshot, we ended up
using the uninitialized array, which can lead to unpredictable behavior.
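
The hazard is the classic dangling-pointer-into-a-freed-arena one; a tiny
standalone sketch (invented types, not the snapshot builder code):

    #include <stdlib.h>

    typedef struct DecoderStateSketch
    {
        int    *initial_running_xacts;      /* lives in the builder arena */
        int     n_initial_running_xacts;
    } DecoderStateSketch;

    static void
    free_builder_arena(DecoderStateSketch *state, void *arena)
    {
        free(arena);                            /* arena goes away ... */
        state->initial_running_xacts = NULL;    /* ... so reset the pointer */
        state->n_initial_running_xacts = 0;
    }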

This problem doesn't exist in HEAD as instead of using
InitialRunningXacts, we added the list of transaction IDs and
sub-transaction IDs, that have modified catalogs and are running during
snapshot serialization, to the serialized snapshot (see commit 7f13ac8123).

Reported-by: Maxim Orlov
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Maxim Orlov
Backpatch-through: 11
Discussion: https://postgr.es/m/CACG=ezZoz_KG+Ryh9MrU_g5e0HiVoHocEvqFF=NRrhrwKmEQJQ@mail.gmail.com
2022-11-25 08:56:54 +05:30
Alvaro Herrera
b20e381422 Make multixact error message more explicit
There are recent reports involving a very old error message that we have
no history of hitting -- perhaps a recently introduced bug.  Improve the
error message in an attempt to improve our chances of investigating the
bug.

Per reports from Dimos Stamatakis and Bob Krier.

Backpatch to 11.

Discussion: https://postgr.es/m/CO2PR0801MB2310579F65529380A4E5EDC0E20A9@CO2PR0801MB2310.namprd08.prod.outlook.com
Discussion: https://postgr.es/m/17518-04e368df5ad7f2ee@postgresql.org
2022-11-24 10:45:10 +01:00
Andrew Dunstan
2f92b8ad31 Fix perl warning from commit 9b4eafcaf4
per gripe from Andres Freund and Tom Lane

Backpatch to all live branches.
2022-11-23 07:18:16 -05:00
Tom Lane
b96a096dbc YA attempt at taming worst-case behavior of get_actual_variable_range.
We've made multiple attempts at preventing get_actual_variable_range
from taking an unreasonable amount of time (3ca930fc3, fccebe421).
But there's still an issue for the very first planning attempt after
deletion of a large number of extremal-valued tuples.  While that
planning attempt will set "killed" bits on the tuples it visits and
thereby reduce effort for next time, there's still a lot of work it
has to do to visit the heap and then set those bits.  It's (usually?)
not worth it to do that much work at plan time to have a slightly
better estimate, especially in a context like this where the table
contents are known to be mutating rapidly.

Therefore, let's bound the amount of work to be done by giving up
after we've visited 100 heap pages.  Giving up just means we'll
fall back on the extremal value recorded in pg_statistic, so it
shouldn't mean that planner estimates suddenly become worthless.
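
The effect of the new bound, in a toy standalone form (the names and flat
arrays are invented; the real code walks an index and the heap):

    #include <stdbool.h>

    #define MAX_PAGES_TO_VISIT 100

    static double
    bounded_extremal_probe(const double *values, const bool *live, int n,
                           double stats_fallback)
    {
        int     pages_visited = 0;

        for (int i = 0; i < n; i++)
        {
            if (++pages_visited > MAX_PAGES_TO_VISIT)
                return stats_fallback;      /* give up: use pg_statistic */
            if (live[i])
                return values[i];           /* found the current extreme */
        }
        return stats_fallback;
    }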

Note that this means we'll still gradually whittle down the problem
by setting a few more index "killed" bits in each planning attempt;
so eventually we'll reach a good state (barring further deletions),
even in the absence of VACUUM.

Simon Riggs, per a complaint from Jakub Wartak (with cosmetic
adjustments by me).  Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com
2022-11-22 14:40:46 -05:00
Andrew Dunstan
46def5267c Prevent port collisions between concurrent TAP tests
Currently there is a race condition where if concurrent TAP tests both
test that they can open a port they will assume that it is free and use
it, causing one of them to fail. To prevent this we record a reservation
using an exclusive lock, and any TAP test that discovers a reservation
checks to see if the reserving process is still alive, and looks for
another free port if it is.

Ports are reserved in a directory set by the environment setting
PG_TEST_PORT_DIR, or if that doesn't exist a subdirectory of the top
build directory as set by Makefile.global, or its own
tmp_check directory.

The prove_check recipe in Makefile.global.in is extended to export
top_builddir to the TAP tests. This was already exported by the
prove_installcheck recipes.

Per complaint from Andres Freund

Backpatched from 9b4eafcaf4 to all live branches

Discussion: https://postgr.es/m/20221002164931.d57hlutrcz4d2zi7@awork3.anarazel.de
2022-11-22 10:53:01 -05:00
Michael Paquier
aa1fe772c9 doc: Fix description of pg_stat_all_tables.n_tup_upd
Issue caused by an incorrect merge done in f507895.

This issue only impacts v11 and v12.

Author: Guillaume Lelarge
Discussion: https://postgr.es/m/CAECtzeUAL3qoebLBDnn2DfWYS0Kww-yqDicQQ3r+JS5Yu1n6FA@mail.gmail.com
Backpatch-through: 11
2022-11-22 09:16:02 +09:00
Daniel Gustafsson
926ba562ef Replace link to Hunspell with the current homepage
The Hunspell project moved from Sourceforge to Github sometime
in 2016, so update our links to match the new URL.  Backpatch
the doc changes to all supported versions.

Discussion: https://postgr.es/m/DC9A662A-360D-4125-A453-5A6CB9C6C4B4@yesql.se
Backpatch-through: v11
2022-11-21 23:25:48 +01:00
Tom Lane
c0eed88914 Add comments and a missing CHECK_FOR_INTERRUPTS in ts_headline.
I just spent an annoying amount of time reverse-engineering the
100%-undocumented API between ts_headline and the text search
parser's prsheadline function.  Add some commentary about that
while it's fresh in mind.  Also remove some unused macros in
wparser_def.c.

While at it, I noticed that when commit 78e73e875 added a
CHECK_FOR_INTERRUPTS call in TS_execute_recurse, it missed
doing so in the parallel function TS_phrase_execute, which
surely needs one just as much.

Back-patch because of the missing CHECK_FOR_INTERRUPTS.
Might as well back-patch the rest of this too.
2022-11-21 17:07:07 -05:00
Andres Freund
140c803723 Fix mislabeling of PROC_QUEUE->links as PGPROC, fixing UBSan on 32bit
ProcSleep() used a PGPROC* variable to point to PROC_QUEUE->links.next,
because that does "the right thing" with SHMQueueInsertBefore(). While that
largely works, it's certainly not correct and unnecessary - we can just use
SHM_QUEUE* to point to the insertion point.
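
In miniature, the distinction looks like this (invented types; the real code
uses SHM_QUEUE and PGPROC, and assumes a circularly-initialized list):

    typedef struct LinkSketch
    {
        struct LinkSketch *prev;
        struct LinkSketch *next;
    } LinkSketch;

    typedef struct ProcSketch
    {
        LinkSketch  links;      /* embedded queue links */
        int         pid;
    } ProcSketch;

    typedef struct QueueSketch
    {
        LinkSketch  links;      /* list head: a link, NOT a ProcSketch */
        int         size;
    } QueueSketch;

    static void
    insert_before(LinkSketch *pos, LinkSketch *elem)
    {
        elem->prev = pos->prev;
        elem->next = pos;
        pos->prev->next = elem;
        pos->prev = elem;
    }

    static void
    enqueue(QueueSketch *q, ProcSketch *p)
    {
        /* Point at the insertion position with the link type; casting
         * &q->links to ProcSketch * only "works" by accident of layout
         * and alignment. */
        insert_before(&q->links, &p->links);
    }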

Noticed when testing a 32bit build of postgres with undefined behavior
sanitizer. UBSan noticed that sometimes the supposed PGPROC wasn't
sufficiently aligned (required since 46d6e5f567, ensured indirectly, via
ShmemAllocRaw() guaranteeing cacheline alignment).

For now fix this by using a SHM_QUEUE* for the insertion point. Subsequently
we should replace all the use of PROC_QUEUE and SHM_QUEUE with ilist.h, but
that's a larger change that we don't want to backpatch.

Backpatch to all supported versions - it's useful to be able to run postgres
under UBSan.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20221117014230.op5kmgypdv2dtqsf@awork3.anarazel.de
Backpatch: 11-
2022-11-19 12:36:52 -08:00
Tom Lane
8687cee44d Doc: sync src/tutorial/basics.source with SGML documentation.
basics.source is supposed to be pretty closely in step with
the examples in chapter 2 of the tutorial, but I forgot to
update it in commit f05a5e000.  Fix that, and adjust a couple
of other discrepancies that had crept in over time.

(I notice that advanced.source is nowhere near being in sync
with chapter 3, but I lack the ambition to do something
about that right now.)
2022-11-19 13:09:14 -05:00
Tom Lane
b7333e8269 pg_dump: avoid unsafe function calls in getPolicies().
getPolicies() had the same disease I fixed in other places in
commit e3fcbbd62, i.e., it was calling pg_get_expr() for
expressions on tables that we don't necessarily have lock on.
To fix, restrict the query to only collect interesting rows,
rather than doing the filtering on the client side.

Back-patch of commit 3e6e86abc.  That's been in v15/HEAD long enough
to have some confidence about it, so now let's fix the problem in
older branches.

Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
Discussion: https://postgr.es/m/45c93d57-9973-248e-d2df-e02ca9af48d4@darold.net
2022-11-19 12:00:27 -05:00
Tom Lane
b1f106420b Postpone calls of unsafe server-side functions in pg_dump.
Avoid calling pg_get_partkeydef(), pg_get_expr(relpartbound),
and regtypeout until we have lock on the relevant tables.
The existing coding is at serious risk of failure if there
are any concurrent DROP TABLE commands going on --- including
drops of other sessions' temp tables.

Back-patch of commit e3fcbbd62.  That's been in v15/HEAD long enough
to have some confidence about it, so now let's fix the problem in
older branches.

Original patch by me; thanks to Gilles Darold for back-patching
legwork.

Discussion: https://postgr.es/m/2273648.1634764485@sss.pgh.pa.us
Discussion: https://postgr.es/m/7d7eb6128f40401d81b3b7a898b6b4de@W2012-02.nidsa.loc
Discussion: https://postgr.es/m/45c93d57-9973-248e-d2df-e02ca9af48d4@darold.net
2022-11-19 11:40:30 -05:00
Tom Lane
d4acf2eb94 Replace RelationOpenSmgr() with RelationGetSmgr().
This is a back-patch of the v15-era commit f10f0ae42 into older
supported branches.  The idea is to design out bugs in which an
ill-timed relcache flush clears rel->rd_smgr partway through
some code sequence that wasn't expecting that.  We had another
report today of a corner case that reliably crashes v14 under
debug_discard_caches (nee CLOBBER_CACHE_ALWAYS), and therefore
would crash once in a blue moon in the field.  We're unlikely
to get rid of all such code paths unless we adopt the more
rigorous coding rules instituted by f10f0ae42.  Therefore,
even though this is a bit invasive, it's time to back-patch.
Some comfort can be taken in the fact that f10f0ae42 has been
in v15 for 16 months without problems.
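
The coding rule boils down to fetching the pointer through an accessor every
time instead of caching it across possible cache flushes; a toy sketch of the
pattern (invented types, not the real relcache/smgr API):

    #include <stddef.h>

    typedef struct SmgrSketch { int relid; } SmgrSketch;

    typedef struct RelSketch
    {
        int          relid;
        SmgrSketch  *smgr;      /* a cache flush may reset this to NULL */
    } RelSketch;

    static SmgrSketch smgr_table[16];   /* stand-in for the smgr cache */

    /* Re-establish the link if a flush cleared it, then return it. */
    static inline SmgrSketch *
    rel_get_smgr(RelSketch *rel)
    {
        if (rel->smgr == NULL)
        {
            rel->smgr = &smgr_table[rel->relid % 16];   /* "reopen" */
            rel->smgr->relid = rel->relid;
        }
        return rel->smgr;
    }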

I left the RelationOpenSmgr macro present in the back branches,
even though no core code should use it anymore, in order to not break
third-party extensions in minor releases.  Such extensions might opt
to start using RelationGetSmgr instead, to reduce their code
differential between v15 and earlier branches.  This carries a hazard
of failing to compile against headers from existing minor releases.
However, once compiled the extension should work fine even with such
releases, because RelationGetSmgr is a "static inline" function so
it creates no link-time dependency.  So depending on distribution
practices, that might be an OK tradeoff.

Per report from Spyridon Dimitrios Agathos.  Original patch
by Amul Sul.

Discussion: https://postgr.es/m/CAFM5RaqdgyusQvmWkyPYaWMwoK5gigdtW-7HcgHgOeAw7mqJ_Q@mail.gmail.com
Discussion: https://postgr.es/m/CANiYTQsU7yMFpQYnv=BrcRVqK_3U3mtAzAsJCaqtzsDHfsUbdQ@mail.gmail.com
2022-11-17 16:54:31 -05:00
Noah Misch
791dd75791 Account for IPC::Run::result() Windows behavior change.
This restores compatibility with the not-yet-released successor of
version 20220807.0.  Back-patch to 9.4, which introduced this code.

Reviewed by Andrew Dunstan.

Discussion: https://postgr.es/m/20221117061805.GA4020280@rfd.leadboat.com
2022-11-17 07:39:02 -08:00
Amit Kapila
1703033f89 Fix cleanup lock acquisition in SPLIT_ALLOCATE_PAGE replay.
During XLOG_HASH_SPLIT_ALLOCATE_PAGE replay, we were checking for a
cleanup lock on the new bucket page after acquiring an exclusive lock on
it and raising a PANIC error on failure. However, it is quite possible
that checkpointer can acquire the pin on the same page before acquiring a
lock on it, and then the replay will lead to an error. So instead, directly
acquire the cleanup lock on the new bucket page during
XLOG_HASH_SPLIT_ALLOCATE_PAGE replay operation.

Reported-by: Andres Freund
Author: Robert Haas
Reviewed-By: Amit Kapila, Andres Freund, Vignesh C
Backpatch-through: 11
Discussion: https://postgr.es/m/20220810022617.fvjkjiauaykwrbse@awork3.anarazel.de
2022-11-14 09:52:06 +05:30