Commit Graph

Alvaro Herrera 2655ecde2c
Fix event trigger example
Commit 2f9661311b changed command tags from strings to numbers, but
forgot to adjust the code in the event trigger example, which
consequently failed to compile.

While fixing that, improve the indentation to adhere to pgindent style.

Backpatch to v13, where the change was introduced.

Author: Laurenz Albe
Discussion: https://postgr.es/m/81e36ac17dc80489e74dc5b6914afa6ccdb1a99d.camel@cybertec.at
2022-12-23 13:21:41 +01:00
Michael Paquier 9c48a0f000 Fix some incorrectness in upgrade_adapt.sql's query for WITH OIDS
The query used to disable WITH OIDS in all the relations making use of
it was checking for materialized views, but WITH OIDS is not a supported
operation for those; instead, the check needs to cover foreign tables.

While at it, use quote_ident() in the ALTER TABLE strings built from the
relation name.

Author: Anton A. Melnikov, Michael Paquier
Discussion: https://postgr.es/m/49f389ba-95ce-8a9b-09ae-f60650c0e7c7@inbox.ru
Backpatch-through: 12
2022-12-23 11:27:11 +09:00
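
As a rough illustration of the query shape the commit above adjusts -- run
against the old, pre-v12 cluster, where pg_class.relhasoids still exists;
this is a sketch, not the script's exact text:

    -- Build ALTER TABLE commands for relations still using WITH OIDS;
    -- plain tables and foreign tables are the relevant relkinds
    -- (materialized views cannot use WITH OIDS).
    SELECT 'ALTER TABLE ' || quote_ident(n.nspname) || '.' ||
           quote_ident(c.relname) || ' SET WITHOUT OIDS;'
    FROM pg_class c
         JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relhasoids AND c.relkind IN ('r', 'f');
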
Michael Paquier e3897a3a4c Fix some incorrect elog() messages in aclchk.c
Three error strings used with cache lookup failures were referring to
incorrect object types for ACL checks:
- Schemas
- Types
- Foreign Servers
These errors should never be triggered, but if they were, incorrect
information would be reported.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20221222153041.GN1153@telsasoft.com
Backpatch-through: 11
2022-12-23 10:04:30 +09:00
Tom Lane 1a3daa5bb2 Add some recursion and looping defenses in prepjointree.c.
Andrey Lepikhov demonstrated a case where we spend an unreasonable
amount of time in pull_up_subqueries().  Not only is that recursing
with no explicit check for stack overrun, but the code seems not
interruptible by control-C.  Let's stick a CHECK_FOR_INTERRUPTS
there, along with sprinkling some stack depth checks.

An actual fix for the excessive time consumption seems a bit
risky to back-patch; but this isn't, so let's do so.

Discussion: https://postgr.es/m/703c09a2-08f3-d2ec-b33d-dbecd62428b8@postgrespro.ru
2022-12-22 10:35:02 -05:00
Tom Lane b87037b373 Fix contrib/seg to be more wary of long input numbers.
seg stores the number of significant digits in an input number
in a "char" field.  If char is signed, and the input is more than
127 digits long, the count can read out as negative, causing
seg_out() to print garbage (or, if you're really unlucky,
even crash).

To fix, clamp the digit count to be not more than FLT_DIG.
(In theory this loses some information about what the original
input was, but it doesn't seem like useful information; it would
not survive dump/restore in any case.)

Also, in case there are stored values of the seg type containing
bad data, add a clamp in seg_out's restore() subroutine.

Per bug #17725 from Robins Tharakan.  It's been like this
forever, so back-patch to all supported branches.

Discussion: https://postgr.es/m/17725-0a09313b67fbe86e@postgresql.org
2022-12-21 17:51:50 -05:00
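
A quick way to see the clamping behavior described above, assuming
contrib/seg is available (the exact output text is up to seg_out):

    CREATE EXTENSION IF NOT EXISTS seg;
    -- An input with far more than FLT_DIG significant digits; the digit
    -- count is now clamped rather than overflowing a signed char.
    SELECT repeat('1', 130)::seg;
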
Tom Lane ae98debf77 Fix inability to reference CYCLE column from inside its CTE.
Such references failed with "cache lookup failed for type 0"
because we didn't resolve the type of the CYCLE column until after
analyzing the CTE's query.  We can just move that processing
to before the recursive parse_sub_analyze call, though.

While here, invent a couple of local variables to make this
code less egregiously wider-than-80-columns.

Per bug #17723 from Vik Fearing.  Back-patch to v14 where
the CYCLE feature was added.

Discussion: https://postgr.es/m/17723-2c4985ff111e7bba@postgresql.org
2022-12-16 13:07:42 -05:00
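
A sketch of the query shape that failed before the fix above; the graph
table here is hypothetical:

    CREATE TABLE graph (id int, link int);

    WITH RECURSIVE search_graph(id, link) AS (
        SELECT g.id, g.link FROM graph g
      UNION ALL
        SELECT g.id, g.link
        FROM graph g JOIN search_graph sg ON g.id = sg.link
        WHERE NOT sg.is_cycle   -- referencing the CYCLE column inside the CTE
    ) CYCLE id SET is_cycle USING path
    SELECT * FROM search_graph;
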
David Rowley 1a9b43c688 Re-adjust drop-index-concurrently-1 isolation test
It seems that drop-index-concurrently-1 has started to forget what it was
originally meant to be testing.  d2d8a229b, which added incremental sorts,
changed the expected plan to be an Index Scan plan instead of a Seq Scan
plan.  This occurred as the primary key index of the table in question
provided presorted input and, because that index happened to be the
cheapest input path due to enable_seqscan being disabled, the incremental
sort changes just added a Sort on top of that.  It seems based on the name
of the PREPAREd statement that the intention here is that the query
produces a seqscan plan.

The reason this test became broken seems to be how the test was
originally coded.  The test was trying to force a seqscan plan by
performing some casting to make it so the test_dc index couldn't be used
to perform the required filtering.  Trying to coax the planner into using
a plan which has costed in a disable_cost seems like it's always going to
be flaky, as small changes in costs are drowned out by the large
disable_cost combined with add_path's STD_FUZZ_FACTOR.  Here we get rid of
the casts that we're using to try to trick the planner into a seqscan and
instead toggle enable_seqscan as and when required to get the desired
plan.

Additionally, rename a few things in the test and add some wording to
the comments to make it clearer in the future what we expect this test
to be doing.

Discussion: https://postgr.es/m/CAApHDvrbDhObhLV+=U_K_-t+2Av2av1aL9d+2j_3AO-XndaviA@mail.gmail.com
Backpatch-through: 13, where d2d8a229b changed the expected test output
2022-12-16 11:40:22 +13:00
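
The general technique the test now relies on -- toggling the planner GUC
rather than casts and disable_cost -- looks roughly like this (the table
name is invented for illustration):

    CREATE TABLE test_tbl (id int PRIMARY KEY, val text);

    SET enable_seqscan = off;   -- discourage the sequential-scan plan
    EXPLAIN (COSTS OFF) SELECT * FROM test_tbl WHERE id = 42;

    SET enable_seqscan = on;    -- allow the sequential-scan plan again
    EXPLAIN (COSTS OFF) SELECT * FROM test_tbl WHERE id = 42;

    RESET enable_seqscan;
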
Tom Lane 18431ee6f5 Rethink handling of [Prevent|Is]InTransactionBlock in pipeline mode.
Commits f92944137 et al. made IsInTransactionBlock() set the
XACT_FLAGS_NEEDIMMEDIATECOMMIT flag before returning "false",
on the grounds that that kept its API promises equivalent to those of
PreventInTransactionBlock().  This turns out to be a bad idea though,
because it allows an ANALYZE in a pipelined series of commands to
cause an immediate commit, which is unexpected.

Furthermore, if we return "false" then we have another issue,
which is that ANALYZE will decide it's allowed to do internal
commit-and-start-transaction sequences, thus possibly unexpectedly
committing the effects of previous commands in the pipeline.

To fix the latter situation, invent another transaction state flag
XACT_FLAGS_PIPELINING, which explicitly records the fact that we
have executed some extended-protocol command and not yet seen a
commit for it.  Then, require that flag to not be set before allowing
IsInTransactionBlock() to return "false".

Having done that, we can remove its setting of NEEDIMMEDIATECOMMIT
without fear of causing problems.  This means that the API guarantees
of IsInTransactionBlock now diverge from PreventInTransactionBlock,
which is mildly annoying, but it seems OK given the very limited usage
of IsInTransactionBlock.  (In any case, a caller preferring the old
behavior could always set NEEDIMMEDIATECOMMIT for itself.)

For consistency also require XACT_FLAGS_PIPELINING to not be set
in PreventInTransactionBlock.  This too is meant to prevent commands
such as CREATE DATABASE from silently committing previous commands
in a pipeline.

Per report from Peter Eisentraut.  As before, back-patch to all
supported branches (which sadly no longer includes v10).

Discussion: https://postgr.es/m/65a899dd-aebc-f667-1d0a-abb89ff3abf8@enterprisedb.com
2022-12-13 14:23:59 -05:00
Tom Lane d79b76b10e Fix jsonb subscripting to cope with toasted subscript values.
jsonb_get_element() was incautious enough to use VARDATA() and
VARSIZE() directly on an arbitrary text Datum.  That of course
fails if the Datum is short-header, compressed, or out-of-line.
The typical result would be failing to match any element of a
jsonb object, though matching the wrong one seems possible as well.

setPathObject() was slightly brighter, in that it used VARDATA_ANY
and VARSIZE_ANY_EXHDR, but that only kept it out of trouble for
short-header Datums.  push_path() had the same issue.  This could
result in faulty subscripted insertions, though keys long enough to
cause a problem are likely rare in the wild.

Having seen these, I looked around for unsafe usages in the rest
of the adt/json* files.  There are a couple of places where it's not
immediately obvious that the Datum can't be compressed or out-of-line,
so I added pg_detoast_datum_packed() to cope if it is.  Also, remove
some other usages of VARDATA/VARSIZE on Datums we just extracted from
a text array.  Those aren't actively broken, but they will become so
if we ever start allowing short-header array elements, which does not
seem like a terribly unreasonable thing to do.  In any case they are
not great coding examples, and they could also do with comments
pointing out that we're assuming we don't need pg_detoast_datum_packed.

Per report from exe-dealer@yandex.ru.  Patch by me, but thanks to
David Johnston for initial investigation.  Back-patch to v14 where
jsonb subscripting was introduced.

Discussion: https://postgr.es/m/205321670615953@mail.yandex.ru
2022-12-12 16:17:49 -05:00
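
A sketch of the kind of case the fix above is about: a subscript value that
reaches jsonb subscripting as a compressed datum (table and key contents
are invented):

    CREATE TABLE keys (k text);
    INSERT INTO keys SELECT repeat('a', 5000) || 'x';  -- long enough to be stored compressed

    -- Before the fix, a compressed or out-of-line subscript value could be
    -- misread, so the lookup might fail to match the intended key.
    SELECT (jsonb_build_object(k, 42))[k] FROM keys;
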
Robert Haas 8b5ba2f3f4 Fix failure to advance content pointer in sendFileWithContent.
If sendFileWithContent were used to send a file larger than the
bbsink buffer size, this would result in corruption. The only
files that are sent via sendFileWithContent are the backup label
file, the tablespace map file, and .done files for WAL segments
included in the backup. Of these, it seems that only the
tablespace_map file can become large enough to cause a problem,
and then only if you have a lot of tablespaces. If you do have
that situation, you might end up with a corrupted
tablespace_map file, which would be bad.

My commit bef47ff85d introduced
this problem.

Report and patch by Antonin Houska.

Discussion: http://postgr.es/m/15764.1670528645@antos
2022-12-12 10:33:02 -05:00
David Rowley 04788ee4c5 Add subquery pullup handling for WindowClause runCondition
9d9c02ccd added code to allow WindowAgg to take some shortcuts when a
monotonic WindowFunc reached some value that it could never come back
from due to the function's monotonic nature.  That commit added a
runCondition field to WindowClause to store the condition which, once it
becomes false, lets nodeWindowAgg.c start taking shortcuts.

Here we fix an issue where subquery pullups didn't properly update the
runCondition, leaving its Vars referencing the old query level rather
than the new one.

Here we also add a missing call to preprocess_expression() for the
WindowClause's runCondition.  The WindowFuncs in the targetlist will have
had this processing done, so we must also do it for the WindowFuncs in
the runCondition so that they can be correctly found in the targetlist
during setrefs.c processing.

Bug: #17709
Reported-by: Alexey Makhmutov
Author: Richard Guo, David Rowley
Discussion: https://postgr.es/m/17709-4f557160e3e8ee9a@postgresql.org
Backpatch-through: 15, where 9d9c02ccd was introduced
2022-12-10 19:27:53 +13:00
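
A sketch of the query shape involved: a monotonic window function in a
subquery that gets pulled up, with a qual that becomes the WindowClause's
runCondition (the table is illustrative):

    CREATE TABLE sales (id int, amount numeric);

    -- row_number() is recognized as monotonically increasing, so "rn <= 10"
    -- becomes a run condition and evaluation can stop early; the subquery
    -- is pulled up into the outer query, which is where the Vars in the
    -- run condition must be adjusted.
    SELECT *
    FROM (SELECT id, amount,
                 row_number() OVER (ORDER BY amount DESC) AS rn
          FROM sales) s
    WHERE s.rn <= 10;
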
Dean Rasheed ee1c6728d8 Update MERGE docs to mention that ONLY is supported.
Commit 7103ebb7aa added support for MERGE, which included support for
inheritance hierarchies, but didn't document the fact that ONLY could
be specified before the source and/or target tables to exclude tables
inheriting from the tables specified.

Update merge.sgml to mention this, and while at it, add some
regression tests to cover it.

Dean Rasheed, reviewed by Nathan Bossart.

Backpatch to 15, where MERGE was added.

Discussion: https://postgr.es/m/CAEZATCU0XM-bJCvpJuVRU3UYNRqEBS6g4-zH%3Dj9Ye0caX8F6uQ%40mail.gmail.com
2022-12-09 10:03:04 +00:00
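
A minimal sketch of the now-documented syntax (hypothetical tables;
requires v15 or later):

    CREATE TABLE target (id int PRIMARY KEY, val text);
    CREATE TABLE target_child () INHERITS (target);
    CREATE TABLE source (id int, val text);

    -- ONLY restricts the command to the parent table, excluding inheritance
    -- children; it can likewise be written before the source table.
    MERGE INTO ONLY target t
    USING source s ON t.id = s.id
    WHEN MATCHED THEN
        UPDATE SET val = s.val
    WHEN NOT MATCHED THEN
        INSERT (id, val) VALUES (s.id, s.val);
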
Etsuro Fujita a0bf7a0ecc Remove new structure member from ResultRelInfo.
In commit ffbb7e65a, I added a ModifyTableState member to ResultRelInfo
to save the owning ModifyTableState for use by nodeModifyTable.c when
performing batch inserts, but as pointed out by Tom Lane, that changed
the array stride of es_result_relations, and that would break any
previously-compiled extension code that accesses that array.  Fix by
removing that member from ResultRelInfo and instead adding a List member
at the end of EState to save such ModifyTableStates.

Per report from Tom Lane.  Back-patch to v14, like the previous commit;
I chose to apply the patch to HEAD as well, to make back-patching easy.

Discussion: http://postgr.es/m/4065383.1669395453%40sss.pgh.pa.us
2022-12-08 16:15:01 +09:00
Peter Eisentraut 6bcd1d9f30 Fix FK comment think-o
from commit d6f96ed94e

Author: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Ian Lawrence Barwick <barwick@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/6a7c7338-1aa2-4689-d171-0b0b294fdd84%40illuminatedcomputing.com
2022-12-07 17:08:19 +01:00
David Rowley 2a535620ce Fix 32-bit build dangling pointer issue in WindowAgg
9d9c02ccd added window "run conditions", which allows the evaluation of
monotonic window functions to be skipped when the run condition is no
longer true.  Prior to this commit, once the run condition was no longer
true and we stopped evaluating the window functions, we simply just left
the ecxt_aggvalues[] and ecxt_aggnulls[] arrays alone to store whatever
value was stored there the last time the window function was evaluated.
Leaving a stale value in there isn't really a problem on 64-bit builds,
as all of the window functions we recognize as monotonic return int8,
which is passed by value on 64-bit builds.  However, on 32-bit builds
this was a problem, as the value stored in the ecxt_aggvalues[] element
would be a by-ref value and it would be pointing to some memory
which would get reset once the tuple context is destroyed.  Since the
WindowAgg node will output these values in the resulting tupleslot, this
could be problematic for the top-level WindowAgg node which must look at
these values to filter out the rows that don't meet its filter condition.

Here we fix this by just zeroing the ecxt_aggvalues[] and setting the
ecxt_aggnulls[] array to true when the run condition first becomes false.
This results in the WindowAgg's output having NULLs for the WindowFunc's
columns rather than a stale value or a pointer to possibly freed
memory.  These tuples with the NULLs can only make it as far as the
top-level WindowAgg node before they're filtered out.  To ensure that
these tuples *are* always filtered out, we now insist that OpExprs making
up the run condition are strict OpExprs.  Currently, all the window
functions which the planner recognizes as monotonic return INT8 and the
operator which is used for the run condition must be a member of a btree
opclass.  In reality, these restrictions exclude nothing that's built-in
to Postgres and are unlikely to exclude anyone's custom operators due to
the requirement that the operator is part of a btree opclass.  It would be
unusual if those were not strict.

Reported-by: Sergey Shinderuk, using valgrind
Reviewed-by: Richard Guo, Sergey Shinderuk
Discussion: https://postgr.es/m/29184c50-429a-ebd7-f1fb-0589c6723a35@postgrespro.ru
Backpatch-through: 15, where 9d9c02ccd was added
2022-12-07 00:10:21 +13:00
Tom Lane c959f84c2b Fix Memoize to work with partitionwise joining.
A couple of places weren't up to speed for this.  By sheer good
luck, we didn't fail but just selected a non-memoized join plan,
at least in the test case we have.  Nonetheless, it's a bug,
and I'm not quite sure that it couldn't have worse consequences
in other examples.  So back-patch to v14 where Memoize came in.

Richard Guo

Discussion: https://postgr.es/m/CAMbWs48GkNom272sfp0-WeD6_0HSR19BJ4H1c9ZKSfbVnJsvRg@mail.gmail.com
2022-12-05 12:36:41 -05:00
Michael Paquier 74a600a150 doc: Add missing <varlistentry> markups for developer GUCs
The missing markup makes it impossible to create links back to these
GUCs; all the other parameters already have one.

Author: Ian Lawrence Barwick
Discussion: https://postgr.es/m/CAB8KJ=jx=6dFB_EN3j0UkuvG3cPu5OmQiM-ZKRAz+fKvS+u8Ng@mail.gmail.com
Backpatch-through: 11
2022-12-05 11:23:27 +09:00
Tom Lane 834d97c32b Fix broken MemoizePath support in reparameterize_path().
It neglected to recurse to the subpath, meaning you'd get back
a path identical to the input.  This could produce wrong query
results if the omission meant that the subpath fails to enforce
some join clause it should be enforcing.  We don't have a test
case for this at the moment, but the code is obviously broken
and the fix is equally obvious.  Back-patch to v14 where
Memoize was introduced.

Richard Guo

Discussion: https://postgr.es/m/CAMbWs4_R=ORpz=Lkn2q3ebPC5EuWyfZF+tmfCPVLBVK5W39mHA@mail.gmail.com
2022-12-04 13:48:12 -05:00
Tom Lane bf8fd64ff5 Fix generate_partitionwise_join_paths() to tolerate failure.
We might fail to generate a partitionwise join, because
reparameterize_path_by_child() does not support all path types.
This should not be a hard failure condition: we should just fall back
to a non-partitioned join.  However, generate_partitionwise_join_paths
did not consider this possibility and would emit the (misleading)
error "could not devise a query plan for the given query" if we'd
failed to make any paths for a child join.  Fix it to give up on
partitionwise joining if so.  (The accepted technique for giving up
appears to be to set rel->nparts = 0, which I find pretty bizarre,
but there you have it.)

I have not added a test case because there'd be little point:
any omissions of this sort that we identify would soon get fixed
by extending reparameterize_path_by_child(), so the test would stop
proving anything.  However, right now there is a known test case based
on failure to cover MaterialPath, and with that I've found that this
is broken in all supported versions.  Hence, patch all the way back.

Original report and patch by me; thanks to Richard Guo for
identifying a test case that works against committed versions.

Discussion: https://postgr.es/m/1854233.1669949723@sss.pgh.pa.us
2022-12-04 13:17:18 -05:00
Dean Rasheed c67204db61 Fix DEFAULT handling for multi-row INSERT rules.
When updating a relation with a rule whose action performed an INSERT
from a multi-row VALUES list, the rewriter might skip processing the
VALUES list, and therefore fail to replace any DEFAULTs in it. This
would lead to an "unrecognized node type" error.

The reason was that RewriteQuery() assumed that a query doing an
INSERT from a multi-row VALUES list would necessarily only have one
item in its fromlist, pointing to the VALUES RTE to read from. That
assumption is correct for the original query, but not for product
queries produced for rule actions. In such cases, there may be
multiple items in the fromlist, possibly including multiple VALUES
RTEs.

What is required instead is for RewriteQuery() to skip any RTEs from
the product query's originating query, which might include one or more
already-processed VALUES RTEs. What's left should then include at most
one VALUES RTE (from the rule action) to be processed.

Patch by me. Thanks to Tom Lane for reviewing.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAEZATCV39OOW7LAR_Xq4i%2BLc1Byux%3DeK3Q%3DHD_pF1o9LBt%3DphA%40mail.gmail.com
2022-12-03 12:14:36 +00:00
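
A sketch of the failing shape: an UPDATE on a relation whose rule action
inserts a multi-row VALUES list containing DEFAULTs (tables and rule are
invented):

    CREATE TABLE base (a int, b int DEFAULT 0);
    CREATE TABLE audit (a int, b int DEFAULT -1);

    CREATE RULE base_upd AS ON UPDATE TO base DO ALSO
        INSERT INTO audit VALUES (1, DEFAULT), (2, DEFAULT);

    -- Before the fix, the rewriter could leave the DEFAULTs in the rule's
    -- VALUES list unexpanded, producing "unrecognized node type".
    UPDATE base SET a = a;
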
Andres Freund c6a60471a1 Prevent pgstats from getting confused when relkind of a relation changes
When the relkind of a relcache entry changes because a table is converted
into a view, pgstats can get confused in 15+, leading to crashes or
assertion failures.

For HEAD, Tom fixed this in b23cd185fd, by removing support for converting a
table to a view, removing the source of the inconsistency. This commit just
adds an assertion that a relcache entry's relkind does not change, just in
case we end up with another instance of that in the future. As there are no
cases of changing relkind anymore, we can't add a test that it's handled
correctly.

For 15, fix the problem by not maintaining the association with the old
pgstat entry when the relkind changes during relcache invalidation
processing. In
that case the pgstat entry needs to be unlinked first, to avoid
PgStat_TableStatus->relation getting out of sync. Also add a test reproducing
the issues.

No known problem exists in 11-14, so just add the test there.

Reported-by: vignesh C <vignesh21@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALDaNm2yXz+zOtv7y5zBd5WKT8O0Ld3YxikuU3dcyCvxF7gypA@mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm3oZA-8Wbps2Jd1g5_Gjrr-x3YWrJPek-mF5Asrrvz2Dg@mail.gmail.com
Backpatch: 15-
2022-12-02 18:07:47 -08:00
Tom Lane 97299cf99d Fix psql's \sf and \ef for new-style SQL functions.
Some options of these commands need to be able to identify the start
of the function body within the output of pg_get_functiondef().
It used to be that that always began with "AS", but since the
introduction of new-style SQL functions, it might also start with
"BEGIN" or "RETURN".  Fix that on the psql side, and add some
regression tests.

Noted by me a while ago, but I didn't do anything about it.
Thanks to David Johnston for a nag.

Discussion: https://postgr.es/m/AM9PR01MB8268D5CDABDF044EE9F42173FE8C9@AM9PR01MB8268.eurprd01.prod.exchangelabs.com
2022-12-02 14:24:44 -05:00
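
A minimal example of a new-style function body that \sf and \ef must now
cope with (psql):

    CREATE FUNCTION add_one(i int) RETURNS int
        LANGUAGE SQL
        RETURN i + 1;   -- new-style body: no AS 'string'

    -- \sf+ has to find the start of the body even though it begins with
    -- RETURN (or BEGIN ATOMIC) rather than AS:
    \sf+ add_one
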
Amit Kapila ebf87c019c Fix incorrect output from pgoutput when using column lists.
For updates and deletes, we were not honoring the column list for old
tuple values while sending tuple data via pgoutput.  This resulted in
pgoutput emitting more columns than expected.

This is not a problem for built-in logical replication, as we simply
ignore additional columns based on the relation information sent
previously, which didn't have those columns.  However, other users of the
pgoutput plugin may expect only the columns in the column list.  Also,
sending extra columns unnecessarily consumes network bandwidth, defeating
the purpose of the column list feature.

Reported-by: Gunnar Morling
Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 15
Discussion: https://postgr.es/m/CADGJaX9kiRZ-OH0EpWF5Fkyh1ZZYofoNRCrhapBfdk02tj5EKg@mail.gmail.com
2022-12-02 11:14:42 +05:30
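
A sketch of the setup involved (v15 or later); the fix ensures that
old-tuple data sent for updates and deletes also honors the column list:

    CREATE TABLE items (id int PRIMARY KEY, name text, secret text);

    -- Publish only id and name; the column list must include the replica
    -- identity columns (here the primary key) when updates and deletes
    -- are published.
    CREATE PUBLICATION items_pub FOR TABLE items (id, name);
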
Jeff Davis 9377b4f30a Fix memory leak for hashing with nondeterministic collations.
Backpatch through 12, where nondeterministic collations were
introduced (5e1963fb76).

Backpatch-through: 12
2022-12-01 11:55:59 -08:00
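
For context, the kind of setup that reaches the hashing code path mentioned
above, assuming ICU support (the collation name is invented); a GROUP BY
like this one may be executed with hash aggregation:

    CREATE COLLATION case_insensitive (
        provider = icu,
        locale = 'und-u-ks-level2',
        deterministic = false
    );

    CREATE TABLE names (n text COLLATE case_insensitive);
    INSERT INTO names VALUES ('Alice'), ('ALICE'), ('Bob');

    -- Hashing values of a nondeterministic collation involves extra
    -- per-value work; the fix above addresses a leak on that path.
    SELECT n, count(*) FROM names GROUP BY n;
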
Tom Lane e10799aa25 Doc: add example of round(v, s) with negative s.
This has always worked, but you'd be unlikely to guess it
from the documentation.  Add an example showing it.

Lack of docs noted by David Johnston.  Back-patch to v13;
the documentation layout we used before that was not very
amenable to squeezing in multiple examples.

Discussion: https://postgr.es/m/CAKFQuwZ4Vy1Xty0G5Ok+ot=NDrU3C6hzF1JwUk-FEkwe3V9_RA@mail.gmail.com
2022-12-01 12:26:12 -05:00
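
The behavior now shown in the docs, for reference:

    SELECT round(1234.5678, 2);    -- 1234.57
    SELECT round(1234.5678, -2);   -- negative scale rounds to the left of
                                   -- the decimal point: 1200
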
Tom Lane afa4a4f764 Doc: word-smith the discussion of secure schema usage patterns.
Rearrange the discussion of user-private schemas so that details
applying only to upgraded-from-pre-v15 databases are in a follow-on
paragraph, not in the main description of how to set up this pattern.
This seems a little clearer even today, and it'll get more so as
pre-v15 systems fade into the sunset.

Wording contributions from Robert Haas, Tom Lane, Noah Misch.

Discussion: https://postgr.es/m/CA+TgmoYUHsfp90inEMAP0yNr7Y_L6EphPH1YOon1JKtBztXHyQ@mail.gmail.com
2022-12-01 12:10:25 -05:00
Tom Lane a711b36e5b Fix under-parenthesized display of AT TIME ZONE constructs.
In commit 40c24bfef, I forgot to use get_rule_expr_paren() for the
arguments of AT TIME ZONE, resulting in possibly not printing parens
for expressions that need it.  But get_rule_expr_paren() wouldn't have
gotten it right anyway, because isSimpleNode() hadn't been taught that
COERCE_SQL_SYNTAX parent nodes don't guarantee sufficient parentheses.
Improve all that.  Also use this methodology for F_IS_NORMALIZED, so
that we don't print useless parens for that.

In passing, remove a comment that was obsoleted later.

Per report from Duncan Sands.  Back-patch to v14 where this code
came in.  (Before that, we didn't try to print AT TIME ZONE that way,
so there was no bug just ugliness.)

Discussion: https://postgr.es/m/f41566aa-a057-6628-4b7c-b48770ecb84a@deepbluecap.com
2022-12-01 11:38:15 -05:00
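
One way to observe the deparsed form the commit above fixes; the view is
invented, and the point is that the argument of AT TIME ZONE must keep its
parentheses so the definition parses back to the same expression:

    CREATE VIEW ts_view AS
        SELECT (timestamptz '2022-01-01 12:00+00' + interval '1 hour')
                   AT TIME ZONE 'UTC' AS t;

    -- The deparsed definition should preserve the parentheses around the
    -- addition, since AT TIME ZONE binds more tightly than '+'.
    SELECT pg_get_viewdef('ts_view'::regclass, true);
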
Bruce Momjian f79cca5bfd revert: add transaction processing chapter with internals info
This doc patch (master hash 66bc9d2d3e) was deemed too significant for
back-patching, so it is reverted everywhere but master.  Also fix the
SGML file header comment in master.

Reported-by: Peter Eisentraut

Discussion: https://postgr.es/m/c6304b19-6ff7-f3af-0148-cf7aa7e2fbfd@enterprisedb.com

Backpatch-through: 11
2022-12-01 10:45:08 -05:00
Tom Lane f2f9e11d35 Reject missing database name in pg_regress and cohorts.
Writing "pg_regress --dbname= ..." led to a crash, because
we weren't expecting there to be no database name supplied.
It doesn't seem like a great idea to run regression tests
in whatever is the user's default database; so rather than
supporting this case let's explicitly reject it.

Per report from Xing Guo.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/CACpMh+A8cRvtvtOWVAZsCM1DU81GK4DL26R83y6ugZ1osV=ifA@mail.gmail.com
2022-11-30 13:01:41 -05:00
Bruce Momjian 8592b56faf doc: add transaction processing chapter with internals info
This also adds references to this new chapter at relevant sections of
our documentation.  Previously, many of these internal details were
exposed to users but not explained.  This also updates RELEASE
SAVEPOINT.

Discussion: https://postgr.es/m/CANbhV-E_iy9fmrErxrCh8TZTyenpfo72Hf_XD2HLDppva4dUNA@mail.gmail.com

Author: Simon Riggs, Laurenz Albe

Reviewed-by: Bruce Momjian

Backpatch-through: 11
2022-11-29 20:49:52 -05:00
Michael Paquier 15571ccd19 Fix comment in fe-auth-scram.c
The frontend-side routine in charge of building a SCRAM verifier
mentioned that the restrictions applying to SASLprep on the password and
its encoding are described at the top of fe-auth-scram.c, but this
information is actually in auth-scram.c.

This is wrong since 8f8b9be, so backpatch all the way down as this is an
important documentation bit.

Spotted while reviewing a different patch.

Backpatch-through: 11
2022-11-30 08:38:27 +09:00
Tom Lane 55fa993d7e Improve heuristics for compressing the KnownAssignedXids array.
Previously, we'd compress only when the active range of array entries
reached Max(4 * PROCARRAY_MAXPROCS, 2 * pArray->numKnownAssignedXids).
If max_connections is large, the first term could result in not
compressing for a long time, resulting in much wastage of cycles in
hot-standby backends scanning the array to take snapshots.  Get rid
of that term, and just bound it to 2 * pArray->numKnownAssignedXids.

That however creates the opposite risk, that we might spend too much
effort compressing.  Hence, consider compressing only once every 128
commit records.  (This frequency was chosen by benchmarking.  While
we only tried one benchmark scenario, the results seem stable over
a fairly wide range of frequencies.)

Also, force compression when processing RecoveryInfo WAL records
(which should be infrequent); the old code could perform compression
then, but would do so only after the same array-range check as for
the transaction-commit path.

Also, opportunistically run compression if the startup process is about
to wait for WAL, though not oftener than once a second.  This should
prevent cases where we waste lots of time by leaving the array
not-compressed for long intervals due to low WAL traffic.

Lastly, add a simple check to keep us from uselessly compressing
when the array storage is already compact.

Back-patch, as the performance problem is worse in pre-v14 branches
than in HEAD.

Simon Riggs and Michail Nikolaev, with help from Tom Lane and
Andres Freund.

Discussion: https://postgr.es/m/CALdSSPgahNUD_=pB_j=1zSnDBaiOtqVfzo8Ejt5J_k7qZiU1Tw@mail.gmail.com
2022-11-29 15:43:17 -05:00
Tom Lane 5dfc2b753b Prevent clobbering of utility statements in SQL function caches.
This is an oversight in commit 7c337b6b5: I apparently didn't think
about the possibility of a SQL function being executed multiple
times within a query.  In that case, functions.c's primitive caching
mechanism allows the same utility parse tree to be presented for
execution more than once.  We have to tell ProcessUtility to make
a working copy of the parse tree, or bad things happen.

Normally I'd add a regression test, but I think the reported crasher
is dependent on some rather random implementation choices that are
nowhere near functions.c, so its usefulness as a long-lived test
feels questionable.  In any case, this fix is clearly correct given
the design choices of 7c337b6b5.

Per bug #17702 from Xin Wen.  Thanks to Daniel Gustafsson for
analysis.  Back-patch to v14 where the faulty commit came in
(before that, the responsibility for copying scribble-able
utility parse trees lay elsewhere).

Discussion: https://postgr.es/m/17702-ad24fdcdd1e9047a@postgresql.org
2022-11-29 11:46:33 -05:00
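
A sketch of the reported shape: a SQL-language function whose body is a
utility statement, executed more than once within a single query (names
are invented):

    CREATE FUNCTION ping() RETURNS void
        LANGUAGE SQL
        AS $$ NOTIFY ping_channel $$;

    -- The cached function is executed three times here, so the same utility
    -- parse tree is handed to ProcessUtility repeatedly; it must be copied.
    SELECT ping() FROM generate_series(1, 3);
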
Tom Lane 556c0b913b Remove bogus Assert and dead code in remove_useless_results_recurse().
The JOIN_SEMI case Assert'ed that there are no PlaceHolderVars that
need to be evaluated at the semijoin's RHS, which is wrong because
there could be some in the semijoin's qual condition.  However, there
could not be any references further up than that, and within the qual
there is not any way that such a PHV could have gone to null yet, so
we don't really need the PHV and there is no need to avoid making the
RHS-removal optimization.  The upshot is that there's no actual bug
in production code, and we ought to just remove this misguided Assert.

While we're here, also drop the JOIN_RIGHT case, which is dead code
because reduce_outer_joins() already got rid of JOIN_RIGHT.

Per bug #17700 from Xin Wen.  Uselessness of the JOIN_RIGHT case
pointed out by Richard Guo.  Back-patch to v12 where this code
was added.

Discussion: https://postgr.es/m/17700-2b5c10d917c30687@postgresql.org
2022-11-29 10:52:44 -05:00
Andrew Dunstan b5d8fd4182 Fix binary mismatch for MSVC plperl vs gcc built perl libs
When loading plperl built against Strawberry perl or the msys2 ucrt perl,
both of which are built with gcc, a binary mismatch is encountered which
looks like this:

loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)

To cure this we bring the handshake keys into sync by adding
NO_THREAD_SAFE_LOCALE to the defines used to build plperl.

Discussion: https://postgr.es/m/20211005004334.tgjmro4kuachwiuc@alap3.anarazel.de
Discussion: https://postgr.es/m/c2da86a0-2906-744c-923d-16da6047875e@dunslane.net

Backpatch to all live branches.
2022-11-27 09:18:14 -05:00
Tom Lane 3ae0c48a59 Remove temporary portlock directory during make [dist]clean.
Another oversight in 9b4eafcaf.
2022-11-26 10:30:46 -05:00
Andrew Dunstan fec24b7523 Add portlock directory to .gitignore
Commit 9b4eafcaf4 added creation of a directory to reserve TAP test
ports at the top of the build tree. In a non-vpath build this means the
top of the source tree, so it needs to be added to .gitignore.

As suggested by Michael Paquier

Backpatch to all live branches.
2022-11-26 07:47:01 -05:00
Andrew Dunstan fed54fc9a5 Allow building with MSVC and Strawberry perl
Strawberry uses __builtin_expect, which Visual C doesn't have. For this
case define it as a no-op. Solution taken from the vim sources.

Backpatch to all live branches
2022-11-25 15:37:33 -05:00
Dean Rasheed 04d61bfe64 Fix rule-detection code for MERGE.
Use the relation's rd_rules structure to test whether it has rules,
rather than the relhasrules flag, which might be out of date.

Reviewed by Tom Lane.

Backpatch to 15, where MERGE was added.

Discussion: https://postgr.es/m/CAEZATCVkBVZABfw71sYvkcPf6tarcOFST5Bc6AOi-LFT9YdccQ%40mail.gmail.com
2022-11-25 13:29:51 +00:00
Etsuro Fujita fc02019c09 Fix handling of pending inserts in nodeModifyTable.c.
Commit b663a4136, which allowed FDWs to INSERT rows in bulk, added code
to nodeModifyTable.c to flush pending inserts to the foreign-table
result relation(s) before completing processing of the ModifyTable node,
but the code failed to take into account the case where the INSERT query
has modifying CTEs, leading to incorrect results.

Also, that commit failed to flush pending inserts before firing BEFORE
ROW triggers, which is needed so that the rows are visible to such
triggers.

In that commit we scanned through EState's
es_tuple_routing_result_relations or es_opened_result_relations list to
find the foreign-table result relations to which pending inserts are
flushed, but that would be inefficient in some cases.  So to fix, 1) add
a List member to EState to record the insert-pending result relations,
and 2) modify nodeModifyTable.c so that it adds the foreign-table result
relation to the list in ExecInsert() if appropriate, and flushes pending
inserts properly using the list where needed.

While here, fix a copy-and-pasteo in a comment in ExecBatchInsert(),
which was added by that commit.

Back-patch to v14 where that commit appeared.

Discussion: https://postgr.es/m/CAPmGK16qutyCmyJJzgQOhfBq%3DNoGDqTB6O0QBZTihrbqre%2BoxA%40mail.gmail.com
2022-11-25 17:45:01 +09:00
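
A sketch of the problematic shape, assuming a postgres_fdw foreign table
ft configured with batch_size greater than one and a local table
local_queue (both invented here):

    -- The INSERT target is a batch-capable foreign table and the query has
    -- a modifying CTE; pending (batched) rows must be flushed at the right
    -- points for the overall result to be correct.
    WITH moved AS (
        DELETE FROM local_queue RETURNING id, payload
    )
    INSERT INTO ft (id, payload)
    SELECT id, payload FROM moved;
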
Amit Kapila 898ef41bf6 Fix uninitialized access to InitialRunningXacts during decoding.
In commit 272248a0c, we introduced an InitialRunningXacts array to
remember transactions and subtransactions that were running when the
xl_running_xacts record that we decoded was written.  This array was
allocated in the snapshot builder memory context after restoring a
serialized snapshot, but we forgot to reset the array while freeing the
builder memory context.  So, the next time we start decoding in the same
session without restoring any serialized snapshot, we ended up using the
uninitialized array, which can lead to unpredictable behavior.

This problem doesn't exist in HEAD as instead of using
InitialRunningXacts, we added the list of transaction IDs and
sub-transaction IDs, that have modified catalogs and are running during
snapshot serialization, to the serialized snapshot (see commit 7f13ac8123).

Reported-by: Maxim Orlov
Author: Masahiko Sawada
Reviewed-by: Amit Kapila, Maxim Orlov
Backpatch-through: 11
Discussion: https://postgr.es/m/CACG=ezZoz_KG+Ryh9MrU_g5e0HiVoHocEvqFF=NRrhrwKmEQJQ@mail.gmail.com
2022-11-25 09:38:03 +05:30
Alvaro Herrera f63f29733e
Make multixact error message more explicit
There are recent reports involving a very old error message that we have
no history of hitting -- perhaps a recently introduced bug.  Improve the
error message in an attempt to improve our chances of investigating the
bug.

Per reports from Dimos Stamatakis and Bob Krier.

Backpatch to 11.

Discussion: https://postgr.es/m/CO2PR0801MB2310579F65529380A4E5EDC0E20A9@CO2PR0801MB2310.namprd08.prod.outlook.com
Discussion: https://postgr.es/m/17518-04e368df5ad7f2ee@postgresql.org
2022-11-24 10:45:10 +01:00
Andrew Dunstan 2c0d0ee761
Fix perl warning from commit 9b4eafcaf4
per gripe from Andres Freund and Tom Lane

Backpatch to all live branches.
2022-11-23 07:14:50 -05:00
Tom Lane 2debceed29 YA attempt at taming worst-case behavior of get_actual_variable_range.
We've made multiple attempts at preventing get_actual_variable_range
from taking an unreasonable amount of time (3ca930fc3, fccebe421).
But there's still an issue for the very first planning attempt after
deletion of a large number of extremal-valued tuples.  While that
planning attempt will set "killed" bits on the tuples it visits and
thereby reduce effort for next time, there's still a lot of work it
has to do to visit the heap and then set those bits.  It's (usually?)
not worth it to do that much work at plan time to have a slightly
better estimate, especially in a context like this where the table
contents are known to be mutating rapidly.

Therefore, let's bound the amount of work to be done by giving up
after we've visited 100 heap pages.  Giving up just means we'll
fall back on the extremal value recorded in pg_statistic, so it
shouldn't mean that planner estimates suddenly become worthless.

Note that this means we'll still gradually whittle down the problem
by setting a few more index "killed" bits in each planning attempt;
so eventually we'll reach a good state (barring further deletions),
even in the absence of VACUUM.

Simon Riggs, per a complaint from Jakub Wartak (with cosmetic
adjustments by me).  Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com
2022-11-22 14:40:44 -05:00
Andrew Dunstan 153e215677 Prevent port collisions between concurrent TAP tests
Currently there is a race condition where, if concurrent TAP tests both
test that they can open a port, they will assume that it is free and use
it, causing one of them to fail.  To prevent this we record a reservation
using an exclusive lock, and any TAP test that discovers a reservation
checks to see if the reserving process is still alive, and looks for
another free port if it is.

Ports are reserved in a directory set by the environment setting
PG_TEST_PORT_DIR or, if that doesn't exist, in a subdirectory of the top
build directory as set by Makefile.global, or failing that in the test's
own tmp_check directory.

The prove_check recipe in Makefile.global.in is extended to export
top_builddir to the TAP tests. This was already exported by the
prove_installcheck recipes.

Per complaint from Andres Freund

Backpatched from 9b4eafcaf4 to all live branches

Discussion: https://postgr.es/m/20221002164931.d57hlutrcz4d2zi7@awork3.anarazel.de
2022-11-22 10:51:13 -05:00
Alvaro Herrera 1118a8d2c4
Remove useless MERGE test
This was trying to exercise an ERROR we don't actually have.

Backpatch to 15.

Reported by Teja Mupparti <Tejeswar.Mupparti@microsoft.com>
Discussion: https://postgr.es/m/SN6PR2101MB1040BDAF740EA4389484E92BF0079@SN6PR2101MB1040.namprd21.prod.outlook.com
2022-11-22 11:26:47 +01:00
Alvaro Herrera 1ad033df16
Ignore invalidated slots while computing oldest catalog Xmin
Once a logical slot has acquired a catalog_xmin, it doesn't let go of
it, even when invalidated by exceeding max_slot_wal_keep_size.  This
means that, from the point the slot is invalidated until it is dropped,
dead catalog tuples are no longer removed by vacuum.  This could be
catastrophic if catalog churn is high.

Change the computation of Xmin to ignore invalidated slots,
to prevent dead rows from accumulating.

Backpatch to 13, where slot invalidation appeared.

Author: Sirisha Chamarthi <sirichamarthi22@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAKrAKeUEDeqquN9vwzNeG-CN8wuVsfRYbeOUV9qKO_RHok=j+g@mail.gmail.com
2022-11-22 10:56:07 +01:00
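
Invalidated slots can be identified in pg_replication_slots (wal_status is
available in v13 and later); with the change above, their catalog_xmin no
longer holds back catalog vacuuming:

    -- Slots invalidated for exceeding max_slot_wal_keep_size report
    -- wal_status = 'lost'; their catalog_xmin is now ignored when computing
    -- the oldest catalog xmin.
    SELECT slot_name, slot_type, wal_status, catalog_xmin
    FROM pg_replication_slots;
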
Daniel Gustafsson 4f997ad062 Replace link to Hunspell with the current homepage
The Hunspell project moved from Sourceforge to Github sometime
in 2016, so update our links to match the new URL.  Backpatch
the doc changes to all supported versions.

Discussion: https://postgr.es/m/DC9A662A-360D-4125-A453-5A6CB9C6C4B4@yesql.se
Backpatch-through: v11
2022-11-21 23:25:48 +01:00
Tom Lane 0353db996e Add comments and a missing CHECK_FOR_INTERRUPTS in ts_headline.
I just spent an annoying amount of time reverse-engineering the
100%-undocumented API between ts_headline and the text search
parser's prsheadline function.  Add some commentary about that
while it's fresh in mind.  Also remove some unused macros in
wparser_def.c.

While at it, I noticed that when commit 78e73e875 added a
CHECK_FOR_INTERRUPTS call in TS_execute_recurse, it missed
doing so in the parallel function TS_phrase_execute, which
surely needs one just as much.

Back-patch because of the missing CHECK_FOR_INTERRUPTS.
Might as well back-patch the rest of this too.
2022-11-21 17:07:07 -05:00
Tom Lane b8988cf1d0 Mark pageinspect's disk-accessing functions as parallel restricted.
These functions have been marked parallel safe, but the buildfarm's
response to commit e2933a6e1 exposed the flaw in that thinking:
if you try to use them on a temporary table, and they run inside
a parallel worker, they'll fail with "cannot access temporary tables
during a parallel operation".

Fix that by marking them parallel restricted instead.  Maybe someday
we'll have a better answer and can reverse this decision.

Back-patch to v15.  To go back further, we'd have to devise variant
versions of pre-1.10 pageinspect versions.  Given the lack of field
complaints, it doesn't seem worth the trouble.  We'll just deem
this case unsupported pre-v15.  (If anyone does complain, it might
be good enough to update the markings manually in their DBs.)

Discussion: https://postgr.es/m/E1ox94a-000EHu-VH@gemulon.postgresql.org
2022-11-21 15:37:10 -05:00
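
For anyone who does hit this on an older pageinspect version, the manual
adjustment alluded to above would look something like this; the function
and argument types are examples only and must match the installed
extension version:

    -- Hypothetical example: mark a disk-accessing pageinspect function as
    -- parallel restricted by hand (check the argument types of your
    -- installed version first).
    ALTER FUNCTION get_raw_page(text, integer) PARALLEL RESTRICTED;
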