Commit Graph

36788 Commits

Heikki Linnakangas
db7394d4de Fix another bug in parent page splitting during GiST index build.
Yet another bug of the same ilk as commits a7ee7c851 and 741b88435. In
741b88435, we took care to clear the memorized location of the
downlink when we split the parent page, because splitting the parent
page can move the downlink. But we missed that even *updating* a tuple
on the parent can move it, because updating a tuple on a gist page is
implemented as a delete+insert, so the updated tuple gets moved to the
end of the page.

This commit fixes the bug in two different ways (belt and suspenders):

1. Clear the downlink when we update a tuple on the parent page, even
   if it's not split. This is the same approach as in commits a7ee7c851
   and 741b88435.

   I also noticed that gistFindCorrectParent did not clear the
   'downlinkoffnum' when it stepped to the right sibling. Fix that
   too, as it seems like a clear bug even though I haven't been able
   to find a test case to hit that.

2. Change gistFindCorrectParent so that it treats 'downlinkoffnum'
   merely as a hint. It now always first checks if the downlink is
   still at that location, and if not, it scans the page like before.
   That's more robust if there are still more cases where we fail to
   clear 'downlinkoffnum' that we haven't yet uncovered. With this,
   it's no longer necessary to meticulously clear 'downlinkoffnum',
   so this makes the previous fixes unnecessary, but I didn't revert
   them because it still seems nice to clear it when we know that the
   downlink has moved.

Also add the test case using the same test data that Alexander
posted. I tried to reduce it to a smaller test, and I also tried to
reproduce this with different test data, but I was not able to, so
let's just include what we have.

Backpatch to v12, like the previous fixes.

Reported-by: Alexander Lakhin
Discussion: https://www.postgresql.org/message-id/18129-caca016eaf0c3702@postgresql.org
2023-09-26 14:15:35 +03:00
Thomas Munro
45d1fe8b53 Fix edge-case for xl_tot_len broken by bae868ca.
bae868ca removed a check that was still needed.  If you had an
xl_tot_len at the end of a page that was too small for a record header,
but not big enough to span onto the next page, we'd immediately perform
the CRC check using a bogus large length.  Because of arbitrary coding
differences between the CRC implementations on different platforms,
nothing very bad happened on common modern systems.  On systems using
the _sb8.c fallback we could segfault.

Restore that check, add a new assertion and supply a test for that case.
Back-patch to 12, like bae868ca.

Tested-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGLCkTT7zYjzOxuLGahBdQ%3DMcF%3Dz5ZvrjSOnW4EDhVjT-g%40mail.gmail.com
2023-09-26 10:54:47 +13:00
Andres Freund
d1c76fdec3 pg_dump: tests: Correct test condition for invalid databases
For some reason I used not_like = { pg_dumpall_dbprivs => 1, } in the test
condition of one of the tests added in c66a7d75e6. That doesn't make sense
for two reasons: 1) not_like isn't a valid test condition; 2) the database
should not be dumped in any of the tests.  Due to 1), the test achieved its
goal, but clearly the formulation is confusing.  Instead use like => {}, with
a comment explaining why.

Reported-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/3ddf79f2-8b7b-a093-11d2-5c739bc64f86@eisentraut.org
Backpatch: 11-, like c66a7d75e6
2023-09-25 12:10:44 -07:00
Tom Lane
8cfc3e80b6 Collect dependency information for parsed CallStmts.
Parse analysis of a CallStmt will inject mutable information,
for instance the OID of the called procedure, so that subsequent
DDL may create a need to re-parse the CALL.  We failed to detect
this for CALLs in plpgsql routines, because no dependency information
was collected when putting a CallStmt into the plan cache.  That
could lead to misbehavior or strange errors such as "cache lookup
failed".

Before commit ee895a655, the issue would only manifest for CALLs
appearing in atomic contexts, because we re-planned non-atomic
CALLs every time through anyway.
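
As an illustration only (not the reporter's test case; object names here are
made up), the scenario is a plpgsql routine whose cached CALL must be
re-parsed after the callee is recreated; before this fix, the second
invocation could misbehave or fail with "cache lookup failed":

    CREATE PROCEDURE callee() LANGUAGE plpgsql AS $$ BEGIN NULL; END $$;
    CREATE FUNCTION caller() RETURNS void LANGUAGE plpgsql AS
      $$ BEGIN CALL callee(); END $$;
    SELECT caller();                -- the parsed CALL goes into the plan cache
    DROP PROCEDURE callee();
    CREATE PROCEDURE callee() LANGUAGE plpgsql AS $$ BEGIN NULL; END $$;
    SELECT caller();                -- cached CALL must be re-parsed to pick up the new OID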

It is now apparent that extract_query_dependencies() probably
needs a special case for every utility statement type for which
stmt_requires_parse_analysis() returns true.  I wanted to add
something like Assert(!stmt_requires_parse_analysis(...)) when
falling out of extract_query_dependencies_walker without doing
anything, but there are API issues as well as a more fundamental
point: stmt_requires_parse_analysis is supposed to be applied to
raw parser output, so it'd be cheating to assume it will give the
correct answer for post-parse-analysis trees.  I contented myself
with adding a comment.

Per bug #18131 from Christian Stork.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/18131-576854e79c5cd264@postgresql.org
2023-09-25 14:41:57 -04:00
Tom Lane
52a676ef4f Limit to_tsvector_byid's initial array allocation to something sane.
The initial estimate of the number of distinct ParsedWords is just
that: an estimate.  Don't let it exceed what palloc is willing to
allocate.  If in fact we need more entries, we'll eventually fail
trying to enlarge the array.  But if we don't, this allows success on
inputs that currently draw "invalid memory alloc request size".

Per bug #18080 from Uwe Binder.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/18080-d5c5e58fef8c99b7@postgresql.org
2023-09-25 11:50:28 -04:00
Alvaro Herrera
d3946c4f62 pg_upgrade: check for types removed in pg12
Commit cda6a8d01d removed a few datatypes, but didn't update
pg_upgrade --check to throw an error if these types are used.  So users
find that pg_upgrade --check tells them that everything is fine, only to
fail when the real upgrade is attempted.
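
For reference, a sketch (not the exact implementation) of the kind of query
such a check runs against the old, pre-12 cluster, looking for columns whose
types were removed by cda6a8d01d; the regtype casts only resolve on a server
where these types still exist:

    SELECT n.nspname, c.relname, a.attname
    FROM pg_catalog.pg_class c
         JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid
         JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid
    WHERE NOT a.attisdropped
      AND a.atttypid IN ('pg_catalog.abstime'::regtype,
                         'pg_catalog.reltime'::regtype,
                         'pg_catalog.tinterval'::regtype);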

Reviewed-by: Tristan Partin <tristan@neon.tech>
Reviewed-by: Suraj Kharage <suraj.kharage@enterprisedb.com>
Discussion: https://postgr.es/m/202309201654.ng4ksea25mti@alvherre.pgsql
2023-09-25 14:34:05 +02:00
Thomas Munro
07896f468f Don't use Perl pack('Q') in 039_end_of_wal.pl.
'Q' for 64 bit integers turns out not to work on 32 bit Perl, as
revealed by the build farm.  Use 'II' instead, and deal with endianness.

Back-patch to 12, like bae868ca.

Discussion: https://postgr.es/m/ZQ4r1vHcryBsSi_V%40paquier.xyz
2023-09-23 14:15:37 +12:00
Thomas Munro
6606c57162 Don't trust unvalidated xl_tot_len.
xl_tot_len comes first in a WAL record.  Usually we don't trust it to be
the true length until we've validated the record header.  If the record
header was split across two pages, previously we wouldn't do the
validation until after we'd already tried to allocate enough memory to
hold the record, which was bad because it might actually be garbage
bytes from a recycled WAL file, so we could try to allocate a lot of
memory.  Release 15 made it worse.

Since 70b4f82a4b, we'd at least generate an end-of-WAL condition if the
garbage 4 byte value happened to be > 1GB, but we'd still try to
allocate up to 1GB of memory bogusly otherwise.  That was an
improvement, but unfortunately release 15 tries to allocate another
object before that, so you could get a FATAL error and recovery could
fail.

We can fix both variants of the problem more fundamentally using
pre-existing page-level validation, if we just re-order some logic.

The new order of operations in the split-header case defers all memory
allocation based on xl_tot_len until we've read the following page.  At
that point we know that its first few bytes are not recycled data, by
checking its xlp_pageaddr, and that its xlp_rem_len agrees with
xl_tot_len on the preceding page.  That is strong evidence that
xl_tot_len was truly the start of a record that was logged.

This problem was most likely to occur on a standby, because
walreceiver.c recycles WAL files without zeroing out trailing regions of
each page.  We could fix that too, but it wouldn't protect us from rare
crash scenarios where the trailing zeroes don't make it to disk.

With reliable xl_tot_len validation in place, the ancient policy of
considering malloc failure to indicate corruption at end-of-WAL seems
quite surprising, but changing that is left for later work.

Also included is a new TAP test to exercise various cases of end-of-WAL
detection by writing contrived data into the WAL from Perl.

Back-patch to 12.  We decided not to put this change into the final
release of 11.

Author: Thomas Munro <thomas.munro@gmail.com>
Author: Michael Paquier <michael@paquier.xyz>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Noah Misch <noah@leadboat.com> (the idea, not the code)
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Sergei Kornilov <sk@zsrv.org>
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/17928-aa92416a70ff44a2%40postgresql.org
2023-09-23 10:29:44 +12:00
Tom Lane
555a8d9d37 Fix COMMIT/ROLLBACK AND CHAIN in the presence of subtransactions.
In older branches, COMMIT/ROLLBACK AND CHAIN failed to propagate
the current transaction's properties to the new transaction if
there was any open subtransaction (unreleased savepoint).
Instead, some previous transaction's properties would be restored.
This is because the "if (s->chain)" check in CommitTransactionCommand
examined the wrong instance of the "chain" flag and falsely
concluded that it didn't need to save transaction properties.
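
A minimal way to observe the property propagation (a sketch, assuming the
default isolation level is READ COMMITTED):

    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SAVEPOINT sp;                 -- leave a subtransaction open
    COMMIT AND CHAIN;             -- the chained transaction should stay REPEATABLE READ
    SHOW transaction_isolation;   -- affected branches could report the wrong level here
    COMMIT;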

Our regression tests would have noticed this, except they used
identical transaction properties for multiple tests in a row,
so that the faulty behavior was not distinguishable from correct
behavior.

Commit 12d768e70 fixed the problem in v15 and later, but only rather
accidentally, because I removed the "if (s->chain)" test to avoid a
compiler warning, while not realizing that the warning was flagging a
real bug.

In v14 and before, remove the if-test and save transaction properties
unconditionally; just as in the newer branches, that's not expensive
enough to justify thinking harder.

Add the comment and extra regression test to v15 and later to
forestall any future recurrence, but there's no live bug in those
branches.

Patch by me, per bug #18118 from Liu Xiang.  Back-patch to v12 where
the AND CHAIN feature was added.

Discussion: https://postgr.es/m/18118-4b72fcbb903aace6@postgresql.org
2023-09-21 23:11:31 -04:00
Etsuro Fujita
a3a66d3ff1 Update comment about set_join_pathlist_hook().
The comment introduced by commit e7cb7ee14 was a bit too terse, which
could lead to extensions doing different things within the hook function
than we intend to allow.  Extend the comment to explain what they can do
within the hook function.

Back-patch to all supported branches.

In passing, I rephrased a nearby comment that I recently added to the
back branches.

Reviewed by David Rowley and Andrei Lepikhov.

Discussion: https://postgr.es/m/CAPmGK15SBPA1nr3Aqsdm%2BYyS-ay0Ayo2BRYQ8_A2To9eLqwopQ%40mail.gmail.com
2023-09-21 19:45:07 +09:00
Heikki Linnakangas
459439f728 Fix GiST README's explanation of the NSN cross-check.
The text got the condition backwards: it's "NSN > LSN", not "NSN < LSN".
While we're at it, expand it a little for clarity.

Reviewed-by: Daniel Gustafsson
Discussion: https://www.postgresql.org/message-id/4cb46e18-e688-524a-0f73-b1f03ed5d6ee@iki.fi
2023-09-19 11:56:16 +03:00
Michael Paquier
35e8ba3e54 Fix assertion failure with PL/Python exceptions
PLy_elog() was not able to correctly handle cases where a SPI call
failed, which would fill in a DETAIL string able to trigger an
assertion.  We may want to improve this infrastructure so that it is able
to provide any extra detail information provided by an error stack, but
this is left as a future improvement as it could impact existing error
stacks and any applications that depend on them.  For now, the assertion
is removed and a regression test is added to cover the case of a failure
with a detail string.
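
A sketch of the kind of case that reaches this path (assuming the plpython3u
language is installed; a unique violation carries a DETAIL string back
through PLy_elog()):

    CREATE TABLE t_uniq (a int PRIMARY KEY);
    INSERT INTO t_uniq VALUES (1);
    DO LANGUAGE plpython3u $$
    plpy.execute("INSERT INTO t_uniq VALUES (1)")  # SPI failure with a DETAIL string
    $$;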

This problem exists since 2bd78eb8d5, so backpatch all the way down
with tweaks to the regression tests output added where required.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/18070-ab9c171cbf4ebb0f@postgresql.org
Backpatch-through: 11
2023-09-19 08:31:27 +09:00
Tom Lane
e6e50ada07 Don't crash if cursor_to_xmlschema is used on a non-data-returning Portal.
cursor_to_xmlschema() assumed that any Portal must have a tupDesc,
which is not so.  Add a defensive check.

It's plausible that this mistake occurred because of the rather
poorly chosen name of the lookup function SPI_cursor_find(),
which in such cases is returning something that isn't very much
like a cursor.  Add some documentation to try to forestall future
errors of the same ilk.

Report and patch by Boyu Yang (docs changes by me).  Back-patch
to all supported branches.

Discussion: https://postgr.es/m/dd343010-c637-434c-a8cb-418f53bda3b8.yangboyu.yby@alibaba-inc.com
2023-09-18 14:27:47 -04:00
Tom Lane
c5b7f791da Track nesting depth correctly when drilling down into RECORD Vars.
expandRecordVariable() failed to adjust the parse nesting structure
correctly when recursing to inspect an outer-level Var.  This could
result in assertion failures or core dumps in corner cases.

Likewise, get_name_for_var_field() failed to adjust the deparse
namespace stack correctly when recursing to inspect an outer-level
Var.  In this case the likely result was a "bogus varno" error
while deparsing a view.

Per bug #18077 from Jingzhou Fu.  Back-patch to all supported
branches.

Richard Guo, with some adjustments by me

Discussion: https://postgr.es/m/18077-b9db97c6e0ab45d8@postgresql.org
2023-09-15 17:01:26 -04:00
Tom Lane
59bc0dfe44 Fix get_expr_result_type() to find field names for RECORD Consts.
This is a back-patch of commit d57534740 ("Fix EXPLAIN of SEARCH
BREADTH FIRST with a constant initial value") into pre-v14 branches.
At the time I'd thought it was not needed in branches that lack the
SEARCH/CYCLE feature, but that was just a failure of imagination.
It's possible to demonstrate "record type has not been registered"
failures in older branches too, during deparsing of views that contain
references to fields of composite constants.

Back-patch only the code changes, as the test cases added by d57534740
all require SEARCH/CYCLE syntax.  A suitable test case will be added
in the upcoming fix for bug #18077.

Discussion: https://postgr.es/m/17644-3bd1f3036d6d7a16@postgresql.org
Discussion: https://postgr.es/m/3607145.1694803130@sss.pgh.pa.us
2023-09-15 16:39:27 -04:00
Michael Paquier
f18efc6f2b Revert "Improve error message on snapshot import in snapmgr.c"
This reverts commit a0d87bcd9b, following a remark from Andres Freund
that the new error can be triggered with an incorrect SET TRANSACTION
SNAPSHOT command without being really helpful for the user as it uses
the internal file name.
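
For instance, a mistyped snapshot name already reaches this path from SQL (a
sketch with a made-up identifier); after the revert, the error is again the
generic one:

    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '00000003-0000001B-1';  -- no such exported snapshot
    -- ERROR:  invalid snapshot identifier: "00000003-0000001B-1"
    ROLLBACK;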

Discussion: https://postgr.es/m/20230914020724.hlks7vunitvtbbz4@awork3.anarazel.de
Backpatch-through: 11
2023-09-14 16:00:43 +09:00
Michael Paquier
d98cbea201 Improve error message on snapshot import in snapmgr.c
When a snapshot file fails to be read in ImportSnapshot(), it would
issue an ERROR as "invalid snapshot identifier" when opening a stream
for it in read-only mode.  This error message is reworded to be the same
as all the other messages used in this case on failure, which is useful
when debugging this area.

Thinko introduced by bb446b689b where snapshot imports have been
added.  A backpatch down to 11 is done as this can improve any work
related to snapshot imports in older branches.

Author: Bharath Rupireddy
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/CALj2ACWmr=3KdxDkm8h7Zn1XxBoF6hdzq8WQyMn2y1OL5RYFrg@mail.gmail.com
Backpatch-through: 11
2023-09-14 10:30:32 +09:00
Thomas Munro
189097db82 Fix exception safety bug in typcache.c.
If an out-of-memory error was thrown at an unfortunate time,
ensure_record_cache_typmod_slot_exists() could leak memory and leave
behind a global state that produced an infinite loop on the next call.

Fix by merging RecordCacheArray and RecordIdentifierArray into a single
array.  With only one allocation or re-allocation, there is no
intermediate state.

Back-patch to all supported releases.

Reported-by: "James Pang (chaolpan)" <chaolpan@cisco.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/PH0PR11MB519113E738814BDDA702EDADD6EFA%40PH0PR11MB5191.namprd11.prod.outlook.com
2023-09-13 14:48:00 +12:00
Amit Kapila
c570bb4d61 Fix uninitialized access to InitialRunningXacts during decoding after ERROR.
The transactions and subtransactions array that was allocated under
snapshot builder memory context and recorded during decoding was not
cleared in case of errors. This can result in an assertion failure if we
attempt to retry logical decoding within the same session. To address this
issue, we register a callback function under the snapshot builder memory
context to clear the recorded transactions and subtransactions array along
with the context.

This problem doesn't exist in PG16 and HEAD as instead of using
InitialRunningXacts, we added the list of transaction IDs and
sub-transaction IDs, that have modified catalogs and are running during
snapshot serialization, to the serialized snapshot (see commit 7f13ac8123).

Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 11
Discussion: http://postgr.es/m/18055-ab3beed9f4b7b7d6@postgresql.org
2023-09-12 09:58:50 +05:30
Fujii Masao
5dc093eace Make recovery report error message when invalid page header is found.
Commit 0668719801 changed XLogPageRead() so that it validated the page
header and, if an invalid page header was found, reset the error message
and retried reading the page, to fix the scenario where a streaming standby
got stuck at a continuation record. This change hid the error message
about the invalid page header, which would make it harder for users to
investigate what the actual issue found in WAL was.

To fix the issue, this commit makes XLogPageRead() report the error
message when invalid page header is found.

When not in standby mode, an invalid page header should cause recovery
to end, not retry reading the page, so XLogPageRead() doesn't need to
validate the page header for the retry. Instead, ReadPageInternal() should
be responsible for the validation in that case. Therefore this commit
changes XLogPageRead() so that if not in standby mode it doesn't validate
the page header for the retry.

This commit has been originally pushed as of 68601985e6 for 15 and
newer versions, but not to the older branches.  A recent investigation
related to WAL replay failures has shown that the lack of this patch
in 12~14 is an issue, as we want to be able to improve the WAL reader to
make a correct distinction between the end-of-wal and OOM cases when
validating record headers.  REL_11_STABLE is left out as it will be
EOL'd soon.

Reported-by: Yugo Nagata
Author: Yugo Nagata, Kyotaro Horiguchi
Reviewed-by: Ranier Vilela, Fujii Masao
Discussion: https://postgr.es/m/20210718045505.32f463ed6c227111038d8ae4@sraoss.co.jp
Discussion: https://postgr.es/m/17928-aa92416a70ff44a2@postgresql.org
Backpatch-through: 12
2023-09-12 09:35:51 +09:00
Michael Paquier
6cdd7b805e pg_basebackup: Generate valid temporary slot names under PQbackendPID()
pgbouncer can cause PQbackendPID() to return negative values because it
fills be_pid with random bytes (even these days, pid_max can only be
set up to 2^22 on 64-bit machines on Linux, for example, so this cannot
happen with normal PID numbers).  When this happens, pg_basebackup may
generate a temporary slot name that may not be accepted by the parser,
leading to spurious failures, like:
pg_basebackup: error: could not send replication command
ERROR:  replication slot name "pg_basebackup_-1201966863" contains
invalid character

This commit fixes that problem by formatting the result from
PQbackendPID() as an unsigned integer when creating the temporary
replication slot name, so that the invalid character is gone and the
command can be parsed.

Author: Jelte Fennema
Reviewed-by: Daniel Gustafsson, Nishant Sharma
Discussion: https://postgr.es/m/CAGECzQQOGvYfp8ziF4fWQ_o8s2K7ppaoWBQnTmdakn3s-4Z=5g@mail.gmail.com
Backpatch-through: 11
2023-09-07 14:12:33 +09:00
Michael Paquier
82fa8ae5f9 Fix out-of-bound read in gtsvector_picksplit()
This could lead to an imprecise choice when splitting an index page of a
GiST index on a tsvector, deciding which entries should remain on the
old page and which entries should move to a new page.

This is wrong since tsearch2 has been moved into core with commit
140d4ebcb4, so backpatch all the way down.  This error has been
spotted by valgrind.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
Backpatch-through: 11
2023-09-04 14:55:55 +09:00
Michael Paquier
420c8dc7ec Avoid possible overflow with ltsGetFreeBlock() in logtape.c
nFreeBlocks, defined as a long, stores the number of free blocks in a
logical tape.  ltsGetFreeBlock() has been using an int to store the
value of nFreeBlocks, which could lead to overflows on platforms where
long and int are not the same size (in short everything except Windows
where long is 4 bytes).

The problematic intermediate variable is switched to be a long instead
of an int.

Issue introduced by c02fdc9223, so backpatch down to 13.

Author: Ranier Vilela
Reviewed-by: Peter Geoghegan, David Rowley
Discussion: https://postgr.es/m/CAEudQApLDWCBR_xmwNjGBrDo+f+S4E87x3s7-+hoaKqYdtC4JQ@mail.gmail.com
Backpatch-through: 13
2023-08-30 08:03:54 +09:00
Heikki Linnakangas
a01e479e84 Initialize ListenSocket array earlier.
After commit b0bea38705, syslogger prints 63 warnings about failing to
close a listen socket at postmaster startup. That's because the
syslogger process forks before the ListenSockets array is initialized,
so ClosePostmasterPorts() calls "close(0)" 64 times. The first call
succeeds, because fd 0 is stdin.

This has been like this since commit 9a86f03b4e in version 13, which
moved the SysLogger_Start() call to before initializing ListenSockets.
We just didn't notice until commit b0bea38705 added the LOG message.

Reported by Michael Paquier and Jeff Janes.

Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/ZOvvuQe0rdj2slA9%40paquier.xyz
Discussion: https://www.postgresql.org/message-id/ZO0fgDwVw2SUJiZx@paquier.xyz#482670177eb4eaf4c9f03c1eed963e5f
Backpatch-through: 13
2023-08-29 09:12:17 +03:00
Tom Lane
27566bcf3c Avoid unnecessary plancache revalidation of utility statements.
Revalidation of a plancache entry (after a cache invalidation event)
requires acquiring a snapshot.  Normally that is harmless, but not
if the cached statement is one that needs to run without acquiring a
snapshot.  We were already aware of that for TransactionStmts,
but for some reason hadn't extrapolated to the other statements that
PlannedStmtRequiresSnapshot() knows mustn't set a snapshot.  This can
lead to unexpected failures of commands such as SET TRANSACTION
ISOLATION LEVEL.  We can fix it in the same way, by excluding those
command types from revalidation.

However, we can do even better than that: there is no need to
revalidate for any statement type for which parse analysis, rewrite,
and plan steps do nothing interesting, which is nearly all utility
commands.  To mechanize this, invent a parser function
stmt_requires_parse_analysis() that tells whether parse analysis does
anything beyond wrapping a CMD_UTILITY Query around the raw parse
tree.  If that's what it does, then rewrite and plan will just
skip the Query, so that it is not possible for the same raw parse
tree to produce a different plan tree after cache invalidation.

stmt_requires_parse_analysis() is basically equivalent to the
existing function analyze_requires_snapshot(), except that for
obscure reasons that function omits ReturnStmt and CallStmt.
It is unclear whether those were oversights or intentional.
I have not been able to demonstrate a bug from not acquiring a
snapshot while analyzing these commands, but at best it seems mighty
fragile.  It seems safer to acquire a snapshot for parse analysis of
these commands too, which allows making stmt_requires_parse_analysis
and analyze_requires_snapshot equivalent.

In passing this fixes a second bug, which is that ResetPlanCache
would exclude ReturnStmts and CallStmts from revalidation.
That's surely *not* safe, since they contain parsable expressions.

Per bug #18059 from Pavel Kulakov.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/18059-79c692f036b25346@postgresql.org
2023-08-24 12:02:40 -04:00
Andrew Dunstan
1bb619d4d6 Cache by-reference missing values in a long lived context
Attribute missing values might be needed past the lifetime of the tuple
descriptors from which they are extracted. To avoid possibly using
pointers for by-reference values which might thus be left dangling, we
cache a datumCopy'd version of the datum in the TopMemoryContext. Since
we first search for the value, this only needs to be done once per
session for any such value.
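
For context, a sketch of how such a by-reference missing value arises (the
"fast default" path added in v11, stored in pg_attribute.attmissingval):

    CREATE TABLE t (a int);
    INSERT INTO t VALUES (1);
    -- no table rewrite: existing rows pick the value up from the stored missing value
    ALTER TABLE t ADD COLUMN b text DEFAULT 'filler';
    SELECT b FROM t;   -- returns 'filler' for the pre-existing row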

Original complaint from Tom Lane, idea for mitigation by Andrew Dunstan,
tweaked by Tom Lane.

Backpatch to version 11 where missing values were introduced.

Discussion: https://postgr.es/m/1306569.1687978174@sss.pgh.pa.us
2023-08-22 15:15:31 -04:00
Jeff Davis
4caa9e33ec Remove test from commit fa2e874946.
The fix itself is fine, but the test revealed other problems related
to parallel query that are not easily fixable. Remove the test for
now to fix the buildfarm.

Discussion: https://postgr.es/m/88825.1691665432@sss.pgh.pa.us
Backpatch-through: 11
2023-08-10 10:26:46 -07:00
Jeff Davis
90cadfd499 Recalculate search_path after ALTER ROLE.
Renaming a role can affect the meaning of the special string $user, so
must cause search_path to be recalculated.

Discussion: https://postgr.es/m/186761d32c0255debbdf50b6310b581b9c973e6c.camel@j-davis.com
Reviewed-by: Nathan Bossart, Michael Paquier
Backpatch-through: 11
2023-08-07 15:12:09 -07:00
Noah Misch
b1b585e0fc Reject substituting extension schemas or owners matching ["$'\].
Substituting such values in extension scripts facilitated SQL injection
when @extowner@, @extschema@, or @extschema:...@ appeared inside a
quoting construct (dollar quoting, '', or "").  No bundled extension was
vulnerable.  Vulnerable uses do appear in a documentation example and in
non-bundled extensions.  Hence, the attack prerequisite was an
administrator having installed files of a vulnerable, trusted,
non-bundled extension.  Subject to that prerequisite, this enabled an
attacker having database-level CREATE privilege to execute arbitrary
code as the bootstrap superuser.  By blocking this attack in the core
server, there's no need to modify individual extensions.  Back-patch to
v11 (all supported versions).
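
A schematic of the vulnerable pattern (a made-up extension script fragment,
not taken from any bundled extension): @extschema@ is substituted textually
even inside a quoting construct, so a schema name containing characters such
as ' or $ could break out of the literal, which is why such names are now
rejected.

    -- fragment of a hypothetical extension script
    CREATE FUNCTION report_schema() RETURNS text LANGUAGE sql
    AS $$ SELECT '@extschema@' $$;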

Reported by Micah Gate, Valerie Woolard, Tim Carey-Smith, and Christoph
Berg.

Security: CVE-2023-39417
2023-08-07 06:06:00 -07:00
Peter Eisentraut
2f89d0c97d Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 2cfadec9afd05853f9bf6dd83f6cf7fe96f9f3cf
2023-08-07 12:30:06 +02:00
Etsuro Fujita
4eb8b9c282 Update comments on CustomPath struct.
Commit e7cb7ee14 allowed custom scan providers to create CustomPath
paths for join relations as well, but missed updating the comments.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAPmGK15ODkN%2B%3DhkBCufj1HBW0x5OTb65Xuy7ryXchMdiCMpx_g%40mail.gmail.com
2023-08-03 17:15:06 +09:00
David Rowley
74a5bf1b67 Fix overly strict Assert in jsonpath code
This was failing for queries which try to get the .type() of a
jpiLikeRegex.  For example:

select jsonb_path_query('["string", "string"]',
                        '($[0] like_regex ".{7}").type()');

Reported-by: Alexander Kozhemyakin
Bug: #18035
Discussion: https://postgr.es/m/18035-64af5cdcb5adf2a9@postgresql.org
Backpatch-through: 12, where SQL/JSON path was added.
2023-08-02 01:41:55 +12:00
Etsuro Fujita
730f983eff Disallow replacing joins with scans in problematic cases.
Commit e7cb7ee14, which introduced the infrastructure for FDWs and
custom scan providers to replace joins with scans, failed to add support
for handling pseudoconstant quals assigned to replaced joins in
createplan.c, leading to an incorrect plan without a gating Result node
when postgres_fdw replaced a join with such a qual.

To fix, we could add the support by 1) modifying the ForeignPath and
CustomPath structs to store the list of RestrictInfo nodes to apply to
the join, as in JoinPaths, if they represent foreign and custom scans
replacing a join with a scan, and by 2) modifying create_scan_plan() in
createplan.c to use that list in that case, instead of the
baserestrictinfo list, to get pseudoconstant quals assigned to the join;
but #1 would cause an ABI break.  So fix by modifying the infrastructure
to just disallow replacing joins with such quals.
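
For illustration, a pseudoconstant qual references no columns of the joined
relations and contains no volatile functions, so it should be evaluated once
as a one-time filter in a gating Result node above the join (a sketch,
assuming ft1 and ft2 are postgres_fdw foreign tables):

    SELECT *
    FROM ft1 JOIN ft2 ON ft1.id = ft2.id
    WHERE current_setting('application_name') = 'reporting';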

Back-patch to all supported branches.

Reported by Nishant Sharma.  Patch by me, reviewed by Nishant Sharma and
Richard Guo.

Discussion: https://postgr.es/m/CADrsxdbcN1vejBaf8a%2BQhrZY5PXL-04mCd4GDu6qm6FigDZd6Q%40mail.gmail.com
2023-07-28 15:45:06 +09:00
Tom Lane
288b4288c3 Raise fixed token-length limit in hba.c.
Historically, hba.c limited tokens in the authentication configuration
files (pg_hba.conf and pg_ident.conf) to less than 256 bytes.  We have
seen a few reports of this limit causing problems; notably, for
moderately-complex LDAP configurations.  Increase the limit to 10240
bytes as a low-risk stop-gap solution.

In v13 and earlier, this also requires raising MAX_LINE, the limit
on overall line length.  I'm hesitant to make this code consume
too much stack space, so I only raised that to 20480 bytes.

Discussion: https://postgr.es/m/1588937.1690221208@sss.pgh.pa.us
2023-07-27 12:07:48 -04:00
Tom Lane
291c025402 Guard against null plan pointer in CachedPlanIsSimplyValid().
If both the passed-in plan pointer and plansource->gplan are
NULL, CachedPlanIsSimplyValid would think that the plan pointer
is possibly-valid and try to dereference it.  For the one extant
call site in plpgsql, this situation doesn't normally happen,
which is why we've not noticed. However, it appears to be possible
if the previous use of the cached plan failed, as per report from
Justin Pryzby.  Add an extra check to prevent crashing.
Back-patch to v13 where this code was added.

Discussion: https://postgr.es/m/ZLlV+STFz1l/WhAQ@telsasoft.com
2023-07-20 14:23:46 -04:00
Michael Paquier
b3ca4f0a5c Fix indentation in twophase.c
This has been missed in cb0cca1, noticed before buildfarm member koel
was able to complain while poking at a different patch.  Like the
other commit, backpatch all the way down to limit the odds of merge
conflicts.

Backpatch-through: 11
2023-07-18 14:04:51 +09:00
Michael Paquier
db59108a2b Fix recovery of 2PC transaction during crash recovery
A crash in the middle of a checkpoint with some two-phase state data
already flushed to disk by this checkpoint could cause a follow-up crash
recovery to recover the same transaction twice, once from what has been
found in pg_twophase/ at the beginning of recovery and a second time
when replaying its corresponding record.

This would lead to FATAL failures in the startup process during
recovery, where the same transaction would have a state recovered twice
instead of once:
LOG:  recovering prepared transaction 731 from shared memory
LOG:  recovering prepared transaction 731 from shared memory
FATAL:  lock ExclusiveLock on object 731/0/0 is already held

This issue is fixed by skipping the addition of any 2PC state coming
from a record whose equivalent 2PC state file has already been loaded in
TwoPhaseState at the beginning of recovery by restoreTwoPhaseData(),
which is OK as long as the system has not reached a consistent state.

The timing needed to produce this messed-up recovery processing is very
racy, and it is very unlikely to happen.  The thread that reported the issue
demonstrated the bug using injection points to force a PANIC in the
middle of a checkpoint.

Issue introduced in 728bd99, so backpatch all the way down.

Reported-by: "suyu.cmj" <mengjuan.cmj@alibaba-inc.com>
Author: "suyu.cmj" <mengjuan.cmj@alibaba-inc.com>
Author: Michael Paquier
Discussion: https://postgr.es/m/109e6994-b971-48cb-84f6-829646f18b4c.mengjuan.cmj@alibaba-inc.com
Backpatch-through: 11
2023-07-18 13:44:33 +09:00
Michael Paquier
bdaaf1bf1d Add indisreplident to fields refreshed by RelationReloadIndexInfo()
RelationReloadIndexInfo() is a fast-path used for index reloads in the
relation cache, and it has always forgotten about updating
indisreplident, which is something that would happen after an index is
selected for a replica identity.  This can lead to incorrect cache
information provided when executing a command in a transaction context
that updates indisreplident.
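
For reference, a sketch of the operation that flips this flag:

    CREATE TABLE t (id int NOT NULL);
    CREATE UNIQUE INDEX t_id_key ON t (id);
    ALTER TABLE t REPLICA IDENTITY USING INDEX t_id_key;  -- sets pg_index.indisreplident
    SELECT indisreplident FROM pg_index
    WHERE indexrelid = 't_id_key'::regclass;               -- true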

None of the code paths currently on HEAD that need to check
pg_index.indisreplident fetch its value from the relation cache, always
relying on a fresh copy from the syscache.  Unfortunately, this may not be
the case for out-of-core code, which could see an out-of-date value.

Author: Shruthi Gowda
Reviewed-by: Robert Haas, Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/CAASxf_PBcxax0wW-3gErUyftZ0XrCs3Lrpuhq4-Z3Fak1DoW7Q@mail.gmail.com
Backpatch-through: 11
2023-07-14 11:16:10 +09:00
Michael Paquier
c89d74c18b Fix updates of indisvalid for partitioned indexes
indisvalid is switched to true for a partitioned index when all of its
partitions have valid indexes at the time a new partition is attached; with
multiple layers of partitions, this propagates up to the top-most parent if
all of its leaves are themselves valid.
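
A sketch of that mechanism (the documented behaviour, not the bug itself):

    CREATE TABLE p (a int) PARTITION BY LIST (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);
    CREATE INDEX ON ONLY p (a);                       -- parent index starts out invalid
    CREATE INDEX p1_a_idx ON p1 (a);
    ALTER INDEX p_a_idx ATTACH PARTITION p1_a_idx;    -- all partitions covered
    SELECT indisvalid FROM pg_index
    WHERE indexrelid = 'p_a_idx'::regclass;           -- now true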

The copy of the tuple from pg_index used to switch indisvalid to true
came from the relation cache, which is incorrect.  Particularly, in the
case reported by Shruthi Gowda, executing a series of commands in a
single transaction would cause the validation of partitioned indexes to
use an incorrect version of a pg_index tuple, as indexes are reloaded
after an invalidation request with RelationReloadIndexInfo(), a much
faster version than a full index cache rebuild.  In this case, the
limited information updated in the cache leads to an incorrect version
of the tuple used.  One of the symptoms reported was the following
error, with a replica identity update, for instance:
"ERROR: attempted to update invisible tuple"

This is incorrect since 8b08f7d, so backpatch all the way down.

Reported-by: Shruthi Gowda
Author: Michael Paquier
Reviewed-by: Shruthi Gowda, Dilip Kumar
Discussion: https://postgr.es/m/CAASxf_PBcxax0wW-3gErUyftZ0XrCs3Lrpuhq4-Z3Fak1DoW7Q@mail.gmail.com
Backpatch-through: 11
2023-07-14 10:13:20 +09:00
Andres Freund
81ce000067 Handle DROP DATABASE getting interrupted
Until now, when DROP DATABASE got interrupted at the wrong moment, the removal
of the pg_database row would also roll back, even though some irreversible
steps had already been taken. E.g. DropDatabaseBuffers() might have thrown
out dirty buffers, or files could have been unlinked. But we continued to
allow connections to such a corrupted database.

To fix this, mark databases invalid with an in-place update, just before
starting to perform irreversible steps. As we can't add a new column in the
back branches, we use pg_database.datconnlimit = -2 for this purpose.

An invalid database cannot be connected to anymore, but can still be
dropped.
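
On these back branches, an invalid database can therefore be spotted directly
from the catalog (a sketch):

    SELECT datname FROM pg_database WHERE datconnlimit = -2;
    -- any database listed here can only be dropped, not connected to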

Unfortunately we can't easily add output to psql's \l to indicate that some
database is invalid; it doesn't fit in any of the existing columns.

Add tests verifying that an interrupted DROP DATABASE is handled correctly in
the backend and in various tools.

Reported-by: Evgeny Morozov <postgresql3@realityexists.net>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20230509004637.cgvmfwrbht7xm7p6@awork3.anarazel.de
Discussion: https://postgr.es/m/20230314174521.74jl6ffqsee5mtug@awork3.anarazel.de
Backpatch: 11-, bug present in all supported versions
2023-07-13 13:03:34 -07:00
Andres Freund
53336e8f66 Release lock after encountering bogus row in vac_truncate_clog()
When vac_truncate_clog() encounters bogus datfrozenxid / datminmxid values, it
returns early. Unfortunately, until now, it did not release
WrapLimitsVacuumLock. If the backend later tries to acquire
WrapLimitsVacuumLock, the session / autovacuum worker hangs in an
uncancellable way. Similarly, other sessions will hang waiting for the
lock. However, if the backend holding the lock exited or errored out for some
reason, the lock was released.

The bug was introduced as a side effect of 566372b3d6.

It is interesting that there are no production reports of this problem. That
is likely due to a mix of factors: the bugs leading to bogus values have
become less common, process exit releases the locks, and such hangs are
hard for "normal" users to debug.

Discussion: https://postgr.es/m/20230621221208.vhsqgduwfpzwxnpg@awork3.anarazel.de
2023-07-13 13:03:34 -07:00
Tom Lane
6b51fe8340 Be more rigorous about local variables in PostgresMain().
Since PostgresMain calls sigsetjmp, any local variables that are not
marked "volatile" have a risk of unspecified behavior.  In practice
this means that when control returns via longjmp, such variables might
get reset to their values as of the time of sigsetjmp, depending on
whether the compiler chose to put them in registers or on the stack.
We were careful about this for "send_ready_for_query", but not the
other local variables.

In the case of the timeout_enabled flags, resetting them to
their initial "false" states is actually good, since we do
"disable_all_timeouts()" in the longjmp cleanup code path.  If that
does not happen, we risk uselessly calling "disable_timeout()" later,
which is harmless but a little bit expensive.  Let's explicitly reset
these flags so that the behavior is correct and platform-independent.
(This change means that we really don't need the new "volatile"
markings after all, but let's install them anyway since any change
in this logic could re-introduce a problem.)

There is no issue for "firstchar" and "input_message" because those
are explicitly reinitialized each time through the query processing
loop.  To make that clearer, move them to be declared inside the loop.
That leaves us with all the function-lifespan locals except the
sigjmp_buf itself marked as volatile, which seems like a good policy
to have going forward.

Because of the possibility of extra disable_timeout() calls, this
seems worth back-patching.

Sergey Shinderuk and Tom Lane

Discussion: https://postgr.es/m/2eda015b-7dff-47fd-d5e2-f1a9899b90a6@postgrespro.ru
2023-07-10 12:14:34 -04:00
Michael Paquier
f5b075adcf Fix ALTER EXTENSION SET SCHEMA with objects outside an extension's schema
As coded, the command would take the namespace OID from the first object
scanned in pg_depend when switching its namespace dependency entry to the
new one, and use it as the basis of comparison for any follow-up checks.
It would also be used as the old namespace OID to switch *from* for the
extension's pg_depend entry.  Hence, if the first object scanned had a
namespace different from the one stored in the extension, we would finish by:
- Not checking that the extension's objects map to the extension's
schema.
- Not switching the extension -> namespace dependency entry to the new
namespace provided by the user, making ALTER EXTENSION ineffective.

This issue has existed since this command was introduced in d9572c4 for
relocatable extensions, so backpatch all the way down to 11.  The test
case was provided by Heikki; I have tweaked it a bit to show the
effects on pg_depend for the extension.
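
A sketch of the kind of verification involved (assuming a relocatable
extension named "myext" is installed):

    CREATE SCHEMA new_schema;
    ALTER EXTENSION myext SET SCHEMA new_schema;
    -- the extension's own namespace dependency should now point at new_schema
    SELECT refobjid::regnamespace
    FROM pg_depend
    WHERE classid = 'pg_extension'::regclass
      AND objid = (SELECT oid FROM pg_extension WHERE extname = 'myext')
      AND refclassid = 'pg_namespace'::regclass;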

Reported-by: Heikki Linnakangas
Author: Michael Paquier, Heikki Linnakangas
Discussion: https://postgr.es/m/20eea594-a05b-4c31-491b-007b6fceef28@iki.fi
Backpatch-through: 11
2023-07-10 09:40:17 +09:00
Andres Freund
09391ddc95 Fix type of iterator variable in SH_START_ITERATE
Also add a comment to make the reasoning behind the Assert() more explicit
(per Tom).

Reported-by: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAocXNJ6s1VLz+hMamLAQAiewRoW17OJ6-+9GACKfj6iPQ@mail.gmail.com
Backpatch: 11-
2023-07-06 09:57:32 -07:00
Andrew Dunstan
9ebe6fdc5c Skip pg_basebackup long filename test if path too long on Windows
On Windows, it's sometimes difficult to create a file with a path longer
than 255 chars, and if it can be created it might not be seen by the
archiver. This can be triggered by the test for tar backups with
filenames greater than 100 bytes. So we skip that test if the path would
exceed 255.

Backpatch to all live branches.

Reviewed by Daniel Gustafsson

Discussion: https://postgr.es/m/666ac55b-3400-fb2c-2cea-0281bf36a53c@dunslane.net
2023-07-06 12:34:18 -04:00
Heikki Linnakangas
c50b869eda WAL-log the creation of the init fork of unlogged indexes.
We create a file, so we better WAL-log it. In practice, all the
built-in index AMs and all extensions that I'm aware of write a
metapage to the init fork, which is WAL-logged, and replay of the
metapage implicitly creates the fork too. But if ambuildempty() didn't
write any page, we would miss it.

This can be seen with dummy_index_am. Set up replication, create a
'dummy_index_am' index on an unlogged table, and look at the files
created in the replica: the init fork is not created on the
replica. Dummy_index_am doesn't do anything with the relation files,
however, so it doesn't lead to any user-visible errors.
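
A sketch of that reproduction (assuming the dummy_index_am test module from
src/test/modules is built and installed, and physical replication is set up):

    CREATE EXTENSION dummy_index_am;
    CREATE UNLOGGED TABLE u (a int4);
    CREATE INDEX u_dummy ON u USING dummy_index_am (a);
    -- before the fix: the index's init fork exists on the primary
    -- but is missing from the replica's data directory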

Backpatch to all supported versions.

Reviewed-by: Robert Haas
Discussion: https://www.postgresql.org/message-id/6e5bbc08-cdfc-b2b3-9e23-1a914b9850a9%40iki.fi
2023-07-06 17:29:14 +03:00
Amit Kapila
34f6c60635 Revert the commits related to allowing page lock to conflict among parallel group members.
This commit reverts the work done by commits 3ba59ccc89 and 72e78d831a.
Those commits were incorrect in asserting that we never acquire any other
heavy-weight lock after acquiring page lock other than relation extension
lock. We can acquire a lock on catalogs while doing catalog lookups after
acquiring page lock.

This won't impact any existing feature, but we need to think of some other
way to achieve this before parallelizing other write operations or even
improving the parallelism in vacuum (like allowing multiple workers
for an index).

Reported-by: Jaime Casanova
Author: Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CAJKUy5jffnRKNvRHKQ0LynRb0RJC-o4P8Ku3x9vGAVLwDBWumQ@mail.gmail.com
2023-07-06 08:05:27 +05:30
Heikki Linnakangas
59c2a6fe9b Fix leak of LLVM "fatal-on-oom" section counter.
llvm_release_context() called llvm_enter_fatal_on_oom(), but was missing
the corresponding llvm_leave_fatal_on_oom() call. As a result, if JIT was
used at all, we were almost always in the "fatal-on-oom" state.

It only makes a difference if you use an extension written in C++, and
run out of memory in a C++ 'new' call. In that case, you would get a
PostgreSQL FATAL error, instead of the default behavior of throwing a
C++ exception.

Back-patch to all supported versions.

Reviewed-by: Daniel Gustafsson
Discussion: https://www.postgresql.org/message-id/54b78cca-bc84-dad8-4a7e-5b56f764fab5@iki.fi
2023-07-05 13:13:39 +03:00
Heikki Linnakangas
acc8cdff42 Ensure that creation of an empty relfile is fsync'd at checkpoint.
If you create a table and don't insert any data into it, the relation file
is never fsync'd. You don't lose data, because an empty table doesn't have
any data to begin with, but if you crash and lose the file, subsequent
operations on the table will fail with "could not open file" error.
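
The scenario, in minimal form (a sketch):

    CREATE TABLE empty_table (a int);   -- creates a zero-length relation file
    CHECKPOINT;                         -- before the fix, nothing forced that file to disk
    -- after an OS crash that loses the file, later access to empty_table
    -- fails with "could not open file"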

To fix, register an fsync request in mdcreate(), like we do for mdwrite().

Per discussion, we probably should also fsync the containing directory
after creating a new file. But that's a separate and much wider issue.

Backpatch to all supported versions.

Reviewed-by: Andres Freund, Thomas Munro
Discussion: https://www.postgresql.org/message-id/d47d8122-415e-425c-d0a2-e0160829702d%40iki.fi
2023-07-04 18:08:08 +03:00
Peter Eisentraut
75373ff6c0 Adjust kerberos and ldap tests for Homebrew on ARM
The Homebrew package manager changed its default installation prefix
for the new architecture, so a couple of tests need tweaks to find
binaries.

This is a partial backpatch of dc513bc654.
2023-07-04 11:16:36 +02:00