Commit Graph

22555 Commits

Author SHA1 Message Date
David Rowley 6341cb0b02 Fix incorrect logic in plan dependency recording
Both 50e17ad28 and 29f45e299 tried to record a plan dependency
on a function but mistakenly inverted the OidIsValid test.  This meant
that we'd record a dependency only when the function's Oid was
InvalidOid.  Clearly this was meant to *not* record the dependency in
that case.

50e17ad28 made this mistake first, then in v15 29f45e299 copied the same
mistake.

Reported-by: Tom Lane
Backpatch-through: 14, where 50e17ad28 first made this mistake
Discussion: https://postgr.es/m/2277537.1694301772@sss.pgh.pa.us
2023-09-14 11:27:43 +12:00
Thomas Munro e2452c2a63 Fix exception safety bug in typcache.c.
If an out-of-memory error was thrown at an unfortunate time,
ensure_record_cache_typmod_slot_exists() could leak memory and leave
behind a global state that produced an infinite loop on the next call.

Fix by merging RecordCacheArray and RecordIdentifierArray into a single
array.  With only one allocation or re-allocation, there is no
intermediate state.

Back-patch to all supported releases.

Reported-by: "James Pang (chaolpan)" <chaolpan@cisco.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/PH0PR11MB519113E738814BDDA702EDADD6EFA%40PH0PR11MB5191.namprd11.prod.outlook.com
2023-09-13 14:46:16 +12:00
Amit Kapila f7d25117ba Fix uninitialized access to InitialRunningXacts during decoding after ERROR.
The transactions and subtransactions array that was allocated under the
snapshot builder memory context and recorded during decoding was not
cleared in case of errors. This can result in an assertion failure if we
attempt to retry logical decoding within the same session. To address this
issue, we register a callback function under the snapshot builder memory
context to clear the recorded transactions and subtransactions array along
with the context.

This problem doesn't exist in PG16 and HEAD because, instead of using
InitialRunningXacts, we added the list of transaction IDs and
sub-transaction IDs that have modified catalogs and are running during
snapshot serialization to the serialized snapshot (see commit 7f13ac8123).

Author: Hou Zhijie
Reviewed-by: Amit Kapila
Backpatch-through: 11
Discussion: http://postgr.es/m/18055-ab3beed9f4b7b7d6@postgresql.org
2023-09-12 10:12:51 +05:30
Fujii Masao 2f13e8d9ec Make recovery report error message when invalid page header is found.
Commit 0668719801 changed XLogPageRead() so that it validated the page
header and, if an invalid page header was found, reset the error message
and retried reading the page, to fix a scenario where a streaming standby
got stuck at a continuation record. That change hid the error message
about the invalid page header, making it harder for users to investigate
the actual issue found in WAL.

To fix the issue, this commit makes XLogPageRead() report the error
message when an invalid page header is found.

When not in standby mode, an invalid page header should cause recovery
to end, not retry reading the page, so XLogPageRead() doesn't need to
validate the page header for the retry. Instead, ReadPageInternal() should
be responsible for the validation in that case. Therefore this commit
changes XLogPageRead() so that if not in standby mode it doesn't validate
the page header for the retry.

This commit was originally pushed as of 68601985e6 for 15 and
newer versions, but not to the older branches.  A recent investigation
related to WAL replay failures has shown that the lack of this patch
in 12~14 is an issue, as we want to be able to improve the WAL reader to
make a correct distinction between the end-of-wal and OOM cases when
validating record headers.  REL_11_STABLE is left out as it will be
EOL'd soon.

Reported-by: Yugo Nagata
Author: Yugo Nagata, Kyotaro Horiguchi
Reviewed-by: Ranier Vilela, Fujii Masao
Discussion: https://postgr.es/m/20210718045505.32f463ed6c227111038d8ae4@sraoss.co.jp
Discussion: https://postgr.es/m/17928-aa92416a70ff44a2@postgresql.org
Backpatch-through: 12
2023-09-12 09:35:42 +09:00
Michael Paquier df11421d82 Fix out-of-bound read in gtsvector_picksplit()
This could lead to an imprecise choice when splitting an index page of a
GiST index on a tsvector, deciding which entries should remain on the
old page and which entries should move to a new page.

This has been wrong since tsearch2 was moved into core with commit
140d4ebcb4, so backpatch all the way down.  This error was spotted
by Valgrind.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/17950-6c80a8d2b94ec695@postgresql.org
Backpatch-through: 11
2023-09-04 14:55:53 +09:00
Michael Paquier d2bd4ba305 Avoid possible overflow with ltsGetFreeBlock() in logtape.c
nFreeBlocks, defined as a long, stores the number of free blocks in a
logical tape.  ltsGetFreeBlock() has been using an int to store the
value of nFreeBlocks, which could lead to overflows on platforms where
long and int are not the same size (in short, everywhere except Windows,
where long is 4 bytes).

The problematic intermediate variable is switched to be a long instead
of an int.

Issue introduced by c02fdc9223, so backpatch down to 13.

Author: Ranier Vilela
Reviewed-by: Peter Geoghegan, David Rowley
Discussion: https://postgr.es/m/CAEudQApLDWCBR_xmwNjGBrDo+f+S4E87x3s7-+hoaKqYdtC4JQ@mail.gmail.com
Backpatch-through: 13
2023-08-30 08:03:52 +09:00
Heikki Linnakangas 3d895f95a0 Initialize ListenSocket array earlier.
After commit b0bea38705, syslogger prints 63 warnings about failing to
close a listen socket at postmaster startup. That's because the
syslogger process forks before the ListenSockets array is initialized,
so ClosePostmasterPorts() calls "close(0)" 64 times. The first call
succeeds, because fd 0 is stdin.

This has been like this since commit 9a86f03b4e in version 13, which
moved the SysLogger_Start() call to before initializing ListenSockets.
We just didn't notice until commit b0bea38705 added the LOG message.

Reported by Michael Paquier and Jeff Janes.

Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/ZOvvuQe0rdj2slA9%40paquier.xyz
Discussion: https://www.postgresql.org/message-id/ZO0fgDwVw2SUJiZx@paquier.xyz#482670177eb4eaf4c9f03c1eed963e5f
Backpatch-through: 13
2023-08-29 09:12:24 +03:00
Tom Lane 9b2a41db1c Avoid unnecessary plancache revalidation of utility statements.
Revalidation of a plancache entry (after a cache invalidation event)
requires acquiring a snapshot.  Normally that is harmless, but not
if the cached statement is one that needs to run without acquiring a
snapshot.  We were already aware of that for TransactionStmts,
but for some reason hadn't extrapolated to the other statements that
PlannedStmtRequiresSnapshot() knows mustn't set a snapshot.  This can
lead to unexpected failures of commands such as SET TRANSACTION
ISOLATION LEVEL.  We can fix it in the same way, by excluding those
command types from revalidation.

However, we can do even better than that: there is no need to
revalidate for any statement type for which parse analysis, rewrite,
and plan steps do nothing interesting, which is nearly all utility
commands.  To mechanize this, invent a parser function
stmt_requires_parse_analysis() that tells whether parse analysis does
anything beyond wrapping a CMD_UTILITY Query around the raw parse
tree.  If that's what it does, then rewrite and plan will just
skip the Query, so that it is not possible for the same raw parse
tree to produce a different plan tree after cache invalidation.

stmt_requires_parse_analysis() is basically equivalent to the
existing function analyze_requires_snapshot(), except that for
obscure reasons that function omits ReturnStmt and CallStmt.
It is unclear whether those were oversights or intentional.
I have not been able to demonstrate a bug from not acquiring a
snapshot while analyzing these commands, but at best it seems mighty
fragile.  It seems safer to acquire a snapshot for parse analysis of
these commands too, which allows making stmt_requires_parse_analysis
and analyze_requires_snapshot equivalent.

In passing this fixes a second bug, which is that ResetPlanCache
would exclude ReturnStmts and CallStmts from revalidation.
That's surely *not* safe, since they contain parsable expressions.

Per bug #18059 from Pavel Kulakov.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/18059-79c692f036b25346@postgresql.org
2023-08-24 12:02:40 -04:00
Andrew Dunstan 7f4515a58e Cache by-reference missing values in a long lived context
Attribute missing values might be needed past the lifetime of the tuple
descriptors from which they are extracted. To avoid possibly using
pointers for by-reference values which might thus be left dangling, we
cache a datumCopy'd version of the datum in the TopMemoryContext. Since
we first search for the value, this only needs to be done once per
session for any such value.

Original complaint from Tom Lane, idea for mitigation by Andrew Dunstan,
tweaked by Tom Lane.

Backpatch to version 11 where missing values were introduced.

Discussion: https://postgr.es/m/1306569.1687978174@sss.pgh.pa.us
2023-08-22 15:15:45 -04:00
Jeff Davis 00953f1e23 Recalculate search_path after ALTER ROLE.
Renaming a role can affect the meaning of the special string $user, so
must cause search_path to be recalculated.
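
An illustrative sketch of the dependency (role names are hypothetical; the
first session runs as app_user):

-- session whose effective role is app_user
SET search_path = "$user", public;
-- in another session, as a superuser:
ALTER ROLE app_user RENAME TO app_user2;
-- back in the first session, the cached resolution of "$user" is now stale
-- and must be recalculated before the next name lookup:
SELECT current_schemas(true);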

Discussion: https://postgr.es/m/186761d32c0255debbdf50b6310b581b9c973e6c.camel@j-davis.com
Reviewed-by: Nathan Bossart, Michael Paquier
Backpatch-through: 11
2023-08-07 15:12:33 -07:00
Noah Misch d4648a74be Reject substituting extension schemas or owners matching ["$'\].
Substituting such values in extension scripts facilitated SQL injection
when @extowner@, @extschema@, or @extschema:...@ appeared inside a
quoting construct (dollar quoting, '', or "").  No bundled extension was
vulnerable.  Vulnerable uses do appear in a documentation example and in
non-bundled extensions.  Hence, the attack prerequisite was an
administrator having installed files of a vulnerable, trusted,
non-bundled extension.  Subject to that prerequisite, this enabled an
attacker having database-level CREATE privilege to execute arbitrary
code as the bootstrap superuser.  By blocking this attack in the core
server, there's no need to modify individual extensions.  Back-patch to
v11 (all supported versions).

Reported by Micah Gate, Valerie Woolard, Tim Carey-Smith, and Christoph
Berg.

Security: CVE-2023-39417
2023-08-07 06:06:00 -07:00
Peter Eisentraut 6186e2775e Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 2ac4b26db0a9032bce0eb018f6f742cea5847118
2023-08-07 12:38:55 +02:00
David Rowley bf315354e0 Don't Memoize lateral joins with volatile join conditions
The use of Memoize was already disabled in normal joins when the join
conditions had volatile functions per the code in
match_opclause_to_indexcol().  Ordinarily, the parameterization for the
inner side of a nested loop will be an Index Scan or at least eventually
lead to an index scan (perhaps nested several joins deep). However, for
lateral joins, that's not the case and seq scans can be parameterized
too, so we can't rely on match_opclause_to_indexcol().

Here we explicitly check the parameterization for volatile functions and
don't consider the generation of a Memoize path when such functions
are present.
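
A sketch of the kind of lateral join affected, with a volatile function in
the parameterized qual (table names are hypothetical):

SELECT *
FROM t1,
     LATERAL (SELECT *
              FROM t2
              WHERE t2.a = t1.a AND t2.b > random()) AS s;
-- the inner side is parameterized by t1.a, but the volatile random()
-- makes caching its results in a Memoize node unsafe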

Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs49nHFnHbpepLsv_yF3qkpCS4BdB-v8HoJVv8_=Oat0u_w@mail.gmail.com
Backpatch-through: 14, where Memoize was introduced
2023-08-07 22:15:50 +12:00
Masahiko Sawada 2e3741fb57 Fix ReorderBufferCheckMemoryLimit() comment.
Commit 7259736a6 updated the comment but it was not correct since
ReorderBufferLargestStreamableTopTXN() returns only top-level
transactions.

Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAD21AoA9XB7OR86BqvrCe2dMYX%2BZv3-BvVmjF%3DGY2z6jN-kqjg%40mail.gmail.com
Backpatch-through: 14
2023-08-02 15:01:05 +09:00
David Rowley f457f2ef14 Fix overly strict Assert in jsonpath code
This was failing for queries which try to get the .type() of a
jpiLikeRegex.  For example:

select jsonb_path_query('["string", "string"]',
                        '($[0] like_regex ".{7}").type()');

Reported-by: Alexander Kozhemyakin
Bug: #18035
Discussion: https://postgr.es/m/18035-64af5cdcb5adf2a9@postgresql.org
Backpatch-through: 12, where SQL/JSON path was added.
2023-08-02 01:41:21 +12:00
Etsuro Fujita b0e390e6d1 Disallow replacing joins with scans in problematic cases.
Commit e7cb7ee14, which introduced the infrastructure for FDWs and
custom scan providers to replace joins with scans, failed to add support
for handling pseudoconstant quals assigned to replaced joins in
createplan.c, leading to an incorrect plan without a gating Result node
when postgres_fdw replaced a join with such a qual.

To fix, we could add the support by 1) modifying the ForeignPath and
CustomPath structs to store the list of RestrictInfo nodes to apply to
the join, as in JoinPaths, if they represent foreign and custom scans
replacing a join with a scan, and by 2) modifying create_scan_plan() in
createplan.c to use that list in that case, instead of the
baserestrictinfo list, to get pseudoconstant quals assigned to the join;
but #1 would cause an ABI break.  So fix by modifying the infrastructure
to just disallow replacing joins with such quals.

Back-patch to all supported branches.

Reported by Nishant Sharma.  Patch by me, reviewed by Nishant Sharma and
Richard Guo.

Discussion: https://postgr.es/m/CADrsxdbcN1vejBaf8a%2BQhrZY5PXL-04mCd4GDu6qm6FigDZd6Q%40mail.gmail.com
2023-07-28 15:45:04 +09:00
Tom Lane 341996248e Raise fixed token-length limit in hba.c.
Historically, hba.c limited tokens in the authentication configuration
files (pg_hba.conf and pg_ident.conf) to less than 256 bytes.  We have
seen a few reports of this limit causing problems; notably, for
moderately-complex LDAP configurations.  Increase the limit to 10240
bytes as a low-risk stop-gap solution.

In v13 and earlier, this also requires raising MAX_LINE, the limit
on overall line length.  I'm hesitant to make this code consume
too much stack space, so I only raised that to 20480 bytes.

Discussion: https://postgr.es/m/1588937.1690221208@sss.pgh.pa.us
2023-07-27 12:07:48 -04:00
Tom Lane 10fd061bbc Guard against null plan pointer in CachedPlanIsSimplyValid().
If both the passed-in plan pointer and plansource->gplan are
NULL, CachedPlanIsSimplyValid would think that the plan pointer
is possibly-valid and try to dereference it.  For the one extant
call site in plpgsql, this situation doesn't normally happen,
which is why we've not noticed.  However, it appears to be possible
if the previous use of the cached plan failed, as per report from
Justin Pryzby.  Add an extra check to prevent crashing.
Back-patch to v13 where this code was added.

Discussion: https://postgr.es/m/ZLlV+STFz1l/WhAQ@telsasoft.com
2023-07-20 14:23:46 -04:00
Michael Paquier 763d26205a Fix indentation in twophase.c
This was missed in cb0cca1, and noticed before buildfarm member koel
was able to complain, while poking at a different patch.  Like the
other commit, backpatch all the way down to limit the odds of merge
conflicts.

Backpatch-through: 11
2023-07-18 14:04:50 +09:00
Michael Paquier 442749100d Fix recovery of 2PC transaction during crash recovery
A crash in the middle of a checkpoint with some two-phase state data
already flushed to disk by this checkpoint could cause a follow-up crash
recovery to recover twice the same transaction, once from what has been
found in pg_twophase/ at the beginning of recovery and a second time
when replaying its corresponding record.

This would lead to FATAL failures in the startup process during
recovery, where the same transaction would have a state recovered twice
instead of once:
LOG:  recovering prepared transaction 731 from shared memory
LOG:  recovering prepared transaction 731 from shared memory
FATAL:  lock ExclusiveLock on object 731/0/0 is already held

This issue is fixed by skipping the addition of any 2PC state coming
from a record whose equivalent 2PC state file has already been loaded in
TwoPhaseState at the beginning of recovery by restoreTwoPhaseData(),
which is OK as long as the system has not reached a consistent state.

The timing needed to get a messed-up recovery is very racy and
unlikely to happen.  The thread that reported the issue
demonstrated the bug using injection points to force a PANIC in the
middle of a checkpoint.

Issue introduced in 728bd99, so backpatch all the way down.

Reported-by: "suyu.cmj" <mengjuan.cmj@alibaba-inc.com>
Author: "suyu.cmj" <mengjuan.cmj@alibaba-inc.com>
Author: Michael Paquier
Discussion: https://postgr.es/m/109e6994-b971-48cb-84f6-829646f18b4c.mengjuan.cmj@alibaba-inc.com
Backpatch-through: 11
2023-07-18 13:44:31 +09:00
Michael Paquier 7af65523ab Add indisreplident to fields refreshed by RelationReloadIndexInfo()
RelationReloadIndexInfo() is a fast-path used for index reloads in the
relation cache, and it has always forgotten about updating
indisreplident, which is something that would happen after an index is
selected for a replica identity.  This can lead to incorrect cache
information being provided when executing a command that updates
indisreplident in a transaction context.

None of the code paths currently on HEAD that need to check
pg_index.indisreplident fetch its value from the relation cache, always
relying on a fresh copy from the syscache.  Unfortunately, this may not
be the case for out-of-core code, which could see an out-of-date value.

Author: Shruthi Gowda
Reviewed-by: Robert Haas, Dilip Kumar, Michael Paquier
Discussion: https://postgr.es/m/CAASxf_PBcxax0wW-3gErUyftZ0XrCs3Lrpuhq4-Z3Fak1DoW7Q@mail.gmail.com
Backpatch-through: 11
2023-07-14 11:16:08 +09:00
Michael Paquier 954cc2139c Fix updates of indisvalid for partitioned indexes
When attaching a new partition, indisvalid is switched to true for a
partitioned index once all of its partitions have valid indexes; with
multiple layers of partitioning, this propagates up to the top-most
parent if all of its leaves are themselves valid.

The copy of the tuple from pg_index used to switch indisvalid to true
came from the relation cache, which is incorrect.  Particularly, in the
case reported by Shruthi Gowda, executing a series of commands in a
single transaction would cause the validation of partitioned indexes to
use an incorrect version of a pg_index tuple, as indexes are reloaded
after an invalidation request with RelationReloadIndexInfo(), a much
faster version than a full index cache rebuild.  In this case, the
limited information updated in the cache leads to an incorrect version
of the tuple used.  One of the symptoms reported was the following
error, with a replica identity update, for instance:
"ERROR: attempted to update invisible tuple"

This is incorrect since 8b08f7d, so backpatch all the way down.

Reported-by: Shruthi Gowda
Author: Michael Paquier
Reviewed-by: Shruthi Gowda, Dilip Kumar
Discussion: https://postgr.es/m/CAASxf_PBcxax0wW-3gErUyftZ0XrCs3Lrpuhq4-Z3Fak1DoW7Q@mail.gmail.com
Backpatch-through: 11
2023-07-14 10:13:17 +09:00
Andres Freund d11efe8303 Handle DROP DATABASE getting interrupted
Until now, when DROP DATABASE got interrupted at the wrong moment, the removal
of the pg_database row would also roll back, even though some irreversible
steps had already been taken. E.g. DropDatabaseBuffers() might have thrown
out dirty buffers, or files could have been unlinked. But we continued to
allow connections to such a corrupted database.

To fix this, mark databases invalid with an in-place update, just before
starting to perform irreversible steps. As we can't add a new column in the
back branches, we use pg_database.datconnlimit = -2 for this purpose.

An invalid database cannot be connected to anymore, but can still be
dropped.
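
A sketch of how such a database can be spotted and removed in the back
branches (the database name is hypothetical):

-- databases marked invalid use datconnlimit = -2 in the back branches
SELECT datname FROM pg_database WHERE datconnlimit = -2;
-- connections to them are rejected, but dropping still works:
DROP DATABASE broken_db;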

Unfortunately we can't easily add output to psql's \l to indicate that some
database is invalid, it doesn't fit in any of the existing columns.

Add tests verifying that an interrupted DROP DATABASE is handled correctly in
the backend and in various tools.

Reported-by: Evgeny Morozov <postgresql3@realityexists.net>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20230509004637.cgvmfwrbht7xm7p6@awork3.anarazel.de
Discussion: https://postgr.es/m/20230314174521.74jl6ffqsee5mtug@awork3.anarazel.de
Backpatch: 11-, bug present in all supported versions
2023-07-13 13:03:33 -07:00
Andres Freund e246fd4236 Release lock after encountering bogus row in vac_truncate_clog()
When vac_truncate_clog() encounters bogus datfrozenxid / datminmxid values, it
returns early. Unfortunately, until now, it did not release
WrapLimitsVacuumLock. If the backend later tries to acquire
WrapLimitsVacuumLock, the session / autovacuum worker hangs in an
uncancellable way. Similarly, other sessions will hang waiting for the
lock. However, if the backend holding the lock exited or errored out for some
reason, the lock was released.

The bug was introduced as a side effect of 566372b3d6.

It is interesting that there are no production reports of this problem. That
is likely due to a mix of bugs leading to bogus values having gotten less
common, process exit releasing locks and instances of hangs being hard to
debug for "normal" users.

Discussion: https://postgr.es/m/20230621221208.vhsqgduwfpzwxnpg@awork3.anarazel.de
2023-07-13 13:03:33 -07:00
Tom Lane 48582cf9ed Be more rigorous about local variables in PostgresMain().
Since PostgresMain calls sigsetjmp, any local variables that are not
marked "volatile" have a risk of unspecified behavior.  In practice
this means that when control returns via longjmp, such variables might
get reset to their values as of the time of sigsetjmp, depending on
whether the compiler chose to put them in registers or on the stack.
We were careful about this for "send_ready_for_query", but not the
other local variables.

In the case of the timeout_enabled flags, resetting them to
their initial "false" states is actually good, since we do
"disable_all_timeouts()" in the longjmp cleanup code path.  If that
does not happen, we risk uselessly calling "disable_timeout()" later,
which is harmless but a little bit expensive.  Let's explicitly reset
these flags so that the behavior is correct and platform-independent.
(This change means that we really don't need the new "volatile"
markings after all, but let's install them anyway since any change
in this logic could re-introduce a problem.)

There is no issue for "firstchar" and "input_message" because those
are explicitly reinitialized each time through the query processing
loop.  To make that clearer, move them to be declared inside the loop.
That leaves us with all the function-lifespan locals except the
sigjmp_buf itself marked as volatile, which seems like a good policy
to have going forward.

Because of the possibility of extra disable_timeout() calls, this
seems worth back-patching.

Sergey Shinderuk and Tom Lane

Discussion: https://postgr.es/m/2eda015b-7dff-47fd-d5e2-f1a9899b90a6@postgrespro.ru
2023-07-10 12:14:34 -04:00
Michael Paquier 235e716bc2 Fix ALTER EXTENSION SET SCHEMA with objects outside an extension's schema
As coded, the command would take the namespace OID from the first object
scanned in pg_depend when switching its namespace dependency entry to the
new one, and use it as the base of comparison for any follow-up checks.
It would also be used as the old namespace OID to switch *from* for the
extension's pg_depend entry.  Hence, if the first object scanned had a
namespace different from the one stored in the extension, we would end
up:
- Not checking that the extension objects map to the extension's
schema.
- Not switching the extension -> namespace dependency entry to the new
namespace provided by the user, making ALTER EXTENSION ineffective.
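
A sketch of the command together with a way to check the extension's
recorded schema dependency afterwards (extension and schema names are
hypothetical):

CREATE SCHEMA new_schema;
ALTER EXTENSION myext SET SCHEMA new_schema;

-- the extension's pg_depend entry should now point at new_schema
SELECT refobjid::regnamespace
FROM pg_depend
WHERE classid = 'pg_extension'::regclass
  AND objid = (SELECT oid FROM pg_extension WHERE extname = 'myext')
  AND refclassid = 'pg_namespace'::regclass;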

This issue has existed since this command was introduced in d9572c4 for
relocatable extensions, so backpatch all the way down to 11.  The test
case was provided by Heikki; I have tweaked it a bit to show the
effects on pg_depend for the extension.

Reported-by: Heikki Linnakangas
Author: Michael Paquier, Heikki Linnakangas
Discussion: https://postgr.es/m/20eea594-a05b-4c31-491b-007b6fceef28@iki.fi
Backpatch-through: 11
2023-07-10 09:40:15 +09:00
Heikki Linnakangas 32f327f681 WAL-log the creation of the init fork of unlogged indexes.
We create a file, so we better WAL-log it. In practice, all the
built-in index AMs and all extensions that I'm aware of write a
metapage to the init fork, which is WAL-logged, and replay of the
metapage implicitly creates the fork too. But if ambuildempty() didn't
write any page, we would miss it.

This can be seen with dummy_index_am. Set up replication, create a
'dummy_index_am' index on an unlogged table, and look at the files
created in the replica: the init fork is not created on the
replica. Dummy_index_am doesn't do anything with the relation files,
however, so it doesn't lead to any user-visible errors.

Backpatch to all supported versions.

Reviewed-by: Robert Haas
Discussion: https://www.postgresql.org/message-id/6e5bbc08-cdfc-b2b3-9e23-1a914b9850a9%40iki.fi
2023-07-06 17:29:13 +03:00
Amit Kapila cf3e298512 Revert the commits related to allowing page lock to conflict among parallel group members.
This commit reverts the work done by commits 3ba59ccc89 and 72e78d831a.
Those commits were incorrect in asserting that we never acquire any other
heavy-weight lock after acquiring a page lock, other than the relation
extension lock. We can acquire a lock on catalogs while doing catalog
lookups after acquiring a page lock.

This won't impact any existing feature, but we need to think of some other
way to achieve this before parallelizing other write operations or even
improving the parallelism in vacuum (like allowing multiple workers
for an index).

Reported-by: Jaime Casanova
Author: Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CAJKUy5jffnRKNvRHKQ0LynRb0RJC-o4P8Ku3x9vGAVLwDBWumQ@mail.gmail.com
2023-07-06 08:15:37 +05:30
Heikki Linnakangas bfb493dbae Fix leak of LLVM "fatal-on-oom" section counter.
llvm_release_context() called llvm_enter_fatal_on_oom(), but was missing
the corresponding llvm_leave_fatal_on_oom() call. As a result, if JIT was
used at all, we were almost always in the "fatal-on-oom" state.

It only makes a difference if you use an extension written in C++, and
run out of memory in a C++ 'new' call. In that case, you would get a
PostgreSQL FATAL error, instead of the default behavior of throwing a
C++ exception.

Back-patch to all supported versions.

Reviewed-by: Daniel Gustafsson
Discussion: https://www.postgresql.org/message-id/54b78cca-bc84-dad8-4a7e-5b56f764fab5@iki.fi
2023-07-05 13:13:35 +03:00
Heikki Linnakangas d85bf0719e Ensure that creation of an empty relfile is fsync'd at checkpoint.
If you create a table and don't insert any data into it, the relation file
is never fsync'd. You don't lose data, because an empty table doesn't have
any data to begin with, but if you crash and lose the file, subsequent
operations on the table will fail with "could not open file" error.

To fix, register an fsync request in mdcreate(), like we do for mdwrite().

Per discussion, we probably should also fsync the containing directory
after creating a new file. But that's a separate and much wider issue.

Backpatch to all supported versions.

Reviewed-by: Andres Freund, Thomas Munro
Discussion: https://www.postgresql.org/message-id/d47d8122-415e-425c-d0a2-e0160829702d%40iki.fi
2023-07-04 18:07:46 +03:00
Thomas Munro b7ec66731d Re-bin segment when memory pages are freed.
It's OK to be lazy about re-binning memory segments when allocating,
because that can only leave segments in a bin that's too high.  We'll
search higher bins if necessary while allocating next time, and
also eventually re-bin, so no memory can become unreachable that way.

However, when freeing memory, the largest contiguous range of free pages
might go up, so we should re-bin eagerly to make sure we don't leave the
segment in a bin that is too low for get_best_segment() to find.

The re-binning code is moved into a function of its own, so it can be
called whenever free pages are returned to the segment's free page map.

Back-patch to all supported releases.

Author: Dongming Liu <ldming101@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier version)
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CAL1p7e8LzB2LSeAXo2pXCW4%2BRya9s0sJ3G_ReKOU%3DAjSUWjHWQ%40mail.gmail.com
2023-07-04 15:26:42 +12:00
Thomas Munro fb663f3879 Fix race in SSI interaction with gin fast path.
The ginfast.c code previously checked for conflicts before locking
the relevant buffer, leaving a window where a RW conflict could be
missed.  Re-order.

There was also a place where buffer ID and block number were confused
while trying to predicate-lock a page, noted by visual inspection.

Back-patch to all supported releases.  Fixes one more problem discovered
with the reproducer from bug #17949, in this case when Dmitry tried
other index types.

Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reported-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
2023-07-04 09:14:16 +12:00
Thomas Munro 3f7d3a77e1 Fix race in SSI interaction with bitmap heap scan.
When performing a bitmap heap scan, we don't want to miss concurrent
writes that occurred after we observed the heap's rs_nblocks, but before
we took predicate locks on index pages.  Therefore, we can't skip
fetching any heap tuples that are referenced by the index, because we
need to test them all with CheckForSerializableConflictOut().  The
old optimization that would ignore any references to blocks >=
rs_nblocks gets in the way of that requirement, because it means that
concurrent writes in that window are ignored.

Removing that optimization shouldn't affect correctness at any isolation
level, because any new tuples shouldn't be visible to an MVCC snapshot.
There also shouldn't be any error-causing references to heap blocks past
the end, because we should have held at least an AccessShareLock on the
table before the index scan.  It can't get smaller while our transaction
is running.  For now, though, we'll keep the optimization at lower
levels to avoid making unnecessary changes in a bug fix.

Back-patch to all supported releases.  In release 11, the code is in a
different place but not fundamentally different.  Fixes one aspect of
bug #17949.

Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
2023-07-04 09:14:16 +12:00
Thomas Munro ae6d536ed0 Fix race in SSI interaction with empty btrees.
When predicate-locking btrees, we have a special case for completely
empty btrees, since there is no page to lock.  This was racy, because,
without buffer lock held, a matching key could be inserted between the
_bt_search() and the PredicateLockRelation() calls.

Fix, by rechecking _bt_search() after taking the relation-level SIREAD
lock, if using SERIALIZABLE isolation and an empty btree is discovered.

Back-patch to all supported releases.  Fixes one aspect of bug #17949.

Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
2023-07-04 09:14:16 +12:00
Tomas Vondra 260dbf19a5 Fix oversight in handling of modifiedCols since f24523672d
Commit f24523672d fixed a memory leak by moving the modifiedCols bitmap
into the per-row memory context. In the case of AFTER UPDATE triggers,
the bitmap is however referenced from an event kept until the end of the
query, resulting in a use-after-free bug.

Fixed by copying the bitmap into the AfterTriggerEvents memory context,
which is the one where we keep the trigger events. There's only one
place that needs to do the copy, but the memory context may not exist
yet. Doing that in a separate function seems more readable.

Report by Alexander Pyhalov, fix by me. Backpatch to 13, where the
bitmap was added to the event by commit 71d60e2aa0.

Reported-by: Alexander Pyhalov
Backpatch-through: 13
Discussion: https://postgr.es/m/acddb17c89b0d6cb940eaeda18c08bbe@postgrespro.ru
2023-07-02 22:23:04 +02:00
Tomas Vondra c1affa38c7 Fix memory leak in Incremental Sort rescans
Incremental Sort had a couple of issues resulting in leaking memory
during rescans, possibly triggering OOM. The code had two
related flaws:

1. During rescans, the sort states were reset but then also set to NULL
   (despite the comment saying otherwise). ExecIncrementalSort then
   sees NULL and initializes a new sort state, leaking the memory used
   by the old one.

2. Initializing the sort state also automatically rebuilt the info about
   presorted keys, leaking the already initialized info. presorted_keys
   was also unnecessarily reset to NULL.

Patch by James Coleman, based on patches by Laurenz Albe and Tom Lane.
Backpatch to 13, where Incremental Sort was introduced.

Author: James Coleman, Laurenz Albe, Tom Lane
Reported-by: Laurenz Albe, Zu-Ming Jiang
Backpatch-through: 13
Discussion: https://postgr.es/m/b2bd02dff61af15e3526293e2771f874cf2a3be7.camel%40cybertec.at
Discussion: https://postgr.es/m/db03c582-086d-e7cd-d4a1-3bc722f81765%40inf.ethz.ch
2023-07-02 20:05:14 +02:00
Michael Paquier 663b35f2df Fix marking of indisvalid for partitioned indexes at creation
The logic that introduced partitioned indexes missed a few things when
invalidating a partitioned index at creation time, even though the code
is written to handle recursion:
1) If created from scratch because a matching index could not be found,
the new index could itself be invalid, for example if it was a
partitioned index with one of its leaves invalid.
2) A CCI was missing when indisvalid is set for a parent index, leading
to inconsistent trees when recursing across more than one level for a
partitioned index creation if an invalidation of the parent was
required.

This could lead to the creation of a partition index tree where some of
the partitioned indexes are marked as invalid, but some of the parents
are marked valid, which is not something that should happen (as
validatePartitionedIndex() defines, indisvalid is switched to true for a
partitioned index iff all its partitions are themselves valid).

This patch makes sure that indisvalid is set to false on a partitioned
index if at least one of its partition is invalid.  The flag is set to
true if *all* its partitions are valid.

The regression test added in this commit abuses a failed concurrent
index creation, marked as invalid, that maps to an index created on
its partitioned table afterwards.

Reported-by: Alexander Lakhin
Reviewed-by: Alexander Lakhin
Discussion: https://postgr.es/m/14987634-43c0-0cb3-e075-94d423607e08@gmail.com
Backpatch-through: 11
2023-06-30 13:54:56 +09:00
Tom Lane 0789b82a97 Fix order of operations in ExecEvalFieldStoreDeForm().
If the given composite datum is toasted out-of-line,
DatumGetHeapTupleHeader will perform database accesses to detoast it.
That can invalidate the result of get_cached_rowtype, as documented
(perhaps not plainly enough) in that function's API spec; which leads
to strange errors or crashes when we try to use the TupleDesc to read
the tuple.  In short then, trying to update a field of a composite
column could fail intermittently if the overall column value is wide
enough to require toasting.
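
A sketch of the kind of statement affected (composite type and table are
hypothetical), when the stored composite value is wide enough to be
toasted out of line:

CREATE TYPE pair AS (f1 text, f2 text);
CREATE TABLE t (id int, c pair);
-- assume t.c holds a value large enough to be stored out of line
UPDATE t SET c.f1 = 'new value' WHERE id = 1;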

We can fix the bug at no cost by just changing the order of
operations, since we don't need the TupleDesc until after detoasting.
(Other callers of get_cached_rowtype appear to get this right already,
so there's only one bug.)

Note that the added regression test case reveals this bug reliably
only with debug_discard_caches/CLOBBER_CACHE_ALWAYS.

Per bug #17994 from Alexander Lakhin.  Sadly, this patch does not fix
the missing-values issue revealed in the bug discussion; we'll need
some more work to cover that.

Discussion: https://postgr.es/m/17994-5c7100b51b4790e9@postgresql.org
2023-06-29 10:19:10 -04:00
Peter Eisentraut 6bc7873da1 Remove inappropriate raw_expression_tree_walker() code
It was walking into the ColumnDef->compression field, which is not a
node but a string.  This code is currently not reachable (because the
compression field is only set in situations that don't go through
raw_expression_tree_walker()), but if it had been, this could have
behaved erratically.
2023-06-29 10:35:53 +02:00
Michael Paquier 6160e221d5 Ignore invalid indexes when enforcing index rules in ALTER TABLE ATTACH PARTITION
A portion of ALTER TABLE .. ATTACH PARTITION is to ensure that the
partition being attached to the partitioned table has a correct set of
indexes, so that there is a consistent index mapping between the
partitioned table and its soon-to-be partition.  However, as introduced
in 8b08f7d, the current logic could choose an invalid index as a match,
which is something that can exist when dealing with more than two levels
of partitioning, like attaching a partitioned table (that has
partitions, with an index created by CREATE INDEX ON ONLY) to another
partitioned table.
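
A sketch of the scenario described above (all names are hypothetical):

CREATE TABLE parent (a int) PARTITION BY LIST (a);
CREATE INDEX parent_a_idx ON parent (a);

CREATE TABLE child (a int) PARTITION BY LIST (a);
CREATE TABLE child_1 PARTITION OF child FOR VALUES IN (1);
CREATE INDEX ON ONLY child (a);   -- partitioned index, left invalid

-- before the fix, the invalid index on "child" could be chosen as the
-- match for parent_a_idx here:
ALTER TABLE parent ATTACH PARTITION child FOR VALUES IN (1);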

A partitioned index with indisvalid set to false is equivalent to an
incomplete partition tree, meaning that an invalid partitioned index
does not have indexes defined in all its partitions.  Hence, choosing an
invalid partitioned index can create inconsistent partition index trees,
where the parent being attached to is valid, but its partition may be
invalid.

In the report from Alexander Lakhin, this showed up as an assertion
failure when validating an index.  Without assertions enabled, the
partition index tree would be actually broken, as indisvalid should
be switched to true for a partitioned index once all its partitions are
themselves valid.  With two levels of partitioning, the top partitioned
table used a valid index and was able to link to an invalid index stored
on its partition, itself a partitioned table.

I have studied a few options here (like the possibility to switch
indisvalid to false for the parent), but came down to the conclusion
that we'd better rely on a simple rule: invalid indexes had better never
be chosen, so that the attached partition uses and creates indexes that
the parent expects.  Some regression tests are added to provide some
coverage.  Note that the existing coverage is not impacted.

This is a problem since partitioned indexes exist, so backpatch all the
way down to v11.

Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/14987634-43c0-0cb3-e075-94d423607e08@gmail.com
Backpatch-through: 11
2023-06-28 15:57:48 +09:00
Tom Lane 4c61afa47c Check for interrupts and stack overflow in TParserGet().
TParserGet() recurses for some token types, meaning it's possible
to drive it to stack overflow.  Since this is a minority behavior,
I chose to add the check_stack_depth() call to the two places that
recurse rather than doing it during every single call.

While at it, add CHECK_FOR_INTERRUPTS(), because this can run
unpleasantly long for long inputs.

Per bug #17995 from Zuming Jiang.  This is old, so back-patch
to all supported branches.

Discussion: https://postgr.es/m/17995-9f20ff3e6389db4c@postgresql.org
2023-06-24 17:18:08 -04:00
Peter Geoghegan 63fa0deb31 nbtree VACUUM: cope with topparent inconsistencies.
Avoid "right sibling %u of block %u is not next child" errors when
vacuuming a corrupt nbtree index.  Just LOG the issue and press on.
That way VACUUM will have a decent chance of finishing off all required
processing for the index (and for the table as a whole).

This is similar to recent work from commit 5abff197, as well as work
from commit 5b861baa (later backpatched as commit 43e409ce), which
taught nbtree VACUUM to keep going when its "re-find" check fails.  The
hardening added by this commit takes place directly after the "re-find"
check, right before the critical section for the first stage of page
deletion.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wz=dayg0vjs4+er84TS9ami=csdzjpuiCGbEw=idhwqhzQ@mail.gmail.com
Backpatch: 11- (all supported versions).
2023-06-21 17:41:54 -07:00
Tom Lane 120ea65b8a Avoid Assert failure when processing empty statement in aborted xact.
exec_parse_message() wants to create a cached plan in all cases,
including for empty input.  The empty-input path does not have
a test for being in an aborted transaction, making it possible
that plancache.c will fail due to trying to do database lookups
even though there's no real work to do.

One solution would be to throw an aborted-transaction error in
this path too, but it's not entirely clear whether the lack of
such an error was intentional or whether some clients might be
relying on non-error behavior.  Instead, let's hack plancache.c
so that it treats empty statements with the same logic it
already had for transaction control commands, ensuring that it
can soldier through even in an already-aborted transaction.

Per bug #17983 from Alexander Lakhin.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/17983-da4569fcb878672e@postgresql.org
2023-06-21 11:07:11 -04:00
Amit Kapila 0b79042701 Fix the errhint message and docs for drop subscription failure.
The existing errhint message and docs were missing the fact that we can't
disassociate from the slot unless the subscription is disabled.
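
For reference, a sketch of the sequence needed to drop a subscription
without dropping its remote slot (the subscription name is hypothetical):

ALTER SUBSCRIPTION mysub DISABLE;
ALTER SUBSCRIPTION mysub SET (slot_name = NONE);
DROP SUBSCRIPTION mysub;  -- no longer tries to drop the remote slot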

Author: Robert Sjöblom, Peter Smith
Reviewed-by: Peter Eisentraut, Amit Kapila
Backpatch-through: 11
Discussion: https://postgr.es/m/807bdf85-61ea-88e2-5712-6d9fcd4eabff@fortnox.se
2023-06-21 10:23:09 +05:30
Tom Lane d911dce14d Fix hash join when inner hashkey expressions contain Params.
If the inner-side expressions contain PARAM_EXEC Params, we must
re-hash whenever the values of those Params change.  The executor
mechanism for that exists already, but we failed to invoke it because
finalize_plan() neglected to search the Hash.hashkeys field for
Params.  This allowed a previous scan's hash table to be re-used
when it should not be, leading to rows missing from the join's output.
(I believe incorrectly-included join rows are impossible however,
since checking the real hashclauses would reject false matches.)
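
A sketch of a query shape where the inner hash keys depend on an outer
Param (tables are hypothetical):

SELECT *
FROM outer_tbl o
WHERE EXISTS (SELECT 1
              FROM a
              JOIN b ON a.k = b.k + o.x);
-- the hash key b.k + o.x contains a PARAM_EXEC Param, so the hash table
-- must be rebuilt whenever o.x changes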

This bug is very ancient, dating probably to d24d75ff1 of 7.4.
Sadly, this simple fix depends on the plan representational changes
made by 2abd7ae9b, so it will only work back to v12.  I thought
about trying to make some kind of hack for v11, but I'm leery
of putting code significantly different from what is used in the
newer branches into a nearly-EOL branch.  Seeing that the bug
escaped detection for a full twenty years, problematic cases
must be rare; so I don't feel too awful about leaving v11 as-is.

Per bug #17985 from Zuming Jiang.  Back-patch to v12.

Discussion: https://postgr.es/m/17985-748b66607acd432e@postgresql.org
2023-06-20 17:47:36 -04:00
Michael Paquier e6317d9b50 Enable archiving in recovery TAP test 009_twophase.pl
This is a follow-up of f663b00, which has been committed to v13 and v14,
tweaking the TAP test for two-phase transactions so that it provides
coverage for the bug that has been fixed.  This change is done in its
own commit for clarity, as v15 and HEAD did not show the problematic
behavior but still missed coverage for it.

While on it, this adds a comment about the dependency between the last
partial segment rename and RecoverPreparedTransactions() at the end of
recovery, as that can be easy to miss.

Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/743b9b45a2d4013bd90b6a5cba8d6faeb717ee34.camel@cybertec.at
Backpatch-through: 13
2023-06-20 10:25:45 +09:00
Michael Paquier f663b00918 Fix failure at promotion with 2PC transactions and archiving enabled
When archiving is enabled, a promotion request would fail with the
following error when some 2PC transaction needs to be recovered from
WAL, preventing the promotion to complete:
FATAL:  requested WAL segment pg_wal/000000010000000000000001 has already been removed

The origin of the problem is that the last partial segment of the old
timeline is renamed before recovering the 2PC data via
RecoverPreparedTransactions() at the end of recovery, causing the FATAL
because the segment wanted is now renamed with a .partial suffix.  This
commit reorders the end-of-recovery actions a bit so that the execution of
recovery_end_command, the cleanup of the old segments of the old
timeline (RemoveNonParentXlogFiles) and the last partial segment rename
are done after the 2PC transaction data is recovered with
RecoverPreparedTransactions().  This makes the order of these
end-of-recovery actions more consistent with ~15, with the exception of
the end-of-recovery checkpoint, which still needs to happen before all the
actions reordered here in v13 and v14, contrary to what 15~ does.

v15 and newer versions have "fixed" this problem somewhat accidentally
with 811051c, where the end-of-recovery actions got reordered.  In this
case, the recovery of 2PC transactions happens before the renaming of
the last partial segment of the old timeline.

v13 and v14 are the versions that can easily see this problem as per the
refactoring of 38a95731 where XLogReaderState is reset in
XLogBeginRead() before reading the 2PC transaction data.  v11 and v12
could also see this problem, but may end up reading the 2PC data from
some of the WAL buffers instead.  Perhaps something could be done for
these two branches, but I am not really excited about doing so, given
the lack of complaints and the fact that v11 is going to be EOL'd soon
(there is always a risk of breaking something).

Note that the TAP test 009_twophase.pl is able to exhibit the issue if
it enables archiving on the primary node, which does not impact the test
coverage as restore_command would remain unused.  This is something that
should be changed on v15 and HEAD as well, so this will be changed in a
separate commit for clarity.

Author: Julian Markwort
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/743b9b45a2d4013bd90b6a5cba8d6faeb717ee34.camel@cybertec.at
Backpatch-through: 13
2023-06-20 09:36:35 +09:00
David Rowley 73f1c17fc3 Don't use partial unique indexes for unique proofs in the planner
Here we adjust relation_has_unique_index_for() so that it no longer makes
use of partial unique indexes as uniqueness proofs.  It is incorrect to
use these, as the predicates used by check_index_predicates() to set
predOK make use of not only baserestrictinfo quals as proofs, but also
quals from join conditions.  For relation_has_unique_index_for()'s case, we
need to know the relation is unique for a given set of columns before any
joins are evaluated, so if predOK was only set to true due to some join
qual, then it's unsafe to use such indexes in
relation_has_unique_index_for().  The final plan may not even make use
of that index, which could result in reading tuples that are not as
unique as the planner previously expected them to be.
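
A sketch of the situation (tables are hypothetical): the partial unique
index's predicate is only implied by a join qual, so it must not be used
to prove t2 unique on (a):

CREATE UNIQUE INDEX t2_a_uniq ON t2 (a) WHERE b = 1;

SELECT *
FROM t1
LEFT JOIN t2 ON t1.a = t2.a AND t2.b = 1;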

Bug: #17975
Reported-by: Tor Erik Linnerud
Backpatch-through: 11, all supported versions
Discussion: https://postgr.es/m/17975-98a90c156f25c952%40postgresql.org
2023-06-19 13:01:58 +12:00
Amit Langote 3f157d085b Fix typo in comment.
Back-patch down to 11.

Author: Sho Kato (<kato-sho@fujitsu.com>)
Discussion: https://postgr.es/m/TYCPR01MB68499042A33BC32241193AAF9F5BA%40TYCPR01MB6849.jpnprd01.prod.outlook.com
2023-06-16 10:19:33 +09:00
Tom Lane d1423c52e3 Correctly update hasSubLinks while mutating a rule action.
rewriteRuleAction neglected to check for SubLink nodes in the
securityQuals of range table entries.  This could lead to failing
to convert such a SubLink to a SubPlan, resulting in assertion
crashes or weird errors later in planning.

In passing, fix some poor coding in rewriteTargetView:
we should not pass the source parsetree's hasSubLinks
field to ReplaceVarsFromTargetList's outer_hasSubLinks.
ReplaceVarsFromTargetList knows enough to ignore that
when a Query node is passed, but it's still confusing
and bad precedent: if we did try to update that flag
we'd be updating a stale copy of the parsetree.

Per bug #17972 from Alexander Lakhin.  This has been broken since
we added RangeTblEntry.securityQuals (although the presented test
case only fails back to 215b43cdc), so back-patch all the way.

Discussion: https://postgr.es/m/17972-f422c094237847d0@postgresql.org
2023-06-13 15:58:37 -04:00
Tom Lane 5eaa05f637 Accept fractional seconds in jsonpath's datetime() method.
Commit 927d9abb6 purported to make datetime() accept any string
that could be output for a datetime value by to_jsonb().  But it
overlooked the possibility of fractional seconds being present,
so that cases as simple as to_jsonb(now()) would defeat it.

Fix by adding formats that include ".US" to the list in
executeDateTimeMethod().  (Note that while this is nominally
microseconds, it'll do the right thing for fractions with
fewer than six digits.)
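
For instance, a case of the shape that previously failed because of the
fractional seconds:

SELECT jsonb_path_query(to_jsonb(now()), '$.datetime()');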

In passing, re-order the list to restore the datatype ordering
specified in its comment.  The violation accidentally did not
break anything; but the next edit might be less lucky, so add
more comments.

Per report from Tim Field.  Back-patch to v13 where datetime()
was added, like the previous patch.

Discussion: https://postgr.es/m/014A028B-5CE6-4FDF-AC24-426CA6FC9CEE@mohiohio.com
2023-06-12 10:54:28 -04:00
Michael Paquier c6043fcbb2 Fix missing initializations of MyProc.delayChkptEnd
This commit fixes an oversight introduced in 10520f4, which added
delayChkptEnd to PGPROC to avoid ABI breakage on stable branches; two
spots missed initializing this variable (delayChkpt was switched back
from int to bool, and it was initialized to 0, so there were no
consequences for it):
- InitProcess(), where the per-process data structures of a backend are
initialized.
- InitAuxiliaryProcess(), same but for auxiliary processes.

An interruption during relation truncation while this flag is set could
cause an assertion failure when a follow-up process does a relation
truncation while reusing the same PGPROC entry.  A second effect could
be incorrect checkpoint end delays.

While on it, add an assertion in ProcArrayClearTransaction() for
delayChkptEnd to be in line with 5788e25.  This is needed only for v14.

This issue affects v11~v14, but not v15~, as we use a single field
called delayChkptFlags to delay checkpoints there.

Author: suyu.cmj (mengjuan.cmj@alibaba-inc.com)
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/9c3d2a49-db5f-43cb-840b-d58f9a684295.mengjuan.cmj@alibaba-inc.com
Backpatch-through: 11
2023-06-11 10:33:46 +09:00
Tomas Vondra 7f528e96c5 Use per-tuple context in ExecGetAllUpdatedCols
Commit fc22b6623b (generated columns) replaced ExecGetUpdatedCols() with
ExecGetAllUpdatedCols() in a couple places handling UPDATE (triggers and
lock mode). However, while ExecGetUpdatedCols() only did exec_rt_fetch(),
ExecGetAllUpdatedCols() also allocates memory through bms_union()
without paying attention to the memory context; it happened to use the
long-lived ExecutorState, leaking the memory until the end of the query.

The amount of leaked memory is proportional to the number of (updated)
attributes, types of UPDATE triggers, and the number of processed rows
(which for UPDATE ... FROM ... may be much higher than updated rows).
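
A sketch of a statement shape where the leak was visible, scaling with the
number of processed rows (tables and trigger are hypothetical):

CREATE TABLE t (a int, b int);
CREATE TABLE src (a int);
CREATE FUNCTION noop_trg() RETURNS trigger LANGUAGE plpgsql
  AS $$ BEGIN RETURN NEW; END $$;
CREATE TRIGGER t_before_upd BEFORE UPDATE ON t
  FOR EACH ROW EXECUTE FUNCTION noop_trg();

UPDATE t SET b = t.b + 1 FROM src WHERE src.a = t.a;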

Fixed by switching to the per-tuple context in GetAllUpdatedColumns().
This is fine for all in-core callers, but external callers may need to
copy the result. But we're not aware of any such callers.

Note the issue was introduced by fc22b6623b, but the macros were later
renamed by f50e888990.

Backpatch to 12, where the issue was introduced.

Reported-by: Tomas Vondra
Reviewed-by: Andres Freund, Tom Lane, Jakub Wartak
Backpatch-through: 12
Discussion: https://postgr.es/m/222a3442-7f7d-246c-ed9b-a76209d19239@enterprisedb.com
2023-06-07 18:53:04 +02:00
Heikki Linnakangas 525ec837e1 Initialize 'recordXtime' to silence compiler warning.
In reality, recordXtime will always be set by the getRecordTimestamp
call, but the compiler doesn't necessarily see that.

Back-patch to all supported versions.

Author: Tristan Partin
Discussion: https://www.postgresql.org/message-id/CT5MN8E11U0M.1NYNCHXYUHY41@gonk
2023-06-06 20:32:02 +03:00
Peter Geoghegan 322c9b340a nbtree VACUUM: cope with right sibling link corruption.
Avoid "right sibling's left-link doesn't match" errors when vacuuming a
corrupt nbtree index.  Just LOG the issue and press on.  That way VACUUM
will have a decent chance of finishing off all required processing for
the index (and for the table as a whole).

This error was seen in the field from time to time (it's more than a
theoretical risk), so giving VACUUM the ability to press on like this
has real value.  Nothing short of a REINDEX is expected to fix the
underlying index corruption, so giving up (by throwing an error) risks
making a bad situation far worse.  Anything that blocks forward progress
by VACUUM like this might go unnoticed for a long time.  This could
eventually lead to a wraparound/xidStopLimit outage.

Note that _bt_unlink_halfdead_page() has always been able to bail on
page deletion when the target page's left sibling page was in an
inconsistent state.  It now does the same thing (returns false to back
out of the second phase of deletion) when it notices sibling link
corruption in the target page's right sibling page.

This is similar to the work from commit 5b861baa (later backpatched as
commit 43e409ce), which taught nbtree to press on with vacuuming an
index when page deletion fails to "re-find" a downlink in the target
page's parent page.  The "re-find" check seems to make VACUUM bail on
page deletion more often in practice, but there is no reason to take any
chances here.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wzko2q2kP1+UvgJyP9g0mF4hopK0NtQZcxwvMv9_ytGhkQ@mail.gmail.com
Backpatch: 11- (all supported versions).
2023-05-25 15:32:53 -07:00
Tom Lane f8320cc72d Fix misbehavior of EvalPlanQual checks with multiple result relations.
The idea of EvalPlanQual is that we replace the query's scan of the
result relation with a single injected tuple, and see if we get a
tuple out, thereby implying that the injected tuple still passes the
query quals.  (In join cases, other relations in the query are still
scanned normally.)  This logic was not updated when commit 86dc90056
made it possible for a single DML query plan to have multiple result
relations, when the query target relation has inheritance or partition
children.  We replaced the output for the current result relation
successfully, but other result relations were still scanned normally;
thus, if any other result relation contained a tuple satisfying the
quals, we'd think the EPQ check passed, even if it did not pass for
the injected tuple itself.  This would lead to update or delete
actions getting performed when they should have been skipped due to
a conflicting concurrent update in READ COMMITTED isolation mode.

Fix by blocking all sibling result relations from emitting tuples
during an EvalPlanQual recheck.  In the back branches, the fix is
complicated a bit by the need to not change the size of struct
EPQState (else we'd have ABI-breaking changes in offsets in
struct ModifyTableState).  Like the back-patches of 3f7836ff6
and 4b3e37993, add a separately palloc'd struct to avoid that.
The logic is the same as in HEAD otherwise.

This is only a live bug back to v14 where 86dc90056 came in.
However, I chose to back-patch the test cases further, on the
grounds that this whole area is none too well tested.  I skipped
doing so in v11 though because none of the test applied cleanly,
and it didn't quite seem worth extra work for a branch with only
six months to live.

Per report from Ante Krešić (via Aleksander Alekseev)

Discussion: https://postgr.es/m/CAJ7c6TMBTN3rcz4=AjYhLPD_w3FFT0Wq_C15jxCDn8U4tZnH1g@mail.gmail.com
2023-05-19 14:26:34 -04:00
Michael Paquier e72580232c pageinspect: Fix gist_page_items() with included columns
Non-leaf pages of GiST indexes contain key attributes, leaf pages
contain both key and non-key attributes, and gist_page_items() ignored
the handling of non-key attributes.  This caused a few problems when
using gist_page_items() on a GiST index with INCLUDE:
- On a non-leaf page, the function would crash.
- On a leaf page, the function would work, but fail to display all the
values for included attributes.
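
For illustration, a usage sketch of the kind of case affected (hypothetical
names; pageinspect requires superuser):

    CREATE EXTENSION pageinspect;
    CREATE TABLE gist_tbl (r int4range, extra int);
    CREATE INDEX gist_tbl_idx ON gist_tbl USING gist (r) INCLUDE (extra);
    INSERT INTO gist_tbl VALUES ('[1,10)', 1);
    -- both key (r) and non-key (extra) values should now be reported
    SELECT * FROM gist_page_items(get_raw_page('gist_tbl_idx', 0),
                                  'gist_tbl_idx');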

This commit fixes gist_page_items() to handle such cases in a more
appropriate way, and now displays the values of key and non-key
attributes for each item separately in a style consistent with what
ruleutils.c would generate for the attribute list, depending on the page
type dealt with.  In a way similar to how a record is displayed, values
would be double-quoted for key or non-key attributes if required.

ruleutils.c did not provide a routine able to control if non-key
attributes should be displayed, so an extended() routine for index
definitions is added to work around the leaf and non-leaf page
differences.

While at it, this commit fixes a third problem related to the amount of
data reported for key attributes.  The code originally relied on
BuildIndexValueDescription() (used for error reports on constraints),
which would print not all the data stored in the index but only the
index opclass's input type, so this limited the amount of information
available.  This switch makes gist_page_items() much cheaper as there is
no need to run ACL checks for each item printed, which is not an issue
anyway as superuser rights are required to execute the functions of
pageinspect.  Opclasses whose data cannot be displayed can rely on
gist_page_items_bytea().

The documentation of this function was slightly incorrect for the
output results generated on HEAD and v15, so adjust it on these
branches.

Author: Alexander Lakhin, Michael Paquier
Discussion: https://postgr.es/m/17884-cb8c326522977acb@postgresql.org
Backpatch-through: 14
2023-05-19 12:38:18 +09:00
Tomas Vondra 40d465cafc Fix handling of empty ranges and NULLs in BRIN
BRIN indexes did not properly distinguish between summaries for empty
(no rows) and all-NULL ranges, treating them as essentially the same
thing. Summaries were initialized with allnulls=true, and opclasses
simply reset allnulls to false when processing the first non-NULL value.
This however produces incorrect results if the range starts with a NULL
value (or a sequence of NULL values), in which case we forget the range
contains NULL values when adding the first non-NULL value.

This happens because the allnulls flag is used for two separate
purposes - to mark empty ranges (not representing any rows yet) and
ranges containing only NULL values.

Opclasses don't know which of these cases it is, and so don't know
whether to set hasnulls=true. Setting the flag in both cases would make
it correct, but it would also make BRIN indexes useless for queries with
IS NULL clauses. All ranges start empty (and thus allnulls=true), so all
ranges would end up with either allnulls=true or hasnulls=true.

The severity of the issue is somewhat reduced by the fact that it only
happens when adding values to an existing summary with allnulls=true.
This can happen e.g. for small tables (because a summary for the first
range exists for all BRIN indexes), or for tables with a large fraction of
NULL values in the indexed columns.
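
Roughly the kind of sequence affected, as a sketch with hypothetical names
(the inserts must land in a range that already has an all-nulls summary):

    CREATE TABLE brin_tbl (a int);
    CREATE INDEX brin_tbl_idx ON brin_tbl USING brin (a);
    INSERT INTO brin_tbl VALUES (NULL);  -- summary stays allnulls=true
    INSERT INTO brin_tbl VALUES (1);     -- allnulls reset, NULLs forgotten
    SET enable_seqscan = off;
    SELECT * FROM brin_tbl WHERE a IS NULL;  -- before the fix, the NULL row
                                             -- could be missed here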

Bulk summarization (e.g. during CREATE INDEX or automatic summarization)
that processes all values at once is not affected by this issue. In this
case the flags were updated in a slightly different way, not forgetting
the NULL values.

To identify empty ranges we use a new flag, stored in an unused bit in
the BRIN tuple header so the on-disk format remains the same. A matching
flag is added to BrinMemTuple, into a 3B gap after bt_placeholder.
That means there's no risk of ABI breakage, although we don't actually
pass the BrinMemTuple to any public API.

We could also skip storing index tuples for empty summaries, but then
we'd have to always process such ranges - even if there are no rows in
large parts of the table (e.g. after a bulk DELETE), it would still
require reading the pages etc. So we store them, but ignore them when
building the bitmap.

Backpatch to 11. The issue exists since BRIN indexes were introduced in
9.5, but older releases are already EOL.

Backpatch-through: 11
Reviewed-by: Justin Pryzby, Matthias van de Meent, Alvaro Herrera
Discussion: https://postgr.es/m/402430e4-7d9d-6cf1-09ef-464d80afff3b@enterprisedb.com
2023-05-19 00:15:00 +02:00
Tomas Vondra 3f1356e5d6 Fix handling of NULLs when merging BRIN summaries
When merging BRIN summaries, union_tuples() did not correctly update the
target hasnulls/allnulls flags. When merging all-NULL summary into a
summary without any NULL values, the result had both flags set to false
(instead of having hasnulls=true).

This happened because the code only considered the hasnulls flags,
ignoring the possibility the source summary has allnulls=true.

Discovered while investigating issues with handling empty BRIN ranges
and handling of NULL values, but it's a separate problem (has nothing to
do with empty ranges).

Fixed by considering both flags on the source summary, and updating the
hasnulls flag on the target summary.

Backpatch to 11. The bug exists since 9.5 (where BRIN indexes were
introduced), but those releases are EOL already.

Discussion: https://postgr.es/m/9d993d0d-e431-2196-9ccc-0554d0e60154%40enterprisedb.com
2023-05-18 23:34:10 +02:00
Tom Lane f8d799eda2 Handle RLS dependencies in inlined set-returning functions properly.
If an SRF in the FROM clause references a table having row-level
security policies, and we inline that SRF into the calling query,
we neglected to mark the plan as potentially dependent on which
role is executing it.  This could lead to later executions in the
same session returning or hiding rows that should have been hidden
or returned instead.

Our thanks to Wolfgang Walther for reporting this problem.

Stephen Frost and Tom Lane

Security: CVE-2023-2455
2023-05-08 10:12:44 -04:00
Noah Misch 01e8182c73 Replace last PushOverrideSearchPath() call with set_config_option().
The two methods don't cooperate, so set_config_option("search_path",
...) has been ineffective under non-empty overrideStack.  This defect
enabled an attacker having database-level CREATE privilege to execute
arbitrary code as the bootstrap superuser.  While that particular attack
requires v13+ for the trusted extension attribute, other attacks are
feasible in all supported versions.

Standardize on the combination of NewGUCNestLevel() and
set_config_option("search_path", ...).  It is newer than
PushOverrideSearchPath(), more-prevalent, and has no known
disadvantages.  The "override" mechanism remains for now, for
compatibility with out-of-tree code.  Users should update such code,
which likely suffers from the same sort of vulnerability closed here.
Back-patch to v11 (all supported versions).

Alexander Lakhin.  Reported by Alexander Lakhin.

Security: CVE-2023-2454
2023-05-08 06:14:11 -07:00
Peter Eisentraut 76a3e1d7a8 Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 0ef8754efd61f40389ef749bb6ffecd6abc1555b
2023-05-08 14:33:02 +02:00
Michael Paquier ae4ffa7227 Fix typo with wait event for SLRU buffer of commit timestamps
This wait event was documented as "CommitTsBuffer" since its
introduction, but the code named it "CommitTSBuffer".  This commit fixes
the code to follow the term documented, which is also more consistent
with the naming of the other wait events used for commit timestamps.

Introduced by 5da1493.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/e8c38840-596a-83d6-bd8d-cebc51111572@gmail.com
Backpatch-through: 13
2023-05-05 21:25:56 +09:00
Tom Lane d5de344a50 In array_position()/array_positions(), beware of empty input array.
These functions incautiously fetched the array's first lower bound
even when the array is zero-dimensional, thus fetching the word
after the allocated array space.  While almost always harmless,
with very bad luck this could result in SIGSEGV.  Fix by adding
an early exit for empty input.
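
For reference, the empty-input cases now short-circuit (a minimal
illustration):

    SELECT array_position('{}'::int[], 1);    -- NULL, nothing to search
    SELECT array_positions('{}'::int[], 1);   -- {}, no matches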

Per bug #17920 from Alexander Lakhin.

Discussion: https://postgr.es/m/17920-f7c228c627b6d02e%40postgresql.org
2023-05-04 11:48:23 -04:00
Michael Paquier d29eba1987 Fix crashes with CREATE SCHEMA AUTHORIZATION and schema elements
CREATE SCHEMA AUTHORIZATION with appended schema elements can lead to
crashes when comparing the schema name of the query with the schemas
used in the qualification of some clauses in the elements' queries.

The origin of the problem is that the transformation routine for the
elements listed in a CREATE SCHEMA query uses, as the new and expected
schema name, the one listed in CreateSchemaStmt itself.  However, depending on
the query, CreateSchemaStmt.schemaname may be NULL, being computed
instead from the role specification of the query given by the
AUTHORIZATION clause, which could be either:
- A user name string, with the new schema name being set to the same
value as the role given.
- Guessed from CURRENT_ROLE, SESSION_USER or CURRENT_USER, with a new
schema name computed from the security context where CREATE SCHEMA is
running.
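
A sketch of the kind of statement involved, where the schema name comes
from the role and the element is schema-qualified (hypothetical role name):

    CREATE ROLE regress_schema_owner;
    CREATE SCHEMA AUTHORIZATION regress_schema_owner
        CREATE TABLE regress_schema_owner.tab (id int);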

Regression tests are added for CREATE SCHEMA with some appended elements
(some of them with schema qualifications), covering also some role
specification patterns.

While at it, this simplifies the context structure used during the
transformation of the elements listed in a CREATE SCHEMA query by
removing the fields for the role specification and the role type.  They
were not used, and for the role specification this could be confusing as
the schema name may be extracted from that at the beginning of
CreateSchemaCommand().

This issue has existed for a long time, so backpatch down to all supported
versions.

Reported-by: Song Hongyu
Author: Michael Paquier
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/17909-f65c12dfc5f0451d@postgresql.org
Backpatch-through: 11
2023-04-28 19:29:38 +09:00
Nathan Bossart 1370030369 Prevent underflow in KeepLogSeg().
The call to XLogGetReplicationSlotMinimumLSN() might return a
greater LSN than the one given to the function.  Subsequent segment
number calculations might then underflow, which could result in
unexpected behavior when removing or recycling WAL files.  This was
introduced with max_slot_wal_keep_size in c655077639.  To fix, skip
the block of code for replication slots if the LSN is greater.

Reported-by: Xu Xingwang
Author: Kyotaro Horiguchi
Reviewed-by: Junwang Zhao
Discussion: https://postgr.es/m/17903-4288d439dee856c6%40postgresql.org
Backpatch-through: 13
2023-04-27 14:32:40 -07:00
Michael Paquier aeb6f4b3b0 Re-add tracking of wait event SLRUFlushSync
SLRUFlushSync was accidentally removed in dee663f, which moved the flush
of the SLRU files to the checkpointer, so add it back.  The issue was
noticed by Thomas when checking for orphaned wait events.

Author: Thomas Munro
Reviewed-by: Bharath Rupireddy
Discussion: https://postgr.es/m/CA+hUKGK6tqm59KuF1z+h5Y8fsWcu5v8+84kduSHwRzwjB2aa_A@mail.gmail.com
2023-04-26 07:30:47 +09:00
Daniel Gustafsson 0e8e5e856c Fix vacuum_cost_delay check for balance calculation.
Commit 1021bd6a89 excluded autovacuum workers from cost-limit balance
calculations when per-relation options were set.  The code checks for
limit and cost_delay being greater than zero, but since cost_delay can
be set to -1 the test needs to check for greater than or equal to zero.

Backpatch to all supported branches since 1021bd6a89 was backpatched
all the way at the time.

Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAD21AoBS7o6Ljt_vfqPQPf67AhzKu3fR0iqk8B=vVYczMugKMQ@mail.gmail.com
Backpatch-through: v11 (all supported branches)
2023-04-25 13:54:10 +02:00
Michael Paquier 4cc56f8edb Fix buffer refcount leak with FDW bulk inserts
The leak would show up when using batch inserts with foreign tables
included in a partition tree, as the slots used in the batch were not
reset once processed.  In order to fix this problem, some
ExecClearTuple() calls are added to clean up the slots used once a batch is
filled and processed, matching the number of slots currently in use
as tracked by the counter ri_NumSlots.

This buffer refcount leak was introduced in b676ac4 with the
addition of the executor facility to improve bulk inserts for FDWs, so
backpatch down to 14.

Alexander has provided the patch (slightly modified by me).  The test
for postgres_fdw comes from me, based on the test case that the author
has sent in the report.

Author: Alexander Pyhalov
Discussion: https://postgr.es/m/b035780a740efd38dc30790c76927255@postgrespro.ru
Backpatch-through: 14
2023-04-25 09:42:36 +09:00
Daniel Gustafsson 34e09e71ac Remove duplicate lines of code
Commit 6df7a9698b accidentally included two identical prototypes for
default_multirange_selectivity() and commit 086cf1458c added a break;
statement where one was already present, thus duplicating it.  While
there is no bug caused by this, fix by removing the duplicated lines
as they provide no value.

Backpatch the fix for duplicate prototypes to v14 and the duplicate
break statement fix to all supported branches to avoid backpatching
hazards due to the removal.

Reported-by: Anton Voloshin <a.voloshin@postgrespro.ru>
Discussion: https://postgr.es/m/0e69cb60-0176-f6d0-7e15-6478b7d85724@postgrespro.ru
2023-04-24 11:16:17 +02:00
Alexander Korotkov 9ef5a35837 Fix custom validators call in build_local_reloptions()
We need to call them only when validate == true.

Backpatch to 13, where opclass options were introduced.

Reported-by: Tom Lane
Discussion: https://postgr.es/m/2656633.1681831542%40sss.pgh.pa.us
Reviewed-by: Tom Lane, Pavel Borisov
Backpatch-through: 13
2023-04-23 14:00:13 +03:00
Jeff Davis dde926b0f6 Avoid character classification in regex escape parsing.
For regex escape sequences, just test directly for the relevant ASCII
characters rather than using locale-sensitive character
classification.

This fixes an assertion failure when a locale considers a non-ASCII
character, such as "൧", to be a digit.

Reported-by: Richard Guo
Discussion: https://postgr.es/m/CAMbWs49Q6UoKGeT8pBkMtJGJd+16CBFZaaWUk9Du+2ERE5g_YA@mail.gmail.com
Backpatch-through: 11
2023-04-21 08:20:32 -07:00
Tom Lane 72a914e9fb Avoid trying to write an empty WAL record in log_newpage_range().
If the last few pages in the specified range are empty (all zero),
then log_newpage_range() could try to emit an empty WAL record
containing no FPIs.  This at least upsets an Assert in
ReserveXLogInsertLocation, and might perhaps have bad real-world
consequences in non-assert builds.

This has been broken since log_newpage_range() was introduced,
but the case was hard if not impossible to hit before commit 3d6a98457
decided it was okay to leave VM and FSM pages intentionally zero.
Nonetheless, it seems prudent to back-patch.  log_newpage_range()
was added in v12 but later back-patched, so this affects all
supported branches.

Matthias van de Meent, per report from Justin Pryzby

Discussion: https://postgr.es/m/ZD1daibg4RF50IOj@telsasoft.com
2023-04-17 14:22:06 -04:00
Tom Lane 9b104a27c7 Fix assignment to array of domain over composite, redux.
Commit 3e310d837 taught isAssignmentIndirectionExpr() to look through
CoerceToDomain nodes.  That's not sufficient, because since commit
04fe805a1 it's been possible for the planner to simplify
CoerceToDomain to RelabelType when the domain has no constraints
to enforce.  So we need to look through RelabelType too.

Per bug #17897 from Alexander Lakhin.  Although 3e310d837 was
back-patched to v11, it seems sufficient to apply this change
to v12 and later, since 04fe805a1 came in in v12.

Dmitry Dolgov

Discussion: https://postgr.es/m/17897-4216c546c3874044@postgresql.org
2023-04-15 12:01:39 -04:00
David Rowley ae85fb828c Fix incorrect partition pruning logic for boolean partitioned tables
The partition pruning logic assumed that "b IS NOT true" was exactly the
same as "b IS FALSE".  This is not the case when considering NULL values.
Fix this so we correctly include any partition which could hold NULL
values for the NOT case.
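
For example (hypothetical names), pruning for "b IS NOT true" must keep
both the false partition and any partition that can hold NULLs:

    CREATE TABLE boolp (b bool) PARTITION BY LIST (b);
    CREATE TABLE boolp_t PARTITION OF boolp FOR VALUES IN (true);
    CREATE TABLE boolp_f PARTITION OF boolp FOR VALUES IN (false);
    CREATE TABLE boolp_d PARTITION OF boolp DEFAULT;  -- NULLs land here
    INSERT INTO boolp VALUES (true), (false), (NULL);
    SELECT * FROM boolp WHERE b IS NOT TRUE;  -- must return the false row
                                              -- and the NULL row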

Additionally, this fixes a bug in the partition pruning code which handles
partitioned tables partitioned like ((NOT boolcol)).  This is a seemingly
unlikely schema design, and it was untested and also broken.

Here we add tests for the ((NOT boolcol)) case and insert some actual data
into those tables and verify we do get the correct rows back when running
queries.  I've also adjusted the existing boolpart tests to include some
data and verify we get the correct results too.

Both the bugs being fixed here could lead to incorrect query results with
fewer rows being returned than expected.  No additional rows could have
been returned accidentally.

In passing, remove a needless ternary expression.  It's simpler just to
pass !is_not_clause to makeBoolConst().  It makes sense to do this so the
code is consistent with the bug fix in the "else if" condition just below.

David Kimura did submit a patch to fix the first of the issues here, but
that's not what's being committed here.

Reported-by: David Kimura
Reviewed-by: Richard Guo, David Kimura
Discussion: https://postgr.es/m/CAHnPFjQ5qxs6J_p+g8=ww7GQvfn71_JE+Tygj0S7RdRci1uwPw@mail.gmail.com
Backpatch-through: 11, all supported versions
2023-04-14 16:21:42 +12:00
Tom Lane 0dd55ef9b8 Fix parallel-safety marking when moving initplans to another node.
Our policy since commit ab77a5a45 has been that a plan node having
any initplans is automatically not parallel-safe.  (This could be
relaxed, but not today.)  clean_up_removed_plan_level neglected
this, and could attach initplans to a parallel-safe child plan
node without clearing the plan's parallel-safe flag.  That could
lead to "subplan was not initialized" errors at runtime, in case
an initplan referenced another one and only the referencing one
got transmitted to parallel workers.

The fix in clean_up_removed_plan_level is trivial enough.
materialize_finished_plan also moves initplans from one node
to another, but it's okay because it already copies the source
node's parallel_safe flag.  The other place that does this kind
of thing is standard_planner's hack to inject a top-level Gather
when debug_parallel_query is active.  But that's actually dead
code given that we're correctly enforcing the "initplans aren't
parallel safe" rule, so just replace it with an Assert that
there are no initplans.

Also improve some related comments.

Normally we'd add a regression test case for this sort of bug.
The mistake itself is already reached by existing tests, but there
is accidentally no visible problem.  The only known test case that
creates an actual failure seems too indirect and fragile to justify
keeping it as a regression test (not least because it fails to fail
in v11, though the bug is clearly present there too).

Per report from Justin Pryzby.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/ZDVt6MaNWkRDO1LQ@telsasoft.com
2023-04-12 10:46:30 -04:00
Tom Lane 34ad3aedb0 Fix ts_headline() edge cases for empty query and empty search text.
tsquery's GETQUERY() macro is only safe to apply to a tsquery
that is known non-empty; otherwise it gives a pointer to garbage.
Before commit 5a617d75d, ts_headline() avoided this pitfall, but
only in a very indirect, nonobvious way.  (hlCover could not reach
its TS_execute call, because if the query contains no lexemes
then hlFirstIndex would surely return -1.)  After that commit,
it fell into the trap, resulting in weird errors such as
"unrecognized operator" and/or valgrind complaints.  In HEAD,
fix this by not calling TS_execute_locations() at all for an
empty query.  In the back branches, add a defensive check to
hlCover() --- that's not fixing any live bug, but I judge the
code a bit too fragile as-is.

Also, both mark_hl_fragments() and mark_hl_words() were careless
about the possibility of empty search text: in the cases where
no match has been found, they'd end up telling mark_fragment() to
mark from word indexes 0 to 0 inclusive, even when there is no
word 0.  This is harmless since we over-allocated the prs->words
array, but it does annoy valgrind.  Fix so that the end index is -1
and thus mark_fragment() will do nothing in such cases.

Bottom line is that this fixes a live bug in HEAD, but in the
back branches it's only getting rid of a valgrind nitpick.
Back-patch anyway.

Per report from Alexander Lakhin.

Discussion: https://postgr.es/m/c27f642d-020b-01ff-ae61-086af287c4fd@gmail.com
2023-04-06 15:52:37 -04:00
Tom Lane 0a6aaf0116 Fix another issue with ENABLE/DISABLE TRIGGER on partitioned tables.
In v13 and v14, the ENABLE/DISABLE TRIGGER USER variant malfunctioned
on cloned triggers, failing to find the clones because it thought they
were system triggers.  Other variants of ENABLE/DISABLE TRIGGER would
improperly apply a superuserness check.  Fix by adjusting the is-it-
a-system-trigger check to match reality in those branches.  (As far
as I can find, this is the only place that got it wrong.)
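
For reference, the affected variant looks like this (hypothetical
partitioned table name):

    ALTER TABLE parted_tab DISABLE TRIGGER USER;
    ALTER TABLE parted_tab ENABLE TRIGGER USER;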

There's no such bug in v15/HEAD, because we revised the catalog
representation of system triggers to be what this code was expecting.
However, add the test case to these branches anyway, because this area
is visibly pretty fragile.  Also remove an obsoleted comment.

The recent v15/HEAD commit 6949b921d fixed a nearby bug.  I now see
that my commit message for that was inaccurate: the behavior of
recursing to clone triggers is older than v15, but it didn't apply
to the case in v13/v14 because in those branches parent partitioned
tables have no pg_trigger entries for foreign-key triggers.  But add
the test case from that commit to v13/v14, just to show what is
happening there.

Per bug #17886 from DzmitryH.

Discussion: https://postgr.es/m/17886-5406d5d828aa4aa3@postgresql.org
2023-04-05 12:56:30 -04:00
Tom Lane b0b55d8b80 Reject system columns as elements of foreign keys.
Up through v11 it was sensible to use the "oid" system column as
a foreign key column, but since that was removed there's no visible
usefulness in making any of the remaining system columns a foreign
key.  Moreover, since the TupleTableSlot rewrites in v12, such cases
actively fail because of implicit assumptions that only user columns
appear in foreign keys.  The lack of complaints about that seems
like good evidence that no one is trying to do it.  Hence, rather
than trying to repair those assumptions (of which there are at least
two, maybe more), let's just forbid the case up front.

Per this patch, a system column in either the referenced or
referencing side of a foreign key will draw this error; however,
putting one in the referenced side would have failed later anyway,
since we don't allow unique indexes to be made on system columns.

Per bug #17877 from Alexander Lakhin.  Back-patch to v12; the
case still appears to work in v11, so we shouldn't break it there.

Discussion: https://postgr.es/m/17877-4bcc658e33df6de1@postgresql.org
2023-03-31 11:18:49 -04:00
Tom Lane 2dcd92e97d Ensure acquire_inherited_sample_rows sets its output parameters.
The totalrows/totaldeadrows outputs were left uninitialized in cases
where we find no analyzable child tables of a partitioned table.  This
could lead to setting the partitioned table's pg_class.reltuples value
to garbage.  It's not clear that that would have any very bad effects
in practice, but fix it anyway because it's making valgrind unhappy.

Reported and diagnosed by Alexander Lakhin (bug #17880).

Discussion: https://postgr.es/m/17880-9282037c923d856e@postgresql.org
2023-03-31 10:08:40 -04:00
David Rowley 2110162206 Fix List memory issue in transformColumnDefinition
When calling generateSerialExtraStmts(), we would pass in the
constraint->options.  In some cases, generateSerialExtraStmts() would
modify the referenced List to remove elements from it, but doing so is
invalid without assigning the list back to all variables that point to it.
In the particular reported problem case, the List became empty, in which
case it became NIL, but the passed-in constraint->options was not updated
and was left pointing to freed memory.

To fix this, just perform a list_copy() inside generateSerialExtraStmts().
We could instead do the list_copy() just before we perform the delete from
the list; however, that seems less robust.  Let's make sure the generated
CreateSeqStmt gets a completely different copy of the list to be safe.

Bug: #17879
Reported-by: Fei Changhong
Diagnosed-by: Fei Changhong
Discussion: https://postgr.es/m/17879-b7dfb5debee58ff5@postgresql.org
Backpatch-through: 11, all supported versions
2023-03-31 12:14:04 +13:00
Tom Lane a1904c9ce5 Fix dereference of dangling pointer in GiST index buffering build.
gistBuildCallback tried to fetch the size of an index tuple that
might have already been freed by gistProcessEmptyingQueue.
While this seems to usually be harmless in production builds,
in principle it could result in a SIGSEGV, or more likely a bogus
value for indtuplesSize leading to poor page-split decisions later
in the build.

The memory management here is confusing and could stand to be
refactored, but for the moment it seems to be enough to fetch
the tuple size sooner.  AFAICT the indtuples[Size] totals aren't
used in between these places; even if they were, the updated
values shouldn't be any worse to use.  So just move the
incrementing of the totals up.

It's not very clear why our valgrind-using buildfarm animals
haven't noticed this problem, because the relevant code path
does seem to be exercised according to the code coverage report.
I think the reason that we didn't fix this bug after the first
report is that I'd wanted to try to understand that better.
However, now that it's been re-discovered let's just be pragmatic
and fix it already.

Original report by Alexander Lakhin (bug #16329),
later rediscovered by Egor Chindyaskin (bug #17874).

Patch by Alexander Lakhin (commentary by Pavel Borisov and me).
Back-patch to all supported branches.

Discussion: https://postgr.es/m/16329-7a6aa9b6fa1118a1@postgresql.org
Discussion: https://postgr.es/m/17874-63ca6c7ce42d2103@postgresql.org
2023-03-29 11:31:30 -04:00
Tom Lane 334cc4c966 Reject attempts to alter composite types used in indexes.
find_composite_type_dependencies() ignored indexes, which is a poor
decision because an expression index could have a stored column of
a composite (or other container) type even when the underlying table
does not.  Teach it to detect such cases and error out.  We have to
work a bit harder than for other relations because the pg_depend entry
won't identify the specific index column of concern, but it's not much
new code.

This does not address bug #17872's original complaint that dropping
a column in such a type might lead to violations of the uniqueness
property that a unique index is supposed to ensure.  That seems of
much less concern to me because it won't lead to crashes.

Per bug #17872 from Alexander Lakhin.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/17872-d0fbb799dc3fd85d@postgresql.org
2023-03-27 15:04:02 -04:00
Tom Lane 11213d4466 Fix oversights in array manipulation.
The nested-arrays code path in ExecEvalArrayExpr() used palloc to
allocate the result array, whereas every other array-creating function
has used palloc0 since 18c0b4ecc.  This mostly works, but unused bits
past the end of the nulls bitmap may end up undefined.  That causes
valgrind complaints with -DWRITE_READ_PARSE_PLAN_TREES, and could
cause planner misbehavior as cited in 18c0b4ecc.  There seems no very
good reason why we should strive to avoid palloc0 in just this one case,
so fix it the easy way with s/palloc/palloc0/.
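
The nested-arrays path is reached by expressions such as these (a minimal
illustration):

    SELECT ARRAY[ARRAY[1, 2], ARRAY[3, 4]];     -- builds a 2-D array
    SELECT ARRAY[ARRAY[1, NULL], ARRAY[3, 4]];  -- includes a nulls bitmap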

While looking at that I noted that we also failed to check for overflow
of "nbytes" and "nitems" while summing the sizes of the sub-arrays,
potentially allowing a crash due to undersized output allocation.
For "nbytes", follow the policy used by other array-munging code of
checking for overflow after each addition.  (As elsewhere, the last
addition of the array's overhead space doesn't need an extra check,
since palloc itself will catch a value between 1Gb and 2Gb.)
For "nitems", there's no very good reason to sum the inputs at all,
since we can perfectly well use ArrayGetNItems' result instead of
ignoring it.

Per discussion of this bug, also remove redundant zeroing of the
nulls bitmap in array_set_element and array_set_slice.

Patch by Alexander Lakhin and myself, per bug #17858 from Alexander
Lakhin; thanks also to Richard Guo.  These bugs are a dozen years old,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/17858-8fd287fd3663d051@postgresql.org
2023-03-26 13:41:06 -04:00
Amit Kapila 9dac02c772 Ignore generated columns during apply of update/delete.
We fail to apply updates and deletes when REPLICA IDENTITY FULL is
used for a table having generated columns. Previously we did not ignore
generated columns while comparing the tuples from the publisher and
subscriber during the apply of updates and deletes.
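
A sketch of the kind of table involved (hypothetical name); with REPLICA
IDENTITY FULL, tuple matching on the subscriber must skip the generated
column:

    CREATE TABLE tab (a int, b int GENERATED ALWAYS AS (a * 2) STORED);
    ALTER TABLE tab REPLICA IDENTITY FULL;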

Author: Onder Kalaci
Reviewed-by: Shi yu, Amit Kapila
Backpatch-through: 12
Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com
2023-03-23 11:32:22 +05:30
Amit Kapila 65ead76961 Ignore dropped columns during apply of update/delete.
We fail to apply updates and deletes when REPLICA IDENTITY FULL is
used for a table having dropped columns. Previously we did not ignore
dropped columns while comparing the tuples from the publisher and
subscriber during the apply of updates and deletes.

Author: Onder Kalaci, Shi yu
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CACawEhVQC9WoofunvXg12aXtbqKnEgWxoRx3+v8q32AWYsdpGg@mail.gmail.com
2023-03-21 09:18:51 +05:30
Thomas Munro 1b9e42e82a Fix race in parallel hash join batch cleanup, take II.
With unlucky timing and parallel_leader_participation=off (not the
default), PHJ could attempt to access per-batch shared state just as it
was being freed.  There was code intended to prevent that by checking
for a cleared pointer, but it was racy.  Fix, by introducing an extra
barrier phase.  The new phase PHJ_BUILD_RUNNING means that it's safe to
access the per-batch state to find a batch to help with, and
PHJ_BUILD_DONE means that it is too late.  The last to detach will free
the array of per-batch state as before, but now it will also atomically
advance the phase, so that late attachers can avoid the hazard.  This
mirrors the way per-batch hash tables are freed (see phases
PHJ_BATCH_PROBING and PHJ_BATCH_DONE).

An earlier attempt to fix this (commit 3b8981b6, later reverted) missed
one special case.  When the inner side is empty (the "empty inner
optimization), the build barrier would only make it to
PHJ_BUILD_HASHING_INNER phase before workers attempted to detach from
the hashtable.  In that case, fast-forward the build barrier to
PHJ_BUILD_RUNNING before proceeding, so that our later assertions hold
and we can still negotiate who is cleaning up.

Revealed by build farm failures, where BarrierAttach() failed a sanity
check assertion, because the memory had been clobbered by dsa_free().
In non-assert builds, the result could be a segmentation fault.

Back-patch to all supported releases.

Author: Thomas Munro <thomas.munro@gmail.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Reported-by: David Geier <geidav.pg@gmail.com>
Tested-by: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/20200929061142.GA29096%40paquier.xyz
2023-03-21 14:37:33 +13:00
Tomas Vondra 6a78a42fea Fix netmask handling in inet_minmax_multi_ops
When calculating distance in brin_minmax_multi_distance_inet(), the
netmask was applied incorrectly. This results in (seemingly) incorrect
ordering of values, triggering an assert.

For builds without asserts this is mostly harmless - we may merge other
ranges, possibly resulting in a slightly less efficient index. But it's
still correct and the greedy algorithm doesn't guarantee optimality
anyway.
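
The affected opclass is the one used in indexes like this (hypothetical
names):

    CREATE TABLE hosts (addr inet);
    CREATE INDEX hosts_brin_idx ON hosts
        USING brin (addr inet_minmax_multi_ops);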

Backpatch to 14, where minmax-multi indexes were introduced.

Reported by Dmitry Dolgov, investigation and fix by me.

Reported-by: Dmitry Dolgov
Backpatch-through: 14
Discussion: https://postgr.es/m/17774-c6f3e36dd4471e67@postgresql.org
2023-03-20 10:16:54 +01:00
David Rowley f654f343c6 Fix memory leak in Memoize cache key evaluation
When probing the Memoize cache to check if the current cache key values
exist in the cache, we perform an evaluation of the expressions making up
the cache key before probing the hash table for those values.  This
operation could leak memory as it is possible that the cache key is an
expression which requires allocation of memory, as was the case in bug
17844.

Here we fix this by correctly switching to the per tuple context before
evaluating the cache expressions so that the memory is freed next time the
per tuple context is reset.

Bug: 17844
Reported-by: Alexey Ermakov
Discussion: https://postgr.es/m/17844-d2f6f9e75a622bed@postgresql.org
Backpatch-through: 14, where Memoize was introduced
2023-03-20 13:30:55 +13:00
Thomas Munro 1c0d4affa2 Small tidyup for commit d41a178b, part II.
Further to commit 6a9229da, checking for NULL is now redundant.  An "out
of memory" error would have been thrown already by palloc() and treated
as FATAL, so we can delete a few more lines.

Back-patch to all releases, like those other commits.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/4040668.1679013388%40sss.pgh.pa.us
2023-03-17 14:46:50 +13:00
Andres Freund a7a92738ff Work around spurious compiler warning in inet operators
gcc 12+ has complaints like the following:

../../../../../pgsql/src/backend/utils/adt/network.c: In function 'inetnot':
../../../../../pgsql/src/backend/utils/adt/network.c:1893:34: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
 1893 |                         pdst[nb] = ~pip[nb];
      |                         ~~~~~~~~~^~~~~~~~~~
../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16
   27 |         unsigned char ipaddr[16];       /* up to 128 bits of address */
      |                       ^~~~~~
../../../../../pgsql/src/include/utils/inet.h:27:23: note: at offset -1 into destination object 'ipaddr' of size 16

This is due to a compiler bug:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104986

It has been a year since the bug was reported, and it has not been fixed. As
the warnings are verbose and use of gcc 12 is becoming more common, it seems
worth working around the bug. Particularly because a simple reformulation of
the loop condition fixes the issue and isn't any less readable.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/144536.1648326206@sss.pgh.pa.us
Backpatch: 11-
2023-03-16 14:48:46 -07:00
Thomas Munro 00fc4b3a31 Small tidyup for commit d41a178b.
A comment was left behind claiming that we needed to use malloc() rather
than palloc() because the corresponding free would run in another
thread, but that's not true anymore.  Remove that comment.  And, with
the reason being gone, we might as well actually use palloc().

Back-patch to supported releases, like d41a178b.

Discussion: https://postgr.es/m/CA%2BhUKG%2BpdM9v3Jv4tc2BFx2jh_daY3uzUyAGBhtDkotEQDNPYw%40mail.gmail.com
2023-03-17 09:52:45 +13:00
Thomas Munro 2bef57ee8b Fix fractional vacuum_cost_delay.
Commit 4753ef37 changed vacuum_delay_point() to use the WaitLatch() API,
to fix the problem that vacuum could keep running for a very long time
after the postmaster died.

Unfortunately, that broke commit caf626b2's support for fractional
vacuum_cost_delay, which shipped in PostgreSQL 12.  WaitLatch() works in
whole milliseconds.
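
For reference, the kind of setting this is about (the value is in
milliseconds):

    SET vacuum_cost_delay = 0.5;  -- fractional delay, supported since v12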

For now, revert the change from commit 4753ef37, but add an explicit
check for postmaster death.  That's an extra system call on systems
other than Linux and FreeBSD, but that overhead doesn't matter much
considering that we willingly went to sleep and woke up again.  (In
later work, we might add higher resolution timeouts to the latch API so
that we could do this with our standard programming pattern, but that
wouldn't be back-patched.)

Back-patch to 14, where commit 4753ef37 arrived.

Reported-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CAAKRu_b-q0hXCBUCAATh0Z4Zi6UkiC0k2DFgoD3nC-r3SkR3tg%40mail.gmail.com
2023-03-15 14:05:27 +13:00
Thomas Munro 9b6e0b9c37 Fix waitpid() emulation on Windows.
Our waitpid() emulation didn't prevent a PID from being recycled by the
OS before the call to waitpid().  The postmaster could finish up
tracking more than one child process with the same PID, and confuse
them.

Fix, by moving the guts of pgwin32_deadchild_callback() into waitpid(),
so that resources are released synchronously.  The process and PID
continue to exist until we close the process handle, which only happens
once we're ready to adjust our book-keeping of running children.

This seems to explain a couple of failures on CI.  It had never been
reported before, despite the code being as old as the Windows port.
Perhaps Windows started recycling PIDs more rapidly, or perhaps timing
changes due to commit 7389aad6 made it more likely to break.

Thanks to Alexander Lakhin for analysis and Andres Freund for tracking
down the root cause.

Back-patch to all supported branches.

Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20230208012852.bvkn2am4h4iqjogq%40awork3.anarazel.de
2023-03-15 13:30:58 +13:00
Tom Lane 7cac191057 Fix corner case bug in numeric to_char() some more.
The band-aid applied in commit f0bedf3e4 turns out to still need
some work: it made sure we didn't set Np->last_relevant too small
(to the left of the decimal point), but it didn't prevent setting
it too large (off the end of the partially-converted string).
This could result in fetching data beyond the end of the allocated
space, which with very bad luck could cause a SIGSEGV, though
I don't see any hazard of interesting memory disclosure.

Per bug #17839 from Thiago Nunes.  The bug's pretty ancient,
so back-patch to all supported versions.

Discussion: https://postgr.es/m/17839-aada50db24d7b0da@postgresql.org
2023-03-14 19:17:31 -04:00
Tom Lane 7c509f7e5a Remove unnecessary code in dependency_is_compatible_expression().
Scanning the expression for compatible Vars isn't really necessary,
because the subsequent match against StatisticExtInfo entries will
eliminate expressions containing other Vars just fine.  Moreover,
this code hadn't stopped to think about what to do with
PlaceHolderVars or Aggrefs in the clause; and at least for the PHV
case, that demonstrably leads to failures.  Rather than work out
whether it's reasonable to ignore those, let's just remove the
whole stanza.

Per report from Richard Guo.  Back-patch to v14 where this code
was added.

Discussion: https://postgr.es/m/CAMbWs48Mmvm-acGevXuwpB=g5JMqVSL6i9z5UaJyLGJqa-XPAA@mail.gmail.com
2023-03-14 11:10:45 -04:00
Tom Lane 0ee9d685dd Fix JSON error reporting for many cases of erroneous string values.
The majority of error exit cases in json_lex_string() failed to
set lex->token_terminator, causing problems for the error context
reporting code: it would see token_terminator less than token_start
and do something more or less nuts.  In v14 and up the end result
could be as bad as a crash in report_json_context().  Older
versions accidentally avoided that fate; but all versions produce
error context lines that are far less useful than intended,
because they'd stop at the end of the prior token instead of
continuing to where the actually-bad input is.

To fix, invent some macros that make it less notationally painful
to do the right thing.  Also add documentation about what the
function is actually required to do; and in >= v14, add an assertion
in report_json_context about token_terminator being sufficiently
far advanced.

Per report from Nikolay Shaplov.  Back-patch to all supported
versions.

Discussion: https://postgr.es/m/7332649.x5DLKWyVIX@thinkpad-pgpro
2023-03-13 15:19:00 -04:00
Tom Lane 096e708056 Fix failure to detect some cases of improperly-nested aggregates.
check_agg_arguments_walker() supposed that it needn't descend into
the arguments of a lower-level aggregate function, but this is
just wrong in the presence of multiple levels of sub-select.  The
oversight would lead to executor failures on queries that should
be rejected.  (Prior to v11, they actually were rejected, thanks
to a "redundant" execution-time check.)

Per bug #17835 from Anban Company.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/17835-4f29f3098b2d0ba4@postgresql.org
2023-03-13 12:40:28 -04:00
Tom Lane 53a53ea332 Ensure COPY TO on an RLS-enabled table copies no more than it should.
The COPY documentation is quite clear that "COPY relation TO" copies
rows from only the named table, not any inheritance children it may
have.  However, if you enabled row-level security on the table then
this stopped being true, because the code forgot to apply the ONLY
modifier in the "SELECT ... FROM relation" query that it constructs
in order to allow RLS predicates to be attached.  Fix that.
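
A sketch of the behavior being fixed (hypothetical names; FORCE ROW LEVEL
SECURITY makes the policy apply to the table owner as well):

    CREATE TABLE parent (a int);
    CREATE TABLE child () INHERITS (parent);
    INSERT INTO parent VALUES (1);
    INSERT INTO child VALUES (2);
    ALTER TABLE parent ENABLE ROW LEVEL SECURITY;
    ALTER TABLE parent FORCE ROW LEVEL SECURITY;
    CREATE POLICY see_all ON parent USING (true);
    COPY parent TO STDOUT;  -- must output only the row stored in parent itself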

Report and patch by Antonin Houska (comment adjustments and test case
by me).  Back-patch to all supported branches.

Discussion: https://postgr.es/m/3472.1675251957@antos
2023-03-10 13:52:28 -05:00
Thomas Munro d811d74be3 Fix race in SERIALIZABLE READ ONLY.
Commit bdaabb9b started skipping doomed transactions when building the
list of possible conflicts for SERIALIZABLE READ ONLY.  That makes
sense, because doomed transactions won't commit, but a couple of subtle
things broke:

1.  If all uncommitted r/w transactions are doomed, a READ ONLY
transaction would arbitrarily not benefit from the safe snapshot
optimization.  It would not be taken immediately, and yet no other
transaction would set SXACT_FLAG_RO_SAFE later.

2.  In the same circumstances but with DEFERRABLE, GetSafeSnapshot()
would correctly exit its wait loop without sleeping and then take the
optimization in non-assert builds, but assert builds would fail a sanity
check that SXACT_FLAG_RO_SAFE had been set by another transaction.
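
The DEFERRABLE path above is reached with, for example:

    BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
    SELECT count(*) FROM pg_class;  -- waits for a safe snapshot first
    COMMIT;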

This is similar to the case for PredXact->WritableSxactCount == 0.  We
should opt out immediately if our possibleUnsafeConflicts list is empty
after filtering.

The code to maintain the serializable global xmin is moved down below
the new opt out site, because otherwise we'd have to reverse its effects
before returning.

Back-patch to all supported releases.  Bug #17368.

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/17116-d6ca217acc180e30%40postgresql.org
Discussion: https://postgr.es/m/20110707212159.GF76634%40csail.mit.edu
2023-03-09 16:57:11 +13:00