Commit Graph

36297 Commits

Author SHA1 Message Date
Alvaro Herrera
9a33ed8fa1
psql \dP: reference regclass with "pg_catalog." prefix
Strictly speaking this isn't a bug, but since all references to catalog
objects are schema-qualified, we might as well be consistent.  The
omission first appeared in commit 1c5d9270e3, so backpatch to 12.

Author: Justin Pryzby <pryzbyj@telsasoft.com>
Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
2021-08-28 11:45:47 -04:00
Noah Misch
b18669f5e6 Fix data loss in wal_level=minimal crash recovery of CREATE TABLESPACE.
If the system crashed between CREATE TABLESPACE and the next checkpoint,
the result could be some files in the tablespace unexpectedly containing
no rows.  Affected files would be those for which the system did not
write WAL; see the wal_skip_threshold documentation.  Before v13, a
different set of conditions governed the writing of WAL; see v12's
<sect2 id="populate-pitr">.  (The v12 conditions were broader in some
ways and narrower in others.)  Users may want to audit non-default
tablespaces for unexpected short files.  The bug could have truncated an
index without affecting the associated table, and reindexing the index
would fix that particular problem.

This fixes the bug by making create_tablespace_directories() more like
TablespaceCreateDbspace().  create_tablespace_directories() was
recursively removing tablespace contents, reasoning that WAL redo would
recreate everything removed that way.  That assumption holds for other
wal_level values.  Under wal_level=minimal, the old approach could
delete files for which no other copy existed.  Back-patch to 9.6 (all
supported versions).

Reviewed by Robert Haas and Prabhat Sahu.  Reported by Robert Haas.

Discussion: https://postgr.es/m/CA+TgmoaLO9ncuwvr2nN-J4VEP5XyAcy=zKiHxQzBbFRxxGxm0w@mail.gmail.com
2021-08-27 23:33:27 -07:00
Tom Lane
dbb239d518 Count SP-GiST index scans in pg_stat statistics.
Somehow, spgist overlooked the need to call pgstat_count_index_scan().
Hence, pg_stat_all_indexes.idx_scan and equivalent columns never
became nonzero for an SP-GiST index, although the related per-tuple
counters worked fine.

This fix works a bit differently from other index AMs, in that the
counter increment occurs in spgrescan not spggettuple/spggetbitmap.
It looks like this won't make the user-visible semantics noticeably
different, so I won't go to the trouble of introducing an is-this-
the-first-call flag just to make the counter bumps happen in the
same places.

Per bug #17163 from Christian Quest.  Back-patch to all supported
versions.

Discussion: https://postgr.es/m/17163-b8c5cc88322a5e92@postgresql.org
2021-08-27 19:42:42 -04:00
Robert Haas
bc062cb938 Fix broken snapshot handling in parallel workers.
Pengchengliu reported an assertion failure in a parallel worker while
performing a parallel scan using an overflowed snapshot. The proximate
cause is that TransactionXmin was set to an incorrect value.  The
underlying cause is incorrect snapshot handling in parallel.c.

In particular, InitializeParallelDSM() was unconditionally calling
GetTransactionSnapshot(), because I (rhaas) mistakenly thought that
was always retrieving an existing snapshot whereas, at isolation
levels less than REPEATABLE READ, it's actually taking a new one. So
instead do this only at higher isolation levels where there actually
is a single snapshot for the whole transaction.

By itself, this is not a sufficient fix, because we still need to
guarantee that TransactionXmin gets set properly in the workers. The
easiest way to do that seems to be to install the leader's active
snapshot as the transaction snapshot if the leader did not serialize a
transaction snapshot. This doesn't affect the results of future
GetTransactionSnapshot() calls since those have to take a new snapshot
anyway; what we care about is the side effect of setting TransactionXmin.

Report by Pengchengliu. Patch by Greg Nancarrow, except for some comment
text which I supplied.

Discussion: https://postgr.es/m/002f01d748ac$eaa781a0$bff684e0$@tju.edu.cn
2021-08-25 08:40:52 -04:00
Amit Kapila
794025eff0 Fix toast rewrites in logical decoding.
Commit 325f2ec555 introduced pg_class.relwrite to skip operations on
tables created as part of a heap rewrite during DDL. It links such
transient heaps to the original relation OID via this new field in
pg_class but forgot to do anything about toast tables. So, logical
decoding was not able to skip operations on internally created toast
tables. This led to an error when decoding the WAL for the next
operation, which appeared to contain toast data when in fact it had
none.

To fix this, set pg_class.relwrite for internally created toast tables
as well, so that logical decoding can skip operations on them.

Author: Bertrand Drouvot
Reviewed-by: David Zhang, Amit Kapila
Backpatch-through: 11, where it was introduced
Discussion: https://postgr.es/m/b5146fb1-ad9e-7d6e-f980-98ed68744a7c@amazon.com
2021-08-25 09:23:27 +05:30
Fujii Masao
7d9026cbfd Avoid using ambiguous word "positive" in error message.
There are two identical error messages about the valid value of the
modulus for a hash partition in the PostgreSQL source code.  Commit
0e1275fb07 improved only one of them so that the ambiguous word
"positive" was avoided there, and forgot to improve the other.  This
commit improves the other one as well, which also reduces the burden
on translators.

Back-patch to v11 where the error message exists.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
2021-08-25 11:47:46 +09:00
Fujii Masao
81fa1bce27 Improve error message about valid value for distance in phrase operator.
The distance in the phrase operator must be an integer value between
zero and MAXENTRYPOS inclusive.  But previously the error message about
its valid value included the information about its upper limit
but not the lower limit (i.e., zero).  This commit improves the error
message so that it also includes the information about its lower limit.

Back-patch to v9.6 where full-text phrase search was supported.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
2021-08-25 11:45:15 +09:00
Tom Lane
071146184a Fix regexp misbehavior with capturing parens inside "{0}".
Regexps like "(.){0}...\1" drew an "invalid backreference number".
That's not unreasonable on its face, since the capture group will
never be matched if it's iterated zero times.  However, other engines
such as Perl's don't complain about this, nor do we throw an error for
related cases such as "(.)|\1", even though that backref can never
succeed either.  Also, if the zero-iterations case happens at runtime
rather than compile time --- say, "(x)*...\1" when there's no "x" to
be found --- that's not an error, we just deem the backref to not
match.  Making this even less defensible, no error was thrown for
nested cases such as "((.)){0}...\2"; and to add insult to injury,
those cases could result in assertion failures instead.  (It seems
that nothing especially bad happened in non-assert builds, though.)

Let's just fix it so that no error is thrown and instead the backref
is deemed to never match, so that compile-time detection of no
iterations behaves the same as run-time detection.

Per report from Mark Dilger.  This appears to be an aboriginal error
in Spencer's library, so back-patch to all supported versions.

Pre-v14, it turns out to also be necessary to back-patch one aspect of
commits cb76fbd7e/00116dee5, namely to create capture-node subREs with
the begin/end states of their subexpressions, not the current lp/rp
of the outer parseqatom invocation.  Otherwise delsub complains that
we're trying to disconnect a state from itself.  This is a bit scary
but code examination shows that it's safe: in the pre-v14 code, if we
want to wrap iteration around the subexpression, the first thing we do
is overwrite the atom's begin/end fields with new states.  So the
bogus values didn't survive long enough to be used for anything, except
if no iteration is required, in which case it doesn't matter.

Discussion: https://postgr.es/m/A099E4A8-4377-4C64-A98C-3DEDDC075502@enterprisedb.com
2021-08-24 16:37:27 -04:00
Tom Lane
9a327179c8 Prevent regexp back-refs from sometimes matching when they shouldn't.
The recursion in cdissect() was careless about clearing match data
for capturing parentheses after rejecting a partial match.  This
could allow a later back-reference to succeed when by rights it
should fail for lack of a defined referent.

To fix, think a little more rigorously about what the contract
between different levels of cdissect's recursion needs to be.
With the right spec, we can fix this using fewer rather than more
resets of the match data; the key decision being that a failed
sub-match is now explicitly responsible for clearing any matches
it may have set.

There are enough other cross-checks and optimizations in the code
that it's not especially easy to exhibit this problem; usually, the
match will fail as-expected.  Plus, regexps that are even potentially
vulnerable are most likely user errors, since there's just not much
point in writing a back-ref that doesn't always have a referent.
These facts perhaps explain why the issue hasn't been detected,
even though it's almost certainly a couple of decades old.

Discussion: https://postgr.es/m/151435.1629733387@sss.pgh.pa.us
2021-08-23 17:41:07 -04:00
Alvaro Herrera
ad1231171f
Avoid creating archive status ".ready" files too early
WAL records may span multiple segments, but XLogWrite() does not
wait for the entire record to be written out to disk before
creating archive status files.  Instead, as soon as the last WAL page of
the segment is written, the archive status file is created, and the
archiver may process it.  If PostgreSQL crashes before it is able to
write and flush the rest of the record (in the next WAL segment), the
wrong version of the first segment file lingers in the archive, which
causes operations such as point-in-time restores to fail.

To fix this, keep track of records that span across segments and ensure
that segments are only marked ready-for-archival once such records have
been completely written to disk.

This has always been wrong, so backpatch all the way back.

Author: Nathan Bossart <bossartn@amazon.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505@amazon.com
2021-08-23 15:50:35 -04:00
Michael Paquier
29f9423251 Fix backup manifests to generate correct WAL-Ranges across timelines
In a backup manifest, WAL-Ranges stores the range of WAL that is
required for the backup to be valid.  pg_verifybackup would then
internally use pg_waldump for the checks based on this data.

When the backup started on a timeline higher than 1, with a history
file consulted for the manifest data generation, the calculation of the
WAL range for the first timeline to check was incorrect.  The previous
logic used the start position of the first timeline as the start LSN,
but it needs to use the start LSN of the backup.  This would
cause failures with pg_verifybackup, or any tools making use of the
backup manifests.

This commit adds a test based on a self-promoted node, making it rather
cheap.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210818.143031.1867083699202617521.horikyota.ntt@gmail.com
Backpatch-through: 13
2021-08-23 11:09:57 +09:00
Tom Lane
b30f7f399e Fix performance bug in regexp's citerdissect/creviterdissect.
After detecting a sub-match "dissect" failure (i.e., a backref match
failure) in the i'th sub-match of an iteration node, we should proceed
by adjusting the attempted length of the i'th submatch.  As coded,
though, these functions changed the attempted length of the *last*
sub-match, and only after exhausting all possibilities for that would
they back up to adjust the next-to-last sub-match, and then the
second-from-last, etc; all of which is wasted effort, since only
changing the start or length of the i'th sub-match can possibly make
it succeed.  This oversight creates the possibility for exponentially
bad performance.  Fortunately the problem is masked in most cases by
optimizations or constraints applied elsewhere; which explains why
we'd not noticed it before.  But it is possible to reach the problem
with fairly simple, if contrived, regexps.

Oversight in my commit 173e29aa5.  That's pretty ancient now,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/1808998.1629412269@sss.pgh.pa.us
2021-08-20 14:19:04 -04:00
Tom Lane
7fa367d96b Avoid trying to lock OLD/NEW in a rule with FOR UPDATE.
transformLockingClause neglected to exclude the pseudo-RTEs for
OLD/NEW when processing a rule's query.  This led to odd errors
or even crashes later on.  This bug is very ancient, but it's
not terribly surprising that nobody noticed, since the use-case
for SELECT FOR UPDATE in a non-view rule is somewhere between
thin and non-existent.  Still, crashing is not OK.

Per bug #17151 from Zhiyong Wu.  Thanks to Masahiko Sawada
for analysis of the problem.

Discussion: https://postgr.es/m/17151-c03a3e6e4ec9aadb@postgresql.org
2021-08-19 12:12:35 -04:00
Tom Lane
ecd4dd9f1d Fix check_agg_arguments' examination of aggregate FILTER clauses.
Recursion into the FILTER clause was mis-implemented, such that a
relevant Var or Aggref at the very top of the FILTER clause would
be ignored.  (Of course, that'd have to be a plain boolean Var or
boolean-returning aggregate.)  The consequence would be
mis-identification of the correct semantic level of the aggregate,
which could lead to not-per-spec query behavior.  If the FILTER
expression is an aggregate, this could also lead to failure to issue
an expected "aggregate function calls cannot be nested" error, which
would likely result in a core dump later on, since the planner and
executor aren't expecting such cases to appear.

The root cause is that commit b560ec1b0 blindly copied some code
that assumed it's recursing into a List, and thus didn't examine the
top-level node.  To forestall questions about why this call doesn't
look like the others, as well as possible future copy-and-paste
mistakes, let's change all three check_agg_arguments_walker calls in
check_agg_arguments, even though only the one for the filter clause
is really broken.
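
As a generic illustration of the copy-and-paste trap described above
(plain C with hypothetical names, not parse-tree code): calling a
children-only walker on the top-level node silently skips that node,
whereas invoking the per-node walker on the node itself examines it and
then recurses.

    #include <stdio.h>

    typedef struct Node
    {
        int         value;
        struct Node *left;
        struct Node *right;
    } Node;

    static int examine_node(const Node *node);

    /* visits only the children of "node", never "node" itself */
    static int
    walk_children(const Node *node)
    {
        int         total = 0;

        if (node->left)
            total += examine_node(node->left);
        if (node->right)
            total += examine_node(node->right);
        return total;
    }

    /* examines "node" itself, then recurses into its children */
    static int
    examine_node(const Node *node)
    {
        return node->value + walk_children(node);
    }

    int
    main(void)
    {
        Node        leaf = {3, NULL, NULL};
        Node        top = {5, &leaf, NULL};

        /* the buggy pattern misses top.value; the fixed one includes it */
        printf("children-only walk: %d, per-node walk: %d\n",
               walk_children(&top), examine_node(&top));
        return 0;
    }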

Per bug #17152 from Zhiyong Wu.  This has been wrong since we
implemented FILTER, so back-patch to all supported versions.
(Testing suggests that pre-v11 branches manage to avoid crashing
in the bad-Aggref case, thanks to "redundant" checks in ExecInitAgg.
But I'm not sure how thorough that protection is, and anyway the
wrong-behavior issue remains, so fix 9.6 and 10 too.)

Discussion: https://postgr.es/m/17152-c7f906cc1a88e61b@postgresql.org
2021-08-18 18:12:51 -04:00
Tom Lane
7b01246e1d Prevent ALTER TYPE/DOMAIN/OPERATOR from changing extension membership.
If recordDependencyOnCurrentExtension is invoked on a pre-existing,
free-standing object during an extension update script, that object
will become owned by the extension.  In our current code this is
possible in three cases:

* Replacing a "shell" type or operator.
* CREATE OR REPLACE overwriting an existing object.
* ALTER TYPE SET, ALTER DOMAIN SET, and ALTER OPERATOR SET.

The first of these cases is intentional behavior, as noted by the
existing comments for GenerateTypeDependencies.  It seems like
appropriate behavior for CREATE OR REPLACE too; at least, the obvious
alternatives are not better.  However, the fact that it happens during
ALTER is an artifact of trying to share code (GenerateTypeDependencies
and makeOperatorDependencies) between the CREATE and ALTER cases.
Since an extension script would be unlikely to ALTER an object that
didn't already belong to the extension, this behavior is not very
troubling for the direct target object ... but ALTER TYPE SET will
recurse to dependent domains, and it is very uncool for those to
become owned by the extension if they were not already.

Let's fix this by redefining the ALTER cases to never change extension
membership, full stop.  We could minimize the behavioral change by
only changing the behavior when ALTER TYPE SET is recursing to a
domain, but that would complicate the code and it does not seem like
a better definition.

Per bug #17144 from Alex Kozhemyakin.  Back-patch to v13 where ALTER
TYPE SET was added.  (The other cases are older, but since they only
affect the directly-named object, there's not enough of a problem to
justify changing the behavior further back.)

Discussion: https://postgr.es/m/17144-e67d7a8f049de9af@postgresql.org
2021-08-17 14:29:22 -04:00
Daniel Gustafsson
e15f32f0ed Set type identifier on BIO
In OpenSSL there are two types of BIOs (I/O abstractions):
source/sink and filters.  A source/sink BIO is a source and/or
sink of data, i.e. one acting on a socket or a file.  A filter
BIO takes a stream of input from another BIO and transforms it.
In order for BIO_find_type() to be able to traverse the chain
of BIOs and correctly find all BIOs of a certain type, they
must have the type bit set accordingly: source/sink BIOs
(what PostgreSQL implements) use BIO_TYPE_SOURCE_SINK and
filter BIOs use BIO_TYPE_FILTER.  In addition to these, file
descriptor based BIOs should have the descriptor bit set,
BIO_TYPE_DESCRIPTOR.

The PostgreSQL implementation didn't set the type bits, which
went unnoticed for a long time as it's only really relevant
for code auditing the OpenSSL installation, or doing similar
tasks. It is required by the API though, so this fixes it.
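
As a hedged sketch of the idea (not PostgreSQL's be-secure-openssl.c
code), registering a custom source/sink BIO method with the type bits
described above only needs the OpenSSL 1.1+ BIO_meth_* API:

    #include <stdio.h>
    #include <openssl/bio.h>

    int
    main(void)
    {
        /* allocate a fresh method index and OR in the relevant type bits */
        int         my_bio_index = BIO_get_new_index();
        BIO_METHOD *meth;

        if (my_bio_index == -1)
            return 1;
        meth = BIO_meth_new(my_bio_index | BIO_TYPE_SOURCE_SINK | BIO_TYPE_DESCRIPTOR,
                            "illustrative socket BIO");
        if (meth == NULL)
            return 1;

        /* real code would install read/write/ctrl callbacks here */
        printf("BIO method created with source/sink + descriptor bits\n");
        BIO_meth_free(meth);
        return 0;
    }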

Backpatch through 9.6 as this has been wrong for a long time.

Author: Itamar Gafni
Discussion: https://postgr.es/m/SN6PR06MB39665EC10C34BB20956AE4578AF39@SN6PR06MB3966.namprd06.prod.outlook.com
Backpatch-through: 9.6
2021-08-17 14:31:00 +02:00
Michael Paquier
7f0873f328 Refresh apply delay on reload of recovery_min_apply_delay at recovery
This commit ensures that the wait interval in the replay delay loop
waiting for an amount of time defined by recovery_min_apply_delay is
correctly handled on reload, recalculating the delay if this GUC value
is updated, based on the timestamp of the commit record being replayed.

The previous behavior was problematic, for example with replay
still waiting even if the delay got reduced or cancelled.  If the
apply delay was increased to a larger value, the wait would have just
respected the old value, finishing earlier than intended.

Author: Soumyadeep Chakraborty, Ashwin Agrawal
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/CAE-ML+93zfr-HLN8OuxF0BjpWJ17O5dv1eMvSE5jsj9jpnAXZA@mail.gmail.com
Backpatch-through: 9.6
2021-08-16 12:11:53 +09:00
Tom Lane
48695decc2 Add RISC-V spinlock support in s_lock.h.
Like the ARM case, just use gcc's __sync_lock_test_and_set();
that will compile into AMOSWAP.W.AQ which does what we need.

At some point it might be worth doing some work on atomic ops
for RISC-V, but this should be enough for a creditable port.
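
As a minimal, self-contained sketch of the technique (not the s_lock.h
definitions themselves), a test-and-set spinlock built on this GCC
builtin looks roughly like:

    #include <stdio.h>

    typedef volatile int slock_t;

    static inline int
    tas(slock_t *lock)
    {
        /* returns the previous value; nonzero means the lock was held */
        return __sync_lock_test_and_set(lock, 1);
    }

    static inline void
    s_unlock(slock_t *lock)
    {
        __sync_lock_release(lock);
    }

    int
    main(void)
    {
        slock_t     lock = 0;

        while (tas(&lock))
            ;                       /* spin until acquired */
        puts("lock acquired");
        s_unlock(&lock);
        return 0;
    }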

Back-patch to all supported branches, just in case somebody
wants to try them on RISC-V.

Marek Szuba

Discussion: https://postgr.es/m/dea97b6d-f55f-1f6d-9109-504aa7dfa421@gentoo.org
2021-08-13 13:59:06 -04:00
David Rowley
4873da79da Fix incorrect hash table resizing code in simplehash.h
This fixes a bug in simplehash.h which caused an incorrect size mask to be
used when the hash table grew to SH_MAX_SIZE (2^32).  The code was
incorrectly setting the size mask to 0 when the hash tables reached the
maximum possible number of buckets.  This would result in always trying
to use the 0th bucket, causing an infinite loop of trying to grow the
hash table due to there being too many collisions.

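A small standalone illustration of the mask arithmetic (sketched from
the description above, not the simplehash.h source): with 2^32 buckets,
a mask of 0 maps every hash to bucket 0, while computing the mask as
the truncated size minus one yields the intended all-ones mask.

    #include <stdio.h>
    #include <stdint.h>

    int
    main(void)
    {
        uint64_t    size = UINT64_C(1) << 32;       /* SH_MAX_SIZE-sized table */
        uint32_t    bad_mask = 0;                   /* the old special case */
        uint32_t    good_mask = (uint32_t) size - 1;    /* wraps to 0xFFFFFFFF */
        uint32_t    hash = 0x9E3779B9u;             /* arbitrary example hash */

        /* every hash lands in bucket 0 with the bad mask */
        printf("bad bucket = %u, good bucket = %u\n",
               (unsigned) (hash & bad_mask), (unsigned) (hash & good_mask));
        return 0;
    }
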
Seemingly it's not that common for simplehash tables to ever grow this big
as this bug dates back to v10 and nobody seems to have noticed it before.
However, probably the most likely place that people would notice it would
be doing a large in-memory Hash Aggregate with something close to at least
2^31 groups.

After this fix, the code now works correctly up to within 98% of 2^32
groups and will fail with the following error when trying to insert any
more items into the hash table:

ERROR:  hash table size exceeded

However, the work_mem (or hash_mem_multiplier in newer versions) settings
will generally cause Hash Aggregates to spill to disk long before reaching
that many groups.  The minimal test case I did took a work_mem setting of
over 192GB to hit the bug.

simplehash hash tables are used in a few other places such as Bitmap Index
Scans, however, again the size that the hash table can become there is
also limited to work_mem and it would take a relation of around 16TB
(2^31) pages and a very large work_mem setting to hit this.  With smaller
work_mem values the table would become lossy and never grow large enough
to hit the problem.

Author: Yura Sokolov
Reviewed-by: David Rowley, Ranier Vilela
Discussion: https://postgr.es/m/b1f7f32737c3438136f64b26f4852b96@postgrespro.ru
Backpatch-through: 10, where simplehash.h was added
2021-08-13 16:42:35 +12:00
Thomas Munro
2c62754235 Make EXEC_BACKEND more convenient on macOS.
It's hard to disable ASLR on current macOS releases, for testing with
-DEXEC_BACKEND.  You could already set the environment variable
PG_SHMEM_ADDR to something not likely to collide with mappings created
earlier in process startup.  Let's also provide a default value that
works on current releases and architectures, for developer convenience.

As noted in the pre-existing comment, this is a horrible hack, but
-DEXEC_BACKEND is only used by Unix-based PostgreSQL developers for
testing some otherwise Windows-only code paths, so it seems excusable.

Back-patch to all supported branches.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20210806032944.m4tz7j2w47mant26%40alap3.anarazel.de
2021-08-13 11:11:38 +12:00
Peter Eisentraut
dc10035ecc Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 9bb123c161ac8f773572e112ced524b99e81c1d9
2021-08-09 11:56:40 +02:00
Tom Lane
ba9f665a44 Really fix the ambiguity in REFRESH MATERIALIZED VIEW CONCURRENTLY.
Rather than trying to pick table aliases that won't conflict with
any possible user-defined matview column name, adjust the queries'
syntax so that the aliases are only used in places where they can't be
mistaken for column names.  Mostly this consists of writing "alias.*"
not just "alias", which adds clarity for humans as well as machines.
We do have the issue that "SELECT alias.*" acts differently from
"SELECT alias", but we can use the same hack ruleutils.c uses for
whole-row variables in SELECT lists: write "alias.*::compositetype".

We might as well revert to the original aliases after doing this;
they're a bit easier to read.

Like 75d66d10e, back-patch to all supported branches.

Discussion: https://postgr.es/m/2488325.1628261320@sss.pgh.pa.us
2021-08-07 13:29:32 -04:00
Dean Rasheed
da188b9934 Adjust the integer overflow tests in the numeric code.
Formerly, the numeric code tested whether an integer value of a larger
type would fit in a smaller type by casting it to the smaller type and
then testing if the reverse conversion produced the original value.
That's perfectly fine, except that it caused a test failure on
buildfarm animal castoroides, most likely due to a compiler bug.

Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That
matches existing code in other places, such as int84(), which is more
widely tested, and so is less likely to go wrong.
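
A hedged sketch of the two styles of check being contrasted, using the
standard INT16_MIN/MAX names rather than the PG_INT16_* macros (this is
illustrative, not the numeric.c code):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* old style: narrow, then verify the round trip preserved the value;
     * the narrowing of an out-of-range value is implementation-defined,
     * which is what leaves room for compiler trouble */
    static bool
    fits_in_int16_roundtrip(int64_t val)
    {
        int16_t     narrowed = (int16_t) val;

        return (int64_t) narrowed == val;
    }

    /* new style: compare directly against the type's bounds */
    static bool
    fits_in_int16_bounds(int64_t val)
    {
        return val >= INT16_MIN && val <= INT16_MAX;
    }

    int
    main(void)
    {
        int64_t     probe = 40000;      /* out of int16 range */

        printf("roundtrip: %d, bounds: %d\n",
               fits_in_int16_roundtrip(probe), fits_in_int16_bounds(probe));
        return 0;
    }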

While at it, add regression tests covering the numeric-to-int8/4/2
conversions, and adjust the recently added tests to the style of
434ddfb79a (on the v11 branch) to make failures easier to diagnose.

Per buildfarm via Tom Lane, reviewed by Tom Lane.

Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us
2021-08-06 21:31:20 +01:00
Peter Eisentraut
d3ad6566a1 Fix wording 2021-08-06 22:05:41 +02:00
Dean Rasheed
a72ad63154 Fix division-by-zero error in to_char() with 'EEEE' format.
This fixes a long-standing bug when using to_char() to format a
numeric value in scientific notation -- if the value's exponent is
less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a
division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its
input by 10^exp, which it produced using power_var_int(). However, the
underflow test in power_var_int() causes it to return zero if the
result scale is too small. That's not a problem for power_var_int()'s
only other caller, power_var(), since that limits the rscale to 1000,
but in get_str_from_var_sci() the exponent can be much smaller,
requiring a much larger rscale. Fix by introducing a new function to
compute 10^exp directly, with no rscale limit. This also allows 10^exp
to be computed more efficiently, without any numeric multiplication,
division or rounding.

Discussion: https://postgr.es/m/CAEZATCWhojfH4whaqgUKBe8D5jNHB8ytzemL-PnRx+KCTyMXmg@mail.gmail.com
2021-08-05 09:29:13 +01:00
Bruce Momjian
47a573d911 C comment: correct heading of extension query
Reported-by: Justin Pryzby

Discussion: https://postgr.es/m/20210803161345.GZ12533@telsasoft.com

Backpatch-through: 9.6
2021-08-03 12:26:08 -04:00
Bruce Momjian
a81c71e3a8 pg_upgrade: warn about extensions that need updating
Also create a script that can be run to update them.

Reported-by: Dave Cramer

Discussion: https://postgr.es/m/CADK3HHKawwbOcGwMGnDuAf3-U8YfvTcS8jqDv3UM=niijs3MMA@mail.gmail.com

Backpatch-through: 9.6
2021-08-03 11:58:15 -04:00
Tom Lane
93f99693f9 Use elog, not Assert, to report failure to provide an outer snapshot.
As of commit 84f5c2908, executing SQL commands (via SPI or otherwise)
requires having either an active Portal, or a caller-established
active snapshot.  We were simply Assert'ing that that's the case.
But we've now had a couple different reports of people testing
extensions that didn't meet this requirement, and were confused by
the resulting crash.  Let's convert the Assert to a test-and-elog,
in hopes of making the issue clearer for extension authors.
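
A generic, standalone analogue of the conversion (illustrative only; the
backend condition and message differ): a debug-only assertion vanishes
in production builds and lets execution continue into a crash, whereas
an explicit check reports a clear error in every build.

    #include <stdio.h>
    #include <stdlib.h>

    static void
    require_outer_snapshot(int have_snapshot)
    {
        /* previously just: Assert(have_snapshot); -- compiled out in
         * non-assert builds, leaving a crash further down the line */
        if (!have_snapshot)
        {
            fprintf(stderr, "ERROR: no active snapshot or portal available\n");
            exit(1);
        }
    }

    int
    main(void)
    {
        require_outer_snapshot(1);      /* OK */
        require_outer_snapshot(0);      /* reports the error and exits */
        return 0;
    }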

Per gripes from Liu Huailing and RekGRpth.  Back-patch to v11,
like the prior commit.

Discussion: https://postgr.es/m/OSZPR01MB6215671E3C5956A034A080DFBEEC9@OSZPR01MB6215.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/17035-14607d308ac8643c@postgresql.org
2021-07-31 11:50:14 -04:00
Dean Rasheed
053ec4e0c4 Fix corner-case errors and loss of precision in numeric_power().
This fixes a couple of related problems that arise when raising
numbers to very large powers.

Firstly, when raising a negative number to a very large integer power,
the result should be well-defined, but the previous code would only
cope if the exponent was small enough to go through power_var_int().
Otherwise it would throw an internal error, attempting to take the
logarithm of a negative number. Fix this by adding suitable handling
to the general case in power_var() to cope with negative bases,
checking for integer powers there.

Next, when raising a (positive or negative) number whose absolute
value is slightly less than 1 to a very large power, the result should
approach zero as the power is increased. However, in some cases, for
sufficiently large powers, this would lose all precision and return 1
instead of 0. This was due to the way that the local_rscale was being
calculated for the final full-precision calculation:

  local_rscale = rscale + (int) val - ln_dweight + 8

The first two terms on the right hand side are meant to give the
number of significant digits required in the result ("val" being the
estimated result weight). However, this failed to account for the fact
that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE
(1000), and the result weight might be less than -1000, causing their
sum to be negative, leading to a loss of precision. Fix this by
forcing the number of significant digits calculated to be nonnegative.
It's OK for it to be zero (when the result weight is less than -1000),
since the local_rscale value then includes a few extra digits to
ensure an accurate result.
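
An illustrative standalone computation of the clamp described above,
using made-up values and the formula quoted earlier (a sketch, not the
numeric.c code):

    #include <stdio.h>

    #define Max(a, b)   ((a) > (b) ? (a) : (b))

    int
    main(void)
    {
        int         rscale = 1000;      /* clipped to NUMERIC_MAX_DISPLAY_SCALE */
        int         val = -1500;        /* estimated result weight below -1000 */
        int         ln_dweight = 0;

        int         old_rscale = rscale + val - ln_dweight + 8;    /* negative */
        int         sig_digits = Max(rscale + val - ln_dweight, 0);
        int         new_rscale = sig_digits + 8;

        printf("unclamped local_rscale = %d, clamped = %d\n",
               old_rscale, new_rscale);
        return 0;
    }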

Finally, add additional underflow checks to exp_var() and power_var(),
so that they consistently return zero for cases like this where the
result is indistinguishable from zero. Some paths through this code
already returned zero in such cases, but others were throwing overflow
errors.

Dean Rasheed, reviewed by Yugo Nagata.

Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6Su0bCR0w@mail.gmail.com
2021-07-31 11:25:39 +01:00
John Naylor
171bf1cea5 Fix range check in ECPG numeric to int conversion
The previous coding guarded against -INT_MAX instead of INT_MIN,
leading to -2147483648 being rejected as out of range.
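
A minimal standalone illustration of the off-by-one (hypothetical helper
names, not the ECPG source): on a platform with 32-bit int, guarding
against -INT_MAX instead of INT_MIN rejects exactly one legal value,
-2147483648.

    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* buggy: the lower bound is off by one, so INT_MIN itself is rejected */
    static bool
    fits_in_int_buggy(long long val)
    {
        return val <= INT_MAX && val >= -INT_MAX;
    }

    /* fixed: use the real lower bound */
    static bool
    fits_in_int_fixed(long long val)
    {
        return val <= INT_MAX && val >= INT_MIN;
    }

    int
    main(void)
    {
        long long   probe = INT_MIN;    /* -2147483648 with 32-bit int */

        printf("buggy: %d, fixed: %d\n",
               fits_in_int_buggy(probe), fits_in_int_fixed(probe));
        return 0;
    }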

Per bug #17128 from Kevin Sweet

Discussion: https://www.postgresql.org/message-id/flat/17128-55a8a879727a3e3a%40postgresql.org
Reviewed-by: Tom Lane
Backpatch to all supported branches
2021-07-30 16:18:59 -04:00
Alvaro Herrera
41d27ee7b8
Close yet another race condition in replication slot test code
Buildfarm shows that this test has a further failure mode when a
checkpoint starts earlier than expected, so we detect a "checkpoint
completed" line that's not the one we want.  Change the config to try
and prevent this.

Per buildfarm

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20210729.162038.534808353849568395.horikyota.ntt@gmail.com
2021-07-29 17:26:25 -04:00
Michael Paquier
efe169c900 Add missing exit() in pg_verifybackup when failing to find pg_waldump
By default, pg_verifybackup needs pg_waldump to check the range of WAL
segments required for a backup, except if --no-parse-wal is specified.
The code checked for the presence of the pg_waldump binary in an
installation and reported an error if it was missing, but it forgot to
properly exit().  This could lead to confusing errors being reported.

Reviewed-by: Robert Haas, Fabien Coelho
Discussion: https://postgr.es/m/YQDMdB+B68yePFeT@paquier.xyz
Backpatch-through: 13
2021-07-29 11:00:00 +09:00
Fujii Masao
a66b05b422 Update minimum recovery point on truncation during WAL replay of abort record.
If a file is truncated, we must update minRecoveryPoint. Once a file is
truncated, there's no going back; it would not be safe to stop recovery
at a point earlier than that anymore.

Commit 7bffc9b7bf changed xact_redo_commit() so that it updates
minRecoveryPoint on truncation, but forgot to change xact_redo_abort().

Back-patch to all supported versions.

Reported-by: mengjuan.cmj@alibaba-inc.com
Author: Fujii Masao
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/b029fce3-4fac-4265-968e-16f36ff4d075.mengjuan.cmj@alibaba-inc.com
2021-07-29 01:34:13 +09:00
Alvaro Herrera
b8f91d7f92
Set pg_setting.pending_restart when pertinent config lines are removed
This changes the behavior of examining the pg_file_settings view after
changing a config option that requires restart.  The user needs to know
that any change of such options does not take effect until a restart,
and this worked correctly if the line was edited without being removed.
However, when the line is removed altogether, the flag does not get set,
because the flag was only set in set_config_option, which is not called
for removed lines.  Repair.

(Ref.: commits 62d16c7fc5 and a486e35706)

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/202107262302.xsfdfc5sb7sh@alvherre.pgsql
2021-07-27 15:44:12 -04:00
Fujii Masao
92913fc290 Avoid using ambiguous word "non-negative" in error messages.
The error messages using the word "non-negative" are confusing
because it is ambiguous whether zero is accepted or not.
This commit improves those error messages by replacing that word
with less ambiguous wording like "greater than zero" or
"greater than or equal to zero".

Also, this commit adds a note about the word "non-negative" to
the error message style guide, to help with writing new error messages.

Previously, when the postgres_fdw option fetch_size was set to zero,
the error message "fetch_size requires a non-negative integer value"
was reported.  This error message was outright buggy.  Therefore,
back-patch to all supported versions where such a buggy error message
could be thrown.

Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-07-28 01:21:52 +09:00
Bruce Momjian
0a5e708e2b pg_resetxlog: add option to set oldest xid & use by pg_upgrade
Add pg_resetxlog -u option to set the oldest xid in pg_control.
Previously -x set this value to be -2 billion less than the -x value.
However, this causes the server to immediately scan all relations'
relfrozenxid so it can advance pg_control's oldest xid to be inside the
autovacuum_freeze_max_age range, which is inefficient and might disrupt
diagnostic recovery.  pg_upgrade will use this option to better create
the new cluster to match the old cluster.

Reported-by: Jason Harvey, Floris Van Nee

Discussion: https://postgr.es/m/20190615183759.GB239428@rfd.leadboat.com, 87da83168c644fd9aae38f546cc70295@opammb0562.comp.optiver.com

Author: Bertrand Drouvot

Backpatch-through: 9.6
2021-07-26 22:38:14 -04:00
Michael Paquier
2c7395aad7 Fix a couple of memory leaks in src/bin/pg_basebackup/
These were introduced by 7fbe0c8, and could happen for
pg_basebackup and pg_receivewal.

Per a report from Coverity for the ones in walmethods.c; I spotted
the ones in receivelog.c after further review.

Backpatch-through: 10
2021-07-26 11:14:11 +09:00
Tom Lane
2b8f3f5a7c Get rid of artificial restriction on hash table sizes on Windows.
The point of introducing the hash_mem_multiplier GUC was to let users
reproduce the old behavior of hash aggregation, i.e. that it could use
more than work_mem at need.  However, the implementation failed to get
the job done on Win64, where work_mem is clamped to 2GB to protect
various places that calculate memory sizes using "long int".  As
written, the same clamp was applied to hash_mem.  This resulted in
severe performance regressions for queries requiring a bit more than
2GB for hash aggregation, as they now spill to disk and there's no
way to stop that.
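
The "long int" clamp stems from Win64 being an LLP64 platform, where
long stays 32 bits even though pointers (and size_t) are 64 bits; a
two-line check makes the distinction visible:

    #include <stdio.h>

    int
    main(void)
    {
        /* on Win64 this prints 4 and 8; on typical 64-bit Unix, 8 and 8 */
        printf("sizeof(long) = %zu, sizeof(size_t) = %zu\n",
               sizeof(long), sizeof(size_t));
        return 0;
    }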

Getting rid of the work_mem restriction seems like a good idea, but
it's a big job and could not conceivably be back-patched.  However,
there's only a fairly small number of places that are concerned with
the hash_mem value, and it turns out to be possible to remove the
restriction there without too much code churn or any ABI breaks.
So, let's do that for now to fix the regression, and leave the
larger task for another day.

This patch does introduce a bit more infrastructure that should help
with the larger task, namely pg_bitutils.h support for working with
size_t values.

Per gripe from Laurent Hasson.  Back-patch to v13 where the
behavior change came in.

Discussion: https://postgr.es/m/997817.1627074924@sss.pgh.pa.us
Discussion: https://postgr.es/m/MN2PR15MB25601E80A9B6D1BA6F592B1985E39@MN2PR15MB2560.namprd15.prod.outlook.com
2021-07-25 14:02:27 -04:00
Fujii Masao
8d091922ff Make the standby server promptly handle interrupt signals.
This commit changes the startup process in the standby server so that
it handles the interrupt signals after waiting for wal_retrieve_retry_interval
on the latch and resetting it, before entering another wait on the latch.
This change causes the standby server to promptly handle interrupt signals.

Previously, there was a case where the standby needed to wait an extra
five seconds to shut down when the shutdown request arrived while the
startup process was waiting for wal_retrieve_retry_interval on the
latch.

Author: Fujii Masao, but implementation idea is from Soumyadeep Chakraborty
Reviewed-by: Soumyadeep Chakraborty
Discussion: https://postgr.es/m/9d7e6ab0-8a53-ddb9-63cd-289bcb25fe0e@oss.nttdata.com

Per discussion of BUG #17073, back-patch to all supported versions.
Discussion: https://postgr.es/m/17073-1a5fdaed0fa5d4d0@postgresql.org
2021-07-25 11:15:30 +09:00
Tom Lane
f47408cdc1 Fix check for conflicting session- vs transaction-level locks.
We have an implementation restriction that PREPARE TRANSACTION can't
handle cases where both session-lifespan and transaction-lifespan locks
are held on the same lockable object.  (That's because we'd otherwise
need to acquire a new PROCLOCK entry during post-prepare cleanup, which
is an operation that might fail.  The situation can only arise with odd
usages of advisory locks, so removing the restriction is probably not
worth the amount of effort it would take.)  AtPrepare_Locks attempted
to enforce this, but its logic was many bricks shy of a load, because
it only detected cases where the session and transaction locks had the
same lockmode.  Locks of different modes on the same object would lead
to the rather unhelpful message "PANIC: we seem to have dropped a bit
somewhere".

To fix, build a transient hashtable with one entry per locktag,
not one per locktag + mode, and use that to detect conflicts.

Per bug #17122 from Alexander Pyhalov.  This bug is ancient,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/17122-04f3c32098a62233@postgresql.org
2021-07-24 18:35:52 -04:00
Tom Lane
c0a6f83deb Make printf("%s", NULL) print "(null)" instead of crashing.
We previously took a hard-line attitude that callers should never print
a null string pointer, and doing so is worthy of an assertion failure
or crash.  However, we've long since flushed out any easy-to-find bugs
of that nature.  What remains is a lot of code that perhaps could fail
that way in hard-to-reach corner cases.  For example, in something as
simple as
    ereport(ERROR,
            (errcode(ERRCODE_UNDEFINED_OBJECT),
             errmsg("constraint \"%s\" for table \"%s\" does not exist",
                    conname, get_rel_name(relid))));
one must wonder whether it's completely guaranteed that get_rel_name
cannot return NULL in this context.  If such a situation did occur,
the existing policy converts what might be a pretty minor bug into
a server crash condition.  This is not good for robustness.

Hence, let's follow the lead of glibc and print "(null)" instead
of failing.  We should, of course, still consider it a bug if that
behavior is reachable in ordinary use; but crashing seems less
desirable than not crashing.
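
A standalone sketch of the substitution (illustrative only; PostgreSQL's
actual handling lives inside src/port/snprintf.c's %s processing rather
than a helper like this):

    #include <stdio.h>

    /* map a NULL string argument to a printable placeholder */
    static const char *
    null_safe(const char *s)
    {
        return s != NULL ? s : "(null)";
    }

    int
    main(void)
    {
        const char *conname = "chk_positive";
        const char *relname = NULL;     /* e.g. a failed lookup */

        printf("constraint \"%s\" for table \"%s\" does not exist\n",
               null_safe(conname), null_safe(relname));
        return 0;
    }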

This fix works across-the-board in v12 and up, where we always use
src/port/snprintf.c.  Before that, on most platforms we're at the mercy
of the local libc, but it appears that Solaris 10 is the only supported
platform where we'd still get a crash.  Most other platforms such as
*BSD, macOS, and Solaris 11 have adopted glibc's behavior at some
point.  (AIX and HPUX just print "" not "(null)", but that's close
enough.)  I've not checked what Windows' native printf would do, but
it doesn't matter because we've long used snprintf.c on that platform.

In v12 and up, also const-ify related code so that we're not casting
away const on the constant string.  This is just neatnik-ism, since
next to no compilers will warn about that.

Discussion: https://postgr.es/m/17098-b960f3616c861f83@postgresql.org
2021-07-24 13:41:17 -04:00
Thomas Munro
1cfc9f2ef6 jit: Don't inline functions that access thread-locals.
Code inlined by LLVM can crash or fail with "Relocation type not
implemented yet!" if it tries to access thread local variables.  Don't
inline such code.

Back-patch to 11, where LLVM arrived.  Bug #16696.

Author: Dmitry Marakasov <amdmi3@amdmi3.ru>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/16696-29d944a33801fbfe@postgresql.org
2021-07-22 15:07:52 +12:00
John Naylor
0b5dc1f171 Document "B" and "us" as accepted units in postgresql.conf.sample
In postgresql.conf, memory and file size GUCs can be specified with "B"
(bytes) as of b06d8e58b. Likewise, time GUCs can be specified with "us"
(microseconds) as of caf626b2c. Update postgresql.conf.sample to reflect
that fact.

Pavel Luzanov

Backpatch to v12, which is the earliest version that allows both of
these units. A separate commit will document the "B" case for v11.

Discussion: https://www.postgresql.org/message-id/flat/f10d16fc-8fa0-1b3c-7371-cb3a35a13b7a%40postgrespro.ru
2021-07-21 10:18:42 -04:00
Tom Lane
0fce76b99d Fix corner-case uninitialized-variable issues in plpgsql.
If an error was raised during our initial attempt to check whether
a successfully-compiled expression is "simple", subsequent calls of
exec_stmt_execsql would suppose that stmt->mod_stmt was already computed
when it had not been.  This could lead to assertion failures in debug
builds; in production builds the effect would typically be to act as
if INTO STRICT had been specified even when it had not been.  Of course
that only matters if the subsequent attempt to execute the expression
succeeds, so that the problem can only be reached by fixing a failure
in some referenced, inline-able SQL function and then retrying the
calling plpgsql function in the same session.

(There might be even-more-obscure ways to change the expression's
behavior without changing the plpgsql function, but that one seems
like the only one people would be likely to hit in practice.)

The most foolproof way to fix this would be to arrange for
exec_prepare_plan to not set expr->plan until we've finished the
subsidiary simple-expression check.  But it seems hard to do that
without creating reference-count leak issues.  So settle for documenting
the hazard in a comment and fixing exec_stmt_execsql to test separately
for whether it's computed stmt->mod_stmt.  (That adds a test-and-branch
per execution, but hopefully that's negligible in context.)  In v11 and
up, also fix exec_stmt_call which had a variant of the same issue.

Per bug #17113 from Alexander Lakhin.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/17113-077605ce00e0e7ec@postgresql.org
2021-07-20 13:01:48 -04:00
Michael Paquier
fb2b86015a Fix some issues with WAL segment opening for pg_receivewal --compress
The logic handling the opening of new WAL segments was fuzzy when using
--compress if a partial, non-compressed, segment with the same base name
existed in the repository storing those files.  In this case, using
--compress would cause the code to first check for the existence and the
size of a non-compressed segment, followed by the opening of a new
compressed, partial, segment.  The code was accidentally working
correctly on most platforms as the buildfarm has proved, except
bowerbird where gzflush() could fail in this code path.  It is wrong
anyway to take the code path used for pre-padding when creating a new
partial, non-compressed segment, so let's fix it.

Note that this issue exists when users mix successive runs of
pg_receivewal with or without compression, as discovered with the tests
introduced by ffc9dda.

While on it, this refactors the code so that the code paths that need
to know about the ".gz" suffix are down from four to one in
walmethods.c, easing a bit the introduction of new compression
methods.  This addresses a
second issue where log messages generated for an unexpected failure
would not show the compressed segment name involved, which was
confusing, printing instead the name of the non-compressed equivalent.

Reported-by: Georgios Kokolatos
Discussion: https://postgr.es/m/YPDLz2x3o1aX2wRh@paquier.xyz
Backpatch-through: 10
2021-07-20 12:12:51 +09:00
Alvaro Herrera
ce413eba41
Make new replication slot test code even less racy
Further fix the test code in ead9e51e82, this time by waiting until
the checkpoint has completed before moving on; this ensures that the
WAL segment removal has already happened when we create the next slot.

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20210719.111318.2042379313472032754.horikyota.ntt@gmail.com
2021-07-19 17:21:07 -04:00
Amit Kapila
bfa2a926da Don't allow to set replication slot_name as ''.
We don't allow creating a replication slot with an empty name ('') via
the SQL API pg_create_logical_replication_slot(), but it was allowed to
be set via the ALTER SUBSCRIPTION command. This led to the apply worker
repeatedly trying to stream data via slot_name '', even though the user
is not allowed to create a slot with that name.

Author: Japin Li
Reviewed-By: Ranier Vilela, Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/MEYP282MB1669CBD98E721C77CA696499B61A9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2021-07-19 11:04:21 +05:30
Alvaro Herrera
7099ba0580
Make new replication slot test code less racy
The new test code added in ead9e51e82 is racy -- it hinges on
shared-memory state, which changes before the WARNING message is logged.
Put it the other way around.

Backpatch to 13.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/202107161809.zclasccpfcg3@alvherre.pgsql
2021-07-17 13:19:17 -04:00
Alvaro Herrera
cc340af334
Fix pg_dump for disabled triggers on partitioned tables
pg_dump failed to preserve the 'enabled' flag (which can be not only
disabled, but also REPLICA or ALWAYS) for partitions which had it
changed from their respective parents.  Attempt to handle that by
including a definition for such triggers in the dump, but replace the
standard CREATE TRIGGER line with an ALTER TRIGGER line.

Backpatch to 11, where these triggers can exist.  In branches 11 and 12,
pick up a few test lines from commit b9b408c487 to verify that
pg_upgrade is okay with these arrangements.

Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com
2021-07-16 17:29:22 -04:00
Alvaro Herrera
c31516ae5b
Preserve firing-on state when cloning row triggers to partitions
When triggers are cloned from partitioned tables to their partitions,
the 'tgenabled' flag (origin/replica/always/disable) was not propagated.
Make it so that the flag on the trigger on partition is initially set to
the same value as on the partitioned table.

Add a test case to verify the behavior.

Backpatch to 11, where this appeared in commit 86f575948c.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com
2021-07-16 13:01:43 -04:00
Alvaro Herrera
866237a6fa
Advance old-segment horizon properly after slot invalidation
When some slots are invalidated due to the max_slot_wal_keep_size limit,
the old segment horizon should move forward to stay within the limit.
However, in commit c655077639 we forgot to call KeepLogSeg again to
recompute the horizon after invalidating replication slots.  In cases
where other slots remained, the limits would be recomputed eventually
for other reasons, but if all slots were invalidated, the limits would
not move at all afterwards.  Repair.

Backpatch to 13 where the feature was introduced.

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reported-by: Marcin Krupowicz <mk@071.ovh>
Discussion: https://postgr.es/m/17103-004130e8f27782c9@postgresql.org
2021-07-16 12:07:30 -04:00
Tom Lane
91a1651057 Ensure HAVE_DECL_XXX macros in MSVC builds match those in Unix.
Autoconf's AC_CHECK_DECLS() always defines HAVE_DECL_whatever
as 1 or 0, but some of the entries in msvc/Solution.pm showed
such symbols as "undef" instead of 0.  Fix that for consistency.
There's no live bug in current usages AFAICS, but it's not hard
to imagine one creeping in if more-complex #if tests get added.

Back-patch to v13, which is as far back as Solution.pm contains
this data.  The inconsistency still exists in the manually-filled
pg_config_ext.h.win32 files of older branches; but as long as the
problem is only latent, it doesn't seem worth the trouble to
clean things up there.

Discussion: https://postgr.es/m/3185430.1626133592@sss.pgh.pa.us
2021-07-15 11:00:43 -04:00
Michael Paquier
5226243459 Fix unexpected error messages for various flavors of ALTER TABLE
Some commands of ALTER TABLE could fail with the following error:
ERROR:  "tab" is of the wrong type

This error is unexpected, as all the code paths leading to
ATWrongRelkindError() should use a supported set of relkinds to generate
correct error messages.  This commit closes the gap with such mistakes,
by adding all the missing relkind combinations.  Tests are added to
check all the problems found.  Note that some combinations are not used,
but these are left around as it could have an impact on applications
relying on this code.

2ed532e has done a much larger refactoring on HEAD to make such error
messages easier to manage in the long-term, so nothing is needed there.

Author: Kyotaro Horiguchi
Reviewed-by: Peter Eisentraut, Ahsan Hadi, Michael Paquier
Discussion: https://postgr.es/m/20210216.181415.368926598204753659.horikyota.ntt@gmail.com
Backpatch-through: 11
2021-07-14 17:15:18 +09:00
David Rowley
2fde8e49a2 Robustify tuplesort's free_sort_tuple function
41469253e went to the trouble of removing a theoretical bug from
free_sort_tuple by checking if the tuple was NULL before freeing it. Let's
make this a little more robust by also setting the tuple to NULL so that
should we be called again we won't end up doing a pfree on the already
pfree'd tuple. Per advice from Tom Lane.
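
A generic standalone sketch of the pattern (plain malloc/free rather
than palloc/pfree, and not the tuplesort.c code): check for NULL before
freeing and reset the pointer afterwards so a repeated call is a no-op
rather than a double free.

    #include <stdlib.h>

    /* free *ptr if set, then clear it so calling this again is harmless */
    static void
    free_and_clear(void **ptr)
    {
        if (*ptr != NULL)
        {
            free(*ptr);
            *ptr = NULL;
        }
    }

    int
    main(void)
    {
        void       *buf = malloc(64);

        free_and_clear(&buf);
        free_and_clear(&buf);           /* second call does nothing */
        return 0;
    }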

Discussion: https://postgr.es/m/3188192.1626136953@sss.pgh.pa.us
Backpatch-through: 9.6, same as 41469253e
2021-07-13 13:28:19 +12:00
David Rowley
204f646a22 Fix theoretical bug in tuplesort
This fixes a theoretical bug in tuplesort.c: if a bounded sort was
used in combination with a byval Datum sort (tuplesort_begin_datum),
then when switching the sort to a bounded heap in make_bounded_heap(),
we'd call free_sort_tuple().  The problem was that when sorting Datums
of a byval type, the tuple is NULL, and free_sort_tuple() would free
the memory for it regardless.  This would result in a crash.

Here we fix that simply by adding a check to see if the tuple is NULL
before trying to disassociate and free any memory belonging to it.

The reason this bug is only theoretical is that nowhere in the current
code base do we do tuplesort_set_bound() when performing a Datum sort.
However, let's back-patch a fix for this, since if any extension uses
the code in this way it's likely to cause problems.

Author: Ronan Dunklau
Discussion: https://postgr.es/m/CAApHDvpdoqNC5FjDb3KUTSMs5dg6f+XxH4Bg_dVcLi8UYAG3EQ@mail.gmail.com
Backpatch-through: 9.6, oldest supported version
2021-07-13 12:42:43 +12:00
Heikki Linnakangas
2c27cc2659 Remove dead assignment to local variable.
This should have been removed in commit 7e30c186da, which split the loop
into two. Only the first loop uses the 'from' variable; updating it in
the second loop is bogus. It was never read after the first loop, so this
was harmless and surely optimized away by the compiler, but let's be tidy.

Backpatch to all supported versions.

Author: Ranier Vilela
Discussion: https://www.postgresql.org/message-id/CAEudQAoWq%2BAL3BnELHu7gms2GN07k-np6yLbukGaxJ1vY-zeiQ%40mail.gmail.com
2021-07-12 11:14:03 +03:00
Tom Lane
1c612bc98e Lock the extension during ALTER EXTENSION ADD/DROP.
Although we were careful to lock the object being added or dropped,
we failed to get any sort of lock on the extension itself.  This
allowed the ALTER to proceed in parallel with a DROP EXTENSION,
which is problematic for a couple of reasons.  If both commands
succeeded we'd be left with a dangling link in pg_depend, which
would cause problems later.  Also, if the ALTER failed for some
reason, it might try to print the extension's name, and that could
result in a crash or (in older branches) a silly error message
complaining about extension "(null)".

Per bug #17098 from Alexander Lakhin.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/17098-b960f3616c861f83@postgresql.org
2021-07-11 12:54:24 -04:00
Jeff Davis
edd9a2bf74 Fix assign_record_type_typmod().
If an error occurred in the wrong place, it was possible to leave an
uninitialized entry in the hash table, leading to a crash. Fixed.

Also, be more careful about the order of operations so that an
allocation error doesn't leak memory in CacheMemoryContext or
unnecessarily advance NextRecordTypmod.

Backpatch through version 11. Earlier versions (prior to 35ea75632a)
do not exhibit the problem, because an uninitialized hash entry
contains a valid empty list.

Author: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/HE1PR8303MB009069D476225B9A9E194B8891779@HE1PR8303MB0090.EURPRD83.prod.outlook.com
Backpatch-through: 11
2021-07-10 10:27:27 -07:00
Dean Rasheed
f23a9b8a49 Fix numeric_mul() overflow due to too many digits after decimal point.
This fixes an overflow error when using the numeric * operator if the
result has more than 16383 digits after the decimal point, by rounding
the result instead.  Overflow errors should only occur if the result
has too many digits *before* the decimal point.

Discussion: https://postgr.es/m/CAEZATCUmeFWCrq2dNzZpRj5+6LfN85jYiDoqm+ucSXhb9U2TbA@mail.gmail.com
2021-07-10 12:46:13 +01:00
Tom Lane
9807b9aedc Avoid creating a RESULT RTE that's marked LATERAL.
Commit 7266d0997 added code to pull up simple constant function
results, converting the RTE_FUNCTION RTE to a dummy RTE_RESULT
RTE since it no longer need be scanned.  But I forgot to clear
the LATERAL flag if the RTE has it set.  If the function reduced
to a constant, it surely contains no lateral references so this
simplification is logically OK.  It's needed because various other
places will Assert that RESULT RTEs aren't LATERAL.

Per bug #17097 from Yaoguang Chen.  Back-patch to v13 where the
faulty code came in.

Discussion: https://postgr.es/m/17097-3372ef9f798fc94f@postgresql.org
2021-07-09 13:38:24 -04:00
Tom Lane
55cccdfdf1 Update configure's probe for libldap to work with OpenLDAP 2.5.
The separate libldap_r is gone and libldap itself is now always
thread-safe.  Unfortunately there seems no easy way to tell by
inspection whether libldap is thread-safe, so we have to take
it on faith that libldap is thread-safe if there's no libldap_r.
That should be okay, as it appears that libldap_r was a standard
part of the installation going back at least 20 years.

Report and patch by Adrian Ho.  Back-patch to all supported
branches, since people might try to build any of them with
a newer OpenLDAP.

Discussion: https://postgr.es/m/17083-a19190d9591946a7@postgresql.org
2021-07-09 12:38:55 -04:00
Tom Lane
6edccac166 Reject cases where a query in WITH rewrites to just NOTIFY.
Since the executor can't cope with a utility statement appearing
as a node of a plan tree, we can't support cases where a rewrite
rule inserts a NOTIFY into an INSERT/UPDATE/DELETE command appearing
in a WITH clause of a larger query.  (One can imagine ways around
that, but it'd be a new feature not a bug fix, and so far there's
been no demand for it.)  RewriteQuery checked for this, but it
missed the case where the DML command rewrites to *only* a NOTIFY.
That'd lead to crashes later on in planning.  Add the missed check,
and improve the level of testing of this area.

Per bug #17094 from Yaoguang Chen.  It's been busted since WITH
was introduced, so back-patch to all supported branches.

Discussion: https://postgr.es/m/17094-bf15dff55eaf2e28@postgresql.org
2021-07-09 11:02:26 -04:00
Thomas Munro
7dd2379df7 Remove more obsolete comments about semaphores.
Commit 6753333f stopped using semaphores as the sleep/wake mechanism for
heavyweight locks, but some obsolete references to that scheme remained
in comments.  As with similar commit 25b93a29, back-patch all the way.

Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CA%2BhUKGLafjB1uzXcy%3D%3D2L3cy7rjHkqOVn7qRYGBjk%3D%3DtMJE7Yg%40mail.gmail.com
2021-07-09 18:05:53 +12:00
David Rowley
87103002c8 Add missing Int64GetDatum macro in dbsize.c
I accidentally missed adding this when adjusting 55fe60938 for
back-patching.  This adjustment was made for 9.6 to 13; 14 and master
are not affected.

Discussion: https://postgr.es/m/CAApHDvp=twCsGAGQG=A=cqOaj4mpknPBW-EZB-sd+5ZS5gCTtA@mail.gmail.com
2021-07-09 15:12:31 +12:00
David Rowley
6f88b68ff4 Fix incorrect return value in pg_size_pretty(bigint)
Due to how pg_size_pretty(bigint) was implemented, when given a negative
number of bytes the return value could fail to match the value returned
for the equivalent positive number of bytes.  This was due to two
separate issues.

1. The function used bit shifting to convert the number of bytes into
larger units.  The rounding performed by bit shifting is not the same as
dividing.  For example -3 >> 1 = -2, but -3 / 2 = -1.  These two
operations are only equivalent with positive numbers.

2. The half_rounded() macro rounded towards positive infinity.  This meant
that negative numbers rounded towards zero and positive numbers rounded
away from zero.

Here we fix #1 by dividing the values instead of bit shifting.  We fix #2
by adjusting the half_rounded macro always to round away from zero.
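
As a standalone illustration of the rounding difference on a typical
two's-complement platform (the half_rounded definition below is a sketch
of the rule described above, not necessarily the exact macro in
dbsize.c):

    #include <stdio.h>

    /* sketch: round x/2 to the nearest integer, ties away from zero */
    #define half_rounded(x) (((x) + ((x) < 0 ? -1 : 1)) / 2)

    int
    main(void)
    {
        long long bytes = -3;

        /* arithmetic right shift rounds toward negative infinity ... */
        printf("-3 >> 1 = %lld\n", bytes >> 1);     /* -2 */
        /* ... while integer division truncates toward zero */
        printf("-3 / 2  = %lld\n", bytes / 2);      /* -1 */
        /* the adjusted rounding treats -3 and 3 symmetrically */
        printf("half_rounded(-3) = %lld\n", half_rounded(bytes));  /* -2 */
        printf("half_rounded(3)  = %lld\n", half_rounded(-bytes)); /*  2 */
        return 0;
    }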

Additionally, adjust the pg_size_pretty(numeric) function to be more
explicit that it's using division rather than bit shifting.  A casual
observer might have believed bit shifting was used due to a static
function being named numeric_shift_right.  However, that function was
calculating the divisor from the number of bits and performed division.
Here we make that more clear.  This change is just cosmetic and does not
affect the return value of the numeric version of the function.

Here we also add a set of regression tests for both versions of
pg_size_pretty() which test the values directly before and after the
function switches to the next unit.

This bug was introduced in 8a1fab36a. Prior to that negative values were
always displayed in bytes.

Author: Dean Rasheed, David Rowley
Discussion: https://postgr.es/m/CAEZATCXnNW4HsmZnxhfezR5FuiGgp+mkY4AzcL5eRGO4fuadWg@mail.gmail.com
Backpatch-through: 9.6, where the bug was introduced.
2021-07-09 14:04:49 +12:00
Tom Lane
2f487116e9 Reduce overhead of cache-clobber testing in LookupOpclassInfo().
Commit 03ffc4d6d added logic to bypass all caching behavior in
LookupOpclassInfo when CLOBBER_CACHE_ALWAYS is enabled.  It doesn't
look like I stopped to think much about what that would cost, but
recent investigation shows that the cost is enormous: it roughly
doubles the time needed for cache-clobber test runs.

There does seem to be value in this behavior when trying to test
the opclass-cache loading logic itself, but for other purposes the
cost is excessive.  Hence, let's back off to doing this only when
debug_invalidate_system_caches_always is at least 3; or in older
branches, when CLOBBER_CACHE_RECURSIVELY is defined.

While here, clean up some other minor issues in LookupOpclassInfo.
Re-order the code so we aren't left with broken cache entries (leading
to later core dumps) in the unlikely case that we suffer OOM while
trying to allocate space for a new entry.  (That seems to be my
oversight in 03ffc4d6d.)  Also, in >= v13, stop allocating one array
entry too many.  That's evidently left over from sloppy reversion in
851b14b0c.

Back-patch to all supported branches, mainly to reduce the runtime
of cache-clobbering buildfarm animals.

Discussion: https://postgr.es/m/1370856.1625428625@sss.pgh.pa.us
2021-07-05 16:51:57 -04:00
Tom Lane
7fc97752d5 Don't try to print data type names in slot_store_error_callback().
The existing code tried to do syscache lookups in an already-failed
transaction, which is problematic to say the least.  After some
consideration of alternatives, the best fix seems to be to just drop
type names from the error message altogether.  The table and column
names seem like sufficient localization.  If the user is unsure what
types are involved, she can check the local and remote table
definitions.

Having done that, we can also discard the LogicalRepTypMap hash
table, which had no other use.  Arguably, LOGICAL_REP_MSG_TYPE
replication messages are now obsolete as well; but we should
probably keep them in case some other use emerges.  (The complexity
of removing something from the replication protocol would likely
outweigh any savings anyhow.)

Masahiko Sawada and Bharath Rupireddy, per complaint from Andres
Freund.  Back-patch to v10 where this code originated.

Discussion: https://postgr.es/m/20210106020229.ne5xnuu6wlondjpe@alap3.anarazel.de
2021-07-02 16:04:54 -04:00
Andrew Dunstan
a8b564b0c9
Fix prove_installcheck to use correct paths when used with PGXS
The prove_installcheck recipe in src/Makefile.global.in was emitting
bogus paths for a couple of elements when used with PGXS. Here we create
a separate recipe for the PGXS case that does it correctly. We also take
the opportunity to make the file more readable by breaking up
the prove_installcheck and prove_check recipes across several lines, and
to move the setting for REGRESS_SHLIB to src/test/recovery/Makefile,
which is the only set of tests that actually needs it.

Backpatch to all live branches

Discussion: https://postgr.es/m/f2401388-936b-f4ef-a07c-a0bcc49b3300@dunslane.net
2021-07-01 08:46:38 -04:00
Michael Paquier
41edb2db1b Fix incorrect PITR message for transaction ROLLBACK PREPARED
Reaching the PITR target on such a transaction would generate a LOG
message saying that the transaction was committed, not aborted.

Oversight in 4f1b890.

Author: Simon Riggs
Discussion: https://postgr.es/m/CANbhV-GJ6KijeCgdOrxqMCQ+C8QiK657EMhCy4csjrPcEUFv_Q@mail.gmail.com
Backpatch-through: 9.6
2021-06-30 11:49:16 +09:00
Tom Lane
1603deca34 Don't use abort(3) in libpq's fe-print.c.
Causing a core dump on out-of-memory seems pretty unfriendly,
and surely is far outside the expected behavior of a general-purpose
library.  Just print an error message (as we did already) and return.
These functions unfortunately don't have an error return convention,
but code using them is probably just looking for a quick-n-dirty
print method and wouldn't bother to check anyway.

Although these functions are semi-deprecated, it still seems
appropriate to back-patch this.  In passing, also back-patch
b90e6cef1, just to reduce cosmetic differences between the
branches.

Discussion: https://postgr.es/m/3122443.1624735363@sss.pgh.pa.us
2021-06-28 14:17:42 -04:00
Tom Lane
be567deb3a Don't depend on -fwrapv semantics in pgbench's random() function.
Instead use the common/int.h functions to check for integer overflow
in a more C-standard-compliant fashion.  This is motivated by recent
failures on buildfarm member moonjelly, where it appears that
development-tip gcc is optimizing without regard to the -fwrapv
switch.  Presumably that's a gcc bug that will be fixed soon, but
we might as well install cleaner coding here rather than wait.
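
For illustration, the overflow-checked style looks roughly like this (a
sketch using the GCC/Clang overflow builtins that common/int.h relies on
when available; add_s64_checked is a made-up name):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* report overflow explicitly instead of relying on -fwrapv wraparound */
    static bool
    add_s64_checked(int64_t a, int64_t b, int64_t *result)
    {
        return __builtin_add_overflow(a, b, result);
    }

    int
    main(void)
    {
        int64_t sum;

        if (add_s64_checked(INT64_MAX, 1, &sum))
            puts("integer overflow detected");
        else
            printf("sum = %lld\n", (long long) sum);
        return 0;
    }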

(This does not address the question of whether we'll ever be able
to get rid of using -fwrapv.  Testing shows that this spot is the
only place where doing so creates visible regression test failures,
but unfortunately that proves very little.)

Back-patch to v12.  The common/int.h functions exist in v11, but
that branch doesn't use them in any client-side code.  I judge
that this case isn't interesting enough in the real world to take
even a small risk of issues from being the first such use.

Tom Lane and Fabien Coelho

Discussion: https://postgr.es/m/73927.1624815543@sss.pgh.pa.us
2021-06-28 12:40:37 -04:00
Amit Kapila
741deb2603 Fix race condition in TransactionGroupUpdateXidStatus().
When we cannot immediately acquire XactSLRULock in exclusive mode at
commit time, we add ourselves to a list of processes that need their XIDs
status update. We do this if the clog page where we need to update the
current transaction status is the same as the group leader's clog page,
otherwise, we allow the caller to clear it by itself. Now, when we can't
add ourselves to any group, we were not clearing the current proc if it
has already become a member of some group which was leading to an
assertion failure when the same proc was assigned to another backend after
the current backend exits.

Reported-by: Alexander Lakhin
Bug: 17072
Author: Amit Kapila
Tested-By: Alexander Lakhin
Backpatch-through: 11, where it was introduced
Discussion: https://postgr.es/m/17072-2f8764857ef2c92a@postgresql.org
2021-06-28 08:42:48 +05:30
Michael Paquier
fd7bc10ab4 Add test for CREATE INDEX CONCURRENTLY with not-so-immutable predicate
83158f7 improved index_set_state_flags() so that it is possible to use
transactional updates when updating pg_index state flags, but there was
not really a test case that directly stressed the scenario it fixed.
This commit adds such a test, using a predicate that looks valid at
first glance but calls a stable function.

Author: Andrey Lepikhov
Discussion: https://postgr.es/m/9b905019-5297-7372-0ad2-e1a4bb66a719@postgrespro.ru
Backpatch-through: 9.6
2021-06-28 11:17:12 +09:00
Michael Paquier
acb60edf02 Make index_set_state_flags() transactional
3c84046 is the original commit that introduced index_set_state_flags(),
where the presence of SnapshotNow made an in-place update necessary.
SnapshotNow has been removed in 813fb03, so there is no actual reason
not to make this operation transactional.

As reported by Andrey, it is possible to trigger this routine's
assertion that no transactional update is expected when switching the
pg_index state flags, using a predicate marked as immutable but calling
stable or volatile functions.  83158f7 has been around for a couple of
months on HEAD now with no issues found related to it, so it looks safe
enough for a backpatch.

Reported-by: Andrey Lepikhov
Author: Michael Paquier
Reviewed-by: Anastasia Lubennikova
Discussion: https://postgr.es/m/20200903080440.GA8559@paquier.xyz
Discussion: https://postgr.es/m/9b905019-5297-7372-0ad2-e1a4bb66a719@postgrespro.ru
Backpatch-through: 9.6
2021-06-28 10:39:09 +09:00
Tom Lane
2d09448654 Remove memory leaks in isolationtester.
specscanner.l leaked a kilobyte of memory per token of the spec file.
Apparently somebody thought that the introductory code block would be
executed once; but it's once per yylex() call.

A couple of functions in isolationtester.c leaked small amounts of
memory due to not bothering to free one-time allocations.  Might
as well improve these so that valgrind gives this program a clean
bill of health.  Also get rid of an ugly static variable.

Coverity complained about one of the one-time leaks, which led me
to try valgrind'ing isolationtester, which led to discovery of the
larger leak.
2021-06-27 12:45:04 -04:00
Michael Paquier
88274a7a31 Remove non-existing variable reference in MSVC's Solution.pm
Since 8f4fb4c6, the MSVC build grabs the version string from
PACKAGE_VERSION in pg_config.h, but an error message still referenced a
variable that only existed before that change.  This had no consequences
unless one messes up the version number of the build badly enough.

Author: Anton Voloshin
Discussion: https://postgr.es/m/af79ee1b-9962-b299-98e1-f90a289e19e6@postgrespro.ru
Backpatch-through: 13
2021-06-26 13:52:54 +09:00
Michael Paquier
0455b7ccce Remove some useless logs from the TAP tests of pgbench
002_pgbench_no_server was printing some array pointers instead of the
actual contents of those arrays for the expected stdout and stderr
outputs of a tested command.  This did not add any information useful
for debugging, as the test names already make it possible to track
failure locations, if any.

This commit simply removes those logs as the rest of the printed
information is redundant with command_checks_all().

Per discussion with Andrew Dunstan and Álvaro Herrera.

Discussion: https://postgr.es/m/YNXNFaG7IgkzZanD@paquier.xyz
Backpatch-through: 11
2021-06-26 12:40:03 +09:00
Tom Lane
ba815f00a0 Remove unnecessary failure cases in RemoveRoleFromObjectPolicy().
It's not really necessary for this function to open or lock the
relation associated with the pg_policy entry it's modifying.  The
error checks it's making on the rel are if anything counterproductive
(e.g., if we don't want to allow installation of policies on system
catalogs, here is not the place to prevent that).  In particular, it
seems just wrong to insist on an ownership check.  That has the net
effect of forcing people to use superuser for DROP OWNED BY, which
surely is not an effect we want.  Also there is no point in rebuilding
the dependencies of the policy expressions, which aren't being
changed.  Lastly, locking the table also seems counterproductive; it's
not helping to prevent race conditions, since we failed to re-read the
pg_policy row after acquiring the lock.  That means that concurrent
DDL would likely result in "tuple concurrently updated/deleted"
errors; which is the same behavior this code will produce, with less
overhead.

Per discussion of bug #17062.  Back-patch to all supported versions,
as the failure cases this eliminates seem just as undesirable in 9.6
as in HEAD.

Discussion: https://postgr.es/m/1573181.1624220108@sss.pgh.pa.us
2021-06-25 13:59:38 -04:00
Tom Lane
4a20de9d9e Make walsenders show their replication commands in pg_stat_activity.
A walsender process that has executed a SQL command left the text of
that command in pg_stat_activity.query indefinitely, which is quite
confusing if it's in RUNNING state but not doing that query.  An easy
and useful fix is to treat replication commands as if they were SQL
queries, and show them in pg_stat_activity according to the same rules
as for regular queries.  While we're at it, it seems also sensible to
set debug_query_string, allowing error logging and debugging to see
the replication command.

While here, clean up assorted silliness in exec_replication_command:
* Clean up SQLCmd code path, and fix its only-accidentally-not-buggy
  memory management.
* Remove useless duplicate call of SnapBuildClearExportedSnapshot().
* replication_scanner_finish() was never called.

Back-patch of commit f560209c6 into v10-v13.  I'd originally felt
that this didn't merit back-patching, but subsequent confusion
while debugging walsender problems suggests that it'll be useful.
Also, the original commit has now aged long enough to provide some
comfort that it won't cause problems.

Discussion: https://postgr.es/m/2673480.1624557299@sss.pgh.pa.us
Discussion: https://postgr.es/m/880181.1600026471@sss.pgh.pa.us
2021-06-25 10:46:10 -04:00
Michael Paquier
af2e67b47a Cleanup some code related to pgbench log checks in TAP tests
This fixes a couple of problems within the so-said code of this commit
subject:
- Replace the use of open() with slurp_file(), fixing an issue reported
by buildfarm member fairywren, whose perl installation keeps around CRLF
characters, causing the matching patterns for the logs to fail.
- Remove the eval block, which is not really necessary.

This set of issues came to light after fixing a different issue with
c13585fe, and these problems have existed since this code was
introduced.

Reported-by: Andrew Dunstan, and buildfarm member fairywren
Author: Michael Paquier
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/0f49303e-7784-b3ee-200b-cbf67be2eb9e@dunslane.net
Backpatch-through: 11
2021-06-25 20:15:31 +09:00
Thomas Munro
d9c05a9ec4 Prepare for forthcoming LLVM 13 API change.
LLVM 13 (due out in September) has changed the semantics of
LLVMOrcAbsoluteSymbols(), so we need to bump some reference counts to
avoid a double-free that causes crashes and bad query results.

A proactive change seems necessary to avoid having a window of time
where our respective latest releases would interact badly.  It's
possible that the situation could change before then, though.

Thanks to Fabien Coelho for monitoring bleeding edge LLVM and Andres
Freund for tracking down the change.

Back-patch to 11, where the JIT code arrived.

Discussion: https://postgr.es/m/CA%2BhUKGLEy8mgtN7BNp0ooFAjUedDTJj5dME7NxLU-m91b85siA%40mail.gmail.com
2021-06-25 11:29:00 +12:00
Michael Paquier
7a9eaf111a Fix pattern matching logic for logs in TAP tests of pgbench
The logic checking for the format of per-thread logs passed "$re"
directly to grep(), which would cause the test to consider all the logs
a match without caring about their format at all.  Using "/$re/" makes
grep() perform a regex test, which is what we want here.

While on it, improve some of the tests to be more picky with the
patterns expected and add more comments to describe the tests.

Issue discovered while digging into a separate patch.

Author: Fabien Coelho, Michael Paquier
Discussion: https://postgr.es/m/YNPsPAUoVDCpPOGk@paquier.xyz
Backpatch-through: 11
2021-06-25 06:52:47 +09:00
Amit Kapila
56e366f675 Fix ABI break introduced by commit 4daa140a2f.
Move the newly defined enum value REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT
to the end to avoid an ABI break in the back branches.  We need to
back-patch this down to v11 because before that it is already at the end.

Reported-by: Tomas Vondra
Backpatch-through: 11
Discussion: https://postgr.es/m/CAExHW5sPKF-Oovx_qZe4p5oM6Dvof7_P+XgsNAViug15Fm99jA@mail.gmail.com
2021-06-24 15:21:50 +05:30
Heikki Linnakangas
6fb377e5f7 Another fix to relmapper race condition.
In the previous commit, I missed that relmap_redo() was also not
acquiring the RelationMappingLock.  Thanks to Thomas Munro for pointing
that out.

Backpatch-through: 9.6, like previous commit.
Discussion: https://www.postgresql.org/message-id/CA%2BhUKGLev%3DPpOSaL3WRZgOvgk217et%2BbxeJcRr4eR-NttP1F6Q%40mail.gmail.com
2021-06-24 11:19:31 +03:00
Heikki Linnakangas
2a0ab13f8d Prevent race condition while reading relmapper file.
Contrary to the comment here, POSIX does not guarantee atomicity of a
read(), if another process calls write() concurrently. Or at least Linux
does not. Add locking to load_relmap_file() to avoid the race condition.
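
The shape of the fix is simply to serialize readers against the writer.
A hedged backend-style fragment (variable names illustrative, not the
exact load_relmap_file() code):

    /* take the lock the writer holds, so read() cannot observe a torn file */
    LWLockAcquire(RelationMappingLock, LW_SHARED);
    if (read(fd, map, sizeof(RelMapFile)) != sizeof(RelMapFile))
        elog(ERROR, "could not read relation mapping file \"%s\": %m",
             mapfilename);
    LWLockRelease(RelationMappingLock);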

Fixes bug #17064. Thanks to Alexander Lakhin for the report and test case.

Backpatch-through: 9.6, all supported versions.
Discussion: https://www.postgresql.org/message-id/17064-bb0d7904ef72add3@postgresql.org
2021-06-24 10:45:34 +03:00
Tom Lane
5179a1ab77 Allow non-quoted identifiers as isolation test session/step names.
For no obvious reason, isolationtester has always insisted that
session and step names be written with double quotes.  This is
fairly tedious and does little for test readability, especially
since the names that people actually choose almost always look
like normal identifiers.  Hence, let's tweak the lexer to allow
SQL-like identifiers not only double-quoted strings.

(They're SQL-like, not exactly SQL, because I didn't add any
case-folding logic.  Also there's no provision for U&"..." names,
not that anyone's likely to care.)

There is one incompatibility introduced by this change: if you write
"foo""bar" with no space, that used to be taken as two identifiers,
but now it's just one identifier with an embedded quote mark.

I converted all the src/test/isolation/ specfiles to remove
unnecessary double quotes, but stopped there because my
eyes were glazing over already.

Like 741d7f104, back-patch to all supported branches, so that this
isn't a stumbling block for back-patching isolation test changes.

Discussion: https://postgr.es/m/759113.1623861959@sss.pgh.pa.us
2021-06-23 18:41:39 -04:00
Tom Lane
13f3655683 Don't assume GSSAPI result strings are null-terminated.
Our uses of gss_display_status() and gss_display_name() assumed
that the gss_buffer_desc strings returned by those functions are
null-terminated.  It appears that they generally are, given the
lack of field complaints up to now.  However, the available
documentation does not promise this, and some man pages
for gss_display_status() show examples that rely on the
gss_buffer_desc.length field instead of expecting null
termination.  Also, we now have a report that on some
implementations, clang's address sanitizer is of the opinion
that the byte after the specified length is undefined.

Hence, change the code to rely on the length field instead.
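
A hedged sketch of the length-based approach (function and variable
names here are illustrative, not the actual pg_GSS_error code):

    #include <stdio.h>
    #include <gssapi/gssapi.h>

    static void
    print_gss_message(const char *prefix, const gss_buffer_desc *msg)
    {
        /* print exactly msg->length bytes; do not assume NUL termination */
        fprintf(stderr, "%s: %.*s\n", prefix,
                (int) msg->length, (char *) msg->value);
    }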

This might well be cosmetic rather than fixing any real bug, but
it's hard to be sure, so back-patch to all supported branches.
While here, also back-patch the v12 changes that made pg_GSS_error
deal honestly with multiple messages available from
gss_display_status.

Per report from Sudheer H R.

Discussion: https://postgr.es/m/5372B6D4-8276-42C0-B8FB-BD0918826FC3@tekenlight.com
2021-06-23 14:01:32 -04:00
Tom Lane
b961bdfe16 Improve display of query results in isolation tests.
Previously, isolationtester displayed SQL query results using some
ad-hoc code that clearly hadn't had much effort expended on it.
Field values longer than 14 characters weren't separated from
the next field, and usually caused misalignment of the columns
too.  Also there was no visual separation of a query's result
from subsequent isolationtester output.  This made test result
files confusing and hard to read.

To improve matters, let's use libpq's PQprint() function.  Although
that's long since unused by psql, it's still plenty good enough
for the purpose here.
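
For reference, a minimal sketch of printing a result with PQprint() (the
option settings are illustrative, not the exact ones isolationtester
uses):

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    static void
    print_result(const PGresult *res)
    {
        PQprintOpt  opt;

        memset(&opt, 0, sizeof(opt));
        opt.header = 1;         /* include column headers */
        opt.align = 1;          /* pad fields so columns line up */
        opt.fieldSep = "|";     /* separator between columns */

        PQprint(stdout, res, &opt);
    }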

Like 741d7f104, back-patch to all supported branches, so that this
isn't a stumbling block for back-patching isolation test changes.

Discussion: https://postgr.es/m/582362.1623798221@sss.pgh.pa.us
2021-06-23 11:12:31 -04:00
Tom Lane
e2cde85ef2 Use annotations to reduce instability of isolation-test results.
We've long contended with isolation test results that aren't entirely
stable.  Some test scripts insert long delays to try to force stable
results, which is not terribly desirable; but other erratic failure
modes remain, causing unrepeatable buildfarm failures.  I've spent a
fair amount of time trying to solve this by improving the server-side
support code, without much success: that way is fundamentally unable
to cope with diffs that stem from chance ordering of arrival of
messages from different server processes.

We can improve matters on the client side, however, by annotating
the test scripts themselves to show the desired reporting order
of events that might occur in different orders.  This patch adds
three types of annotations to deal with (a) test steps that might or
might not complete their waits before the isolationtester can see them
waiting; (b) test steps in different sessions that can legitimately
complete in either order; and (c) NOTIFY messages that might arrive
before or after the completion of a step in another session.  We might
need more annotation types later, but this seems to be enough to deal
with the instabilities we've seen in the buildfarm.  It also lets us
get rid of all the long delays that were previously used, cutting more
than a minute off the runtime of the isolation tests.

Back-patch to all supported branches, because the buildfarm
instabilities affect all the branches, and because it seems desirable
to keep isolationtester's capabilities the same across all branches
to simplify possible future back-patching of tests.

Discussion: https://postgr.es/m/327948.1623725828@sss.pgh.pa.us
2021-06-22 21:43:12 -04:00
Tom Lane
6f1321d5ae Restore the portal-level snapshot for simple expressions, too.
Commits 84f5c2908 et al missed the need to cover plpgsql's "simple
expression" code path.  If the first thing we execute after a
COMMIT/ROLLBACK is one of those, rather than a full-fledged SPI command,
we must explicitly do EnsurePortalSnapshotExists() to make sure we have
an outer snapshot.  Note that it wouldn't be good enough to just push a
snapshot for the duration of the expression execution: what comes back
might be toasted, so we'd better have a snapshot protecting it.

The test case demonstrating this fact cheats a bit by marking a SQL
function immutable even though it fetches from a table.  That's
nothing that users haven't been seen to do, though.

Per report from Jim Nasby.  Back-patch to v11, like the previous fix.

Discussion: https://postgr.es/m/378885e4-f85f-fc28-6c91-c4d1c080bf26@amazon.com
2021-06-22 17:48:39 -04:00
Thomas Munro
6dcb185bfc Back-patch "Tolerate version lookup failure for old style Windows locale names."
If users provide old style pre-standardized Windows locale names in a
CREATE COLLATION command, the OS is unable to provide version
information.  Continue without capturing version information, rather
than exposing an OS error.

This was originally done in commit 9f12a3b9 for 14 only, to support
future features that might encounter old style names from initdb's
default.  It wasn't done in 13 because I didn't consider that users
might actually want to use the old format explicitly (something we
should consider blocking in a future release with a better error
message, but that's not a policy we've decided on yet).

Back-patch to 13, based on the field complaint in pgsql-bugs #17058.

Reported-by: Yasushi Yamashita <developer@yamashi-ta.jp>
Discussion: https://postgr.es/m/17058-b49f5793c912c5aa%40postgresql.org
2021-06-22 13:21:25 +12:00
Tom Lane
33af10c598 Fix misbehavior of DROP OWNED BY with duplicate polroles entries.
Ordinarily, a pg_policy.polroles array wouldn't list the same role
more than once; but CREATE POLICY does not prevent that.  If we
perform DROP OWNED BY on a role that is listed more than once,
RemoveRoleFromObjectPolicy either suffered an assertion failure
or encountered a tuple-updated-by-self error.  Rewrite it to cope
correctly with duplicate entries, and add a CommandCounterIncrement
call to prevent the other problem.

Per discussion, there's other cleanup that ought to happen here,
but this seems like the minimum essential fix.

Per bug #17062 from Alexander Lakhin.  It's been broken all along,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/17062-11f471ae3199ca23@postgresql.org
2021-06-18 18:00:09 -04:00
Tom Lane
102f31a208 Avoid scribbling on input node tree in CREATE/ALTER DOMAIN.
This works fine in the "simple Query" code path; but if the
statement is in the plan cache then it's corrupted for future
re-execution.  Apply copyObject() to protect the original
tree from modification, as we've done elsewhere.

This narrow fix is applied only to the back branches.  In HEAD,
the problem was fixed more generally by commit 7c337b6b5; but
that changed ProcessUtility's API, so it's infeasible to
back-patch.

Per bug #17053 from Charles Samborski.

Discussion: https://postgr.es/m/931771.1623893989@sss.pgh.pa.us
Discussion: https://postgr.es/m/17053-3ca3f501bbc212b4@postgresql.org
2021-06-18 12:09:22 -04:00
Andrew Dunstan
5b6b5e5ee5
Don't set a fast default for anything but a plain table
The fast default code added in Release 11 omitted to check that the
table a fast default was being added to was a plain table.  Thus one
could be added to a foreign table, which predictably blows up.  Here we
perform that check.

In addition, on the back branches, since some of these might have
escaped into the wild, if we encounter a missing value for
an attribute of something other than a plain table we ignore it.

Fixes bug #17056

Backpatch to release 11.

Reviewed by: Andres Freund, Álvaro Herrera and Tom Lane
2021-06-18 07:44:58 -04:00
Amit Kapila
357cb8f07f Fix valgrind issue in pgoutput.c.
We use a tuple conversion map for partitions when replicating using an
ancestor's schema, to convert tuples from the partition's type to the
ancestor's.  We could process invalidation messages that access this map
before the map was initialized.

This issue happens only in version 13, as HEAD already has code that
initializes each relation entry before we can process any invalidation
message.  The issue was introduced by commit d250568121 in version 13.

Reported-by: Tom Lane, as per buildfarm member skink
Author: Amit Langote
Reviewed-by: Dilip Kumar, Amit Kapila
Discussion: https://www.postgresql.org/message-id/648020.1623854904@sss.pgh.pa.us
2021-06-18 08:51:18 +05:30
Heikki Linnakangas
42f782be91 Tidy up GetMultiXactIdMembers()'s behavior on error
One of the error paths left *members uninitialized. That's not a live
bug, because most callers don't look at *members when the function
returns -1, but let's be tidy. One caller, in heap_lock_tuple(), does
"if (members != NULL) pfree(members)", but AFAICS it never passes an
invalid 'multi' value so it should not reach that error case.

The callers are also a bit inconsistent in their expectations.
heap_lock_tuple() pfrees the 'members' array if it's not-NULL, others
pfree() it if "nmembers >= 0", and others if "nmembers > 0". That's
not a live bug either, because the function should never return 0, but
add an Assert for that to make it more clear. I left the callers alone
for now.

I also moved the line where we set *nmembers. It wasn't wrong before,
but I like to do that right next to the 'return' statement, to make it
clear that it's always set on return.

Also remove one unreachable return statement after ereport(ERROR), for
brevity and for consistency with the similar if-block right after it.

Author: Greg Nancarrow, with additional changes by me
Backpatch-through: 9.6, all supported versions
2021-06-17 14:52:45 +03:00
Peter Eisentraut
3989f8fb9d Fix subtransaction test for Python 3.10
Starting with Python 3.10, the stack trace looks different:
  -  PL/Python function "subtransaction_exit_subtransaction_in_with", line 3, in <module>
  -    s.__exit__(None, None, None)
  +  PL/Python function "subtransaction_exit_subtransaction_in_with", line 2, in <module>
  +    with plpy.subtransaction() as s:
Using try/except specifically makes the error look always the same.

(See https://github.com/python/cpython/pull/25719 for the discussion
of this change in Python.)

Author: Honza Horak <hhorak@redhat.com>
Discussion: https://www.postgresql.org/message-id/flat/853083.1620749597%40sss.pgh.pa.us
RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1959080
2021-06-17 09:01:22 +02:00
Tom Lane
d03a41d1c8 Fix plancache refcount leak after error in ExecuteQuery.
When stuffing a plan from the plancache into a Portal, one is
not supposed to risk throwing an error between GetCachedPlan and
PortalDefineQuery; if that happens, the plan refcount incremented
by GetCachedPlan will be leaked.  I managed to break this rule
while refactoring code in 9dbf2b7d7.  There is no visible
consequence other than some memory leakage, and since nobody is
very likely to trigger the relevant error conditions many times
in a row, it's not surprising we haven't noticed.  Nonetheless,
it's a bug, so rearrange the order of operations to remove the
hazard.

Noted on the way to looking for a better fix for bug #17053.
This mistake is pretty old, so back-patch to all supported
branches.
2021-06-16 19:30:17 -04:00
Heikki Linnakangas
e89a8e30e0 Fix outdated comment that talked about seek position of WAL file.
Since commit c24dcd0cfd, we have been using pg_pread() to read the WAL
file, which doesn't change the seek position (unless we fall back to
the implementation in src/port/pread.c). Update comment accordingly.

Backpatch-through: 12, where we started to use pg_pread()
2021-06-16 12:45:53 +03:00
Andrew Dunstan
d906d106f8
Further refinement of stuck_on_old_timeline recovery test
TestLib::perl2host can take a file argument as well as a directory
argument, so that code becomes substantially simpler. Also add comments
on why we're using forward slashes, and why we're setting
PERL_BADLANG=0.

Discussion: https://postgr.es/m/e9947bcd-20ee-027c-f0fe-01f736b7e345@dunslane.net
2021-06-15 15:36:50 -04:00
Amit Kapila
602a32a685 Fix decoding of speculative aborts.
During decoding of speculative inserts, we were relying on confirmation
records or the next change records to clean up the toast hash.  But that
could lead to multiple problems: (a) a memory leak if there is neither a
confirmation record nor any other record after the toast insertion for a
speculative insert in the transaction, and (b) errors and assertion
failures if the next operation is not an insert/update on the same table.

The fix is to start queuing a spec abort change and to clean up the
toast hash and change record while processing it.  Currently, we are
queuing the
spec aborts for both toast and main table even though we perform cleanup
while processing the main table's spec abort record. Later, if we have a
way to distinguish between the spec abort record of toast and the main
table, we can avoid queuing the change for spec aborts of toast tables.

Reported-by: Ashutosh Bapat
Author: Dilip Kumar
Reviewed-by: Amit Kapila
Backpatch-through: 9.6, where it was introduced
Discussion: https://postgr.es/m/CAExHW5sPKF-Oovx_qZe4p5oM6Dvof7_P+XgsNAViug15Fm99jA@mail.gmail.com
2021-06-15 08:41:16 +05:30
Tom Lane
2da5ddf834 Update variant expected-result file.
This should have been updated in d2d8a229b, but it was overlooked.
According to 31a877f18 which added it, this file is meant to show the
results you get under default_transaction_isolation = serializable.
We've largely lost track of that goal in other isolation tests, but
as long as we've got this one, it should be right.

Noted while fooling about with the isolationtester.
2021-06-14 21:58:26 -04:00
Tom Lane
0483ae7c51 Remove orphaned expected-result file.
This should have been removed in 43e084197, which removed the
corresponding spec file.  Noted while fooling about with the
isolationtester.
2021-06-14 21:28:21 -04:00
Tom Lane
bc7885b7f5 Work around portability issue with newer versions of mktime().
Recent glibc versions have made mktime() fail if tm_isdst is
inconsistent with the prevailing timezone; in particular it fails for
tm_isdst = 1 when the zone is UTC.  (This seems wildly inconsistent
with the POSIX-mandated treatment of "incorrect" values for the other
fields of struct tm, so if you ask me it's a bug, but I bet they'll
say it's intentional.)  This has been observed to cause cosmetic
problems when pg_restore'ing an archive created in a different
timezone.

To fix, do mktime() using the field values from the archive, and if
that fails try again with tm_isdst = -1.  This will give a result
that's off by the UTC-offset difference from the original zone, but
that was true before, too.  It's not terribly critical since we don't
do anything with the result except possibly print it.  (Someday we
should flush this entire bit of logic and record a standard-format
timestamp in the archive instead.  That's not okay for a back-patched
bug fix, though.)
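
A standalone sketch of the retry described above (illustrative; not the
exact pg_restore code):

    #include <time.h>

    static time_t
    mktime_with_isdst_fallback(struct tm *tm)
    {
        time_t      result = mktime(tm);

        if (result == (time_t) -1)
        {
            /* let the C library work out whether DST applies, then retry */
            tm->tm_isdst = -1;
            result = mktime(tm);
        }
        return result;
    }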

Also, guard our only other use of mktime() by having initdb's
build_time_t() set tm_isdst = -1 not 0.  This case could only have
an issue in zones that are DST year-round; but I think some do exist,
or could in future.

Per report from Wells Oliver.  Back-patch to all supported
versions, since any of them might need to run with a newer glibc.

Discussion: https://postgr.es/m/CAOC+FBWDhDHO7G-i1_n_hjRzCnUeFO+H-Czi1y10mFhRWpBrew@mail.gmail.com
2021-06-13 14:32:42 -04:00
Andrew Dunstan
47d5781cb1
Further tweaks to stuck_on_old_timeline recovery test
Translate path slashes on target directory path. This was confusing old
branches, but is applied to all branches for the sake of uniformity.
Perl is perfectly able to understand paths with forward slashes.

Along the way, restore the previous archive_wait query, for the sake of
uniformity with other tests, per gripe from Tom Lane.
2021-06-13 07:19:34 -04:00
Michael Paquier
545aa6a593 Ignore more environment variables in pg_regress.c
This is similar to the work done in 8279f68 for TestLib.pm, where
environment variables, if set, may cause unwanted failures when using a
temporary installation with pg_regress.  The list of variables reset is
adjusted in each stable branch depending on what is supported.

Comments are added to remember that the lists in TestLib.pm and
pg_regress.c had better be kept in sync.

Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/YMNR9GYDn+fHlMta@paquier.xyz
Backpatch-through: 9.6
2021-06-13 20:07:45 +09:00
Tom Lane
94e21021cd Restore robustness of TAP tests that wait for postmaster restart.
Several TAP tests use poll_query_until() to wait for the postmaster
to restart.  They were checking to see if a trivial query
(e.g. "SELECT 1") succeeds.  However, that's problematic in the wake
of commit 11e9caff8, because now that we feed said query to psql
via stdin, we risk IPC::Run whining about a SIGPIPE failure if psql
quits before reading the query.  Hence, we can't use a nonempty
query in cases where we need to wait for connection failures to
stop happening.

Per the precedent of commits c757a3da0 and 6d41dd045, we can pass
"undef" as the query in such cases to ensure that IPC::Run has
nothing to write.  However, then we have to say that the expected
output is empty, and this exposes a deficiency in poll_query_until:
if psql fails altogether and returns empty stdout, poll_query_until
will treat that as a success!  That's because, contrary to its
documentation, it makes no actual check for psql failure, looking
neither at the exit status nor at stderr.

To fix that, adjust poll_query_until to insist on empty stderr as
well as a stdout match.  (I experimented with checking exit status
instead, but it seems that psql often does exit(1) in cases that we
need to consider successes.  That might be something to fix someday,
but it would be a non-back-patchable behavior change.)

Back-patch to v10.  The test cases needing this exist only as far
back as v11, but it seems wise to keep poll_query_until's behavior
the same in v10, in case we back-patch another such test case in
future.  (9.6 does not currently need this change, because in that
branch poll_query_until can't be told to accept empty stdout as
a success case.)

Per assorted buildfarm failures, mostly on hoverfly.

Discussion: https://postgr.es/m/CAA4eK1+zM6L4QSA1XMvXY_qqWwdUmqkOS1+hWvL8QcYEBGA1Uw@mail.gmail.com
2021-06-12 15:12:10 -04:00
Tom Lane
f479ea94bd Ensure pg_filenode_relation(0, 0) returns NULL.
Previously, a zero value for the relfilenode resulted in
a confusing error message about "unexpected duplicate".
This function returns NULL for other invalid relfilenode
values, so zero should be treated likewise.

It's been like this all along, so back-patch to all supported
branches.

Justin Pryzby

Discussion: https://postgr.es/m/20210612023324.GT16435@telsasoft.com
2021-06-12 13:29:24 -04:00
Tom Lane
8b5055812c Don't use Asserts to check for violations of replication protocol.
Using an Assert to check the validity of incoming messages is an
extremely poor decision.  In a debug build, it should not be that easy
for a broken or malicious remote client to crash the logrep worker.
The consequences could be even worse in non-debug builds, which will
fail to make such checks at all, leading to who-knows-what misbehavior.
Hence, promote every Assert that could possibly be triggered by wrong
or out-of-order replication messages to a full test-and-ereport.

To avoid bloating the set of messages the translation team has to cope
with, establish a policy that replication protocol violation error
reports don't need to be translated.  Hence, all the new messages here
use errmsg_internal().  A couple of old messages are changed likewise
for consistency.
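
The pattern, in a hedged sketch (the condition and message are
illustrative, not a specific hunk from worker.c):

    /* before: Assert(in_streamed_transaction); -- a broken peer could crash us */
    if (!in_streamed_transaction)
        ereport(ERROR,
                (errcode(ERRCODE_PROTOCOL_VIOLATION),
                 /* errmsg_internal: protocol-violation messages are not translated */
                 errmsg_internal("STREAM STOP message without STREAM START")));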

Along the way, fix some non-idiomatic or outright wrong uses of
hash_search().

Most of these mistakes are new with the "streaming replication"
patch (commit 464824323), but a couple go back a long way.
Back-patch as appropriate.

Discussion: https://postgr.es/m/1719083.1623351052@sss.pgh.pa.us
2021-06-12 12:59:15 -04:00
Andrew Dunstan
45322bd9b9
Fix new recovery test for use under msys
Commit caba8f0d43 wasn't quite right for msys, as demonstrated by
several buildfarm animals, including jacana and fairywren. We need to
use the msys perl in the archive command, but call it in such a way that
Windows will understand the path. Furthermore, inside the copy script we
need to convert a Windows path to an msys path.
2021-06-12 08:51:29 -04:00
Michael Paquier
fad9c8e93a Improve log pattern detection in recently-added TAP tests
ab55d74 introduced some tests that check for rows reported as missing in
logical replication subscriptions for partitioned tables, relying on
logic that looks up the generated logs by scanning the whole file.  This
commit makes the logic more precise, by scanning the logs only from the
position before the key queries are run to the position where we check
for the logs.  This will reduce the risk of issues with log patterns
overlapping with each other if those tests get more complicated in the
future.

Per discussion with Tom Lane.

Discussion: https://postgr.es/m/YMP+Gx2S8meYYHW4@paquier.xyz
Backpatch-through: 13
2021-06-12 15:30:00 +09:00
Michael Paquier
f865d2affc Remove PGSSLCRLDIR from the list of variables ignored in TAP tests
This variable was present in the list added by 9d660670, but it is not
supported by this branch.  Issue noticed while diving into a similar
change for pg_regress.c.

Backpatch-through: 9.6
2021-06-12 10:39:21 +09:00
Alvaro Herrera
065ce069ae
Report sort phase progress in parallel btree build
We were already reporting it, but only after the parallel workers were
finished, which is visibly much later than what happens in a serial
build.

With this change we report it when the leader starts its own sort phase
when participating in the build (the normal case).  Now this might
happen a little later than when the workers start their sorting phases,
but a) communicating the actual phase start from workers is likely to be
a hassle, and b) the sort phase start is pretty fuzzy anyway, since
sorting per se is actually initiated by tuplesort.c internally earlier
than tuplesort_performsort() is called.

Backpatch to pg12, where the progress reporting code for CREATE INDEX
went in.

Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-by: Greg Nancarrow <gregn4422@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/1128176d-1eee-55d4-37ca-e63644422adb
2021-06-11 19:07:32 -04:00
Tom Lane
b270713fd4 Fix multiple crasher bugs in partitioned-table replication logic.
apply_handle_tuple_routing(), having detected and reported that
the tuple it needed to update didn't exist, tried to update that
tuple anyway, leading to a null-pointer dereference.

logicalrep_partition_open() failed to ensure that the
LogicalRepPartMapEntry it built for a partition was fully
independent of that for the partition root, leading to
trouble if the root entry was later freed or rebuilt.

Meanwhile, on the publisher's side, pgoutput_change() sometimes
attempted to apply execute_attr_map_tuple() to a NULL tuple.

The first of these was reported by Sergey Bernikov in bug #17055;
I found the other two while developing some test cases for this
sadly under-tested code.

Diagnosis and patch for the first issue by Amit Langote; patches
for the others by me; new test cases by me.  Back-patch to v13
where this logic came in.

Discussion: https://postgr.es/m/17055-9ba800ec8522668b@postgresql.org
2021-06-11 16:12:36 -04:00
Alvaro Herrera
218b101008
Fix race condition in invalidating obsolete replication slots
The code added to mark replication slots invalid in commit c655077639
had the race condition that a slot can be dropped or advanced
concurrently with checkpointer trying to invalidate it.  Rewrite the
code to close those races.

The changes to ReplicationSlotAcquire's API added with c655077639 are
not necessary anymore.  To avoid an ABI break in released branches, this
commit leaves that unchanged; it'll be changed in a master-only commit
separately.

Backpatch to 13, where this code first appeared.

Reported-by: Andres Freund <andres@anarazel.de>
Author: Andres Freund <andres@anarazel.de>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20210408001037.wfmk6jud36auhfqm@alap3.anarazel.de
2021-06-11 12:16:14 -04:00
Tom Lane
6e43f1c2df Rearrange logrep worker's snapshot handling some more.
It turns out that worker.c's code path for TRUNCATE was also
careless about establishing a snapshot while executing user-defined
code, allowing the checks added by commit 84f5c2908 to fail when
a trigger is fired in that context.

We could just wrap Push/PopActiveSnapshot around the truncate call,
but it seems better to establish a policy of holding a snapshot
throughout execution of a replication step.  To help with that and
possible future requirements, replace the previous ensure_transaction
calls with pairs of begin/end_replication_step calls.

Per report from Mark Dilger.  Back-patch to v11, like the previous
changes.

Discussion: https://postgr.es/m/B4A3AF82-79ED-4F4C-A4E5-CD2622098972@enterprisedb.com
2021-06-10 12:27:27 -04:00
Robert Haas
3465328aa1 Adjust new test case to set wal_keep_size.
Per buildfarm member conchuela and Kyotaro Horiguchi, it's possible
for the WAL segment that the cascading standby needs to be removed
too quickly. Hopefully this will prevent that.

Kyotaro Horiguchi

Discussion: http://postgr.es/m/20210610.101240.1270925505780628275.horikyota.ntt@gmail.com
2021-06-10 09:47:25 -04:00
Robert Haas
0826564292 Fix corner case failure of new standby to follow new primary.
This only happens if (1) the new standby has no WAL available locally,
(2) the new standby is starting from the old timeline, (3) the promotion
happened in the WAL segment from which the new standby is starting,
(4) the timeline history file for the new timeline is available from
the archive but the WAL files for it are not (i.e. this is a race),
(5) the WAL files for the new timeline are available via streaming,
and (6) recovery_target_timeline='latest'.

Commit ee994272ca introduced this
logic and was an improvement over the previous code, but it mishandled
this case. If recovery_target_timeline='latest' and restore_command is
set, validateRecoveryParameters() can change recoveryTargetTLI to be
different from receiveTLI. If streaming is then tried afterward,
expectedTLEs gets initialized with the history of the wrong timeline.
It's supposed to be a list of entries explaining how to get to the
target timeline, but in this case it ends up with a list of entries
explaining how to get to the new standby's original timeline, which
isn't right.

Dilip Kumar and Robert Haas, reviewed by Kyotaro Horiguchi.

Discussion: http://postgr.es/m/CAFiTN-sE-jr=LB8jQuxeqikd-Ux+jHiXyh4YDiZMPedgQKup0g@mail.gmail.com
2021-06-09 16:17:13 -04:00
Robert Haas
99a0a2ada8 Allow PostgresNode.pm's backup method to accept backup_options.
Partial back-port of commit 081876d75e.
A test case for a pending bug fix needs this capability, but the code
on 9.6 is significantly different, so I'm only back-patching this
change as far as v10. We'll have to work around the problem another
way in v9.6.

Discussion: http://postgr.es/m/CAFiTN-tcivNvL0Rg6rD7_CErNfE75H7+gh9WbMxjbgsattja1Q@mail.gmail.com
2021-06-09 12:28:39 -04:00
Michael Paquier
6bfcc7d0d1 Fix inconsistencies in psql --help=commands
The set of subcommands supported by \dAp, \do and \dy was described
incorrectly in psql's --help.  The documentation was already consistent
with the code.

Reported-by: inoas, from IRC
Author: Matthijs van der Vleuten
Reviewed-by: Neil Chen
Discussion: https://postgr.es/m/6a984e24-2171-4039-9050-92d55e7b23fe@www.fastmail.com
Backpatch-through: 9.6
2021-06-09 16:25:49 +09:00
Tom Lane
c5b2818413 Force NO SCROLL for plpgsql's implicit cursors.
Further thought about bug #17050 suggests that it's a good idea
to use CURSOR_OPT_NO_SCROLL for the implicit cursor opened by
a plpgsql FOR-over-query loop.  This ensures that, if somebody
commits inside the loop, PersistHoldablePortal won't try to
rewind and re-read the cursor.  While we'd have selected NO_SCROLL
anyway if FOR UPDATE/SHARE appears in the query, there are other
hazards with volatile functions; and in any case, it's silly to
expend effort storing rows that we know for certain won't be needed.

(While here, improve the comment in exec_run_select, which was a bit
confused about the rationale for when we can use parallel mode.
Cursor operations aren't a hazard for nameless portals.)

This wasn't an issue until v11, which introduced the possibility
of persisting such cursors.  Hence, back-patch to v11.

Per bug #17050 from Алексей Булгаков.

Discussion: https://postgr.es/m/17050-f77aa827dc85247c@postgresql.org
2021-06-08 18:40:06 -04:00
Tom Lane
c1fd756fd2 Avoid misbehavior when persisting a non-stable cursor.
PersistHoldablePortal has long assumed that it should store the
entire output of the query-to-be-persisted, which requires rewinding
and re-reading the output.  This is problematic if the query is not
stable: we might get different row contents, or even a different
number of rows, which'd confuse the cursor state mightily.

In the case where the cursor is NO SCROLL, this is very easy to
solve: just store the remaining query output, without any rewinding,
and tweak the portal's cursor state to match.  Aside from removing
the semantic problem, this could be significantly more efficient
than storing the whole output.

If the cursor is scrollable, there's not much we can do, but it
was already the case that scrolling a volatile query's result was
pretty unsafe.  We can just document more clearly that getting
correct results from that is not guaranteed.

There are already prohibitions in place on using SCROLL with
FOR UPDATE/SHARE, which is one way for a SELECT query to have
non-stable results.  We could imagine prohibiting SCROLL when
the query contains volatile functions, but that would be
expensive to enforce.  Moreover, it could break applications
that work just fine, if they have functions that are in fact
stable but the user neglected to mark them so.  So settle for
documenting the hazard.

While this problem has existed in some guise for a long time,
it got a lot worse in v11, which introduced the possibility
of persisting plpgsql cursors (perhaps implicit ones) even
when they violate the rules for what can be marked WITH HOLD.
Hence, I've chosen to back-patch to v11 but not further.

Per bug #17050 from Алексей Булгаков.

Discussion: https://postgr.es/m/17050-f77aa827dc85247c@postgresql.org
2021-06-08 17:50:15 -04:00
Michael Paquier
949e32ee5d Remove unnecessary declaration in win32_port.h
Mis-merge introduced by e2f21ff, where pgwin32_setenv() was listed but
not defined in win32env.c.  This had no consequences as this routine
does not exist in this branch.

Only REL_12_STABLE and REL_13_STABLE got that wrong.

Backpatch-through: 12
2021-06-08 13:40:03 +09:00
Tom Lane
5b64368742 Fix incautious handling of possibly-miscoded strings in client code.
An incorrectly-encoded multibyte character near the end of a string
could cause various processing loops to run past the string's
terminating NUL, with results ranging from no detectable issue to
a program crash, depending on what happens to be in the following
memory.

This isn't an issue in the server, because we take care to verify
the encoding of strings before doing any interesting processing
on them.  However, that lack of care leaked into client-side code
which shouldn't assume that anyone has validated the encoding of
its input.

Although this is certainly a bug worth fixing, the PG security team
elected not to regard it as a security issue, primarily because
any untrusted text should be sanitized by PQescapeLiteral or
the like before being incorporated into a SQL or psql command.
(If an app fails to do so, the same technique can be used to
cause SQL injection, with probably much more dire consequences
than a mere client-program crash.)  Those functions were already
made proof against this class of problem, cf CVE-2006-2313.

To fix, invent PQmblenBounded() which is like PQmblen() except it
won't return more than the number of bytes remaining in the string.
In HEAD we can make this a new libpq function, as PQmblen() is.
It seems imprudent to change libpq's API in stable branches though,
so in the back branches define PQmblenBounded as a macro in the files
that need it.  (Note that just changing PQmblen's behavior would not
be a good idea; notably, it would completely break the escaping
functions' defense against this exact problem.  So we just want a
version for those callers that don't have any better way of handling
this issue.)
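
A sketch of the bounded macro and a typical scanning loop (the exact
back-branch definition may differ):

    #include <string.h>
    #include <libpq-fe.h>

    /* never report more bytes than remain before the terminating NUL */
    #define PQmblenBounded(s, e)  strnlen((s), PQmblen((s), (e)))

    /* client-side loop that can no longer run past the end of the string */
    static size_t
    count_chars(const char *str, int encoding)
    {
        size_t      n = 0;

        for (const char *p = str; *p != '\0'; p += PQmblenBounded(p, encoding))
            n++;
        return n;
    }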

Per private report from houjingyi.  Back-patch to all supported branches.
2021-06-07 14:15:25 -04:00
Andrew Dunstan
4ed9dacb2f
In PostgresNode.pm, don't pass SQL to psql on the command line
The Msys shell mangles certain patterns in its command line, so avoid
handing arbitrary SQL to psql on the command line and instead use
IPC::Run's redirection facility for stdin. This pattern is already
mostly what's used, but query_poll_until() was not doing the right thing.

Problem discovered on the buildfarm when a new TAP test failed on msys.
2021-06-03 17:29:11 -04:00
Michael Paquier
75d66d10e0 Reduce risks of conflicts in internal queries of REFRESH MATVIEW CONCURRENTLY
The internal SQL queries used by REFRESH MATERIALIZED VIEW CONCURRENTLY
include some aliases for its diff and temporary relations with
rather-generic names: diff, newdata, newdata2 and mv.  Depending on the
queries used for the materialized view, using CONCURRENTLY could lead to
some internal failures if the query and those internal aliases conflict.

Those names have been chosen in 841c29c8.  This commit switches instead
to a naming pattern which is less likely to cause conflicts, based
on an idea from Thomas Munro, by appending _$ to those aliases.  This is
not perfect as those new names could still conflict, but at least it has
the advantage of keeping the code readable and simple while reducing the
likelihood of conflicts to close to zero.

Reported-by: Mathis Rudolf
Author: Bharath Rupireddy
Reviewed-by: Bernd Helmle, Thomas Munro, Michael Paquier
Discussion: https://postgr.es/m/109c267a-10d2-3c53-b60e-720fcf44d9e8@credativ.de
Backpatch-through: 9.6
2021-06-03 15:28:37 +09:00
Michael Paquier
9d66067054 Ignore more environment variables in TAP tests
Various environment variables were not getting reset in the TAP tests,
which would cause failures depending on the tests or the environment
variables involved.  For example, PGSSL{MAX,MIN}PROTOCOLVERSION could
cause failures in the SSL tests.  Even worse, a junk value of
PGCLIENTENCODING makes a server startup fail.  The list of variables
reset is adjusted in each stable branch depending on what is supported.

While on it, simplify a bit the code per a suggestion from Andrew
Dunstan, using a list of variables instead of doing single deletions.

Reviewed-by: Andrew Dunstan, Daniel Gustafsson
Discussion: https://postgr.es/m/YLbjjRpucIeZ78VQ@paquier.xyz
Backpatch-through: 9.6
2021-06-03 11:51:05 +09:00
Tom Lane
6753a5b7e8 Fix planner's row-mark code for inheritance from a foreign table.
Commit 428b260f8 broke planning of cases where row marks are needed
(SELECT FOR UPDATE, etc) and one of the query's tables is a foreign
table that has regular table(s) as inheritance children.  We got the
reverse case right, but apparently were thinking that foreign tables
couldn't be inheritance parents.  Not so; so we need to be able to
add a CTID junk column while adding a new child, not only a wholerow
junk column.

Back-patch to v12 where the faulty code came in.

Amit Langote

Discussion: https://postgr.es/m/CA+HiwqEmo3FV1LAQ4TVyS2h1WM=kMkZUmbNuZSCnfHvMcUcPeA@mail.gmail.com
2021-06-02 14:38:14 -04:00
Tom Lane
e5b0fffa17 Reject SELECT ... GROUP BY GROUPING SETS (()) FOR UPDATE.
This case should be disallowed, just as FOR UPDATE with a plain
GROUP BY is disallowed; FOR UPDATE only makes sense when each row
of the query result can be identified with a single table row.
However, we missed teaching CheckSelectLocking() to check
groupingSets as well as groupClause, so that it would allow
degenerate grouping sets.  That resulted in a bad plan and
a null-pointer dereference in the executor.

Looking around for other instances of the same bug, the only one
I found was in examine_simple_variable().  That'd just lead to
silly estimates, but it should be fixed too.

Per private report from Yaoguang Chen.
Back-patch to all supported branches.
2021-06-01 11:12:56 -04:00
Amit Kapila
d250568121 pgoutput: Fix memory leak due to RelationSyncEntry.map.
Release memory allocated when creating the tuple-conversion map and its
component TupleDescs when its owning sync entry is invalidated.
TupleDescs must also be freed when no map is deemed necessary to begin
with.

Reported-by: Andres Freund
Author: Amit Langote
Reviewed-by: Takamichi Osumi, Amit Kapila
Backpatch-through: 13, where it was introduced
Discussion: https://postgr.es/m/MEYP282MB166933B1AB02B4FE56E82453B64D9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2021-06-01 14:25:19 +05:30
Michael Paquier
e2f21ff606 Add fallback implementation for setenv()
This fixes compilation on Windows with MSVC and Kerberos, as the
missing implementation of setenv() causes the GSSAPI code to fail to
compile.  This was only reproducible when building the code with
Kerberos, something that buildfarm animal hamerkop has fixed recently.
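
A hedged sketch of what such a fallback can look like with the MSVC
runtime (illustrative only; the actual code cherry-picked from 7ca37fb0
differs in its details):

    #include <stdlib.h>

    /* minimal setenv() substitute for Windows builds (sketch) */
    static int
    setenv_fallback(const char *name, const char *value, int overwrite)
    {
        if (!overwrite && getenv(name) != NULL)
            return 0;               /* keep the existing value */
        return _putenv_s(name, value) == 0 ? 0 : -1;
    }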

This issue only happens on 12 and 13, as this code was introduced
in b0b39f7.  HEAD is already able to compile properly thanks to
7ca37fb0, and this commit is a minimal cherry-pick of it.

Thanks to Tom Lane for the discussion.

Discussion: https://postgr.es/m/YLDtm5WGjPxm6ua4@paquier.xyz
Backpatch-through: 12
2021-06-01 09:27:25 +09:00
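A minimal sketch of the general idea, not PostgreSQL's actual fallback:
implement setenv() on top of putenv() (spelled _putenv with MSVC) for
platforms that lack it.  The function name here is made up for
illustration.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative setenv() fallback built on putenv(); not the real code. */
    static int
    setenv_fallback(const char *name, const char *value, int overwrite)
    {
        size_t  len;
        char   *envstr;

        if (!overwrite && getenv(name) != NULL)
            return 0;

        /* putenv() keeps a pointer to the string, so it must stay allocated. */
        len = strlen(name) + strlen(value) + 2;
        envstr = malloc(len);
        if (envstr == NULL)
            return -1;
        snprintf(envstr, len, "%s=%s", name, value);

        return putenv(envstr);
    }

    int
    main(void)
    {
        setenv_fallback("PGAPPNAME", "demo", 1);
        printf("PGAPPNAME=%s\n", getenv("PGAPPNAME"));
        return 0;
    }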
Tom Lane
fe6f63286a Fix mis-planning of repeated application of a projection.
create_projection_plan contains a hidden assumption (here made
explicit by an Assert) that a projection-capable Path will yield a
projection-capable Plan.  Unfortunately, that assumption is violated
only a few lines away, by create_projection_plan itself.  This means
that two stacked ProjectionPaths can yield an outcome where we try to
jam the upper path's tlist into a non-projection-capable child node,
resulting in an invalid plan.

There isn't any good reason to have stacked ProjectionPaths; indeed the
whole concept is faulty, since the set of Vars/Aggs/etc needed by the
upper one wouldn't necessarily be available in the output of the lower
one, nor could the lower one create such values if they weren't
available from its input.  Hence, we can fix this by adjusting
create_projection_path to strip any top-level ProjectionPath from the
subpath it's given.  (This amounts to saying "oh, we changed our
minds about what we need to project here".)

The test case added here only fails in v13 and HEAD; before that, we
don't attempt to shove the Sort into the parallel part of the plan,
for reasons that aren't entirely clear to me.  However, all the
directly-related code looks generally the same as far back as v11,
where the hazard was introduced (by d7c19e62a).  So I've got no faith
that the same type of bug doesn't exist in v11 and v12, given the
right test case.  Hence, back-patch the code changes, but not the
irrelevant test case, into those branches.

Per report from Bas Poot.

Discussion: https://postgr.es/m/534fca83789c4a378c7de379e9067d4f@politie.nl
2021-05-31 12:03:00 -04:00
Noah Misch
23059a0457 Raise a timeout to 180s, in test 010_logical_decoding_timelines.pl.
Per buildfarm member hornet.  Also, update Pod documentation showing the
lower value.  Back-patch to v10, where the test first appeared.
2021-05-31 00:30:01 -07:00
Thomas Munro
d41fda6aa3 Fix race condition when sharing tuple descriptors.
Parallel query processes that called BlessTupleDesc() for identical
tuple descriptors at the same moment could crash.  There was code to
handle that rare case, but it dereferenced a bogus DSA pointer.  Repair.

Back-patch to 11, where commit cc5f8136 added support for sharing tuple
descriptors in parallel queries.

Reported-by: Eric Thinnes <e.thinnes@gmx.de>
Discussion: https://postgr.es/m/99aaa2eb-e194-bf07-c29a-1a76b4f2bcf9%40gmx.de
2021-05-29 15:13:44 +12:00
Andrew Dunstan
bb18bc2249
fix syntax error
2021-05-28 09:36:42 -04:00
Andrew Dunstan
c828a7246e
Report configured port in MSVC built pg_config
This is a long-standing omission, discovered when trying to write code
that relied on it.

Backpatch to all live branches.
2021-05-28 09:31:50 -04:00
Michael Paquier
ab81d004e4 Fix MSVC scripts when building with GSSAPI/Kerberos
The deliverables of upstream Kerberos on Windows are installed with
paths that do not match our MSVC scripts.  First, the include folder was
named "inc/" in our scripts, but the upstream MSIs use "include/".
Second, the build would fail in 64-bit environments, as the libraries
are named differently there.

This commit adjusts the MSVC scripts to be compatible with the latest
upstream installations, and I have checked that compilation works with
both the 32-bit and 64-bit installations.

Special thanks to Kondo Yuta for the help in investigating the situation
in hamerkop, which had an incorrect configuration for the GSS
compilation.

Reported-by: Brian Ye
Discussion: https://postgr.es/m/162128202219.27274.12616756784952017465@wrigleys.postgresql.org
Backpatch-through: 9.6
2021-05-27 20:11:21 +09:00
Amit Kapila
9012e5594c Improve docs and error messages for parallel vacuum.
The error messages, docs, and one of the options were using
'parallel degree' to indicate the parallelism used by the vacuum command.
We normally use 'parallel workers' elsewhere, so change parallel
vacuum to match.

Author: Bharath Rupireddy
Reviewed-by: Dilip Kumar, Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CALj2ACWz=PYrrFXVsEKb9J1aiX4raA+UBe02hdRp_zqDkrWUiw@mail.gmail.com
2021-05-25 09:40:16 +05:30
Michael Paquier
a23c0b00f0 Disallow SSL renegotiation
SSL renegotiation is already disabled as of 48d23c72, but this does
not prevent the server from complying with a client willing to use
renegotiation.  In the last couple of years, renegotiation has had its
share of security issues and flaws (like the recent CVE-2021-3449), and
it could be possible to crash the backend with a client attempting
renegotiation.

This commit takes one extra step by disabling renegotiation in the
backend in the same way as SSL compression (f9264d15) or tickets
(97d3a0b0).  OpenSSL 1.1.0h has added an option named
SSL_OP_NO_RENEGOTIATION able to achieve that.  In older versions
there is an option called SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS that
was undocumented, and could be set within the SSL object created when
the TLS connection opens, but I have decided not to use it, as it feels
trickier to rely on, and it is not official.  Note that this option is
not usable in OpenSSL < 1.1.0h, as the internal contents of the *SSL
object are hidden from applications.

SSL renegotiation concerns protocols up to TLSv1.2.

Per original report from Robert Haas, with a patch based on a suggestion
by Andres Freund.

Author: Michael Paquier
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/YKZBXx7RhU74FlTE@paquier.xyz
Backpatch-through: 9.6
2021-05-25 10:11:13 +09:00
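For illustration, a hedged OpenSSL sketch of the approach the commit
describes, not the actual backend patch: set SSL_OP_NO_RENEGOTIATION on
the server's SSL context when the option exists, and do nothing on older
OpenSSL versions.

    #include <openssl/ssl.h>

    /* Refuse client-initiated renegotiation where OpenSSL (>= 1.1.0h)
     * provides an option for it; a no-op on older versions. */
    static void
    disable_renegotiation(SSL_CTX *ctx)
    {
    #ifdef SSL_OP_NO_RENEGOTIATION
        SSL_CTX_set_options(ctx, SSL_OP_NO_RENEGOTIATION);
    #endif
    }

    int
    main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());

        if (ctx == NULL)
            return 1;
        disable_renegotiation(ctx);
        SSL_CTX_free(ctx);
        return 0;
    }

(Build with -lssl -lcrypto; per the commit, the backend disables
renegotiation in the same way it already disables compression and
session tickets.)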
Tom Lane
5b4791b458 Fix access to no-longer-open relcache entry in logical-rep worker.
If we redirected a replicated tuple operation into a partition child
table, and then tried to fire AFTER triggers for that event, the
relation cache entry for the child table was already closed.  This has
no visible ill effects as long as the entry is still there and still
valid, but an unluckily-timed cache flush could result in a crash or
other misbehavior.

To fix, postpone the ExecCleanupTupleRouting call (which is what
closes the child table) until after we've fired triggers.  This
requires a bit of refactoring so that the cleanup function can
have access to the necessary state.

In HEAD, I took the opportunity to simplify some of worker.c's
function APIs based on use of the new ApplyExecutionData struct.
However, it doesn't seem safe/practical to back-patch that aspect,
at least not without a lot of analysis of possible interactions
with a04daa97a.

In passing, add an Assert to afterTriggerInvokeEvents to catch
such cases.  This seems worthwhile because we've grown a number
of fairly unstructured ways of calling AfterTriggerEndQuery.

Back-patch to v13, where worker.c grew the ability to deal with
partitioned target tables.

Discussion: https://postgr.es/m/3382681.1621381328@sss.pgh.pa.us
2021-05-22 21:24:48 -04:00
Tom Lane
849c7971d1 Disallow whole-row variables in GENERATED expressions.
This was previously allowed, but I think that was just an oversight.
It's a clear violation of the rule that a generated column cannot
depend on itself or other generated columns.  Moreover, because the
code was relying on the assumption that no such cross-references
exist, it was pretty easy to crash ALTER TABLE and perhaps other
places.  Even if you managed not to crash, you got quite unstable,
implementation-dependent results.

Per report from Vitaly Ustinov.
Back-patch to v12 where GENERATED came in.

Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
2021-05-21 15:12:19 -04:00
Tom Lane
77e3204ecb Fix usage of "tableoid" in GENERATED expressions.
We consider this supported (though I've got my doubts that it's a
good idea, because tableoid is not immutable).  However, several
code paths failed to fill the field in soon enough, causing such
a GENERATED expression to see zero or the wrong value.  This
occurred when ALTER TABLE adds a new GENERATED column to a table
with existing rows, and during regular INSERT or UPDATE on a
foreign table with GENERATED columns.

Noted during investigation of a report from Vitaly Ustinov.
Back-patch to v12 where GENERATED came in.

Discussion: https://postgr.es/m/CAM_DEiWR2DPT6U4xb-Ehigozzd3n3G37ZB1+867zbsEVtYoJww@mail.gmail.com
2021-05-21 15:02:07 -04:00
Tom Lane
d18ee6f92d Restore the portal-level snapshot after procedure COMMIT/ROLLBACK.
COMMIT/ROLLBACK necessarily destroys all snapshots within the session.
The original implementation of intra-procedure transactions just
cavalierly did that, ignoring the fact that this left us executing in
a rather different environment than normal.  In particular, it turns
out that handling of toasted datums depends rather critically on there
being an outer ActiveSnapshot: otherwise, when SPI or the core
executor pop whatever snapshot they used and return, it's unsafe to
dereference any toasted datums that may appear in the query result.
It's possible to demonstrate "no known snapshots" and "missing chunk
number N for toast value" errors as a result of this oversight.

Historically this outer snapshot has been held by the Portal code,
and that seems like a good plan to preserve.  So add infrastructure
to pquery.c to allow re-establishing the Portal-owned snapshot if it's
not there anymore, and add enough bookkeeping support that we can tell
whether it is or not.

We can't, however, just re-establish the Portal snapshot as part of
COMMIT/ROLLBACK.  As in normal transaction start, acquiring the first
snapshot should wait until after SET and LOCK commands.  Hence, teach
spi.c about doing this at the right time.  (Note that this patch
doesn't fix the problem for any PLs that try to run intra-procedure
transactions without using SPI to execute SQL commands.)

This makes SPI's no_snapshots parameter rather a misnomer, so in HEAD,
rename that to allow_nonatomic.

replication/logical/worker.c also needs some fixes, because it wasn't
careful to hold a snapshot open around AFTER trigger execution.
That code doesn't use a Portal, which I suspect someday we're gonna
have to fix.  But for now, just rearrange the order of operations.
This includes back-patching the recent addition of finish_estate()
to centralize the cleanup logic there.

This also back-patches commit 2ecfeda3e into v13, to improve the
test coverage for worker.c (it was that test that exposed that
worker.c's snapshot management is wrong).

Per bug #15990 from Andreas Wicht.  Back-patch to v11 where
intra-procedure COMMIT was added.

Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
2021-05-21 14:03:53 -04:00
Amit Kapila
c83c0257e4 Fix deadlock for multiple replicating truncates of the same table.
While applying the truncate change, the logical apply worker acquires
RowExclusiveLock on the relation being truncated. This allowed two apply
workers to truncate the same relation at the same time, which led to a
deadlock. The reason was that one of the workers, after updating the
pg_class tuple, tried to acquire SHARE lock on the relation and started
to wait for the second worker, which had acquired RowExclusiveLock on
the relation. When the second worker then tried to update the pg_class
tuple, it started to wait for the first worker, which led to the
deadlock. Fix it by acquiring AccessExclusiveLock on the relation before
applying the truncate change, as we do for a normal truncate operation.

Author: Peter Smith, test case by Haiying Tang
Reviewed-by: Dilip Kumar, Amit Kapila
Backpatch-through: 11
Discussion: https://postgr.es/m/CAHut+PsNm43p0jM+idTvWwiGZPcP0hGrHMPK9TOAkc+a4UpUqw@mail.gmail.com
2021-05-21 08:03:38 +05:30
Tom Lane
c64183f234 Avoid detoasting failure after COMMIT inside a plpgsql FOR loop.
exec_for_query() normally tries to prefetch a few rows at a time
from the query being iterated over, so as to reduce executor
entry/exit overhead.  Unfortunately this is unsafe if we have
COMMIT or ROLLBACK within the loop, because there might be
TOAST references in the data that we prefetched but haven't
yet examined.  Immediately after the COMMIT/ROLLBACK, we have
no snapshots in the session, meaning that VACUUM is at liberty
to remove recently-deleted TOAST rows.

This was originally reported as a case triggering the "no known
snapshots" error in init_toast_snapshot(), but even if you miss
hitting that, you can get "missing toast chunk", as illustrated
by the added isolation test case.

To fix, just disable prefetching in non-atomic contexts.  Maybe
there will be performance complaints prompting us to work harder
later, but it's not clear at the moment that this really costs
much, and I doubt we'd want to back-patch any complicated fix.

In passing, adjust that error message in init_toast_snapshot()
to be a little clearer about the likely cause of the problem.

Patch by me, based on earlier investigation by Konstantin Knizhnik.

Per bug #15990 from Andreas Wicht.  Back-patch to v11 where
intra-procedure COMMIT was added.

Discussion: https://postgr.es/m/15990-eee2ac466b11293d@postgresql.org
2021-05-20 18:32:37 -04:00
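A tiny sketch of the rule the fix boils down to, with hypothetical names
and a batch size chosen only for illustration: prefetch a batch of rows
only when the surrounding context is atomic, i.e. when no COMMIT or
ROLLBACK can invalidate TOAST references between fetching a row and
examining it.

    #include <stdbool.h>
    #include <stdio.h>

    /* The batch size here is illustrative, not plpgsql's actual value. */
    static int
    choose_prefetch_count(bool atomic_context)
    {
        return atomic_context ? 10 : 1;   /* one row at a time when COMMIT is possible */
    }

    int
    main(void)
    {
        printf("atomic: %d rows, non-atomic: %d row\n",
               choose_prefetch_count(true), choose_prefetch_count(false));
        return 0;
    }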
Tom Lane
9dc76f032c Clean up cpluspluscheck violation.
"typename" is a C++ keyword, so pg_upgrade.h fails to compile in C++.
Fortunately, there seems no likely reason for somebody to need to
do that.  Nonetheless, it's project policy that all .h files should
pass cpluspluscheck, so rename the argument to fix that.

Oversight in 57c081de0; back-patch as that was.  (The policy requiring
pg_upgrade.h to pass cpluspluscheck only goes back to v12, but it
seems best to keep this code looking the same in all branches.)
2021-05-20 13:03:08 -04:00
David Rowley
010429eb1f Fix typo and outdated information in README.barrier
README.barrier didn't seem to get the memo when atomics were added. Fix
that.

Author: Tatsuo Ishii, David Rowley
Discussion: https://postgr.es/m/20210516.211133.2159010194908437625.t-ishii%40sraoss.co.jp
Backpatch-through: 9.6, oldest supported release
2021-05-18 09:55:54 +12:00
Tom Lane
c3cc73e144 Be more careful about barriers when releasing BackgroundWorkerSlots.
ForgetBackgroundWorker lacked any memory barrier at all, while
BackgroundWorkerStateChange had one but unaccountably did
additional manipulation of the slot after the barrier.  AFAICS,
the rule must be that the barrier is immediately before setting
or clearing slot->in_use.

It looks like back in 9.6 when ForgetBackgroundWorker was first
written, there might have been some case for not needing a
barrier there, but I'm not very convinced of that --- the fact
that the load of bgw_notify_pid is in the caller doesn't seem
to guarantee no memory ordering problem.  So patch 9.6 too.

It's likely that this doesn't fix any observable bug on Intel
hardware, but machines with weaker memory ordering rules could
have problems here.

Discussion: https://postgr.es/m/4046084.1620244003@sss.pgh.pa.us
2021-05-15 12:21:06 -04:00
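An illustrative generic-C11 sketch of the rule stated above, not the
bgworker code itself (real code would use the project's own barrier
primitives on genuinely shared memory): do every other write to the slot
first, then issue a full memory barrier immediately before setting or
clearing in_use.

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct
    {
        int     notify_pid;     /* slot payload */
        bool    in_use;         /* publication flag */
    } SlotSketch;

    static void
    claim_slot(SlotSketch *slot, int notify_pid)
    {
        slot->notify_pid = notify_pid;              /* fill payload first */
        atomic_thread_fence(memory_order_seq_cst);  /* barrier immediately before... */
        slot->in_use = true;                        /* ...setting in_use */
    }

    static void
    forget_slot(SlotSketch *slot)
    {
        slot->notify_pid = 0;                       /* clear payload first */
        atomic_thread_fence(memory_order_seq_cst);  /* barrier immediately before... */
        slot->in_use = false;                       /* ...clearing in_use */
    }

    int
    main(void)
    {
        SlotSketch slot = {0, false};

        claim_slot(&slot, 1234);
        forget_slot(&slot);
        return 0;
    }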
Peter Geoghegan
fa675af59f Harden nbtree deduplication posting split code.
Add a defensive "can't happen" error to code that handles nbtree posting
list splits (promote an existing assertion).  This avoids a segfault in
the event of an insertion of a newitem that is somehow identical to an
existing non-pivot tuple in the index.  An nbtree index should never
have two index tuples with identical TIDs.

This scenario is not particularly unlikely in the event of any kind of
corruption that leaves the index in an inconsistent state relative to
the heap relation that is indexed.  There are two known reports of
preventable hard crashes.  Doing nothing seems unacceptable given the
general expectation that nbtree will cope reasonably well with corrupt
data.

Discussion: https://postgr.es/m/CAH2-Wz=Jr_d-dOYEEmwz0-ifojVNWho01eAqewfQXgKfoe114w@mail.gmail.com
Backpatch: 13-, where nbtree deduplication was introduced.
2021-05-14 15:08:00 -07:00
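A generic sketch of "promote an existing assertion to a defensive error",
with made-up names rather than the nbtree code: the condition keeps being
checked in production builds, where assert() alone would be compiled out.

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void
    check_new_item_tid_differs(long new_tid, long existing_tid)
    {
        assert(new_tid != existing_tid);    /* previously the only check */

        if (new_tid == existing_tid)        /* now a hard "can't happen" error */
        {
            fprintf(stderr, "new index item has the same TID as an existing tuple\n");
            abort();
        }
    }

    int
    main(void)
    {
        check_new_item_tid_differs(42, 43); /* distinct TIDs: passes silently */
        return 0;
    }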
Tom Lane
dc714c120e Prevent infinite insertion loops in spgdoinsert().
Formerly we just relied on operator classes that assert longValuesOK
to eventually shorten the leaf value enough to fit on an index page.
That fails since the introduction of INCLUDE-column support (commit
09c1c6ab4), because the INCLUDE columns might alone take up more
than a page, meaning no amount of leaf-datum compaction will get
the job done.  At least with spgtextproc.c, that leads to an infinite
loop, since spgtextproc.c won't throw an error for not being able
to shorten the leaf datum anymore.

To fix without breaking cases that would otherwise work, add logic
to spgdoinsert() to verify that the leaf tuple size is decreasing
after each "choose" step.  Some opclasses might not decrease the
size on every single cycle, and in any case, alignment roundoff
of the tuple size could obscure small gains.  Therefore, allow
up to 10 cycles without additional savings before throwing an
error.  (Perhaps this number will need adjustment, but it seems
quite generous right now.)

As long as we've developed this logic, let's back-patch it.
The back branches don't have INCLUDE columns to worry about, but
this seems like a good defense against possible bugs in operator
classes.  We already know that an infinite loop here is pretty
unpleasant, so having a defense seems to outweigh the risk of
breaking things.  (Note that spgtextproc.c is actually the only
known opclass with longValuesOK support, so that this is all moot
for known non-core opclasses anyway.)

Per report from Dilip Kumar.

Discussion: https://postgr.es/m/CAFiTN-uxP_soPhVG840tRMQTBmtA_f_Y8N51G7DKYYqDh7XN-A@mail.gmail.com
2021-05-14 15:07:34 -04:00
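A standalone sketch of the loop guard described above, with hypothetical
names and a dummy "choose" step standing in for the opclass callback:
bail out if the leaf tuple size fails to shrink for 10 consecutive
cycles instead of looping forever.

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_CYCLES_WITHOUT_SHRINK 10    /* generous allowance, as above */

    /* Stand-in for an opclass "choose" step; may or may not shrink the datum. */
    static size_t
    dummy_choose_step(size_t leaf_size)
    {
        return leaf_size > 16 ? leaf_size - 16 : leaf_size;
    }

    static void
    insert_with_loop_guard(size_t leaf_size, size_t page_limit)
    {
        size_t  best_size = leaf_size;
        int     stalled_cycles = 0;

        while (leaf_size > page_limit)
        {
            leaf_size = dummy_choose_step(leaf_size);

            if (leaf_size < best_size)
            {
                best_size = leaf_size;
                stalled_cycles = 0;
            }
            else if (++stalled_cycles >= MAX_CYCLES_WITHOUT_SHRINK)
            {
                fprintf(stderr, "leaf tuple no longer shrinking; giving up\n");
                exit(1);
            }
        }
        printf("leaf tuple fits: %zu bytes\n", leaf_size);
    }

    int
    main(void)
    {
        insert_with_loop_guard(1000, 100);
        return 0;
    }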