Commit Graph

34903 Commits

Author SHA1 Message Date
Tom Lane
820aa9ef4c Don't clobber the calling user's credentials cache in Kerberos test.
Embarrassing oversight in this test script, which fortunately is not
run by default.

Report and patch by Jacob Champion.

Discussion: https://postgr.es/m/1fcb175bafef6560f47a8c31229fa7c938486b8d.camel@vmware.com
2021-01-25 14:53:13 -05:00
Tom Lane
4641b2a30f Fix broken ruleutils support for function TRANSFORM clauses.
I chanced to notice that this dumped core due to a faulty Assert.
To add insult to injury, the output has been misformatted since v11.
Obviously we need some regression testing here.

Discussion: https://postgr.es/m/d1cc628c-3953-4209-957b-29427acc38c8@www.fastmail.com
2021-01-25 13:03:11 -05:00
David Rowley
06cdfe21d3 Fix hypothetical bug in heap backward scans
Both heapgettup() and heapgettup_pagemode() incorrectly set the first page
to scan in a backward scan in which the number of pages to scan was
specified by heap_setscanlimits().  The code incorrectly started the scan
at the end of the relation when startBlk was 0, or otherwise at
startBlk - 1, neither of which is correct when only scanning a subset of
pages.

The fix here checks if heap_setscanlimits() has changed the number of
pages to scan and if so we set the first page to scan as the final page in
the specified range during backward scans.

Proper adjustment of this code was forgotten when heap_setscanlimits() was
added in 7516f5259 back in 9.5.  However, in practice, nothing in core code
performs backward scans after having used heap_setscanlimits(); still, it is
possible that an extension uses the heap functions in this way, hence the
backpatch.

An upcoming patch does use heap_setscanlimits() with backward scans, so
this must be fixed before that can go in.

Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvpGc9h0_oVD2CtgBcxCS1N-qDYZSeBRnUh+0CWJA9cMaA@mail.gmail.com
Backpatch-through: 9.5, all supported versions
2021-01-25 19:53:27 +13:00
Tom Lane
5db6ba3034 Update time zone data files to tzdata release 2021a.
DST law changes in Russia (Volgograd zone) and South Sudan.
Historical corrections for Australia, Bahamas, Belize, Bermuda,
Ghana, Israel, Kenya, Nigeria, Palestine, Seychelles, and Vanuatu.
Notably, the Australia/Currie zone has been corrected to the point
where it is identical to Australia/Hobart.
2021-01-24 16:29:47 -05:00
Tom Lane
1cce024fd2 Fix pull_varnos' miscomputation of relids set for a PlaceHolderVar.
Previously, pull_varnos() took the relids of a PlaceHolderVar as being
equal to the relids in its contents, but that fails to account for the
possibility that we have to postpone evaluation of the PHV due to outer
joins.  This could result in a malformed plan.  The known cases end up
triggering the "failed to assign all NestLoopParams to plan nodes"
sanity check in createplan.c, but other symptoms may be possible.

The right value to use is the join level we actually intend to evaluate
the PHV at.  We can get that from the ph_eval_at field of the associated
PlaceHolderInfo.  However, there are some places that call pull_varnos()
before the PlaceHolderInfos have been created; in that case, fall back
to the conservative assumption that the PHV will be evaluated at its
syntactic level.  (In principle this might result in missing some legal
optimization, but I'm not aware of any cases where it's an issue in
practice.)  Things are also a bit ticklish for calls occurring during
deconstruct_jointree(), but AFAICS the ph_eval_at fields should have
reached their final values by the time we need them.

The main problem in making this work is that pull_varnos() has no
way to get at the PlaceHolderInfos.  We can fix that easily, if a
bit tediously, in HEAD by passing it the planner "root" pointer.
In the back branches that'd cause an unacceptable API/ABI break for
extensions, so leave the existing entry points alone and add new ones
with the additional parameter.  (If an old entry point is called and
encounters a PHV, it'll fall back to using the syntactic level,
again possibly missing some valid optimization.)

Back-patch to v12.  The computation is surely also wrong before that,
but it appears that we cannot reach a bad plan thanks to join order
restrictions imposed on the subquery that the PlaceHolderVar came from.
The error only became reachable when commit 4be058fe9 allowed trivial
subqueries to be collapsed out completely, eliminating their join order
restrictions.

Per report from Stephan Springl.

Discussion: https://postgr.es/m/171041.1610849523@sss.pgh.pa.us
2021-01-21 15:37:23 -05:00
Tom Lane
561dd8d8a2 Further tweaking of PG_SYSROOT heuristics for macOS.
It emerges that in some phases of the moon (perhaps to do with
directory entry order?), xcrun will report that the SDK path is
  /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
which is normally a symlink to a version-numbered sibling directory.
Our heuristic to skip non-version-numbered pathnames was rejecting
that, which is the wrong thing to do.  We'd still like to end up
with a version-numbered PG_SYSROOT value, but we can have that by
dereferencing the symlink.

Like the previous fix, back-patch to all supported versions.

Discussion: https://postgr.es/m/522433.1611089678@sss.pgh.pa.us
2021-01-20 12:07:35 -05:00
Heikki Linnakangas
0326635dd4 Fix bug in detecting concurrent page splits in GiST insert
In commit 9eb5607e69, I got the condition on checking for split or
deleted page wrong: I used && instead of ||. The comment correctly said
"concurrent split _or_ deletion".

As a result, GiST insertion could miss a concurrent split and insert to the
wrong page. Duncan Sands demonstrated this with a test script that did a
lot of concurrent inserts.

Backpatch to v12, where this was introduced. REINDEX is required to fix
indexes that were affected by this bug.

Backpatch-through: 12
Reported-by: Duncan Sands
Discussion: https://www.postgresql.org/message-id/a9690483-6c6c-3c82-c8ba-dc1a40848f11%40deepbluecap.com
2021-01-20 11:58:27 +02:00
Michael Paquier
5ad672fc2f Fix ALTER DEFAULT PRIVILEGES with duplicated objects
Specifying duplicated objects in this command would lead to unique
constraint violations in pg_default_acl or "tuple already updated by
self" errors.  Similarly to GRANT/REVOKE, increment the command ID after
each subcommand processing to allow this case to work transparently.

A regression test is added by tweaking one of the existing queries of
privileges.sql to stress this case.
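
As a sketch of the affected shape (hypothetical role and schema names), a
command listing the same role twice previously hit these errors and now works:
  ALTER DEFAULT PRIVILEGES FOR ROLE alice, alice IN SCHEMA public
    GRANT SELECT ON TABLES TO bob;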

Reported-by: Andrus
Author: Michael Paquier
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/ae2a7dc1-9d71-8cba-3bb9-e4cb7eb1f44e@hot.ee
Backpatch-through: 9.5
2021-01-20 11:39:17 +09:00
Tom Lane
6253159965 Remove faulty support for MergeAppend plan with WHERE CURRENT OF.
Somebody extended search_plan_tree() to treat MergeAppend exactly
like Append, which is 100% wrong, because unlike Append we can't
assume that only one input node is actively returning tuples.
Hence a cursor using a MergeAppend across a UNION ALL or inheritance
tree could falsely match a WHERE CURRENT OF query at a row that
isn't actually the cursor's current output row, but coincidentally
has the same TID (in a different table) as the current output row.

Delete the faulty code; this means that such a case will now return
an error like 'cursor "foo" is not a simply updatable scan of table
"bar"', instead of silently misbehaving.  Users should not find that
surprising though, as the same cursor query could have failed that way
already depending on the chosen plan.  (It would fail like that if the
sort were done with an explicit Sort node instead of MergeAppend.)
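
A sketch of the affected usage, with hypothetical tables t1 and t2; when the
cursor's plan uses a MergeAppend, the UPDATE now fails with the error above
instead of possibly matching the wrong row:
  BEGIN;
  DECLARE c CURSOR FOR
    SELECT * FROM t1 UNION ALL SELECT * FROM t2 ORDER BY a;
  FETCH 1 FROM c;
  UPDATE t1 SET a = a WHERE CURRENT OF c;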

Expand the clearly-inadequate commentary to be more explicit about
what this code is doing, in hopes of forestalling future mistakes.

It's been like this for awhile, so back-patch to all supported
branches.

Discussion: https://postgr.es/m/482865.1611075182@sss.pgh.pa.us
2021-01-19 13:25:45 -05:00
Tom Lane
ba80489c66 Avoid crash with WHERE CURRENT OF and a custom scan plan.
execCurrent.c's search_plan_tree() assumed that ForeignScanStates
and CustomScanStates necessarily have a valid ss_currentRelation.
This is demonstrably untrue for postgres_fdw's remote join and
remote aggregation plans, and non-leaf custom scans might not have
an identifiable scan relation either.  Avoid crashing by ignoring
such nodes when the field is null.

This solution will lead to errors like 'cursor "foo" is not a
simply updatable scan of table "bar"' in cases where maybe we
could have allowed WHERE CURRENT OF to work.  That's not an issue
for postgres_fdw's usages, since joins or aggregations would render
WHERE CURRENT OF invalid anyway.  But an otherwise-transparent
upper level custom scan node might find this annoying.  When and if
someone cares to expend work on such a scenario, we could invent a
custom-scan-provider callback to determine what's safe.

Report and patch by David Geier, commentary by me.  It's been like
this for awhile, so back-patch to all supported branches.

Discussion: https://postgr.es/m/0253344d-9bdd-11c4-7f0d-d88c02cd7991@swarm64.com
2021-01-18 18:32:45 -05:00
Noah Misch
a0d31b1c94 Fix pg_dump for GRANT OPTION among initial privileges.
The context is an object that no longer bears some aclitem that it bore
initially.  (A user issued REVOKE or GRANT statements upon the object.)
pg_dump is forming SQL to reproduce the object ACL.  Since initdb
creates no ACL bearing GRANT OPTION, reaching this bug requires an
extension where the creation script establishes such an ACL.  No PGXN
extension does that.  If an installation did reach the bug, pg_dump
would have omitted a semicolon, causing a REVOKE and the next SQL
statement to fail.  Separately, since the affected code exists to
eliminate an entire aclitem, it wants plain REVOKE, not REVOKE GRANT
OPTION FOR.  Back-patch to 9.6, where commit
23f34fa4ba first appeared.

Discussion: https://postgr.es/m/20210109102423.GA160022@rfd.leadboat.com
2021-01-16 12:21:39 -08:00
Noah Misch
66087f7e92 Prevent excess SimpleLruTruncate() deletion.
Every core SLRU wraps around.  With the exception of pg_notify, the wrap
point can fall in the middle of a page.  Account for this in the
PagePrecedes callback specification and in SimpleLruTruncate()'s use of
said callback.  Update each callback implementation to fit the new
specification.  This changes SerialPagePrecedesLogically() from the
style of asyncQueuePagePrecedes() to the style of CLOGPagePrecedes().
(Whereas pg_clog and pg_serial share a key space, pg_serial is nothing
like pg_notify.)  The bug fixed here has the same symptoms and user
followup steps as 592a589a04.  Back-patch
to 9.5 (all supported versions).

Reviewed by Andrey Borodin and (in earlier versions) by Tom Lane.

Discussion: https://postgr.es/m/20190202083822.GC32531@gust.leadboat.com
2021-01-16 12:21:39 -08:00
Tomas Vondra
943a113bcb Disallow CREATE STATISTICS on system catalogs
Add a check that CREATE STATISTICS does not add extended statistics on
system catalogs, similarly to indexes etc.  It can be overridden using
the allow_system_table_mods GUC.
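
A minimal illustration (hypothetical statistics object name):
  CREATE STATISTICS pg_class_stx ON relname, relnamespace FROM pg_class;
  -- now rejected unless allow_system_table_mods is enabled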

This bug exists since 7b504eb282, adding the extended statistics, so
backpatch all the way back to PostgreSQL 10.

Author: Tomas Vondra
Reported-by: Dean Rasheed
Backpatch-through: 10
Discussion: https://postgr.es/m/CAEZATCXAPrrOKwEsyZKQ4uzzJQWBCt6QAvOcgqRGdWwT1zb%2BrQ%40mail.gmail.com
2021-01-15 23:33:16 +01:00
Tom Lane
f5d044eaef Improve our heuristic for selecting PG_SYSROOT on macOS.
In cases where Xcode is newer than the underlying macOS version,
asking xcodebuild for the SDK path will produce a pointer to the
SDK shipped with Xcode, which may end up building code that does
not work on the underlying macOS version.  It appears that in
such cases, xcodebuild's answer also fails to match the default
behavior of Apple's compiler: assuming one has installed Xcode's
"command line tools", there will be an SDK for the OS's own version
in /Library/Developer/CommandLineTools, and the compiler will
default to using that.  This is all pretty poorly documented,
but experimentation suggests that "xcrun --show-sdk-path" gives
the sysroot path that the compiler is actually using, at least
in some cases.  Hence, try that first, but revert to xcodebuild
if xcrun fails (in very old Xcode, it is missing or lacks the
--show-sdk-path switch).

Also, "xcrun --show-sdk-path" may give a path that is valid but lacks
any OS version identifier.  We don't really want that, since most
of the motivation for wiring -isysroot into the build flags at all
is to ensure that all parts of a PG installation are built against
the same SDK, even when considering extensions built later and/or on
a different machine.  Insist on finding "N.N" in the directory name
before accepting the result.  (Adding "--sdk macosx" to the xcrun
call seems to produce the same answer as xcodebuild, but usually
more quickly because it's cached, so we also try that as a fallback.)

The core reason why we don't want to use Xcode's default SDK in cases
like this is that Apple's technology for introducing new syscalls
does not play nice with Autoconf: for example, configure will think
that preadv/pwritev exist when using a Big Sur SDK, even when building
on an older macOS version where they don't exist.  It'd be nice to
have a better solution to that problem, but this patch doesn't attempt
to fix that.

Per report from Sergey Shinderuk.  Back-patch to all supported versions.

Discussion: https://postgr.es/m/ed3b8e5d-0da8-6ebd-fd1c-e0ac80a4b204@postgrespro.ru
2021-01-15 11:29:13 -05:00
Fujii Masao
19a1d76853 Fix calculation of how much shared memory is required to store a TOC.
Commit ac883ac453 refactored shm_toc_estimate() but changed its calculation
of shared memory size for TOC incorrectly. Previously this could cause too
large memory to be allocated.

Back-patch to v11 where the bug was introduced.

Author: Takayuki Tsunakawa
Discussion: https://postgr.es/m/TYAPR01MB2990BFB73170E2C4921E2C4DFEA80@TYAPR01MB2990.jpnprd01.prod.outlook.com
2021-01-15 12:45:51 +09:00
Tom Lane
0d221ecece pg_dump: label PUBLICATION TABLE ArchiveEntries with an owner.
This is the same fix as commit 9eabfe300 applied to INDEX ATTACH
entries, but for table-to-publication attachments.  As in that
case, even though the backend doesn't record "ownership" of the
attachment, we still ought to label it in the dump archive with
the role name that should run the ALTER PUBLICATION command.
The existing behavior causes the ALTER to be done by the original
role that started the restore; that will usually work fine, but
there may be corner cases where it fails.

The bulk of the patch is concerned with changing struct
PublicationRelInfo to include a pointer to the associated
PublicationInfo object, so that we can get the owner's name
out of that when the time comes.  While at it, I rewrote
getPublicationTables() to do just one query of pg_publication_rel,
not one per table.

Back-patch to v10 where this code was introduced.

Discussion: https://postgr.es/m/1165710.1610473242@sss.pgh.pa.us
2021-01-14 16:19:38 -05:00
Alvaro Herrera
c9b702552a Prevent drop of tablespaces used by partitioned relations
When a tablespace is used in a partitioned relation (per commits
ca4103025d in pg12 for tables and 33e6c34c32 in pg11 for indexes),
it is possible to drop the tablespace, potentially causing various
problems.  One such was reported in bug #16577, where a rewriting ALTER
TABLE causes a server crash.

Protect against this by using pg_shdepend to keep track of tablespaces
when they are used by relations that don't keep physical files; we now abort
a tablespace drop if we see that the tablespace is referenced from any
partitioned relations.

Backpatch this to 11, where this problem has been latent all along.  We
don't try to create pg_shdepend entries for existing partitioned
indexes/tables, but any ones that are modified going forward will be
protected.

Note slight behavior change: when trying to drop a tablespace that
contains both regular tables as well as partitioned ones, you'd
previously get ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE and now you'll
get ERRCODE_DEPENDENT_OBJECTS_STILL_EXIST.  Arguably, the latter is more
correct.

It is possible to add protecting pg_shdepend entries for existing
tables/indexes, by doing
  ALTER TABLE ONLY some_partitioned_table SET TABLESPACE pg_default;
  ALTER TABLE ONLY some_partitioned_table SET TABLESPACE original_tablespace;
for each partitioned table/index that is not in the database default
tablespace.  Because these partitioned objects do not have storage, no
file needs to be actually moved, so it shouldn't take more time than
what's required to acquire locks.

This query can be used to search for such relations:
  SELECT ... FROM pg_class WHERE relkind IN ('p', 'I') AND reltablespace <> 0

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/16577-881633a9f9894fd5@postgresql.org
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
2021-01-14 15:32:14 -03:00
Tom Lane
3e214fafce Disallow a digit as the first character of a variable name in pgbench.
The point of this restriction is to avoid trying to substitute variables
into timestamp literal values, which may contain strings like '12:34'.
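
For example, in a hypothetical custom script, a literal such as the following
no longer risks having ":34" treated as a variable reference, because a
variable name cannot start with a digit:
  SELECT '12:34'::time;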

There is a good deal more that should be done to reduce pgbench's
tendency to substitute where it shouldn't.  But this is sufficient to
solve the case complained of by Jaime Soler, and it's simple enough
to back-patch.

Back-patch to v11; before commit 9d36a3866, pgbench had a slightly
different definition of what a variable name is, and anyway it seems
unwise to change long-stable branches for this.

Fabien Coelho

Discussion: https://postgr.es/m/alpine.DEB.2.22.394.2006291740420.805678@pseudo
2021-01-13 14:52:49 -05:00
Amit Kapila
0b2ae3c928 Fix memory leak in SnapBuildSerialize.
The memory for the snapshot was leaked while serializing it to disk during
logical decoding. This memory will be freed only once walsender stops
streaming the changes. This can lead to a huge memory increase when master
logs Standby Snapshot too frequently say when the user is trying to create
many replication slots.

Reported-by: funnyxj.fxj@alibaba-inc.com
Diagnosed-by: funnyxj.fxj@alibaba-inc.com
Author: Amit Kapila
Backpatch-through: 9.5
Discussion: https://postgr.es/m/033ab54c-6393-42ee-8ec9-2b399b5d8cde.funnyxj.fxj@alibaba-inc.com
2021-01-13 08:40:06 +05:30
Tom Lane
436d9c5cab pg_dump: label INDEX ATTACH ArchiveEntries with an owner.
Although a partitioned index's attachment to its parent doesn't
have separate ownership, the ArchiveEntry for it needs to be
marked with an owner anyway, to ensure that the ALTER command
is run by the appropriate role when restoring with
--use-set-session-authorization.  Without this, the ALTER will
be run by the role that started the restore session, which will
usually work but it's formally the wrong thing.

Back-patch to v11 where this type of ArchiveEntry was added.
In HEAD, add equivalent commentary to the just-added TABLE ATTACH
case, which I'd made do the right thing already.

Discussion: https://postgr.es/m/1094034.1610418498@sss.pgh.pa.us
2021-01-12 13:37:38 -05:00
Alvaro Herrera
ddf0be8ca4 Fix thinko in comment
This comment has been wrong since its introduction in commit
2c03216d83.

Author: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAD21AoAzz6qipFJBbGEaHmyWxvvNDp8httbwLR9tUQWaTjUs2Q@mail.gmail.com
2021-01-12 11:48:45 -03:00
Tom Lane
8354371d0a Fix ancient bug in parsing of BRE-mode regular expressions.
brenext(), when parsing a '*' quantifier, forgot to return any "value"
for the token; per the equivalent case in next(), it should return
value 1 to indicate that greedy rather than non-greedy behavior is
wanted.  The result is that the compiled regexp could behave like 'x*?'
rather than the intended 'x*', if we were unlucky enough to have
a zero in v->nextvalue at this point.  That seems to happen with some
reliability if we have '.*' at the beginning of a BRE-mode regexp,
although that depends on the initial contents of a stack-allocated
struct, so it's not guaranteed to fail.
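
A sketch of the kind of case affected (the 'b' flag selects BRE mode); with a
greedy '*' the match should cover the whole string rather than stopping at the
first 'c':
  SELECT regexp_matches('abcabc', '.*c', 'b');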

Found by Alexander Lakhin using valgrind testing.  This bug seems
to be aboriginal in Spencer's code, so back-patch all the way.

Discussion: https://postgr.es/m/16814-6c5e3edd2bdf0d50@postgresql.org
2021-01-08 12:16:00 -05:00
Tom Lane
29556bf1be Adjust createdb TAP tests to work on recent OpenBSD.
We found last February that the error-case tests added by commit
008cf0409 failed on OpenBSD, because that platform doesn't really
check locale names.  At the time it seemed that that was only an issue
for LC_CTYPE, but testing on a more recent version of OpenBSD shows
that it's now equally lax about LC_COLLATE.

Rather than dropping the LC_COLLATE test too, put back LC_CTYPE
(reverting c4b0edb07), and adjust these tests to accept the different
error message that we get if setlocale() doesn't reject a bogus locale
name.  The point of these tests is not really what the backend does
with the locale name, but to show that createdb quotes funny locale
names safely; so we're not losing test reliability this way.

Back-patch as appropriate.

Discussion: https://postgr.es/m/231373.1610058324@sss.pgh.pa.us
2021-01-07 20:36:09 -05:00
Tom Lane
f2a69e9047 Further second thoughts about idle_session_timeout patch.
On reflection, the order of operations in PostgresMain() is wrong.
These timeouts ought to be shut down before, not after, we do the
post-command-read CHECK_FOR_INTERRUPTS, to guarantee that any
timeout error will be detected there rather than at some ill-defined
later point (possibly after having wasted a lot of work).

This is really an error in the original idle_in_transaction_timeout
patch, so back-patch to 9.6 where that was introduced.
2021-01-07 11:45:09 -05:00
Fujii Masao
9f540f8406 Detect the deadlocks between backends and the startup process.
The deadlocks that the recovery conflict on lock is involved in can
happen between hot-standby backends and the startup process.
If a backend takes an access exclusive lock on a table and that lock
finally triggers the deadlock, the deadlock can be detected
as expected. On the other hand, previously, if the startup process
took an access exclusive lock and that lock finally triggered the deadlock,
the deadlock could not be detected and could remain even after
deadlock_timeout passed. This is a bug.

The cause of this bug was that the code for handling the recovery
conflict on lock didn't take care of deadlock case at all. It assumed
that deadlocks involving the startup process and backends were able
to be detected by the deadlock detector invoked within backends.
But this assumption was incorrect. The startup process also should
have invoked the deadlock detector if necessary.

To fix this bug, this commit makes the startup process invoke
the deadlock detector if deadlock_timeout is reached while handling
the recovery conflict on lock. Specifically, in that case, the startup
process requests all the backends holding the conflicting locks to
check themselves for deadlocks.

Back-patch to v9.6. v9.5 also has this bug, but per discussion we decided
not to back-patch the fix to v9.5, because v9.5 lacks some infrastructure
code (e.g., 37c54863cf) that this bug fix depends on. We could apply that
code for the back-patch, but since the next minor release is the final one
for v9.5, doing so is risky. If the back-patch unexpectedly introduced a new
bug into v9.5, there would be no chance to fix it. We determined that a
back-patch to v9.5 would carry more risk than gain.

Author: Fujii Masao
Reviewed-by: Bertrand Drouvot, Masahiko Sawada, Kyotaro Horiguchi
Discussion: https://postgr.es/m/4041d6b6-cf24-a120-36fa-1294220f8243@oss.nttdata.com
2021-01-06 12:31:23 +09:00
Dean Rasheed
740780a615 Add an explicit cast to double when using fabs().
Commit bc43b7c2c0 used fabs() directly on an int variable, which
apparently requires an explicit cast on some platforms.

Per buildfarm.
2021-01-05 11:50:11 +00:00
Dean Rasheed
258b770055 Fix numeric_power() when the exponent is INT_MIN.
In power_var_int(), the computation of the number of significant
digits to use in the computation used log(Abs(exp)), which isn't safe
because Abs(exp) returns INT_MIN when exp is INT_MIN. Use fabs()
instead of Abs(), so that the exponent is cast to a double before the
absolute value is taken.
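
A sketch of the boundary case in question, using an INT_MIN exponent
(illustrative values only):
  SELECT power(1.000000000000001::numeric, (-2147483648)::numeric);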

Back-patch to 9.6, where this was introduced (by 7d9a4737c2).

Discussion: https://postgr.es/m/CAEZATCVd6pMkz=BrZEgBKyqqJrt2xghr=fNc8+Z=5xC6cgWrWA@mail.gmail.com
2021-01-05 11:07:12 +00:00
Tom Lane
7c98759e04 Fix integer-overflow corner cases in substring() functions.
If the substring start index and length overflow when added together,
substring() misbehaved, either throwing a bogus "negative substring
length" error on a case that should succeed, or failing to complain that
a negative length is negative (and instead returning the whole string,
in most cases).  Unsurprisingly, the text, bytea, and bit variants of
the function all had this issue.  Rearrange the logic to ensure that
negative lengths are always rejected, and add an overflow check to
handle the other case.
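
For illustration (hypothetical values), one case of each kind described above:
  SELECT substring('abcdef' FROM 2 FOR 2147483647);  -- start + length would overflow int32
  SELECT substring('abcdef' FROM 1 FOR -1);          -- a negative length is always rejected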

Also install similar guards into detoast_attr_slice() (nee
heap_tuple_untoast_attr_slice()), since it's far from clear that
no other code paths leading to that function could pass it values
that would overflow.

Patch by myself and Pavel Stehule, per bug #16804 from Rafi Shamim.

Back-patch to v11.  While these bugs are old, the common/int.h
infrastructure for overflow-detecting arithmetic didn't exist before
commit 4d6ad3125, and it doesn't seem like these misbehaviors are bad
enough to justify developing a standalone fix for the older branches.

Discussion: https://postgr.es/m/16804-f4eeeb6c11ba71d4@postgresql.org
2021-01-04 18:32:40 -05:00
Peter Geoghegan
5f95664e19 Get heap page max offset with buffer lock held.
On further reflection it seems better to call PageGetMaxOffsetNumber()
after acquiring a buffer lock on the page.  This shouldn't really
matter, but doing it this way is cleaner.

Follow-up to commit 42288174.

Backpatch: 12-, just like commit 42288174
2020-12-30 17:21:39 -08:00
Peter Geoghegan
4f70e0910e Fix index deletion latestRemovedXid bug.
The logic for determining the latest removed XID for the purposes of
generating recovery conflicts in REDO routines was subtly broken.  It
failed to follow links from HOT chains, and so failed to consider all
relevant heap tuple headers in some cases.

To fix, expand the loop that deals with LP_REDIRECT line pointers to
also deal with HOT chains.  The new version of the loop is loosely based
on a similar loop from heap_prune_chain().

The impact of this bug is probably quite limited, since the horizon code
necessarily deals with heap tuples that are pointed to by LP_DEAD-set
index tuples.  The process of setting LP_DEAD index tuples (e.g. within
the kill_prior_tuple mechanism) is highly correlated with opportunistic
pruning of pointed-to heap tuples.  Plus the question of generating a
recovery conflict usually comes up some time after index tuple LP_DEAD
bits were initially set, unlike heap pruning, where a latestRemovedXid
is generated at the point of the pruning operation (heap pruning has no
deferred "would-be page split" style processing that produces conflicts
lazily).

Only backpatch to Postgres 12, the first version where this logic runs
during original execution (following commit 558a9165e0).  The index
latestRemovedXid mechanism has had the same bug since it first appeared
over 10 years ago (in commit a760893d), but backpatching to all
supported versions now seems like a bad idea on balance.  Running the
new improved code during recovery seems risky, especially given the lack
of complaints from the field.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wz=Eib393+HHcERK_9MtgNS7Ew1HY=RDC_g6GL46zM5C6Q@mail.gmail.com
Backpatch: 12-
2020-12-30 16:29:01 -08:00
Tom Lane
3ca19490b4 Fix up usage of krb_server_keyfile GUC parameter.
secure_open_gssapi() installed the krb_server_keyfile setting as
KRB5_KTNAME unconditionally, so long as it's not empty.  However,
pg_GSS_recvauth() only installed it if KRB5_KTNAME wasn't set already,
leading to a troubling inconsistency: in theory, clients could see
different sets of server principal names depending on whether they
use GSSAPI encryption.  Always using krb_server_keyfile seems like
the right thing, so make both places do that.  Also fix up
secure_open_gssapi()'s lack of a check for setenv() failure ---
it's unlikely, surely, but security-critical actions are no place
to be sloppy.

Also improve the associated documentation.

This patch does nothing about secure_open_gssapi()'s use of setenv(),
and indeed causes pg_GSS_recvauth() to use it too.  That's nominally
against project portability rules, but since this code is only built
with --with-gssapi, I do not feel a need to do something about this
in the back branches.  A fix will be forthcoming for HEAD though.

Back-patch to v12 where GSSAPI encryption was introduced.  The
dubious behavior in pg_GSS_recvauth() goes back further, but it
didn't have anything to be inconsistent with, so let it be.

Discussion: https://postgr.es/m/2187460.1609263156@sss.pgh.pa.us
2020-12-30 11:38:42 -05:00
Noah Misch
635140ace0 In pg_upgrade cross-version test, handle lack of oldstyle_length().
This suffices for testing v12 -> v13; some other version pairs need more
changes.  Back-patch to v10, which removed the function.
2020-12-30 01:43:46 -08:00
Tom Lane
7ed616f818 Improve log messages related to pg_hba.conf not matching a connection.
Include details on whether GSS encryption has been activated;
since we added "hostgssenc" type HBA entries, that's relevant info.

Kyotaro Horiguchi and Tom Lane.  Back-patch to v12 where
GSS encryption was introduced.

Discussion: https://postgr.es/m/e5b0b6ed05764324a2f3fe7acfc766d5@smhi.se
2020-12-28 17:58:58 -05:00
Tom Lane
4cfdd8a47a Fix assorted issues in backend's GSSAPI encryption support.
Unrecoverable errors detected by GSSAPI encryption can't just be
reported with elog(ERROR) or elog(FATAL), because attempting to
send the error report to the client is likely to lead to infinite
recursion or loss of protocol sync.  Instead make this code do what
the SSL encryption code has long done, which is to just report any
such failure to the server log (with elevel COMMERROR), then pretend
we've lost the connection by returning errno = ECONNRESET.

Along the way, fix confusion about whether message translation is done
by pg_GSS_error() or its callers (the latter should do it), and make
the backend version of that function work more like the frontend
version.

Avoid allocating the port->gss struct until it's needed; we surely
don't need to allocate it in the postmaster.

Improve logging of "connection authorized" messages with GSS enabled.
(As part of this, I back-patched the code changes from dc11f31a1.)

Make BackendStatusShmemSize() account for the GSS-related space that
will be allocated by CreateSharedBackendStatus().  This omission
could possibly cause out-of-shared-memory problems with very high
max_connections settings.

Remove arbitrary, pointless restriction that only GSS authentication
can be used on a GSS-encrypted connection.

Improve documentation; notably, document the fact that libpq now
prefers GSS encryption over SSL encryption if both are possible.

Per report from Mikael Gustavsson.  Back-patch to v12 where
this code was introduced.

Discussion: https://postgr.es/m/e5b0b6ed05764324a2f3fe7acfc766d5@smhi.se
2020-12-28 17:44:17 -05:00
Tom Lane
b3a5bf719c Fix bugs in libpq's GSSAPI encryption support.
The critical issue fixed here is that if a GSSAPI-encrypted connection
is successfully made, pqsecure_open_gss() cleared conn->allow_ssl_try,
as an admittedly-hacky way of preventing us from then trying to tunnel
SSL encryption over the already-encrypted connection.  The problem
with that is that if we abandon the GSSAPI connection because of a
failure during authentication, we would not attempt SSL encryption
in the next try with the same server.  This can lead to unexpected
connection failure, or silently getting a non-encrypted connection
where an encrypted one is expected.

Fortunately, we'd only manage to make a GSSAPI-encrypted connection
if both client and server hold valid tickets in the same Kerberos
infrastructure, which is a relatively uncommon environment.
Nonetheless this is a very nasty bug with potential security
consequences.  To fix, don't reset the flag, instead adding a
check for conn->gssenc being already true when deciding whether
to try to initiate SSL.

While here, fix some lesser issues in libpq's GSSAPI code:

* Use the need_new_connection stanza when dropping an attempted
GSSAPI connection, instead of partially duplicating that code.
The consequences of this are pretty minor: AFAICS it could only
lead to auth_req_received or password_needed remaining set when
they shouldn't, which is not too harmful.

* Fix pg_GSS_error() to not repeat the "mprefix" it's given multiple
times, and to notice any failure return from gss_display_status().

* Avoid gratuitous dependency on NI_MAXHOST in
pg_GSS_load_servicename().

Per report from Mikael Gustavsson.  Back-patch to v12 where
this code was introduced.

Discussion: https://postgr.es/m/e5b0b6ed05764324a2f3fe7acfc766d5@smhi.se
2020-12-28 15:43:44 -05:00
Tom Lane
d37965965a Further fix thinko in plpgsql memory leak fix.
There's a second call of get_eval_mcontext() that should also be
get_stmt_mcontext().  This is actually dead code, since no
interesting allocations happen before switching back to the
original context, but we should keep it in sync with the other
call to forestall possible future bugs.

Discussion: https://postgr.es/m/f075f7be-c654-9aa8-3ffc-e9214622f02a@enterprisedb.com
2020-12-28 11:55:37 -05:00
Tom Lane
fd1347f9d8 Fix thinko in plpgsql memory leak fix.
Commit a6b1f5365 intended to place the transient "target" list of
a CALL statement in the function's statement-lifespan context,
but I fat-fingered that and used get_eval_mcontext() instead of
get_stmt_mcontext().  The eval_mcontext belongs to the "simple
expression" infrastructure, which is destroyed at transaction end.
The net effect is that a CALL in a procedure to another procedure
that has OUT or INOUT parameters would fail if the called procedure
did a COMMIT.

Per report from Peter Eisentraut.  Back-patch to v11, like the
prior patch.

Discussion: https://postgr.es/m/f075f7be-c654-9aa8-3ffc-e9214622f02a@enterprisedb.com
2020-12-28 11:41:25 -05:00
Michael Paquier
190f0af0d1 Fix inconsistent code with shared invalidations of snapshots
The code in charge of processing a single invalidation message has been
using since 568d413 the structure for relation mapping messages.  This
had fortunately no consequence as both locate the database ID at the
same location, but it could become a problem in the future if this area
of the code changes.

Author: Konstantin Knizhnik
Discussion: https://postgr.es/m/8044c223-4d3a-2cdb-42bf-29940840ce94@postgrespro.ru
Backpatch-through: 9.5
2020-12-28 22:17:02 +09:00
Noah Misch
c966988060 Invalidate acl.c caches when pg_authid changes.
This makes existing sessions reflect "ALTER ROLE ... [NO]INHERIT" as
quickly as they have been reflecting "GRANT role_name".  Back-patch to
9.5 (all supported versions).
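
For example (hypothetical role name), sessions that are already connected now
reflect a change such as the following without reconnecting:
  ALTER ROLE alice NOINHERIT;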

Reviewed by Nathan Bossart.

Discussion: https://postgr.es/m/20201221095028.GB3777719@rfd.leadboat.com
2020-12-25 10:42:03 -08:00
Tom Lane
f853ffa220 Avoid time-of-day-dependent failure in log rotation test.
Buildfarm members pogona and petalura have shown a failure when
pg_ctl/t/004_logrotate.pl starts just before local midnight.
The default rotate-at-midnight behavior occurs just before the
Perl script examines current_logfiles, so it figures that the
rotation it's already requested has occurred ... but in reality,
that rotation happens just after it looks, so the expected new
log data goes into a different file than the one it's examining.

In HEAD, src/test/kerberos/t/001_auth.pl has acquired similar code
that evidently has a related failure mode.  Besides being quite new,
few buildfarm critters run that test, so it's unsurprising that
we've not yet seen a failure there.

Fix both cases by setting log_rotation_age = 0 so that no time-based
rotation can occur.  Also absorb 004_logrotate.pl's decision to
set lc_messages = 'C' into the kerberos test, in hopes that it will
work in non-English prevailing locales.

Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2020-12-24%2022%3A10%3A04
Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-02-01%2022%3A20%3A04
2020-12-24 21:37:46 -05:00
Tom Lane
3d8068edce Fix race condition between shutdown and unstarted background workers.
If a database shutdown (smart or fast) is commanded between the time
some process decides to request a new background worker and the time
that the postmaster can launch that worker, then nothing happens
because the postmaster won't launch any bgworkers once it's exited
PM_RUN state.  This is fine ... unless the requesting process is
waiting for that worker to finish (or even for it to start); in that
case the requestor is stuck, and only manual intervention will get us
to the point of being able to shut down.

To fix, cancel pending requests for workers when the postmaster sends
shutdown (SIGTERM) signals, and similarly cancel any new requests that
arrive after that point.  (We can optimize things slightly by only
doing the cancellation for workers that have waiters.)  To fit within
the existing bgworker APIs, the "cancel" is made to look like the
worker was started and immediately stopped, causing deregistration of
the bgworker entry.  Waiting processes would have to deal with
premature worker exit anyway, so this should introduce no bugs that
weren't there before.  We do have a side effect that registration
records for restartable bgworkers might disappear when theoretically
they should have remained in place; but since we're shutting down,
that shouldn't matter.

Back-patch to v10.  There might be value in putting this into 9.6
as well, but the management of bgworkers is a bit different there
(notably see 8ff518699) and I'm not convinced it's worth the effort
to validate the patch for that branch.

Discussion: https://postgr.es/m/661570.1608673226@sss.pgh.pa.us
2020-12-24 17:00:43 -05:00
Michael Paquier
67a9411d2f Fix portability issues with parsing of recovery_target_xid
The parsing of this parameter has been using strtoul(), which is not
portable across platforms.  On most Unix platforms, unsigned long has a
size of 64 bits, while on Windows it is 32 bits.  It is common in
recovery scenarios to rely on the output of txid_current() or even the
newer pg_current_xact_id() to get a transaction ID for setting up
recovery_target_xid.  The value returned by those functions includes the
epoch in the computed result, which would cause strtoul() to fail where
unsigned long has a size of 32 bits once the epoch is incremented.

WAL records and 2PC data include only information about 32-bit XIDs and
it is not possible to have XIDs across more than one epoch, so
discarding the high bits from the transaction ID set has no impact on
recovery.  By contrast, the use of strtoul() prevents consistent behavior
across platforms, since the result depends on the size of unsigned long.
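
For instance, the value returned by the query below includes the epoch, and
such a value can now be used verbatim as recovery_target_xid on all platforms
(the actual number depends on the server):
  SELECT pg_current_xact_id();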

This commit changes the parsing of recovery_target_xid to use
pg_strtouint64() instead, available down to 9.6.  There is one TAP test
stressing recovery with recovery_target_xid, where a tweak based on
pg_reset{xlog,wal} is added to bump the XID epoch so as this change gets
tested, as per an idea from Alexander Lakhin.

Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/16780-107fd0c0385b1035@postgresql.org
Backpatch-through: 9.6
2020-12-23 12:51:39 +09:00
Tom Lane
38bef9e433 Remove "invalid concatenation of jsonb objects" error case.
The jsonb || jsonb operator arbitrarily rejected certain combinations
of scalar and non-scalar inputs, while being willing to concatenate
other combinations.  This was of course quite undocumented.  Rather
than trying to document it, let's just remove the restriction,
creating a uniform rule that unless we are handling an object-to-object
concatenation, non-array inputs are converted to one-element arrays,
resulting in an array-to-array concatenation.  (This does not change
the behavior for any case that didn't throw an error before.)
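
A brief illustration of the uniform rule:
  SELECT '"a"'::jsonb || '{"x": 1}'::jsonb;       -- non-object operand wrapped: ["a", {"x": 1}]
  SELECT '{"a": 1}'::jsonb || '{"b": 2}'::jsonb;  -- object-to-object: {"a": 1, "b": 2}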

Per complaint from Joel Jacobson.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/163099.1608312033@sss.pgh.pa.us
2020-12-21 13:11:29 -05:00
Tom Lane
ba04cedf1d Avoid memcpy() with same source and destination during relmapper init.
A narrow reading of the C standard says that memcpy(x,x,n) is undefined,
although it's hard to envision an implementation that would really
misbehave.  However, analysis tools such as valgrind might whine about
this; accordingly, let's band-aid relmapper.c to not do it.

See also 5b630501e, d3f4e8a8a, ad7b48ea0, and other similar fixes.
Apparently, none of those folk tried valgrinding initdb?  This has been
like this for long enough that I'm surprised it hasn't been reported
before.

Back-patch, just in case anybody wants to use a back branch on a platform
that complains about this; we back-patched those earlier fixes too.

Discussion: https://postgr.es/m/161790.1608310142@sss.pgh.pa.us
2020-12-18 15:46:44 -05:00
Andrew Dunstan
bf88c6d602 Use native methods to open input in TestLib::slurp_file on Windows.
This is a backport of commits 114541d58e and 6f59826f0 to the remaining
live branches.
2020-12-15 10:00:48 -05:00
Jeff Davis
706d84fe70 Revert "Cannot use WL_SOCKET_WRITEABLE without WL_SOCKET_READABLE."
This reverts commit 3a9e64aa0d.

Commit 4bad60e3 fixed the root of the problem that 3a9e64aa worked
around.

This enables proper pipelining of commands after terminating
replication, eliminating an undocumented limitation.

Discussion: https://postgr.es/m/3d57bc29-4459-578b-79cb-7641baf53c57%40iki.fi
Backpatch-through: 9.5
2020-12-14 23:48:04 -08:00
Bruce Momjian
8e456b8e59 initdb: complete getopt_long alphabetization
Backpatch-through: 9.5
2020-12-12 12:59:09 -05:00
Bruce Momjian
035c7dfddf initdb: properly alphabetize getopt_long options in C string
Backpatch-through: 9.5
2020-12-12 12:51:16 -05:00
Tom Lane
2f1997b155 Teach contain_leaked_vars that assignment SubscriptingRefs are leaky.
array_get_element and array_get_slice qualify as leakproof, since
they will silently return NULL for bogus subscripts.  But
array_set_element and array_set_slice throw errors for such cases,
making them clearly not leakproof.  contain_leaked_vars was evidently
written with only the former case in mind, as it gave the wrong answer
for assignment SubscriptingRefs (nee ArrayRefs).
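
The user-visible contrast, sketched with a hypothetical table t(arr int[]):
  SELECT (ARRAY[1,2,3])[NULL];   -- fetch: quietly yields NULL
  UPDATE t SET arr[NULL] = 1;    -- assignment: raises an error, hence not leakproof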

This would be a live security bug, were it not that assignment
SubscriptingRefs can only occur in INSERT and UPDATE target lists,
while we only care about leakproofness for qual expressions; so the
wrong answer can't occur in practice.  Still, that's a rather shaky
answer for a security-related question; and maybe in future somebody
will want to ask about leakproofness of a tlist.  So it seems wise to
fix and even back-patch this correction.

(We would need some change here anyway for the upcoming
generic-subscripting patch, since extensions might make different
tradeoffs about whether to throw errors.  Commit 558d77f20 attempted
to lay groundwork for that by asking check_functions_in_node whether a
SubscriptingRef contains leaky functions; but that idea fails now that
the implementation methods of a SubscriptingRef are not SQL-visible
functions that could be marked leakproof or not.)

Back-patch to 9.6.  While 9.5 has the same issue, the code's a bit
different.  It seems quite unlikely that we'd introduce any actual bug
in the short time 9.5 has left to live, so the work/risk/reward balance
isn't attractive for changing 9.5.

Discussion: https://postgr.es/m/3143742.1607368115@sss.pgh.pa.us
2020-12-08 17:50:54 -05:00
Andres Freund
27b57f806d jit: Correct parameter type for generated expression evaluation functions.
clang only uses the 'i1' type for scalar booleans, not for pointers to
booleans (as the pointer might be pointing into a larger memory
allocation). Therefore a pointer-to-bool needs to use the "storage" boolean type.

There's no known case of wrong code generation due to this, but it seems quite
possible that it could cause problems (see e.g. 72559438f9).

Author: Andres Freund
Discussion: https://postgr.es/m/20201207212142.wz5tnbk2jsaqzogb@alap3.anarazel.de
Backpatch: 11-, where jit support was added
2020-12-07 18:39:32 -08:00
Andres Freund
fa7a52fb99 backpatch "jit: Add support for LLVM 12."
As there haven't been problems on the buildfarm due to this change,
backpatch 6c57f2ed16 now.

Author: Andres Freund
Discussion: https://postgr.es/m/20201016011244.pmyvr3ee2gbzplq4@alap3.anarazel.de
Backpatch: 11-, where jit support was added
2020-12-07 18:39:32 -08:00
Heikki Linnakangas
98f8cdd48c Fix more race conditions in the newly-added pg_rewind test.
pg_rewind looks at the control file to check what timeline a server is on.
But promotion doesn't immediately write a checkpoint, it merely writes
an end-of-recovery WAL record. If pg_rewind runs immediately after
promotion, before the checkpoint has completed, it will think think that
the server is still on the earlier timeline. We ran into this issue a long
time ago already, see commit 484a848a73.

It's a bit bogus that pg_rewind doesn't determine the timeline correctly
until the end-of-recovery checkpoint has completed. We probably should
fix that. But for now work around it by waiting for the checkpoint
to complete before running pg_rewind, like we did in commit 484a848a73.

In passing, tidy up the new test a little bit. Reorder the INSERTs so
that the comments make more sense, remove a spurious CHECKPOINT call after
pg_rewind has already run, and add the --debug option, so that if this fails
again, we'll have more data.

Per buildfarm failure at https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=rorqual&dt=2020-12-06%2018%3A32%3A19&stg=pg_rewind-check.
Backpatch to all supported versions.

Discussion: https://www.postgresql.org/message-id/1713707e-e318-761c-d287-5b6a4aa807e8@iki.fi
2020-12-07 14:55:22 +02:00
Tom Lane
77a94c30da Fix missed step in removal of useless RESULT RTEs in the planner.
Commit 4be058fe9 forgot that the append_rel_list would already be
populated at the time we remove useless result RTEs, and it might contain
PlaceHolderVars that need to be adjusted like the ones in the main parse
tree.  This could lead to "no relation entry for relid N" failures later
on, when the planner tries to do something with an unadjusted PHV.

Per report from Tom Ellis.  Back-patch to v12 where the bug came in.

Discussion: https://postgr.es/m/20201205173056.GF30712@cloudinit-builder
2020-12-05 16:16:13 -05:00
Heikki Linnakangas
ad3fb04b9c Fix race conditions in newly-added test.
Buildfarm has been failing sporadically on the new test.  I was able to
reproduce this by adding a random 0-10 s delay in the walreceiver, just
before it connects to the primary. There's a race condition where node_3
is promoted before it has fully caught up with node_1, leading to diverged
timelines. When node_1 is later reconfigured as standby following node_3,
it fails to catch up:

LOG:  primary server contains no more WAL on requested timeline 1
LOG:  new timeline 2 forked off current database system timeline 1 before current recovery point 0/30000A0

That's the situation where you'd need to use pg_rewind, but in this case
it happens already when we are just setting up the actual pg_rewind
scenario we want to test, so change the test so that it waits until
node_3 is connected and fully caught up before promoting it, so that you
get a clean, controlled failover.

Also rewrite some of the comments, for clarity. The existing comments
detailed what each step in the test did, but didn't give a good overview
of the situation the steps were trying to create.

For reasons I don't understand, the test setup had to be written slightly
differently in 9.6 and 9.5 than in later versions. The 9.5/9.6 version
needed node 1 to be reinitialized from backup, whereas in later versions
it could be shut down and reconfigured to be a standby. But even 9.5 should
support "clean switchover", where primary makes sure that pending WAL is
replicated to standby on shutdown. It would be nice to figure out what's
going on there, but that's independent of pg_rewind and the scenario that
this test tests.

Discussion: https://www.postgresql.org/message-id/b0a3b95b-82d2-6089-6892-40570f8c5e60%40iki.fi
2020-12-04 18:24:57 +02:00
Heikki Linnakangas
020f153dbb Fix pg_rewind bugs when rewinding a standby server.
If the target is a standby server, its WAL doesn't end at the last
checkpoint record, but at minRecoveryPoint. We must scan all the
WAL from the last common checkpoint all the way up to minRecoveryPoint
for modified pages, and also consider that portion when determining
whether the server needs rewinding.

Backpatch to all supported versions.

Author: Ian Barwick and me
Discussion: https://www.postgresql.org/message-id/CABvVfJU-LDWvoz4-Yow3Ay5LZYTuPD7eSjjE4kGyNZpXC6FrVQ%40mail.gmail.com
2020-12-03 15:57:54 +02:00
Bruce Momjian
c50b6f7a9c pg_checksums: data_checksum_version is unsigned so use %u not %d
While the previous behavior didn't generate a warning, we might as well
use an accurate *printf specification.

Backpatch-through: 12
2020-12-01 20:27:05 -05:00
Tom Lane
5f9b05ada1 Ensure that expandTableLikeClause() re-examines the same table.
As it stood, expandTableLikeClause() re-did the same relation_openrv
call that transformTableLikeClause() had done.  However there are
scenarios where this would not find the same table as expected.
We hold lock on the LIKE source table, so it can't be renamed or
dropped, but another table could appear before it in the search path.
This explains the odd behavior reported in bug #16758 when cloning a
table as a temp table of the same name.  This case worked as expected
before commit 502898192 introduced the need to open the source table
twice, so we should fix it.
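
The reported scenario, roughly (hypothetical table name):
  CREATE TABLE t (a int DEFAULT 42);
  CREATE TEMP TABLE t (LIKE t INCLUDING DEFAULTS);
  -- the LIKE source must remain the original table, not the just-created temp table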

To make really sure we get the same table, let's re-open it by OID not
name.  That requires adding an OID field to struct TableLikeClause,
which is a little nervous-making from an ABI standpoint, but as long
as it's at the end I don't think there's any serious risk.

Per bug #16758 from Marc Boeren.  Like the previous patch,
back-patch to all supported branches.

Discussion: https://postgr.es/m/16758-840e84a6cfab276d@postgresql.org
2020-12-01 14:02:28 -05:00
Alvaro Herrera
f0ff52f25c Avoid memcpy() with a NULL source pointer and count == 0
When memcpy() is called on a pointer, the compiler is entitled to assume
that the pointer is not null, which can lead to optimizing nearby code
in potentially undesirable ways.  We still want such optimizations
(gcc's -fdelete-null-pointer-checks) in cases where they're valid.

Related: commit 13bba02271.

Backpatch to pg11, where this particular instance appeared.

Reported-by: Ranier Vilela <ranier.vf@gmail.com>
Reported-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CAEudQApUndmQkr5fLrCKXQ7+ib44i7S+Kk93pyVThS85PnG3bQ@mail.gmail.com
Discussion: https://postgr.es/m/CALNJ-vSdhwSM5f4tnNn1cdLHvXMVe_S+V3nR5GwNrmCPNB2VtQ@mail.gmail.com
2020-12-01 11:46:56 -03:00
Thomas Munro
50f3385742 Free disk space for dropped relations on commit.
When committing a transaction that dropped a relation, we previously
truncated only the first segment file to free up disk space (the one
that won't be unlinked until the next checkpoint).

Truncate higher numbered segments too, even though we unlink them on
commit.  This frees the disk space immediately, even if other backends
have open file descriptors and might take a long time to get around to
handling shared invalidation events and closing them.  Also extend the
same behavior to the first segment, in recovery.

Back-patch to all supported releases.

Bug: #16663
Reported-by: Denis Patron <denis.patron@previnet.it>
Reviewed-by: Pavel Borisov <pashkin.elfe@gmail.com>
Reviewed-by: Neil Chen <carpenter.nail.cz@gmail.com>
Reviewed-by: David Zhang <david.zhang@highgo.ca>
Discussion: https://postgr.es/m/16663-fe97ccf9932fc800%40postgresql.org
2020-12-01 13:37:00 +13:00
Tom Lane
16a1e1ea4f Prevent parallel index build in a standalone backend.
This can't work if there's no postmaster, and indeed the code got an
assertion failure trying.  There should be a check on IsUnderPostmaster
gating the use of parallelism, as the planner has for ordinary
parallel queries.

Commit 40d964ec9 got this right, so follow its model of checking
IsUnderPostmaster at the same place where we check for
max_parallel_maintenance_workers == 0.  In general, new code
implementing parallel utility operations should do the same.

Report and patch by Yulin Pei, cosmetically adjusted by me.
Back-patch to v11 where this code came in.

Discussion: https://postgr.es/m/HK0PR01MB22747D839F77142D7E76A45DF4F50@HK0PR01MB2274.apcprd01.prod.exchangelabs.com
2020-11-30 14:38:00 -05:00
Tom Lane
4f728a1f5f Fix miscomputation of direct_lateral_relids for join relations.
If a PlaceHolderVar is to be evaluated at a join relation, but
its value is only needed there and not at higher levels, we neglected
to update the joinrel's direct_lateral_relids to include the PHV's
source rel.  This causes problems because join_is_legal() then won't
allow joining the joinrel to the PHV's source rel at all, leading
to "failed to build any N-way joins" planner failures.

Per report from Andreas Seltenreich.  Back-patch to 9.5
where the problem originated.

Discussion: https://postgr.es/m/87blfgqa4t.fsf@aurora.ydns.eu
2020-11-30 12:22:43 -05:00
Heikki Linnakangas
ee42f78910 Remove leftover comments, left behind by removal of WITH OIDS.
Author: Amit Langote
Discussion: https://www.postgresql.org/message-id/CA%2BHiwqGaRoF3XrhPW-Y7P%2BG7bKo84Z_h%3DkQHvMh-80%3Dav3wmOw%40mail.gmail.com
2020-11-30 10:29:26 +02:00
Tom Lane
3958abdad3 Fix recently-introduced breakage in psql's \connect command.
Through my misreading of what the existing code actually did,
commits 85c54287a et al. broke psql's behavior for the case where
"\c connstring" provides a password in the connstring.  We should
use that password in such a case, but as of 85c54287a we ignored it
(and instead, prompted for a password).

Commit 94929f1cf fixed that in HEAD, but since I thought it was
cleaning up a longstanding misbehavior and not one I'd just created,
I didn't back-patch it.

Hence, back-patch the portions of 94929f1cf having to do with
password management.  In addition to fixing the introduced bug,
this means that "\c -reuse-previous=on connstring" will allow
re-use of an existing connection's password if the connstring
doesn't change user/host/port.  That didn't happen before, but
it seems like a bug fix, and anyway I'm loath to have significant
differences in this code across versions.

Also fix an error with the same root cause about whether or not to
override a connstring's setting of client_encoding.  As of 85c54287a
we always did so; restore the previous behavior of overriding only
when stdin/stdout are a terminal and there's no environment setting
of PGCLIENTENCODING.  (I find that definition a bit surprising, but
right now doesn't seem like the time to revisit it.)

Per bug #16746 from Krzysztof Gradek.  As with the previous patch,
back-patch to all supported branches.

Discussion: https://postgr.es/m/16746-44b30e2edf4335d4@postgresql.org
2020-11-29 15:22:04 -05:00
Noah Misch
7b5b90be95 Retry initial slurp_file("current_logfiles"), in test 004_logrotate.pl.
Buildfarm member topminnow failed when the test script attempted this
before the syslogger would have created the file.  Back-patch to v12,
which introduced the test.
2020-11-28 21:52:31 -08:00
Tom Lane
cbc7a7a10c Fix a recently-introduced race condition in LISTEN/NOTIFY handling.
Commit 566372b3d fixed some race conditions involving concurrent
SimpleLruTruncate calls, but it introduced new ones in async.c.
A newly-listening backend could attempt to read Notify SLRU pages that
were in process of being truncated, possibly causing an error.  Also,
the QUEUE_TAIL pointer could become set to a value that's not equal to
the queue position of any backend.  While that's fairly harmless in
v13 and up (thanks to commit 51004c717), in older branches it resulted
in near-permanent disabling of the queue truncation logic, so that
continued use of NOTIFY led to queue-fill warnings and eventual
inability to send any more notifies.  (A server restart is enough to
make that go away, but it's still pretty unpleasant.)

The core of the problem is confusion about whether QUEUE_TAIL
represents the "logical" tail of the queue (i.e., the oldest
still-interesting data) or the "physical" tail (the oldest data we've
not yet truncated away).  To fix, split that into two variables.
QUEUE_TAIL regains its definition as the logical tail, and we
introduce a new variable to track the oldest un-truncated page.

Per report from Mikael Gustavsson.  Like the previous patch,
back-patch to all supported branches.

Discussion: https://postgr.es/m/1b8561412e8a4f038d7a491c8b922788@smhi.se
2020-11-28 14:03:40 -05:00
Fujii Masao
fce17e486f Fix CLUSTER progress reporting of number of blocks scanned.
Previously pg_stat_progress_cluster view reported the current block
number in the heap scan as the number of heap blocks scanned (i.e.,
heap_blks_scanned). This reported number could be incorrect when
synchronize_seqscans is enabled, because it allowed the heap scan to
start at a block in the middle of the relation. This could result in
wraparounds in the heap_blks_scanned column when the heap scan wrapped around.
This commit fixes the bug by calculating the number of blocks from
the block that the heap scan starts at to the current block in scan,
and reporting that number in the heap_blks_scanned column.

Also, in pg_stat_progress_cluster view, previously heap_blks_scanned
could not reach heap_blks_total at the end of heap scan phase
if the last pages scanned were empty. This commit fixes the bug by
manually updating heap_blks_scanned to the same value as
heap_blks_total when the heap scan phase finishes.

Back-patch to v12 where pg_stat_progress_cluster view was introduced.

Reported-by: Matthias van de Meent
Author: Matthias van de Meent
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/CAEze2WjCBWSGkVfYag001Rc4+-nNLDpWM7QbyD6yPvuhKs-gYQ@mail.gmail.com
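For reference, the affected columns can be watched from another session
while a CLUSTER or VACUUM FULL runs:

    SELECT pid, relid::regclass AS rel, phase,
           heap_blks_scanned, heap_blks_total
    FROM pg_stat_progress_cluster;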
2020-11-27 20:18:02 +09:00
Tom Lane
ea7a167daa In psql's \d commands, don't truncate attribute default values.
Historically, psql has truncated the text of a column's default
expression at 128 characters.  This is unlike any other behavior
in describe.c, and it's become particularly confusing now that
the limit is only applied to the expression proper and not to
the "generated always as (...) stored" text that may get wrapped
around it.

Excavation in our git history suggests that the original motivation
for this limit was not really to limit the display width (as I'd long
supposed), but to make it safe to use a fixed-width output buffer to
store the result.  That implementation restriction is long gone of
course, but the limit remained.  Let's just get rid of it.

While here, rearrange the logic about when to free the output string
so that it's not so dependent on unstated assumptions about the
possible values of attidentity and attgenerated.

Per bug #16743 from David Turon.  Back-patch to v12 where GENERATED
came in.  (Arguably we could take it back further, but I'm hesitant
to change the behavior of long-stable branches for this.)

Discussion: https://postgr.es/m/16743-7b1bacc4af76e7ad@postgresql.org
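A minimal sketch (hypothetical table) of a default expression long enough
to have been cut off before this change:

    CREATE TABLE t_long_default (
        id   int,
        note text DEFAULT 'lorem ipsum dolor sit amet, consectetur adipiscing elit, '
                          || 'sed do eiusmod tempor incididunt ut labore et dolore magna'
    );
    \d t_long_default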
2020-11-25 16:19:25 -05:00
Andrew Gierth
ae5aa26dc3 Properly check index mark/restore in ExecSupportsMarkRestore.
Previously this code assumed that all IndexScan nodes supported
mark/restore, which is not true since it depends on optional index AM
support functions. This could lead to errors about missing support
functions in rare edge cases of mergejoins with no sort keys, where an
unordered non-btree index scan was placed on the inner path without a
protecting Materialize node. (Normally, the fact that merge join
requires ordered input would avoid this error.)

Backpatch all the way since this bug is ancient.

Per report from Eugen Konkov on irc.

Discussion: https://postgr.es/m/87o8jn50be.fsf@news-spur.riddles.org.uk
2020-11-24 21:17:02 +00:00
Heikki Linnakangas
888fa2baeb Skip allocating hash table in EXPLAIN-only mode.
This is a back-patch of commit 2cccb627f1, done due to popular demand.
Back-patch to all supported versions.

Author: Alexey Bashtanov
Discussion: https://www.postgresql.org/message-id/36823f65-050d-ae24-aa4d-a37726998240%40imap.cc
2020-11-20 14:46:47 +02:00
Tom Lane
5b83604270 On macOS, use -isysroot in link steps as well as compile steps.
We previously put the -isysroot switch only into CPPFLAGS, theorizing
that it was only needed to find the right copies of include files.
However, it seems that we also need to use it while linking programs,
to find the right stub ".tbd" files for libraries.  We got away
without that up to now, but apparently that was mostly luck.  It may
also be that failures are only observed when the Xcode version is
noticeably out of sync with the host macOS version; the case that's
prompting action right now is that builds fail when using latest Xcode
(12.2) on macOS Catalina, even though it's fine on Big Sur.

Hence, add -isysroot to LDFLAGS as well.  (It seems that the more
common practice is to put it in CFLAGS, whence it'd be included at
both compile and link steps.  However, we can't mess with CFLAGS in
the platform template file without confusing configure's logic for
choosing default CFLAGS.)

Back-patch of 49407dc32 into all supported branches.

Report and patch by James Hilliard (some cosmetic mods by me)

Discussion: https://postgr.es/m/20201120003314.20560-1-james.hilliard1@gmail.com
2020-11-20 00:58:26 -05:00
Thomas Munro
2ded1f1fbb Adjust DSM and DSA slot usage constants (back-patch).
1.  Previously, a DSA area would create up to four segments at each size
before doubling the size.  After this commit, it will create only two at
each size, so it ramps up faster and therefore needs fewer slots.

2.  Previously, the total limit on DSM slots allowed for 2 per connection.
Switch to 5 per connection.

This back-patches commit d061ea21 from release 13 into 10-12 based on a
field complaint.

Discussion: https://postgr.es/m/CAO03teA%2BjE1qt5iWDWzHqaufqBsF6EoOgZphnazps_tr_jDPZA%40mail.gmail.com
Discussion: https://postgr.es/m/CA%2BhUKGL6H2BpGbiF7Lj6QiTjTGyTLW_vLR%3DSn2tEBeTcYXiMKw%40mail.gmail.com
2020-11-20 10:51:30 +13:00
Tom Lane
87ab464219 Further fixes for CREATE TABLE LIKE: cope with self-referential FKs.
Commit 502898192 was too careless about the order of execution of the
additional ALTER TABLE operations generated by expandTableLikeClause.
It just stuck them all at the end, which seems okay for most purposes.
But it falls down in the case where LIKE is importing a primary key
or unique index and the outer CREATE TABLE includes a FOREIGN KEY
constraint that needs to depend on that index.  Weird as that is,
it used to work, so we ought to keep it working.

To fix, make parse_utilcmd.c insert LIKE clauses between index-creation
and FK-creation commands in the transformed list of commands, and change
utility.c so that the commands generated by expandTableLikeClause are
executed immediately not at the end.  One could imagine scenarios where
this wouldn't work either; but currently expandTableLikeClause only
makes column default expressions, CHECK constraints, and indexes, and
this ordering seems fine for those.

Per bug #16730 from Sofoklis Papasofokli.  Like the previous patch,
back-patch to all supported branches.

Discussion: https://postgr.es/m/16730-b902f7e6e0276b30@postgresql.org
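A minimal sketch (hypothetical tables) of the self-referential case this
keeps working:

    CREATE TABLE src (a int PRIMARY KEY);
    -- the FK below must depend on the primary-key index imported via LIKE,
    -- so the LIKE-generated index has to be created before the FK constraint
    CREATE TABLE dst (LIKE src INCLUDING INDEXES, b int REFERENCES dst (a));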
2020-11-19 15:03:17 -05:00
Tom Lane
4097a79069 Don't Insert() a VFD entry until it's fully built.
Otherwise, if FDDEBUG is enabled, the debugging output fails because
it tries to read the fileName, which isn't set up yet (and should in
fact always be NULL).

AFAICT, this has been wrong since Berkeley.  Before 96bf88d52,
it would accidentally fail to crash on platforms where snprintf()
is forgiving about being passed a NULL pointer for %s; but the
file name intended to be included in the debug output wouldn't
ever have shown up.

Report and fix by Greg Nancarrow.  Although this is only visibly
broken in custom-made builds, it still seems worth back-patching
to all supported branches, as the FDDEBUG code is pretty useless
as it stands.

Discussion: https://postgr.es/m/CAJcOf-cUDgm9qYtC_B6XrC6MktMPNRby2p61EtSGZKnfotMArw@mail.gmail.com
2020-11-16 20:32:35 -05:00
Tom Lane
54a9406440 Do not return NULL for error cases in satisfies_hash_partition().
Since this function is used as a CHECK constraint condition,
returning NULL is tantamount to returning TRUE, which would have the
effect of letting in a row that doesn't satisfy the hash condition.
Admittedly, the cases for which this is done should be unreachable
in practice, but that doesn't make it any less a bad idea.  It also
seems like a dartboard was used to decide which error cases should
throw errors as opposed to returning NULL.

For the checks for NULL input values, I just switched it to returning
false.  There's some argument that an error would be better; but the
case really should be can't-happen in a generated hash constraint,
so it's likely not worth writing more code for.

For the parent-relation-open-failure case, it seems like we might
as well let relation_open throw an error, instead of having an
impossible-to-diagnose constraint failure.

Back-patch to v11 where this code came in.

Discussion: https://postgr.es/m/24067.1605134819@sss.pgh.pa.us
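For context, the function appears in the partition constraint used when
attaching a plain table as a hash partition; a minimal sketch
(hypothetical names):

    CREATE TABLE ht (id int, payload text) PARTITION BY HASH (id);
    CREATE TABLE ht_p0 (LIKE ht);
    -- validation here relies on a constraint built around
    -- satisfies_hash_partition(); returning NULL would have let
    -- non-matching rows slip through
    ALTER TABLE ht ATTACH PARTITION ht_p0 FOR VALUES WITH (MODULUS 4, REMAINDER 0);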
2020-11-16 16:39:59 -05:00
Tom Lane
029fa664ec Use "true" not "TRUE" in one ICU function call.
This was evidently missed in commit 6337865f3, which generally did
s/TRUE/true/ everywhere.  It escaped notice up to now because ICU
versions before ICU 68 provided definitions of "TRUE" and "FALSE"
regardless.  With ICU 68, it fails to compile.

Per report from Condor.  Back-patch to v11 where 6337865f3 came in.
(I've not tested v10, where this call originated, but I imagine
it's fine since we defined TRUE in c.h back then.)

Discussion: https://postgr.es/m/7a6f3336165bfe3ca66abcda7966f9d0@stz-bg.com
2020-11-16 15:16:39 -05:00
Tom Lane
4ac8ee9d48 Fix fuzzy thinking about amcanmulticol versus amcaninclude.
These flags should be independent: in particular an index AM should
be able to say that it supports include columns without necessarily
supporting multiple key columns.  The included-columns patch got
this wrong, possibly aided by the fact that it didn't bother to
update the documentation.

While here, clarify some text about amcanreturn, which was a little
vague about what should happen when amcanreturn reports that only
some of the index columns are returnable.

Noted while reviewing the SP-GiST included-columns patch, which
quite incorrectly (and unsafely) changed SP-GiST to claim
amcanmulticol = true as a workaround for this bug.

Backpatch to v11 where included columns were introduced.
2020-11-15 16:10:48 -05:00
Bruce Momjian
d7fa90fae1 doc: wire protocol data type for history file content is bytea
Document that though the history file content is marked as bytea, it is
the same as text, and neither is bytea-escaped nor encoding-converted.

Reported-by: Brar Piening

Discussion: https://postgr.es/m/6a1b9cd9-17e3-df67-be55-86102af6bdf5@gmx.de

Backpatch-through: 13 - 9.5 (not master)
2020-11-12 14:33:28 -05:00
Tomas Vondra
0d0626e27d Remove duplicate code in brin_memtuple_initialize
Commit 8bf74967da moved some of the code from brin_new_memtuple to
brin_memtuple_initialize, but this resulted in some of the code being
duplicated. Fix by removing the duplicate lines and backpatch to 10.

Author: Tomas Vondra
Backpatch-through: 10
Discussion: https://postgr.es/m/5eb50c97-9a8e-b691-8c40-1b2a55611c4c%40enterprisedb.com
2020-11-11 18:49:41 +01:00
Tom Lane
171c457cd0 Fix and simplify some usages of TimestampDifference().
Introduce TimestampDifferenceMilliseconds() to simplify callers
that would rather have the difference in milliseconds, instead of
the select()-oriented seconds-and-microseconds format.  This gets
rid of at least one integer division per call, and it eliminates
some apparently-easy-to-mess-up arithmetic.

Two of these call sites were in fact wrong:

* pg_prewarm's autoprewarm_main() forgot to multiply the seconds
by 1000, thus ending up with a delay 1000X shorter than intended.
That doesn't quite make it a busy-wait, but close.

* postgres_fdw's pgfdw_get_cleanup_result() thought it needed to compute
microseconds not milliseconds, thus ending up with a delay 1000X longer
than intended.  Somebody along the way had noticed this problem but
misdiagnosed the cause, and imposed an ad-hoc 60-second limit rather
than fixing the units.  This was relatively harmless in context, because
we don't care that much about exactly how long this delay is; still,
it's wrong.

There are a few more callers of TimestampDifference() that don't
have a direct need for seconds-and-microseconds, but can't use
TimestampDifferenceMilliseconds() either because they do need
microsecond precision or because they might possibly deal with
intervals long enough to overflow 32-bit milliseconds.  It might be
worth inventing another API to improve that, but that seems outside
the scope of this patch; so those callers are untouched here.

Given the fact that we are fixing some bugs, and the likelihood
that future patches might want to back-patch code that uses this
new API, back-patch to all supported branches.

Alexey Kondratov and Tom Lane

Discussion: https://postgr.es/m/3b1c053a21c07c1ed5e00be3b2b855ef@postgrespro.ru
2020-11-10 22:51:55 -05:00
Tom Lane
5ce51f8280 Work around cross-version-upgrade issues created by commit 9e38c2bb5.
Summarily changing the STYPE of regression-test aggregates that
depend on array_append or array_cat is an issue for the buildfarm's
cross-version-upgrade tests, because those aggregates (as defined
in the back branches) now won't load into HEAD.  Although this seems
like only a minimal risk for genuine user-defined aggregates, we
need to do something for the buildfarm.  Hence, adjust the aggregate
definitions, in both HEAD and the back branches.

Discussion: https://postgr.es/m/1401824.1604537031@sss.pgh.pa.us
Discussion: https://postgr.es/m/E1kaQ2c-0005lx-Eg@gemulon.postgresql.org
2020-11-10 18:32:36 -05:00
Tom Lane
6bb1b38fa5 Stamp 12.5. 2020-11-09 17:26:33 -05:00
Noah Misch
3855e5b476 Ignore attempts to \gset into specially treated variables.
If an interactive psql session used \gset when querying a compromised
server, the attacker could execute arbitrary code as the operating
system account running psql.  Using a prefix not found among specially
treated variables, e.g. every lowercase string, precluded the attack.
Fix by issuing a warning and setting no variable for the column in
question.  Users wanting the old behavior can use a prefix and then a
meta-command like "\set HISTSIZE :prefix_HISTSIZE".  Back-patch to 9.5
(all supported versions).

Reviewed by Robert Haas.  Reported by Nick Cleaton.

Security: CVE-2020-25696
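A minimal psql sketch of the pattern recommended above (the prefix name
is arbitrary):

    -- results land in prefixed variables, which psql never treats specially
    SELECT 500 AS "HISTSIZE" \gset prefix_
    -- opt in explicitly if the special variable really should change
    \set HISTSIZE :prefix_HISTSIZE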
2020-11-09 07:32:13 -08:00
Noah Misch
ac8f6243cb In security-restricted operations, block enqueue of at-commit user code.
Specifically, this blocks DECLARE ... WITH HOLD and firing of deferred
triggers within index expressions and materialized view queries.  An
attacker having permission to create non-temp objects in at least one
schema could execute arbitrary SQL functions under the identity of the
bootstrap superuser.  One can work around the vulnerability by disabling
autovacuum and not manually running ANALYZE, CLUSTER, REINDEX, CREATE
INDEX, VACUUM FULL, or REFRESH MATERIALIZED VIEW.  (Don't restore from
pg_dump, since it runs some of those commands.)  Plain VACUUM (without
FULL) is safe, and all commands are fine when a trusted user owns the
target object.  Performance may degrade quickly under this workaround,
however.  Back-patch to 9.5 (all supported versions).

Reviewed by Robert Haas.  Reported by Etienne Stalmans.

Security: CVE-2020-25695
2020-11-09 07:32:12 -08:00
Peter Eisentraut
a98b461944 Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 3bbbf347254dd914c5ae4b5d0bba9a1ddc28eaa0
2020-11-09 12:36:44 +01:00
Tom Lane
94ec005f33 In INSERT/UPDATE, use the table's real tuple descriptor as target.
This back-patches commit 20d3fe900 into the v12 and v13 branches.
At the time I thought that commit was not fixing any observable
bug, but Bertrand Drouvot showed otherwise: adding a dropped column
to the previously-considered scenario crashes v12 and v13, unless the
dropped column happens to be an integer.  That is, of course, because
the tupdesc we derive from the plan output tlist fails to describe
the dropped column accurately, so that we'll do the wrong thing with
a tuple in which that column isn't NULL.

There is no bug in pre-v12 branches because they already did use
the table's real tuple descriptor for any trigger-returned tuple.
It seems that this set of bugs can be blamed on the changes that
removed es_trig_tuple_slot, though I've not attempted to pin that
down precisely.

Although there's no code change needed in HEAD, update the test case
to include a dropped column there too.

Discussion: https://postgr.es/m/db5d97c8-f48a-51e2-7b08-b73d5434d425@amazon.com
Discussion: https://postgr.es/m/16644-5da7ef98a7ac4545@postgresql.org
2020-11-08 13:08:36 -05:00
Peter Eisentraut
16eadc4695 Fix redundant error messages in client tools
A few client tools duplicate error messages already provided by libpq.

Discussion: https://www.postgresql.org/message-id/flat/3e937641-88a1-e697-612e-99bba4b8e5e4%40enterprisedb.com
2020-11-07 22:42:17 +01:00
Alvaro Herrera
8ad6a0c1bb Plug memory leak in index_get_partition
The list of indexes was being leaked when asked for an index that
doesn't have an index partition in the table partition.  Not a common
case admittedly --and in most cases where it occurs, caller throws an
error anyway-- but worth fixing for cleanliness and in case any
third-party code is calling this function.

While at it, remove use of lfirst_oid() to obtain a value we already
have.

Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/20201105203606.GF22691@telsasoft.com
2020-11-06 22:52:15 -03:00
Tomas Vondra
8149e9f9a0 Properly detoast data in brin_form_tuple
brin_form_tuple failed to consider the values may be toasted, inserting
the toast pointer into the index. This may easily result in index
corruption, as the toast data may be deleted and cleaned up by vacuum.
The cleanup however does not care about indexes, leaving invalid toast
pointers behind, which triggers errors like this:

  ERROR:  missing chunk number 0 for toast value 16433 in pg_toast_16426

A less severe consequence is inconsistent failures due to the index row
being too large, depending on whether brin_form_tuple operated on plain
or toasted version of the row. For example

    CREATE TABLE t (val TEXT);
    INSERT INTO t VALUES ('... long value ...');
    CREATE INDEX idx ON t USING brin (val);

would likely succeed, as the row would likely include toast pointer.
Switching the order of INSERT and CREATE INDEX would likely fail:

    ERROR:  index row size 8712 exceeds maximum 8152 for index "idx"

because this happens before the row values are toasted.

The bug exists since PostgreSQL 9.5 where BRIN indexes were introduced.
So backpatch all the way back.

Author: Tomas Vondra
Reviewed-by: Alvaro Herrera
Backpatch-through: 9.5
Discussion: https://postgr.es/m/20201001184133.oq5uq75sb45pu3aw@development
Discussion: https://postgr.es/m/20201104010544.zexj52mlldagzowv%40development
2020-11-07 00:40:40 +01:00
Tom Lane
f07811009f Revert "Accept relations of any kind in LOCK TABLE".
Revert 59ab4ac32, as well as the followup fix 33862cb9c, in all
branches.  We need to think a bit harder about what the behavior
of LOCK TABLE on views should be, and there's no time for that
before next week's releases.  We'll take another crack at this
later.

Discussion: https://postgr.es/m/16703-e348f58aab3cf6cc@postgresql.org
2020-11-06 16:17:57 -05:00
Tom Lane
f4fa4a8213 Revert "pg_dump: Lock all relations, not just plain tables".
Revert 403a3d91c, as well as the followup fix 7f4235032, in all
branches.  We need to think a bit harder about what the behavior
of LOCK TABLE on views should be, and there's no time for that
before next week's releases.  We'll take another crack at this
later.

Discussion: https://postgr.es/m/16703-e348f58aab3cf6cc@postgresql.org
2020-11-06 15:48:21 -05:00
Tom Lane
0bdf1ef3d5 Don't throw an error for LOCK TABLE on a self-referential view.
LOCK TABLE has complained about "infinite recursion" when applied
to a self-referential view, ever since we made it recurse into views
in v11.  However, that breaks pg_dump's new assumption that it's
okay to lock every relation.  There doesn't seem to be any good
reason to throw an error: if we just abandon the recursion, we've
still satisfied the requirement of locking every referenced relation.

Per bug #16703 from Andrew Bille (via Alexander Lakhin).

Discussion: https://postgr.es/m/16703-e348f58aab3cf6cc@postgresql.org
2020-11-05 11:44:32 -05:00
Peter Eisentraut
ea90879381 Enable hash partitioning of text arrays
hash_array_extended() needs to pass PG_GET_COLLATION() to the hash
function of the element type.  Otherwise, the hash function of a
collation-aware data type such as text will error out, since the
introduction of nondeterministic collation made hash functions require
a collation, too.

The consequence of this is that before this change, hash partitioning
using an array over text in the partition key would not work.

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://www.postgresql.org/message-id/flat/32c1fdae-95c6-5dc6-058a-a90330a3b621%40enterprisedb.com
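A minimal sketch of the case this enables (table name hypothetical):

    -- before this fix, hashing the text[] partition key errored out
    CREATE TABLE tagged (tags text[], payload text) PARTITION BY HASH (tags);
    CREATE TABLE tagged_p0 PARTITION OF tagged
        FOR VALUES WITH (MODULUS 2, REMAINDER 0);
    CREATE TABLE tagged_p1 PARTITION OF tagged
        FOR VALUES WITH (MODULUS 2, REMAINDER 1);
    INSERT INTO tagged VALUES (ARRAY['a', 'b'], 'example row');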
2020-11-04 12:46:52 +01:00
Tom Lane
55416b26a9 Guard against core dump from uninitialized subplan.
If the planner erroneously puts a non-parallel-safe SubPlan into
a parallelized portion of the query tree, nodeSubplan.c will fail
in the worker processes because it finds a null in es_subplanstates,
which it's unable to cope with.  It seems worth a test-and-elog to
make that an error case rather than a core dump case.

This probably should have been included in commit 16ebab688, which
was responsible for allowing nulls to appear in es_subplanstates
to begin with.  So, back-patch to v10 where that came in.

Discussion: https://postgr.es/m/924226.1604422326@sss.pgh.pa.us
2020-11-03 16:16:36 -05:00
Tom Lane
136f87ea57 Allow users with BYPASSRLS to alter their own passwords.
The intention in commit 491c029db was to require superuserness to
change the BYPASSRLS property, but the actual effect of the coding
in AlterRole() was to require superuserness to change anything at all
about a BYPASSRLS role.  Other properties of a BYPASSRLS role should
be changeable under the same rules as for a normal role, though.

Fix that, and also take care of some documentation omissions related
to BYPASSRLS and REPLICATION role properties.

Tom Lane and Stephen Frost, per bug report from Wolfgang Walther.
Back-patch to all supported branches.

Discussion: https://postgr.es/m/a5548a9f-89ee-3167-129d-162b5985fcf8@technowledgy.de
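A minimal sketch (hypothetical role) of what now works without superuser:

    CREATE ROLE report_reader LOGIN BYPASSRLS PASSWORD 'old-password';
    -- connected as report_reader: changing one's own password no longer
    -- demands superuser just because the role has BYPASSRLS
    ALTER ROLE report_reader PASSWORD 'new-password';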
2020-11-03 15:41:32 -05:00
Tom Lane
d3befe9b98 Fix unportable use of getnameinfo() in pg_hba_file_rules view.
fill_hba_line() thought it could get away with passing sizeof(struct
sockaddr_storage) rather than the actual addrlen previously returned
by getaddrinfo().  While that appears to work on many platforms,
it does not work on FreeBSD 11: you get back a failure, which leads
to the view showing NULL for the address and netmask columns in all
rows.  The POSIX spec for getnameinfo() is pretty clearly on
FreeBSD's side here: you should pass the actual address length.
So it seems plausible that there are other platforms where this
coding also fails, and we just hadn't noticed.

Also, IMO the fact that getnameinfo() failure leads to a NULL output
is pretty bogus in itself.  Our pg_getnameinfo_all() wrapper is
careful to emit "???" on failure, and we should use that in such
cases.  NULL should only be emitted in rows that don't have IP
addresses.

Per bug #16695 from Peter Vandivier.  Back-patch to v10 where this
code was added.

Discussion: https://postgr.es/m/16695-a665558e2f630be7@postgresql.org
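For reference, the view in question; on affected platforms every row
previously showed NULL in the address and netmask columns:

    SELECT line_number, type, database, user_name,
           address, netmask, error
    FROM pg_hba_file_rules;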
2020-11-02 21:11:50 -05:00
Michael Paquier
a8795445bc Extend PageIsVerified() to handle more custom options
This is useful for checks of relation pages without having to load the
pages into the shared buffers, and two cases can make use of that: page
verification in base backups and the online, lock-safe, flavor.

Compatibility is kept with past versions using a routine that calls the
new extended routine with the set of options compatible with the
original version.  Contrary to d401c576, a macro cannot be used as there
may be external code relying on the presence of the original routine.

This is applied down to 11, where this will be used by a follow-up
commit addressing a set of issues with page verification in base
backups.

Extracted from a larger patch by the same author.

Author: Anastasia Lubennikova
Reviewed-by: Michael Paquier, Julien Rouhaud
Discussion: https://postgr.es/m/608f3476-0598-2514-2c03-e05c7d2b0cbd@postgrespro.ru
Backpatch-through: 11
2020-11-02 10:41:30 +09:00
Tom Lane
7d72fd9e6d Avoid null pointer dereference if error result lacks SQLSTATE.
Although error results received from the backend should always have
a SQLSTATE field, ones generated by libpq won't, making this code
vulnerable to a crash after, say, untimely loss of connection.
Noted by Coverity.

Oversight in commit 403a3d91c.  Back-patch to 9.5, as that was.
2020-11-01 11:26:31 -05:00
Michael Paquier
41a033b505 Preserve index data in pg_statistic across REINDEX CONCURRENTLY
Statistics associated with an index got lost after running REINDEX
CONCURRENTLY, while the non-concurrent case preserves these correctly.
The concurrent and non-concurrent operations need to be consistent for
the end-user, and missing statistics would force users to wait for a new
analyze to happen, which could take some time depending on the activity
of the existing autovacuum workers.  This issue is fixed by copying any
existing entries in pg_statistic associated to the old index to the new
one.  Note that this copy is already done with the data of the index in
the stats collector.

Reported-by: Fabrízio de Royes Mello
Author: Michael Paquier, Fabrízio de Royes Mello
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAFcNs+qpFPmiHd1oTXvcPdvAHicJDA9qBUSujgAhUMJyUMb+SA@mail.gmail.com
Backpatch-through: 12
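A sketch of checking the preserved entries; pg_statistic only carries
per-index rows for expression indexes, and the object names here are
hypothetical:

    CREATE INDEX docs_lower_title_idx ON docs (lower(title));
    ANALYZE docs;
    REINDEX INDEX CONCURRENTLY docs_lower_title_idx;
    -- the expression-index statistics now survive the concurrent rebuild
    SELECT starelid::regclass, staattnum
    FROM pg_statistic
    WHERE starelid = 'docs_lower_title_idx'::regclass;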
2020-11-01 21:24:15 +09:00
Noah Misch
741b84e9f7 Reproduce debug_query_string==NULL on parallel workers.
Certain background workers initiate parallel queries while
debug_query_string==NULL, at which point they attempted strlen(NULL) and
died to SIGSEGV.  Older debug_query_string observers allow NULL, so do
likewise in these newer ones.  Back-patch to v11, where commit
7de4a1bcc5 introduced the first of these.

Discussion: https://postgr.es/m/20201014022636.GA1962668@rfd.leadboat.com
2020-10-31 08:44:13 -07:00
Tom Lane
25b587f03a Stabilize timetz test across DST transitions.
The timetz test cases I added in commit a9632830b were unintentionally
sensitive to whether or not DST is active in the PST8PDT time zone.
Thus, they'll start failing this coming weekend, as reported by
Bernhard M. Wiedemann in bug #16689.  Fortunately, DST-awareness is
not significant to the purpose of these test cases, so we can just
force them all to PDT (DST hours) to preserve stability of the
results.

Back-patch to v10, as the prior patch was.

Discussion: https://postgr.es/m/16689-57701daa23b377bf@postgresql.org
2020-10-29 15:28:35 -04:00
Tom Lane
cb0982ba97 Use mode "r" for popen() in psql's evaluate_backtick().
In almost all other places, we use plain "r" or "w" mode in popen()
calls (the exceptions being for COPY data).  This one has been
overlooked (possibly because it's buried in a ".l" flex file?),
but it's using PG_BINARY_R.

Kensuke Okamura complained in bug #16688 that we fail to strip \r
when stripping the trailing newline from a backtick result string.
That's true enough, but we'd also fail to convert embedded \r\n
cleanly, which also seems undesirable.  Fixing the popen() mode
seems like the best way to deal with this.

It's been like this for a long time, so back-patch to all supported
branches.

Discussion: https://postgr.es/m/16688-c649c7b69cd7e6f8@postgresql.org
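For reference, the psql feature involved; with a text-mode popen(),
trailing and embedded \r\n in the command output are now handled cleanly
(the command is illustrative):

    \set build_date `date`
    \echo :build_date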
2020-10-28 14:35:53 -04:00
Tom Lane
43330cdd40 Calculate extraUpdatedCols in query rewriter, not parser.
It's unsafe to do this at parse time because addition of generated
columns to a table would not invalidate stored rules containing
UPDATEs on the table ... but there might now be dependent generated
columns that were not there when the rule was made.  This also fixes
an oversight that rewriteTargetView failed to update extraUpdatedCols
when transforming an UPDATE on an updatable view.  (Since the new
calculation is downstream of that, rewriteTargetView doesn't actually
need to do anything; but before, there was a demonstrable bug there.)

In v13 and HEAD, this leads to easily-visible bugs because (since
commit c6679e4fc) we won't recalculate generated columns that aren't
listed in extraUpdatedCols.  In v12 this bitmap is mostly just used
for trigger-firing decisions, so you'd only notice a problem if a
trigger cared whether a generated column had been updated.

I'd complained about this back in May, but then forgot about it
until bug #16671 from Michael Paul Killian revived the issue.

Back-patch to v12 where this field was introduced.  If existing
stored rules contain any extraUpdatedCols values, they'll be
ignored because the rewriter will overwrite them, so the bug will
be fixed even for existing rules.  (But note that if someone were
to update to 13.1 or 12.5, store some rules with UPDATEs on tables
having generated columns, and then downgrade to a prior minor version,
they might observe issues similar to what this patch fixes.  That
seems unlikely enough to not be worth going to a lot of effort to fix.)

Discussion: https://postgr.es/m/10206.1588964727@sss.pgh.pa.us
Discussion: https://postgr.es/m/16671-2fa55851859fb166@postgresql.org
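A minimal sketch (hypothetical objects) of the updatable-view symptom in
v12/v13:

    CREATE TABLE measurements (
        raw     int,
        doubled int GENERATED ALWAYS AS (raw * 2) STORED
    );
    CREATE VIEW measurements_v AS SELECT raw, doubled FROM measurements;
    INSERT INTO measurements (raw) VALUES (1);
    -- before the fix, extraUpdatedCols could be stale here and "doubled"
    -- might not be recalculated
    UPDATE measurements_v SET raw = 10;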
2020-10-28 13:47:02 -04:00
Tom Lane
93f726c04f Fix use-after-free bug with event triggers and ALTER TABLE.
EventTriggerAlterTableEnd neglected to make sure that it built its
output list in the right context.  In simple cases this was masked
because the function is called in PortalContext which will be
sufficiently long-lived anyway; but that doesn't make it not a bug.
Commit ced138e8c fixed this in HEAD and v13, but mistakenly chose
not to back-patch further.  Back-patch the same code change all
the way (I didn't bother with the test case though, as it would
prove nothing in pre-v13 branches).

Per report from Arseny Sher.
Original fix by Jehan-Guillaume de Rorthais.

Discussion: https://postgr.es/m/877drcyprb.fsf@ars-thinkpad
Discussion: https://postgr.es/m/20200902193715.6e0269d4@firost
2020-10-27 15:37:13 -04:00
Bruce Momjian
197014baa4 Makefile comment: remove reference to tools/thread/thread_test
You can't compile thread_test alone anymore, and the location moved too.

Reported-by: Tom Lane

Discussion: https://postgr.es/m/1062278.1603819969@sss.pgh.pa.us

Backpatch-through: 9.5
2020-10-27 14:00:48 -04:00
Alvaro Herrera
9f829aeb36 pg_dump: Lock all relations, not just plain tables
Now that LOCK TABLE can take any relation type, acquire lock on all
relations that are to be dumped.  This prevents schema changes or
deadlock errors that could cause a dump to fail after expending much
effort.  The server is tested to have the capability and the feature
disabled if it doesn't, so that a patched pg_dump doesn't fail when
connecting to an unpatched server.

Backpatch to 9.5.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Wells Oliver <wells.oliver@gmail.com>
Discussion: https://postgr.es/m/20201021200659.GA32358@alvherre.pgsql
2020-10-27 14:31:37 -03:00
Alvaro Herrera
7ffead21a4 Accept relations of any kind in LOCK TABLE
The restriction that only tables and views can be locked by LOCK TABLE
is quite arbitrary, since the underlying mechanism can lock any relation
type.  Drop the restriction so that programs such as pg_dump can lock
all relations they're interested in, preventing schema changes that
could cause a dump to fail after expending much effort.

Backpatch to 9.5.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Wells Oliver <wells.oliver@gmail.com>
Discussion: https://postgr.es/m/20201021200659.GA32358@alvherre.pgsql
2020-10-27 13:49:19 -03:00
Tom Lane
de78c10072 Fix corner case for a BEFORE ROW UPDATE trigger returning OLD.
If the old row has any "missing" attributes that are supposed to
be retrieved from an associated tuple descriptor, the wrong things
happened because the trigger result is shoved directly into an
executor slot that lacks the missing-attribute data.  Notably,
CHECK-constraint verification would incorrectly see those columns
as NULL, and so would RETURNING-list evaluation.

Band-aid around this by forcibly expanding the tuple before passing
it to the trigger function.  (IMO it was a fundamental misdesign to
put the missing-attribute data into tuple constraints, which so
much of the system considers to be optional.  But we're probably
stuck with that now, and will have to continue to apply band-aids
as we find other places with similar issues.)

Back-patch to v12.  v11 would also have the issue, except that
commit 920311ab1 already applied a similar band-aid.  That forced
expansion in more cases than seem really necessary, though, so
this isn't a directly equivalent fix.

Amit Langote, with some cosmetic changes by me

Discussion: https://postgr.es/m/16644-5da7ef98a7ac4545@postgresql.org
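A minimal sketch (hypothetical objects) of the missing-attribute case: a
column added with a non-NULL default after rows existed, plus a BEFORE
UPDATE trigger returning OLD:

    CREATE TABLE events (id int);
    INSERT INTO events VALUES (1);
    ALTER TABLE events ADD COLUMN severity int DEFAULT 42;  -- "missing" attribute
    CREATE FUNCTION keep_old() RETURNS trigger LANGUAGE plpgsql
        AS $$ BEGIN RETURN OLD; END $$;
    CREATE TRIGGER events_keep_old BEFORE UPDATE ON events
        FOR EACH ROW EXECUTE FUNCTION keep_old();
    -- before the fix, RETURNING (and CHECK constraints) could see severity
    -- as NULL instead of 42 for the pre-existing row
    UPDATE events SET id = 2 RETURNING severity;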
2020-10-25 13:57:46 -04:00
David Rowley
2a15eeeddf Fix incorrect parameter name in a function header comment
Author: Zhijie Hou
Discussion: https://postgr.es/m/14cd74ea00204cc8a7ea5d738ac82cd1@G08CNEXMBPEKD05.g08.fujitsu.local
Backpatch-through: 12, where the mistake was introduced
2020-10-25 22:41:29 +13:00
Tom Lane
bdc79ddd10 Fix ancient bug in ecpg's pthread_once() emulation for Windows.
We must not set the "done" flag until after we've executed the
initialization function.  Otherwise, other threads can fall through
the initial unlocked test before initialization is really complete.

This has been seen to cause rare failures of ecpg's thread/descriptor
test, and it could presumably cause other sorts of misbehavior in
threaded ECPG-using applications, since ecpglib relies on
pthread_once() in several places.

Diagnosis and patch by me, based on investigation by Alexander Lakhin.
Back-patch to all supported branches (the bug dates to 2007).

Discussion: https://postgr.es/m/16685-d6cd241872c101d3@postgresql.org
2020-10-24 13:12:26 -04:00
Tom Lane
78ccf7f42b Update time zone data files to tzdata release 2020d.
DST law changes in Palestine, with a whopping 120 hours' notice.
Also some historical corrections for Palestine.
2020-10-22 21:24:07 -04:00
Tom Lane
f56c42e503 Sync our copy of the timezone library with IANA release tzcode2020d.
There's no functional change at all here, but I'm curious to see
whether this change successfully shuts up Coverity's warning about
a useless strcmp(), which appeared with the previous update.

Discussion: http://mm.icann.org/pipermail/tz/2020-October/029370.html
2020-10-22 21:16:04 -04:00
Tom Lane
f656517ecd Fix connection string handling in psql's \connect command.
psql's \connect claims to be able to re-use previous connection
parameters, but in fact it only re-uses the database name, user name,
host name (and possibly hostaddr, depending on version), and port.
This is problematic for assorted use cases.  Notably, pg_dump[all]
emits "\connect databasename" commands which we would like to have
re-use all other parameters.  If such a script is loaded in a psql run
that initially had "-d connstring" with some non-default parameters,
those other parameters would be lost, potentially causing connection
failure.  (Thus, this is the same kind of bug addressed in commits
a45bc8a4f and 8e5793ab6, although the details are much different.)

To fix, redesign do_connect() so that it pulls out all properties
of the old PGconn using PQconninfo(), and then replaces individual
properties in that array.  In the case where we don't wish to re-use
anything, get libpq's default settings using PQconndefaults() and
replace entries in that, so that we don't need different code paths
for the two cases.

This does result in an additional behavioral change for cases where
the original connection parameters allowed multiple hosts, say
"psql -h host1,host2", and the \connect request allows re-use of the
host setting.  Because the previous coding relied on PQhost(), it
would only permit reconnection to the same host originally selected.
Although one can think of scenarios where that's a good thing, there
are others where it is not.  Moreover, that behavior doesn't seem to
meet the principle of least surprise, nor was it documented; nor is
it even clear it was intended, since that coding long pre-dates the
addition of multi-host support to libpq.  Hence, this patch is content
to drop it and re-use the host list as given.

Per Peter Eisentraut's comments on bug #16604.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
2020-10-21 16:19:01 -04:00
Alvaro Herrera
a818000f6a Use fast checkpoint in PostgresNode::backup()
Should cause tests to be a bit faster
2020-10-21 14:37:25 -03:00
Alvaro Herrera
0e6b6f8c71 Fix ALTER TABLE .. ENABLE/DISABLE TRIGGER recursion
More precisely, correctly handle the ONLY flag indicating not to
recurse.  This was implemented in 86f575948c by recursing in
trigger.c, but that's the wrong place; use ATSimpleRecursion instead,
which behaves properly.  However, because legacy inheritance has never
recursed in that situation, make sure to do that only for new-style
partitioning.

I noticed this problem while testing a fix for another bug in the
vicinity.

This has been wrong all along, so backpatch to 11.

Discussion: https://postgr.es/m/20201016235925.GA29829@alvherre.pgsql
2020-10-20 19:22:09 -03:00
Peter Eisentraut
bd0677bb8a Avoid invalid alloc size error in shm_mq
In shm_mq_receive(), a huge payload could trigger an unjustified
"invalid memory alloc request size" error due to the way the buffer
size is increased.

Add error checks (documenting the upper limit) and avoid the error by
limiting the allocation size to MaxAllocSize.

Author: Markus Wanner <markus.wanner@2ndquadrant.com>
Discussion: https://www.postgresql.org/message-id/flat/3bb363e7-ac04-0ac4-9fe8-db1148755bfa%402ndquadrant.com
2020-10-20 14:31:52 +02:00
Tom Lane
c6d0b9b166 Fix connection string handling in src/bin/scripts/ programs.
When told to process all databases, clusterdb, reindexdb, and vacuumdb
would reconnect by replacing their --maintenance-db parameter with the
name of the target database.  If that parameter is a connstring (which
has been allowed for a long time, though we failed to document that
before this patch), we'd lose any other options it might specify, for
example SSL or GSS parameters, possibly resulting in failure to connect.
Thus, this is the same bug as commit a45bc8a4f fixed in pg_dump and
pg_restore.  We can fix it in the same way, by using libpq's rules for
handling multiple "dbname" parameters to add the target database name
separately.  I chose to apply the same refactoring approach as in that
patch, with a struct to handle the command line parameters that need to
be passed through to connectDatabase.  (Maybe someday we can unify the
very similar functions here and in pg_dump/pg_restore.)

Per Peter Eisentraut's comments on bug #16604.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
2020-10-19 19:03:46 -04:00
Tom Lane
407580aabb In libpq for Windows, call WSAStartup once and WSACleanup not at all.
The Windows documentation insists that every WSAStartup call should
have a matching WSACleanup call.  However, if that ever had actual
relevance, it wasn't in this century.  Every remotely-modern Windows
kernel is capable of cleaning up when a process exits without doing
that, and must be so to avoid resource leaks in case of a process
crash.  Moreover, Postgres backends have done WSAStartup without
WSACleanup since commit 4cdf51e64 in 2004, and we've never seen any
indication of a problem with that.

libpq's habit of doing WSAStartup during connection start and
WSACleanup during shutdown is also rather inefficient, since a
series of non-overlapping connection requests leads to repeated,
quite expensive DLL unload/reload cycles.  We document a workaround
for that (having the application call WSAStartup for itself), but
that's just a kluge.  It's also worth noting that it's far from
uncommon for applications to exit without doing PQfinish, and
we've not heard reports of trouble from that either.

However, the real reason for acting on this is that recent
experiments by Alexander Lakhin show that calling WSACleanup
during PQfinish is triggering the symptom we occasionally see
that a process using libpq fails to emit expected stdio output.

Therefore, let's change libpq so that it calls WSAStartup only
once per process, during the first connection attempt, and never
calls WSACleanup at all.

While at it, get rid of the only other WSACleanup call in our code
tree, in pg_dump/parallel.c; that presumably is equally useless.

Back-patch of HEAD commit 7d00a6b2d.

Discussion: https://postgr.es/m/ac976d8c-03df-d6b8-025c-15a2de8d9af1@postgrespro.ru
2020-10-19 11:23:51 -04:00
David Rowley
77ca44b764 Relax some asserts in merge join costing code
In the planner, it was possible, given an extreme enough case containing a
large number of joins, for the number of estimated rows to become infinite.
This could cause problems in initial_cost_mergejoin() where we perform
some calculations based on those row estimates.

A problem case, presented by Onder Kalaci showed an Assert failure from
an Assert checking outerstartsel <= outerendsel.  In his test case this
was effectively NaN <= Inf, which is false.  The NaN outerstartsel came
from multiplying the infinite outer_path_rows by 0.0.

In master, this problem was fixed by a90c950fc, however, that fix was too
invasive for the backbranches.  Here we just relax the Asserts to allow
them to pass.  The worst that appears to happen from this is that we show
NaN cost values and infinite row estimates in EXPLAIN.  add_path() would
have had a hard time doing anything useful with such costs, but that does
not really matter because, if the row estimates were even close to accurate,
such a plan would not complete this side of the heat death of the universe.

Reported-by: Onder Kalaci
Backpatch: 9.5 to 13
Discussion: https://postgr.es/m/DM6PR21MB1211FF360183BCA901B27F04D80B0@DM6PR21MB1211.namprd21.prod.outlook.com
2020-10-20 00:05:41 +13:00
Tom Lane
b39c94097d Update time zone data files to tzdata release 2020c.
DST law changes in Morocco, Canadian Yukon, Fiji, Macquarie Island,
Casey Station (Antarctica).  Historical corrections for France,
Hungary, Monaco.
2020-10-16 21:53:50 -04:00
Tom Lane
3d13a83079 Sync our copy of the timezone library with IANA release tzcode2020c.
This changes zic's default output format from "-b fat" to "-b slim".
We were already using "slim" in v13/HEAD, so those branches drop
the explicit -b switch in the Makefiles.  Instead, add an explicit
"-b fat" in v12 and before, so that we don't change the output file
format in those branches.  (This is perhaps excessively conservative,
but we decided not to do so in a12079109, and I'll stick with that.)

Other non-cosmetic changes are to drop support for zic's long-obsolete
"-y" switch, and to ensure that strftime() does not change errno
unless it fails.

As usual with tzcode changes, back-patch to all supported branches.
2020-10-16 21:40:16 -04:00
Andres Freund
c835c7ffe2 llvmjit: Work around bug in LLVM 3.9 causing crashes after 72559438f9.
Unfortunately in LLVM 3.9 LLVMGetAttributeCountAtIndex(func, index)
crashes when called with an index that has 0 attributes. Since there's
no way to work around this in the C API, add a small C++ wrapper doing
so.

The only reason this didn't fail before 72559438f9 is that there
always are function attributes...

Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20201016001254.w2nfj7gd74jmb5in@alap3.anarazel.de
Backpatch: 11-, like 72559438f9
2020-10-15 18:17:01 -07:00
Bruce Momjian
0ab7ca98a1 pg_upgrade: remove C99 compiler req. from commit 3c0471b5fd
Commit 3c0471b5fd required support for inline variable definitions, which
is not a project requirement.

RELEASE NOTE AUTHOR:  the author of commit 3c0471b5fd
(pg_upgrade/tablespaces) was Justin Pryzby, not me.

Reported-by: Andres Freund

Discussion: https://postgr.es/m/20201016001959.h24fkywfubkv2pc5@alap3.anarazel.de

Backpatch-through: 9.5
2020-10-15 20:37:20 -04:00
Bruce Momjian
a106236d82 pg_upgrade: generate check error for left-over new tablespace
Previously, if pg_upgrade failed, and the user recreated the cluster but
did not remove the new cluster tablespace directory, a later pg_upgrade
would fail since the new tablespace directory would already exist.
This adds error reporting for that case during the check phase.

Reported-by: Justin Pryzby

Discussion: https://postgr.es/m/20200925005531.GJ23631@telsasoft.com

Backpatch-through: 9.5
2020-10-15 19:33:43 -04:00
Andres Freund
c8a2bb0f1a llvmjit: Also copy parameter / return value attributes from template functions.
Previously we only copied the function attributes. That caused problems at
least on s390x: Because we didn't copy the 'zeroext' attribute for
ExecAggTransReparent()'s *IsNull parameters, expressions invoking it didn't
ensure that the upper bytes of the registers were zeroed. In the - relatively
rare - cases where not, ExecAggTransReparent() wrongly ended up in the
newValueIsNull branch due to the register not being zero. Subsequently causing
a crash.

It's quite possible that this would cause problems on other platforms, and in
other places than just ExecAggTransReparent() on s390x.

Thanks to Christoph (and the Debian project) for providing me with access to a
s390x machine, allowing me to debug this.

Reported-By: Christoph Berg
Author: Andres Freund
Discussion: https://postgr.es/m/20201015083246.kie5726xerdt3ael@alap3.anarazel.de
Backpatch: 11-, where JIT was added
2020-10-15 14:31:32 -07:00
Bruce Momjian
f915453c5a doc: improve description of synchronous_commit modes
Previously it wasn't clear exactly what each of the synchronous_commit
modes accomplished.  This clarifies that, and adds a table describing it.
Only backpatched through 9.6 since 9.5 doesn't have all the options.

Reported-by: kghost0@gmail.com

Discussion: https://postgr.es/m/159741195522.14321.13812604195366728976@wrigleys.postgresql.org

Backpatch-through: 9.6
2020-10-15 15:15:29 -04:00
Tom Lane
8b53dbada4 In the postmaster, rely on the signal infrastructure to block signals.
POSIX sigaction(2) can be told to block a set of signals while a
signal handler executes.  Make use of that instead of manually
blocking and unblocking signals in the postmaster's signal handlers.
This should save a few cycles, but more importantly it prevents
recursive invocation of signal handlers when many signals arrive in
close succession.  (Assuming that the platform's signal infrastructure
is designed to avoid consuming stack space in that case, but this is
demonstrably true at least on Linux.)  The existing code has been seen
to recurse to the point of stack overflow, either in the postmaster
or in a forked-off child.

Back-patch of commit 9abb2bfc0.  At the time, we'd only seen excess
postmaster stack consumption in the buildfarm; but we now have a
user report of it, and that commit has aged enough to have a fair
amount of confidence that it doesn't break anything.

This still doesn't change anything about the way that it works on
Windows.  Perhaps someone else would like to fix that?

Per bug #16673 from David Geier.  Back-patch to 9.6.  Although
the problem exists in principle before that, we've only seen it
actually materialize in connection with heavy use of parallel
workers, so it doesn't seem necessary to do anything in 9.5;
and the relevant code is different there, too.

Discussion: https://postgr.es/m/16673-d278c604f8e34ec0@postgresql.org
Discussion: https://postgr.es/m/14878.1570820201@sss.pgh.pa.us
2020-10-15 12:50:57 -04:00
Tom Lane
c7e2364a5f Paper over regression failures in infinite_recurse() on PPC64 Linux.
Our infinite_recurse() test to verify sane stack-overrun behavior
is affected by a bug of the Linux kernel on PPC64: it will get SIGSEGV
if it receives a signal when the stack depth is (a) over 1MB and
(b) within a few kB of filling the current physical stack allocation.
See https://bugzilla.kernel.org/show_bug.cgi?id=205183.

Since this test is a bit time-consuming and we run it in parallel with
test scripts that do a lot of DDL, it can be expected to get an sinval
catchup interrupt at some point, leading to failure if the timing is
wrong.  This has caused more than 100 buildfarm failures over the
past year or so.

While a fix exists for the kernel bug, it might be years before that
propagates into all production kernels, particularly in some of the
older distros we have in the buildfarm.  For now, let's just back off
and not run this test on Linux PPC64; that loses nothing in test
coverage so far as our own code is concerned.

To do that, split this test into a new script infinite_recurse.sql
and skip the test when the platform name is powerpc64...-linux-gnu.

Back-patch to v12.  Branches before that have not been seen to get
this failure.  No doubt that's because the "errors" test was not
run in parallel with other tests before commit 798070ec0, greatly
reducing the odds of an sinval catchup being necessary.

I also back-patched 3c8553547 into v12, just so the new regression
script would look the same in all branches having it.

Discussion: https://postgr.es/m/3479046.1602607848@sss.pgh.pa.us
Discussion: https://postgr.es/m/20190723162703.GM22387%40telsasoft.com
2020-10-13 17:44:56 -04:00
Tom Lane
12945874eb Fix GiST buffering build to work when there are included columns.
gistRelocateBuildBuffersOnSplit did not get the memo about which
attribute count to use.  This could lead to a crash if there were
included columns and buffering build was chosen.  (Because there
are random page-split decisions elsewhere in GiST index build,
the crashes are not entirely deterministic.)

Back-patch to v12 where GiST gained support for included columns.

Pavel Borisov

Discussion: https://postgr.es/m/CALT9ZEECCV5m7wvxg46PC-7x-EybUmnpupBGhSFMoAAay+r6HQ@mail.gmail.com
2020-10-12 18:01:34 -04:00
Tom Lane
f35c117700 Fix memory leak when guc.c decides a setting can't be applied now.
The prohibitValueChange code paths in set_config_option(), which
are executed whenever we re-read a PGC_POSTMASTER variable from
postgresql.conf, neglected to free anything before exiting.  Thus
we'd leak the proposed new value of a PGC_STRING variable, as noted
by BoChen in bug #16666.  For all variable types, if the check hook
creates an "extra" chunk, we'd also leak that.

These are malloc not palloc chunks, so there is no mechanism for
recovering the leaks before process exit.  Fortunately, the values
are typically not very large, meaning you'd have to go through an
awful lot of SIGHUP configuration-reload cycles to make the leakage
amount to anything.  Still, for a long-lived postmaster process it
could potentially be a problem.

Oversight in commit 2594cf0e8.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/16666-2c41a4eec61b03e1@postgresql.org
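For context, a sketch of how the leak accumulated (the edited setting is
illustrative):

    -- after editing a PGC_POSTMASTER setting such as shared_buffers in
    -- postgresql.conf, a reload cannot apply it; previously every such
    -- reload cycle leaked the proposed value in the postmaster
    SELECT pg_reload_conf();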
2020-10-12 13:31:24 -04:00
Tom Lane
8b231d9753 Fix optimization hazard in gram.y's makeOrderedSetArgs(), redux.
It appears that commit cf63c641c, which intended to prevent
misoptimization of the result-building step in makeOrderedSetArgs,
didn't go far enough: buildfarm member hornet's version of xlc
is now optimizing back to the old, broken behavior in which
list_length(directargs) is fetched only after list_concat() has
changed that value.  I'm not entirely convinced whether that's
an undeniable compiler bug or whether it can be justified by a
sufficiently aggressive interpretation of C sequence points.
So let's just change the code to make it harder to misinterpret.

Back-patch to all supported versions, just in case.

Discussion: https://postgr.es/m/1830491.1601944935@sss.pgh.pa.us
2020-10-07 18:42:12 -04:00
Tom Lane
d8c2a21998 Rethink recent fix for pg_dump's handling of extension config tables.
Commit 3eb3d3e78 was a few bricks shy of a load: while it correctly
set the table's "interesting" flag when deciding to dump the data of
an extension config table, it was not correct to clear that flag
if we concluded we shouldn't dump the data.  This led to the crash
reported in bug #16655, because in fact we'll traverse dumpTableSchema
anyway for all extension tables (to see if they have user-added
seclabels or RLS policies).

The right thing to do is to force "interesting" true in makeTableDataInfo,
and otherwise leave the flag alone.  (Doing it there is more future-proof
in case additional calls are added, and it also avoids setting the flag
unnecessarily if that function decides the table is non-dumpable.)

This investigation also showed that while only the --inserts code path
had an obvious failure in the case considered by 3eb3d3e78, the COPY
code path also has a problem with not having loaded table subsidiary
data.  That causes fmtCopyColumnList to silently return an empty string
instead of the correct column list.  That accidentally mostly works,
which perhaps is why we didn't notice this before.  It would only fail
if the restore column order is different from the dump column order,
which only happens in weird inheritance cases, so it's not surprising
nobody had hit the case with an extension config table.  Nonetheless,
it's a bug, and it goes a long way back, not just to v12 where the
--inserts code path started to have a problem with this.

In hopes of catching such cases a bit sooner in future, add some
Asserts that "interesting" has been set in both dumpTableData and
dumpTableSchema.  Adjust the test case added by 3eb3d3e78 so that it
checks the COPY rather than INSERT form of that bug, allowing it to
detect the longer-standing symptom.

Per bug #16655 from Cameron Daniel.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/16655-5c92d6b3a9438137@postgresql.org
Discussion: https://postgr.es/m/18048b44-3414-b983-8c7c-9165b177900d@2ndQuadrant.com
2020-10-07 12:51:04 -04:00
Bruce Momjian
77971bc4f2 pg_upgrade: remove pre-8.4 code and >= 8.4 check
We only support upgrading from >= 8.4 so no need for this code or tests.

Reported-by: Magnus Hagander

Discussion: https://postgr.es/m/CABUevEx-D0PNVe00tkeQRGennZQwDtBJn=493MJt-x6sppbUxA@mail.gmail.com

Backpatch-through: 9.5
2020-10-06 14:31:21 -04:00
Bruce Momjian
dc3953421b pg_upgrade: change major version comparisons to use <=, not <
This makes checking for older major versions more consistent.

Backpatch-through: 9.5
2020-10-06 12:12:09 -04:00
Tom Lane
3d69efc4f0 Build EC members for child join rels in the right memory context.
This patch prevents crashes or wrong plans when partition-wise joins
are considered during GEQO planning, as a consequence of the
EquivalenceClass data structures becoming corrupt after a GEQO
context reset.

A remaining problem is that successive GEQO cycles will make multiple
copies of the required EC members, since add_child_join_rel_equivalences
has no idea that such members might exist already.  For now we'll just
live with that.  The lack of field complaints of crashes suggests that
this is a mighty little-used situation.

Back-patch to v12 where this code was introduced.

Discussion: https://postgr.es/m/1683100.1601860653@sss.pgh.pa.us
2020-10-06 11:43:54 -04:00
Tom Lane
1f94d76856 Fix two latent(?) bugs in equivclass.c.
get_eclass_for_sort_expr() computes expr_relids and nullable_relids
early on, even though they won't be needed unless we make a new
EquivalenceClass, which we often don't.  Aside from the probably-minor
inefficiency, there's a memory management problem: these bitmapsets will
be built in the caller's context, leading to dangling pointers if that
is shorter-lived than root->planner_cxt.  This would be a live bug if
get_eclass_for_sort_expr() could be called with create_it = true during
GEQO join planning.  So far as I can find, the core code never does
that, but it's hard to be sure that no extensions do, especially since
the comments make it clear that that's supposed to be a supported case.
Fix by not computing these values until we've switched into planner_cxt
to build the new EquivalenceClass.

generate_join_implied_equalities() uses inner_rel->relids to look up
relevant eclasses, but it ought to be using nominal_inner_relids.
This is presently harmless because a child RelOptInfo will always have
exactly the same eclass_indexes as its topmost parent; but that might
not be true forever, and anyway it makes the code confusing.

The first of these is old (introduced by me in f3b3b8d5b), so back-patch
to all supported branches.  The second only dates to v13, but we might
as well back-patch it to keep the code looking similar across branches.

Discussion: https://postgr.es/m/1508010.1601832581@sss.pgh.pa.us
2020-10-05 13:15:39 -04:00
Tom Lane
5856ed1099 Improve stability of identity.sql regression test.
I noticed while trying to run the regression tests under a low
geqo_threshold that one query on information_schema.columns had
unstable (as in, variable from one run to the next) output order.
This is pretty unsurprising given the complexity of the underlying
plan.  Interestingly, of this test's three nigh-identical queries on
information_schema.columns, the other two already had ORDER BY clauses
guaranteeing stable output.  Let's make this one look the same.

Back-patch to v10 where this test was added.  We've not heard field
reports of the test failing, but this experience shows that it can
happen when testing under even slightly unusual conditions.
2020-10-04 20:45:36 -04:00
Tom Lane
6854c45b36 Put back explicit setting of replication values within TAP tests.
Commit 151c0c5f7 neglected the possibility that a TEMP_CONFIG file
would explicitly set max_wal_senders=0; as indeed buildfarm member
thorntail does, so that it can test wal_level=minimal in other test
suites.  Hence, rather than assuming that max_wal_senders=10 will
prevail if we say nothing, set it explicitly.

Set max_replication_slots=10 explicitly too, just to be safe.

Back-patch to v10, like the previous patch.

Discussion: https://postgr.es/m/723911.1601417626@sss.pgh.pa.us
2020-10-01 10:59:20 -04:00
Heikki Linnakangas
fb35798a88 Fix incorrect assertion on number of array dimensions.
This has been wrong ever since the support for multi-dimensional
arrays as PL/python function arguments and return values was
introduced in commit 94aceed317.

Backpatch-through: 10
Discussion: https://www.postgresql.org/message-id/61647b8e-961c-0362-d5d3-c8a18f4a7ec6%40iki.fi
2020-10-01 11:50:38 +03:00
Alvaro Herrera
f669ba7bdb
Reword partitioning error message
The error message about columns in the primary key not including all of
the partition key was unclear; reword it.

Backpatch all the way to pg11, where it appeared.

Reported-by: Nagaraj Raj <nagaraj.sf@yahoo.com>
Discussion: https://postgr.es/m/64062533.78364.1601415362244@mail.yahoo.com
2020-09-30 18:25:23 -03:00
Tom Lane
c5232dca8d Fix handling of BC years in to_date/to_timestamp.
Previously, a conversion such as
	to_date('-44-02-01','YYYY-MM-DD')
would result in '0045-02-01 BC', as the code attempted to interpret
the negative year as BC, but failed to apply the correction needed
for our internal handling of BC years.  Fix the off-by-one problem.

Also, arrange for the combination of a negative year and an
explicit "BC" marker to cancel out and produce AD.  This is how
the negative-century case works, so it seems sane to do likewise.

Continue to read "year 0000" as 1 BC.  Oracle would throw an error,
but we've accepted that case for a long time so I'm hesitant to
change it in a back-patch.

Per bug #16419 from Saeed Hubaishan.  Back-patch to all supported
branches.

Dar Alathar-Yemen and Tom Lane

Discussion: https://postgr.es/m/16419-d8d9db0a7553f01b@postgresql.org
2020-09-30 15:40:23 -04:00
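
A quick SQL illustration of the change described above (a sketch; the
second example assumes the cancel-out rule just described):

    SELECT to_date('-44-02-01', 'YYYY-MM-DD');
    -- previously returned 0045-02-01 BC (off by one); with the fix, 0044-02-01 BC

    SELECT to_date('-44-02-01 BC', 'YYYY-MM-DD BC');
    -- the negative year and the explicit BC marker cancel out, giving 0044-02-01 (AD)
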
Tom Lane
09b29ca82b Remove obsolete replication settings within TAP tests.
PostgresNode.pm set "max_wal_senders = 5" for replication testing,
but this seems to be slightly too low for our current test suite.
Slower buildfarm members frequently report "number of requested standby
connections exceeds max_wal_senders" failures, due to old walsenders
not exiting instantaneously.  Usually, the test does not fail overall
because of automatic walreceiver restart, but sometimes the failure
becomes visible; and in any case such retries slow down the test.

That value came in with commit 89ac7004d, but was soon obsoleted by
f6d6d2920, which raised the built-in default from zero to 10; so that
PostgresNode.pm is actually setting it to less than the conservative
built-in default.  That seems pretty pointless, so let's remove the
special setting and let the default prevail, in hopes of making
the TAP tests more robust.

Likewise, the setting "max_replication_slots = 5" is obsolete and
can be removed.

While here, reverse-engineer a comment about why we're choosing
less-than-default values for some other settings.

(Note: before v12, max_wal_senders counted against max_connections,
so that setting also needs adjusting.)

Back-patch to v10 where the subscription tests were added.
It's likely that the older branches aren't pushing the boundaries
of max_wal_senders, but I'm disinclined to spend time trying to
figure out exactly when it started to be a problem.

Discussion: https://postgr.es/m/723911.1601417626@sss.pgh.pa.us
2020-09-29 20:02:58 -04:00
Tom Lane
c1e044bb30 Fix memory leak in plpgsql's CALL processing.
When executing a CALL or DO in a non-atomic context (i.e., not inside
a function or query), plpgsql creates a new plan each time through,
as a rather hacky solution to some resource management issues.  But
it failed to free this plan until exit of the current procedure or DO
block, resulting in serious memory bloat in procedures that called
other procedures many times.  Fix by remembering to free the plan,
and by being more honest about restoring the previous state (otherwise,
recursive procedure calls have a problem).

There was also a smaller leak associated with recalculation of the
"target" list of output variables.  Fix that by using the statement-
lifespan context to hold non-permanent values.

Back-patch to v11 where procedures were introduced.

Pavel Stehule and Tom Lane

Discussion: https://postgr.es/m/CAFj8pRDiiU1dqym+_P4_GuTWm76knJu7z9opWayBJTC0nQGUUA@mail.gmail.com
2020-09-29 11:18:31 -04:00
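
A minimal sketch of the kind of workload that showed the bloat
(hypothetical procedure names): each nested CALL in a non-atomic
context builds a new plan, which formerly was not freed until the
outer procedure exited.

    CREATE PROCEDURE leaf() LANGUAGE plpgsql AS $$
    BEGIN
      NULL;   -- does nothing
    END;
    $$;

    CREATE PROCEDURE driver(n int) LANGUAGE plpgsql AS $$
    BEGIN
      FOR i IN 1..n LOOP
        CALL leaf();   -- builds a plan each time; formerly kept until driver() exited
      END LOOP;
    END;
    $$;

    CALL driver(100000);   -- run at top level, i.e. in a non-atomic context
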
Fujii Masao
4d342b9d41 Archive timeline history files in standby if archive_mode is set to "always".
Previously, the standby server didn't archive timeline history files
streamed from the primary even when archive_mode was set to "always",
even though it did archive the streamed WAL files.  This could cause PITR to
fail because there was no required timeline history file in the archive.
The cause of this issue was that walreceiver didn't mark those files as
ready for archiving.

This commit makes walreceiver mark those streamed timeline history
files as ready for archiving if archive_mode=always. Then the archiver
process archives the marked timeline history files.

Back-patch to all supported versions.

Reported-by: Grigory Smolkin
Author: Grigory Smolkin, Fujii Masao
Reviewed-by: David Zhang, Anastasia Lubennikova
Discussion: https://postgr.es/m/54b059d4-2b48-13a4-6f43-95a087c92367@postgrespro.ru
2020-09-29 16:24:20 +09:00
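
For reference, the affected setup is a standby configured along these
lines (a sketch; the archive_command is only a placeholder):

    -- on the standby
    ALTER SYSTEM SET archive_mode = 'always';                    -- needs a restart
    ALTER SYSTEM SET archive_command = 'cp %p /mnt/archive/%f';  -- placeholder
    -- with the fix, streamed timeline history files are marked ready for
    -- archiving just like streamed WAL segments, so PITR can find them
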
Michael Paquier
8aa4496dd3 Fix progress reporting of REINDEX CONCURRENTLY
This addresses a couple of issues with that progress reporting:
- Report the correct parent relation for the index actually being
rebuilt or validated.  Previously, the command status kept pointing at
the last index created, which is incorrect when working on a table
that has more than one index.
- Use the correct phase when waiting before the drop of the old
indexes.  Previously, this was reported with the same status as when
waiting before the old indexes are marked as dead.

Author: Matthias van de Meent, Michael Paquier
Discussion: https://postgr.es/m/CAEze2WhqFgcwe1_tv=sFYhLWV2AdpfukumotJ6JNcAOQs3jufg@mail.gmail.com
Backpatch-through: 12
2020-09-29 14:16:18 +09:00
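
The reporting fixed here is what pg_stat_progress_create_index exposes;
a monitoring query along these lines (a sketch) now shows the correct
parent table and phase for the index actually being processed:

    SELECT pid,
           relid::regclass       AS table_being_reindexed,
           index_relid::regclass AS index_being_rebuilt,
           phase
    FROM pg_stat_progress_create_index;
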
Tom Lane
29f20db85e Assign collations in partition bound expressions.
Failure to do this can result in errors during evaluation of
the bound expression, as illustrated by the new regression test.

Back-patch to v12 where the ability for partition bounds to be
expressions was added.

Discussion: https://postgr.es/m/CAJV4CdrZ5mKuaEsRSbLf2URQ3h6iMtKD=hik8MaF5WwdmC9uZw@mail.gmail.com
2020-09-28 14:12:38 -04:00
Tom Lane
bda32733ce Revise RelationBuildRowSecurity() to avoid memory leaks.
This function leaked some memory while loading qual clauses for
an RLS policy.  While ordinarily negligible, that could build up
in some repeated-reload cases, as reported by Konstantin Knizhnik.
We can improve matters by borrowing the coding long used in
RelationBuildRuleLock: build stringToNode's result directly in
the target context, and remember to explicitly pfree the
input string.

This patch by no means completely guarantees zero leaks within
this function, since we have no real guarantee that the catalog-
reading subroutines it calls don't leak anything.  However,
practical tests suggest that this is enough to resolve the issue.
In any case, any remaining leaks are similar to those risked by
RelationBuildRuleLock and other relcache-loading subroutines.
If we need to fix them, we should adopt a more global approach
such as that used by the RECOVER_RELATION_BUILD_MEMORY hack.

While here, let's remove the need for an expensive PG_TRY block by
using MemoryContextSetParent to reparent an initially-short-lived
context for the RLS data.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/21356c12-8917-8249-b35f-1c447231922b@postgrespro.ru
2020-09-26 16:04:06 -04:00
Tom Lane
fb93f784fc Fix handling of -d "connection string" in pg_dump/pg_restore.
Parallel pg_dump failed if its -d parameter was a connection string
containing any essential information other than host, port, or username.
The same was true for pg_restore with --create.

The reason is that these scenarios failed to preserve the connection
string from the command line; the code felt free to replace that with
just the database name when reconnecting from a pg_dump parallel worker
or after creating the target database.  By chance, parallel pg_restore
did not suffer this defect, as long as you didn't say --create.

In practice it seems that the error would be obvious only if the
connstring included essential, non-default SSL or GSS parameters.
This may explain why it took us so long to notice.  (It also makes
it very difficult to craft a regression test case illustrating the
problem, since the test would fail in builds without those options.)

Fix by refactoring so that ConnectDatabase always receives all the
relevant options directly from the command line, rather than
reconstructed values.  Inject a different database name, when necessary,
by relying on libpq's rules for handling multiple "dbname" parameters.

While here, let's get rid of the essentially duplicate _connectDB
function, as well as some obsolete nearby cruft.

Per bug #16604 from Zsolt Ero.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/16604-933f4b8791227b15@postgresql.org
2020-09-24 18:19:38 -04:00
Thomas Munro
7664cc869a Fix missing fsync of SLRU directories.
Harmonize behavior by moving responsibility for fsyncing directories down
into slru.c.  In 10 and later, only the multixact directories were
missed (see commit 1b02be21), and in older branches all SLRUs were
missed.

Back-patch to all supported releases.

Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA%2BhUKGLtsTUOScnNoSMZ-2ZLv%2BwGh01J6kAo_DM8mTRq1sKdSQ%40mail.gmail.com
2020-09-24 10:30:47 +12:00
Tom Lane
5d0c806583 Avoid possible dangling-pointer access in tsearch_readline_callback.
tsearch_readline() saves the string pointer it returns to the caller
for possible use in the associated error context callback.  However,
the caller will usually pfree that string sometime before it next
calls tsearch_readline(), so that there is a window where an ereport
will try to print an already-freed string.

The built-in users of tsearch_readline() happen to all do that pfree
at the bottoms of their loops, so that the window is effectively
empty for them.  However, this is not documented as a requirement,
and contrib/dict_xsyn doesn't do it like that, so it seems likely
that third-party dictionaries might have live bugs here.

The practical consequences of this seem pretty limited in any case,
since production builds wouldn't clobber the freed string immediately,
besides which you'd not expect syntax errors in dictionary files
being used in production.  Still, it's clearly a bug waiting to bite
somebody.

Fix by pstrdup'ing the string to be saved for the error callback,
and then pfree'ing it next time through.  It's been like this for
a long time, so back-patch to all supported branches.

Discussion: https://postgr.es/m/48A4FA71-524E-41B9-953A-FD04EF36E2E7@yesql.se
2020-09-23 11:36:13 -04:00
Peter Eisentraut
11071da39e Fix whitespace
2020-09-20 14:41:28 +02:00
Tom Lane
1af91dc032 Use factorial rather than numeric_fac in create_operator.sql.
These two SQL functions are aliases for the same C function, so this
change has no semantic effect.  However, because we dropped the
numeric_fac alias in HEAD (commit 76f412ab3), operator definitions
based on that one don't port forward, causing problems for cross-version
upgrade tests based on the regression database.

Patch all active back branches to dodge the problem.

Discussion: https://postgr.es/m/449144.1600439950@sss.pgh.pa.us
2020-09-18 18:03:44 -04:00
Amit Kapila
4bc63462d9 Update parallel BTree scan state when the scan keys can't be satisfied.
For a parallel btree scan to work with an array of scan keys, it should reach
the BTPARALLEL_DONE state once for every distinct combination of array keys.
This is required to ensure that the parallel workers don't try to seize
blocks at the same time for different scan keys.  We failed to update this
state when we discovered that the scan keys can't be satisfied.

Author: James Hunter
Reviewed-by: Amit Kapila
Tested-by: Justin Pryzby
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/4248CABC-25E3-4809-B4D0-128E1BAABC3C@amazon.com
2020-09-17 15:38:47 +05:30
Tom Lane
511690ec5d Avoid unnecessary recursion to child tables in ALTER TABLE SET NOT NULL.
If a partitioned table's column is already marked NOT NULL, there is
no need to examine its partitions, because we can rely on previous
DDL to have enforced that the child columns are NOT NULL as well.
(Unfortunately, the same cannot be said for traditional inheritance,
so for now we have to restrict the optimization to partitioned tables.)
Hence, we may skip recursing to child tables in this situation.

The reason this case is worth worrying about is that when pg_dump dumps
a partitioned table having a primary key, it will include the requisite
NOT NULL markings in the CREATE TABLE commands, and then add the
primary key as a separate step.  The primary key addition generates a
SET NOT NULL as a subcommand, just to be sure.  So the situation where
a SET NOT NULL is redundant does arise in the real world.

Skipping the recursion does more than just save a few cycles: it means
that a command such as "ALTER TABLE ONLY partition_parent ADD PRIMARY
KEY" will take locks only on the partition parent table, not on the
partitions.  It turns out that parallel pg_restore is effectively
assuming that that's true, and has little choice but to do so because
the dependencies listed for such a TOC entry don't include the
partitions.  pg_restore could thus issue this ALTER while data restores
on the partitions are still in progress.  Taking unnecessary locks on
the partitions not only hurts concurrency, but can lead to actual
deadlock failures, as reported by Domagoj Smoljanovic.

(A contributing factor in the deadlock is that TRUNCATE on a child
partition wants a non-exclusive lock on the parent.  This seems
likewise unnecessary, but the fix for it is more invasive so we
won't consider back-patching it.  Fortunately, getting rid of one
of these two poor behaviors is enough to remove the deadlock.)

Although support for partitioned primary keys came in with v11,
this patch is dependent on the SET NOT NULL refactoring done by
commit f4a3fdfbd, so we can only patch back to v12.

Patch by me; thanks to Alvaro Herrera and Amit Langote for review.

Discussion: https://postgr.es/m/VI1PR03MB31670CA1BD9625C3A8C5DD05EB230@VI1PR03MB3167.eurprd03.prod.outlook.com
2020-09-16 13:38:26 -04:00
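
A sketch of the pg_dump-style restore sequence that benefits
(hypothetical names): the SET NOT NULL implied by the parent's ADD
PRIMARY KEY is already satisfied, so it no longer recurses to, or
locks, the partitions.

    CREATE TABLE parent (id int NOT NULL, val text) PARTITION BY RANGE (id);
    CREATE TABLE parent_p1 PARTITION OF parent FOR VALUES FROM (1) TO (1000);

    ALTER TABLE ONLY parent ADD CONSTRAINT parent_pkey PRIMARY KEY (id);
    ALTER TABLE ONLY parent_p1 ADD CONSTRAINT parent_p1_pkey PRIMARY KEY (id);
    ALTER INDEX parent_pkey ATTACH PARTITION parent_p1_pkey;
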
Tom Lane
8580631ff7 Fix bogus cache-invalidation logic in logical replication worker.
The code recorded cache invalidation events by zeroing the "localreloid"
field of affected cache entries.  However, it's possible for an inval
event to occur even while we have the entry open and locked.  So an
ill-timed inval could result in "cache lookup failed for relation 0"
errors, if the worker's code tried to use the cleared field.  We can
fix that by creating a separate bool field to record whether the entry
needs to be revalidated.  (In the back branches, cram the bool into
what had been padding space, to avoid an ABI break in the somewhat
unlikely event that any extension is looking at this struct.)

Also, rearrange the logic in logicalrep_rel_open so that it
does the right thing in cases where table_open would fail.
We should retry the lookup by name in that case, but we didn't.

The real-world impact of this is probably small.  In the first place,
the error conditions are very low probability, and in the second place,
the worker would just exit and get restarted.  We only noticed because
in a CLOBBER_CACHE_ALWAYS build, the failure can occur repeatedly,
preventing the worker from making progress.  Nonetheless, it's clearly
a bug, and it impedes a useful type of testing; so back-patch to v10
where this code was introduced.

Discussion: https://postgr.es/m/1032727.1600096803@sss.pgh.pa.us
2020-09-16 12:07:31 -04:00
Noah Misch
af2d09fa1c Fix interpolation in test name.
A pre-commit review had reported the problem, but the fix reached only
v10 and earlier.  Back-patch to v11.

Discussion: https://postgr.es/m/20200423.140546.1055476118690602079.horikyota.ntt@gmail.com
2020-09-13 23:29:55 -07:00
Tom Lane
1371a1e416 Use the properly transformed RangeVar for expandTableLikeClause().
transformCreateStmt() adjusts the transformed statement's RangeVar
to specify the target schema explicitly, for the express reason
of making sure that auxiliary statements derived by parse
transformation operate on the right table.  But the refactoring
I did in commit 502898192 got this wrong and passed the untransformed
RangeVar to expandTableLikeClause().  This could lead to assertion
failures or weird misbehavior if the wrong table was accessed.

Per report from Alexander Lakhin.  Like the previous patch, back-patch
to all supported branches.

Discussion: https://postgr.es/m/05051f9d-b32b-cb35-6735-0e9f2ab86b5f@gmail.com
2020-09-13 12:51:21 -04:00
Tom Lane
4e10c0c8a6 Use _exit(2) for SIGQUIT during ProcessStartupPacket, too.
Bring the signal handling for startup-packet collection into line
with the policy established in commits bedadc732 and 8e19a8264,
namely don't risk running atexit callbacks when handling SIGQUIT.

Ideally, we'd not do so for SIGTERM or timeout interrupts either,
but that change seems a bit too risky for the back branches.
For now, just improve the comments in this area to describe the risk.

Also relocate where BackendInitialize re-disables these interrupts,
to minimize the code span where they're active.  This doesn't buy
a whole lot of safety, but it can't hurt.

In passing, rename startup_die() to remove confusion about whether
it is for the startup process.

Like the previous commits, back-patch to all supported branches.

Discussion: https://postgr.es/m/1850884.1599601164@sss.pgh.pa.us
2020-09-10 12:06:26 -04:00
Michael Paquier
25ff747721 doc: Fix some grammar and inconsistencies
Some comments are also fixed while at it.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20200818171702.GK17022@telsasoft.com
Backpatch-through: 9.6
2020-09-10 15:50:46 +09:00
Tom Lane
d038c6c631 Make archiver's SIGQUIT handler exit via _exit().
Commit 8e19a8264 changed the SIGQUIT handlers of almost all server
processes not to run atexit callbacks.  The archiver process was
skipped, perhaps because it's not connected to shared memory; but
it's just as true here that running atexit callbacks in a signal
handler is unsafe.  So let's make it work like the rest.

In HEAD and v13, we can use the common SignalHandlerForCrashExit
handler.  Before that, just tweak pgarch_exit to use _exit(2)
explicitly.

Like the previous commit, back-patch to all supported branches.

Kyotaro Horiguchi, back-patching by me

Discussion: https://postgr.es/m/1850884.1599601164@sss.pgh.pa.us
2020-09-09 15:32:34 -04:00
Alvaro Herrera
ef1e1250e7
Check default partition constraints while descending
The partition tuple routing code assumes that the partition chosen while
descending the partition hierarchy is always the correct one.  This is
true except when the partition is the default partition and another
partition has been added concurrently: the partition constraint changes
and we don't recheck it.  This can lead to tuples mistakenly being added
to the default partition that should have been rejected.

Fix by rechecking the default partition constraint while descending the
hierarchy.

An isolation test based on the reproduction steps described by Hao Wu
(with tweaks for extra coverage) is included.

Backpatch to 12, where this bug came in with 898e5e3290.

Reported by: Hao Wu <hawu@vmware.com>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CA+HiwqFqBmcSSap4sFnCBUEL_VfOMmEKaQ3gwUhyfa4c7J_-nA@mail.gmail.com
Discussion: https://postgr.es/m/DM5PR0501MB3910E97A9EDFB4C775CF3D75A42F0@DM5PR0501MB3910.namprd05.prod.outlook.com
2020-09-08 19:35:15 -03:00
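
A sketch of the race (hypothetical names, two sessions): routing in
session 1 may pick the default partition from a partition descriptor
that predates session 2's new partition, so the default partition's
now-tighter constraint has to be rechecked.

    CREATE TABLE meas (ts date, v int) PARTITION BY RANGE (ts);
    CREATE TABLE meas_default PARTITION OF meas DEFAULT;

    -- session 1: this row currently routes to meas_default
    INSERT INTO meas VALUES ('2020-01-15', 1);

    -- session 2, concurrently: claim January 2020 for a new partition
    CREATE TABLE meas_2020_01 PARTITION OF meas
      FOR VALUES FROM ('2020-01-01') TO ('2020-02-01');

    -- before the fix, session 1's row could still land in meas_default despite
    -- the changed constraint; now the constraint is rechecked during descent
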
Tom Lane
f45dd3fed9 Fix misleading error message about inconsistent moving-aggregate types.
We reported the wrong types when complaining that an aggregate's
moving-aggregate implementation is inconsistent with its regular
implementation.

This was wrong since the feature was introduced, so back-patch
to all supported branches.

Jeff Janes

Discussion: https://postgr.es/m/CAMkU=1x808LH=LPhZp9mNSP0Xd1xDqEd+XeGcvEe48dfE6xV=A@mail.gmail.com
2020-09-06 12:55:37 -04:00
Tom Lane
a7bcf391f3 Remove useless lstat() call in pg_rewind.
This is duplicative of an lstat that was just done by the calling
function (traverse_datadir), besides which we weren't really doing
anything with the results.  There's not much point in checking to
see if someone removed the file since the previous lstat, since the
FILE_ACTION_REMOVE code would have to deal with missing-file cases
anyway.  Moreover, the "exists = false" assignment was a dead store;
nothing was done with that value later.

A syscall saved is a syscall earned, so back-patch to 9.5
where this code was introduced.

Discussion: https://postgr.es/m/1221796.1599329320@sss.pgh.pa.us
2020-09-06 11:50:41 -04:00
Tom Lane
fc37c6f610 Make new authentication test case more robust.
I happened to notice that the new test case I added in b55b4dad9
falls over if one runs "make check" repeatedly; though not in branches
after v10.  That's because it was assuming that tmp_check/pgpass
wouldn't exist already.  However, it's only been since v11 that the
Makefiles forcibly remove all of tmp_check/ before starting a TAP run.
This fix to unlink the file is therefore strictly necessary only in
v10 ... but it seems wisest to do it across the board, rather than
let the test rely on external logic to get the conditions right.
2020-09-04 21:01:59 -04:00
Tom Lane
9b47ee6e7c Fix over-eager ping'ing in logical replication receiver.
Commit 3f60f690f only partially fixed the broken-status-tracking
issue in LogicalRepApplyLoop: we need ping_sent to have the same
lifetime as last_recv_timestamp.  The effects are much less serious
than what that commit fixed, though.  AFAICS this would just lead to
extra ping requests being sent, once per second until the sender
responds.  Still, it's a bug, so backpatch to v10 as before.

Discussion: https://postgr.es/m/959627.1599248476@sss.pgh.pa.us
2020-09-04 20:20:05 -04:00
Andrew Dunstan
616110eac3 Collect attribute data on extension owned tables being dumped
If this data is not collected, pg_dump segfaults if asked for column
inserts.

Fix by Fabrízio de Royes Mello

Backpatch to release 12 where the bug was introduced.
2020-09-04 13:55:03 -04:00
Bruce Momjian
56f42cf906 C comment: correct use of 64-"byte" cache line size
Reported-by: Kelly Min

Discussion: https://postgr.es/m/CAPSbxatOiQO90LYpSC3+svAU9-sHgDfEP4oFhcEUt_X=DqFA9g@mail.gmail.com

Backpatch-through: 9.5
2020-09-04 13:27:52 -04:00
Tom Lane
aa4eeb38f3 Fix rare deadlock failure in create_am regression test.
The "DROP ACCESS METHOD gist2" test will require locking the index
to be dropped and then its table; while most ordinary operations
lock a table first then its index.  While no concurrent test scripts
should be touching fast_emp4000, autovacuum might chance to be
processing that table when the DROP runs, resulting in a deadlock
failure.  This is pretty rare but we see it in the buildfarm from
time to time.

To fix, acquire a lock on fast_emp4000 before issuing the DROP.

Since the point of the exercise is mostly to prevent buildfarm
failures, back-patch to 9.6 where this test was introduced.

Discussion: https://postgr.es/m/839004.1599185607@sss.pgh.pa.us
2020-09-04 12:40:28 -04:00
Tom Lane
82dd373f2c Avoid lockup of a parallel worker when reporting a long error message.
Because sigsetjmp() will restore the initial state with signals blocked,
the code path in bgworker.c for reporting an error and exiting would
execute that way.  Usually this is fairly harmless; but if a parallel
worker had an error message exceeding the shared-memory communication
buffer size (16K) it would lock up, because it would wait for a
resume-sending signal from its parallel leader which it would never
detect.

To fix, just unblock signals at the appropriate point.

This can be shown to fail back to 9.6.  The lack of parallel query
infrastructure makes it difficult to provide a simple test case for
9.5; but I'm pretty sure the issue exists in some form there as well,
so apply the code change there too.

Vignesh C, reviewed by Bharath Rupireddy, Robert Haas, and myself

Discussion: https://postgr.es/m/CALDaNm1d1hHPZUg3xU4XjtWBOLCrA+-2cJcLpw-cePZ=GgDVfA@mail.gmail.com
2020-09-03 16:52:09 -04:00
Alvaro Herrera
fcc4275681
Fix typo in comment
Introduced by 8b08f7d4820f; backpatch to 11.

Discussion: https://postgr.es/m/20200812214918.GA30353@alvherre.pgsql
2020-09-01 20:43:23 -04:00
Alvaro Herrera
7067ba1b4b
Raise error on concurrent drop of partitioned index
We were already raising an error for DROP INDEX CONCURRENTLY on a
partitioned table, albeit a different and confusing one:
  ERROR:  DROP INDEX CONCURRENTLY must be first action in transaction

Change that to throw a more comprehensible error:
  ERROR:  cannot drop partitioned index "%s" concurrently

Michael Paquier authored the test case for indexes on temporary
partitioned tables.

Backpatch to 11, where indexes on partitioned tables were added.

Reported-by: Jan Mussler <jan.mussler@zalando.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/16594-d2956ca909585067@postgresql.org
2020-09-01 13:40:43 -04:00
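
For example (a sketch with hypothetical names):

    CREATE TABLE pt (a int) PARTITION BY RANGE (a);
    CREATE INDEX pt_a_idx ON pt (a);

    DROP INDEX CONCURRENTLY pt_a_idx;
    -- before: ERROR:  DROP INDEX CONCURRENTLY must be first action in transaction
    -- after:  ERROR:  cannot drop partitioned index "pt_a_idx" concurrently
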
Tom Lane
55aea0c706 Teach libpq to handle arbitrary-length lines in .pgpass files.
Historically there's been a hard-wired assumption here that no line of
a .pgpass file could be as long as NAMEDATALEN*5 bytes.  That's a bit
shaky to start off with, because (a) there's no reason to suppose that
host names fit in NAMEDATALEN, and (b) this figure fails to allow for
backslash escape characters.  However, it fails completely if someone
wants to use a very long password, and we're now hearing reports of
people wanting to use "security tokens" that can run up to several
hundred bytes.  Another angle is that the file is specified to allow
comment lines, but there's no reason to assume that long comment lines
aren't possible.

Rather than guessing at what might be a more suitable limit, let's
replace the fixed-size buffer with an expansible PQExpBuffer.  That
adds one malloc/free cycle to the typical use-case, but that's surely
pretty cheap relative to the I/O this code has to do.

Also, add TAP test cases to exercise this code, because there was no
test coverage before.

This reverts most of commit 2eb3bc588, as there's no longer a need for
a warning message about overlength .pgpass lines.  (I kept the explicit
check for comment lines, though.)

In HEAD and v13, this also fixes an oversight in 74a308cf5: there's not
much point in explicit_bzero'ing the line buffer if we only do so in two
of the three exit paths.

Back-patch to all supported branches, except that the test case only
goes back to v10 where src/test/authentication/ was added.

Discussion: https://postgr.es/m/4187382.1598909041@sss.pgh.pa.us
2020-09-01 13:14:44 -04:00
Bruce Momjian
b1f52817bc C comment: remove mention of use of t_hoff WAL structure member
Reported-by: Antonin Houska

Discussion: https://postgr.es/m/21643.1595353537@antos

Backpatch-through: 9.5
2020-08-31 17:51:31 -04:00
Tom Lane
ff3c16d1e7 Fix code for re-finding scan position in a multicolumn GIN index.
collectMatchBitmap() needs to re-find the index tuple it was previously
looking at, after transiently dropping lock on the index page it's on.
The tuple should still exist and be at its prior position or somewhere
to the right of that, since ginvacuum never removes tuples but
concurrent insertions could add one.  However, there was a thinko in
that logic, to the effect of expecting any inserted tuples to have the
same index "attnum" as what we'd been scanning.  Since there's no
physical separation of tuples with different attnums, it's not terribly
hard to devise scenarios where this fails, leading to transient "lost
saved point in index" errors.  (While I've duplicated this with manual
testing, it seems impossible to make a reproducible test case with our
available testing technology.)

Fix by just continuing the scan when the attnum doesn't match.

While here, improve the error message used if we do fail, so that it
matches the wording used in btree for a similar case.

collectMatchBitmap()'s posting-tree code path was previously not
exercised at all by our regression tests.  While I can't make
a regression test that exhibits the bug, I can at least improve
the code coverage here, so do that.  The test case I made for this
is an extension of one added by 4b754d6c1, so it only works in
HEAD and v13; didn't seem worth trying hard to back-patch it.

Per bug #16595 from Jesse Kinkead.  This has been broken since
multicolumn capability was added to GIN (commit 27cb66fdf),
so back-patch to all supported branches.

Discussion: https://postgr.es/m/16595-633118be8eef9ce2@postgresql.org
2020-08-27 17:36:13 -04:00
Tom Lane
6b701eaaa9 Avoid pushing quals down into sub-queries that have grouping sets.
The trouble with doing this is that an apparently-constant subquery
output column isn't really constant if it is a grouping column that
appears in only some of the grouping sets.  A qual using such a
column would be subject to incorrect const-folding after push-down,
as seen in bug #16585 from Paul Sivash.

To fix, just disable qual pushdown altogether if the sub-query has
nonempty groupingSets.  While we could imagine far less restrictive
solutions, there is not much point in working harder right now,
because subquery_planner() won't move HAVING clauses to WHERE within
such a subquery.  If the qual stays in HAVING it's not going to be
a lot more useful than if we'd kept it at the outer level.

Having said that, this restriction could be removed if we used a
parsetree representation that distinguished such outputs from actual
constants, which is something I hope to do in future.  Hence, make
the patch a minimal addition rather than integrating it more tightly
(e.g. by renumbering the existing items in subquery_is_pushdown_safe's
comment).

Back-patch to 9.5 where grouping sets were introduced.

Discussion: https://postgr.es/m/16585-9d8c340d23ade8c1@postgresql.org
2020-08-22 14:46:40 -04:00
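
A sketch of the query shape that went wrong (hypothetical table and
columns): the subquery output "label" is a constant expression, but it
is grouped in only one of the two grouping sets, so it comes out NULL
in the other set's rows and the outer qual must not be pushed down and
const-folded away.

    SELECT *
    FROM (
      SELECT d, 'grand total'::text AS label, sum(amount) AS total
      FROM sales
      GROUP BY GROUPING SETS ((d, label), (d))
    ) ss
    WHERE label = 'grand total';
    -- pushing the qual down turned it into 'grand total' = 'grand total', i.e.
    -- always true, so the (d)-only rows with label = NULL wrongly survived
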
Tom Lane
d9253df12e Fix handling of CREATE TABLE LIKE with inheritance.
If a CREATE TABLE command uses both LIKE and traditional inheritance,
Vars in CHECK constraints and expression indexes that are absorbed
from a LIKE parent table tended to get mis-numbered, resulting in
wrong answers and/or bizarre error messages (though probably not any
actual crashes, thanks to validation occurring in the executor).

In v12 and up, the same could happen to Vars in GENERATED expressions,
even in cases with no LIKE clause but multiple traditional-inheritance
parents.

The cause of the problem for LIKE is that parse_utilcmd.c supposed
it could renumber such Vars correctly during transformCreateStmt(),
which it cannot since we have not yet accounted for columns added via
inheritance.  Fix that by postponing processing of LIKE INCLUDING
CONSTRAINTS, DEFAULTS, GENERATED, INDEXES till after we've performed
DefineRelation().

The error with GENERATED and multiple inheritance is a simple oversight
in MergeAttributes(); it knows it has to renumber Vars in inherited
CHECK constraints, but forgot to apply the same processing to inherited
GENERATED expressions (a/k/a defaults).

Per bug #16272 from Tom Gottfried.  The non-GENERATED variants of the
issue are ancient, presumably dating right back to the addition of
CREATE TABLE LIKE; hence back-patch to all supported branches.

Discussion: https://postgr.es/m/16272-6e32da020e9a9381@postgresql.org
2020-08-21 15:00:43 -04:00
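
A sketch of the affected DDL shape (hypothetical names): the CHECK
constraint copied from the LIKE parent references column positions
that shift once the columns inherited from inh_parent are merged in
first, so before the fix it could end up pointing at the wrong column.

    CREATE TABLE inh_parent (a int, b int);
    CREATE TABLE like_parent (x int, y int CHECK (y > 0));

    -- inherited columns (a, b) come first, pushing x and y to positions 3 and 4
    CREATE TABLE child (LIKE like_parent INCLUDING CONSTRAINTS) INHERITS (inh_parent);
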
David Rowley
3248633579 Fix a few typos in JIT comments and README
Reviewed-by: Abhijit Menon-Sen
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAApHDvobgmCs6CohqhKTUf7D8vffoZXQTCBTERo9gbOeZmvLTw%40mail.gmail.com
Backpatch-through: 11, where JIT was added
2020-08-21 09:35:24 +12:00
Tom Lane
814a57065e Suppress unnecessary RelabelType nodes in yet more cases.
Commit a477bfc1d fixed eval_const_expressions() to ensure that it
didn't generate unnecessary RelabelType nodes, but I failed to notice
that some other places in the planner had the same issue.  Really
noplace in the planner should be using plain makeRelabelType(), for
fear of generating expressions that should be equal() to semantically
equivalent trees, but aren't.

An example is that because canonicalize_ec_expression() failed
to be careful about this, we could end up with an equivalence class
containing both a plain Const, and a Const-with-RelabelType
representing exactly the same value.  So far as I can tell this led to
no visible misbehavior, but we did waste a bunch of cycles generating
and evaluating "Const = Const-with-RelabelType" to prove such entries
are redundant.

Hence, move the support function added by a477bfc1d to where it can
be more generally useful, and use it in the places where planner code
previously used makeRelabelType.

Back-patch to v12, like the previous patch.  While I have no concrete
evidence of any real misbehavior here, it's certainly possible that
I overlooked a case where equivalent expressions that aren't equal()
could cause a user-visible problem.  In any case carrying extra
RelabelType nodes through planning to execution isn't very desirable.

Discussion: https://postgr.es/m/1311836.1597781384@sss.pgh.pa.us
2020-08-19 14:07:49 -04:00
Heikki Linnakangas
aecefffc3f Avoid non-constant format string argument to fprintf().
As Tom Lane pointed out, it could defeat the compiler's printf() format
string verification.

Backpatch to v12, like the patch that introduced it.

Discussion: https://www.postgresql.org/message-id/1069283.1597672779%40sss.pgh.pa.us
2020-08-18 13:13:28 +03:00
Alvaro Herrera
4f47c8e7d4
Disable autovacuum for BRIN test table
This should improve stability in the tests.

Per buildfarm member hyrax (CLOBBER_CACHE_ALWAYS) via Tom Lane.

Discussion: https://postgr.es/m/871534.1597503261@sss.pgh.pa.us
2020-08-17 16:20:06 -04:00
Heikki Linnakangas
9f44d71b06 Fix printing last progress report line in client programs.
A number of client programs have a "--progress" option that when printing
to a TTY, updates the current line by printing a '\r' and overwriting it.
After the last line, '\n' needs to be printed to move the cursor to the
next line. pg_basebackup and pgbench got this right, but pg_rewind and
pg_checksums were slightly wrong. pg_rewind printed the newline to stdout
instead of stderr, and pg_checksums printed the newline even when not
printing to a TTY. Fix them, and also add a 'finished' argument to
pg_basebackup's progress_report() function, to keep it consistent with
the other programs.

Backpatch to v12. pg_rewind's newline was broken with the logging changes
in commit cc8d415117 in v12, and pg_checksums was introduced in v12.

Discussion: https://www.postgresql.org/message-id/82b539e5-ae33-34b0-1aee-22b3379fd3eb@iki.fi
2020-08-17 09:33:10 +03:00
Noah Misch
06e50d8f7e Move new LOCKTAG_DATABASE_FROZEN_IDS to end of enum LockTagType.
Several PGXN modules reference LockTagType values; renumbering would
force a recompile of those modules.  Oversight in back-patch of today's
commit 566372b3d6.  Back-patch to released
branches, v12 through 9.5.

Reported by Tom Lane.

Discussion: https://postgr.es/m/921383.1597523945@sss.pgh.pa.us
2020-08-15 16:15:59 -07:00
Noah Misch
30e68a2abb Prevent concurrent SimpleLruTruncate() for any given SLRU.
The SimpleLruTruncate() header comment states the new coding rule.  To
achieve this, add locktype "frozenid" and two LWLocks.  This closes a
rare opportunity for data loss, which manifested as "apparent
wraparound" or "could not access status of transaction" errors.  Data
loss is more likely in pg_multixact, due to released branches' thin
margin between multiStopLimit and multiWrapLimit.  If a user's physical
replication primary logged ":  apparent wraparound" messages, the user
should rebuild standbys of that primary regardless of symptoms.  At less
risk is a cluster having emitted "not accepting commands" errors or
"must be vacuumed" warnings at some point.  One can test a cluster for
this data loss by running VACUUM FREEZE in every database.  Back-patch
to 9.5 (all supported versions).

Discussion: https://postgr.es/m/20190218073103.GA1434723@rfd.leadboat.com
2020-08-15 10:15:56 -07:00
Tom Lane
912fb290c5 Be more careful about the shape of hashable subplan clauses.
nodeSubplan.c expects that the testexpr for a hashable ANY SubPlan
has the form of one or more OpExprs whose LHS is an expression of the
outer query's, while the RHS is an expression over Params representing
output columns of the subquery.  However, the planner only went as far
as verifying that the clauses were all binary OpExprs.  This works
99.99% of the time, because the clauses have the right shape when
emitted by the parser --- but it's possible for function inlining to
break that, as reported by PegoraroF10.  To fix, teach the planner
to check that the LHS and RHS contain the right things, or more
accurately don't contain the wrong things.  Given that this has been
broken for years without anyone noticing, it seems sufficient to just
give up hashing when it happens, rather than go to the trouble of
commuting the clauses back again (which wouldn't necessarily work
anyway).

While poking at that, I also noticed that nodeSubplan.c had a baked-in
assumption that the number of hash clauses is identical to the number
of subquery output columns.  Again, that's fine as far as parser output
goes, but it's not hard to break it via function inlining.  There seems
little reason for that assumption though --- AFAICS, the only thing
it's buying us is not having to store the number of hash clauses
explicitly.  Adding code to the planner to reject such cases would take
more code than getting nodeSubplan.c to cope, so I fixed it that way.

This has been broken for as long as we've had hashable SubPlans,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/1549209182255-0.post@n3.nabble.com
2020-08-14 22:14:03 -04:00
Alvaro Herrera
3dadcb2b31
pg_dump: fix dependencies on FKs to partitioned tables
Parallel-restoring a foreign key that references a partitioned table
with several levels of partitions can fail:

pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 6684; 2606 29166 FK CONSTRAINT fk fk_a_fkey postgres
pg_restore: error: could not execute query: ERROR:  there is no unique constraint matching given keys for referenced table "pk"
Command was: ALTER TABLE fkpart3.fk
    ADD CONSTRAINT fk_a_fkey FOREIGN KEY (a) REFERENCES fkpart3.pk(a);

This happens in parallel restore mode because some index partitions
aren't yet attached to the topmost partitioned index that the FK uses,
and so the index is still invalid.  The current code marks the FK as
dependent on the first level of index-attach dump objects; the bug is
fixed by recursively marking the FK on their children.

Backpatch to 12, where FKs to partitioned tables were introduced.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/3170626.1594842723@sss.pgh.pa.us
Backpatch: 12-master
2020-08-14 17:33:31 -04:00
Tom Lane
42566a2500 Fix postmaster's behavior during smart shutdown.
Up to now, upon receipt of a SIGTERM ("smart shutdown" command), the
postmaster has immediately killed all "optional" background processes,
and subsequently refused to launch new ones while it's waiting for
foreground client processes to exit.  No doubt this seemed like an OK
policy at some point; but it's a pretty bad one now, because it makes
for a seriously degraded environment for the remaining clients:

* Parallel queries are killed, and new ones fail to launch. (And our
parallel-query infrastructure utterly fails to deal with the case
in a reasonable way --- it just hangs waiting for workers that are
not going to arrive.  There is more work needed in that area IMO.)

* Autovacuum ceases to function.  We can tolerate that for awhile,
but if bulk-update queries continue to run in the surviving client
sessions, there's eventually going to be a mess.  In the worst case
the system could reach a forced shutdown to prevent XID wraparound.

* The bgwriter and walwriter are also stopped immediately, likely
resulting in performance degradation.

Hence, let's rearrange things so that the only immediate change in
behavior is refusing to let in new normal connections.  Once the last
normal connection is gone, shut everything down as though we'd received
a "fast" shutdown.  To implement this, remove the PM_WAIT_BACKUP and
PM_WAIT_READONLY states, instead staying in PM_RUN or PM_HOT_STANDBY
while normal connections remain.  A subsidiary state variable tracks
whether or not we're letting in new connections in those states.

This also allows having just one copy of the logic for killing child
processes in smart and fast shutdown modes.  I moved that logic into
PostmasterStateMachine() by inventing a new state PM_STOP_BACKENDS.

Back-patch to 9.6 where parallel query was added.  In principle
this'd be a good idea in 9.5 as well, but the risk/reward ratio
is not as good there, since lack of autovacuum is not a problem
during typical uses of smart shutdown.

Per report from Bharath Rupireddy.

Patch by me, reviewed by Thomas Munro

Discussion: https://postgr.es/m/CALj2ACXAZ5vKxT9P7P89D87i3MDO9bfS+_bjMHgnWJs8uwUOOw@mail.gmail.com
2020-08-14 13:26:57 -04:00
Heikki Linnakangas
f81167adbf Fix typo in test comment.
2020-08-14 10:41:19 +03:00
Alvaro Herrera
1122a903e9
Handle new HOT chains in index-build table scans
When a table is scanned by heapam_index_build_range_scan (née
IndexBuildHeapScan) and the table lock being held allows concurrent data
changes, it is possible for new HOT chains to sprout in a page, chains that
were unknown when that page was first scanned.  This leads to an error such
as
  ERROR:  failed to find parent tuple for heap-only tuple at (X,Y) in table "tbl"
because the root tuple was not present when we first obtained the list
of the page's root tuples.  This can be fixed by re-obtaining the list
of root tuples, if we see that a heap-only tuple appears to point to a
non-existing root.

This was reported by Anastasia as occurring for BRIN summarization
(which exists since 9.5), but I think it could theoretically also happen
with CREATE INDEX CONCURRENTLY (much older) or REINDEX CONCURRENTLY
(very recent).  It seems a happy coincidence that BRIN forces us to
backpatch this all the way to 9.5.

Reported-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Diagnosed-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Co-authored-by: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/602d8487-f0b2-5486-0088-0f372b2549fa@postgrespro.ru
Backpatch: 9.5 - master
2020-08-13 17:33:49 -04:00
Alvaro Herrera
0426c75e7d
BRIN: Handle concurrent desummarization properly
If a page range is desummarized at just the right time concurrently with
an index walk, BRIN would raise an error indicating index corruption.
This is scary and unhelpful; silently returning that the page range is
not summarized is sufficient reaction.

This bug was introduced by commit 975ad4e602 as additional protection
against a bug whose actual fix was elsewhere.  Backpatch equally.

Reported-By: Anastasia Lubennikova <a.lubennikova@postgrespro.ru>
Diagnosed-By: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/2588667e-d07d-7e10-74e2-7e1e46194491@postgrespro.ru
Backpatch: 9.5 - master
2020-08-12 15:33:36 -04:00
Tom Lane
0ad348f38e Stamp 12.4.
2020-08-10 17:15:53 -04:00
Noah Misch
64a71062e0 Empty search_path in logical replication apply worker and walsender.
This is like CVE-2018-1058 commit
582edc369c.  Today, a malicious user of a
publisher or subscriber database can invoke arbitrary SQL functions
under an identity running replication, often a superuser.  This fix may
cause "does not exist" or "no schema has been selected to create in"
errors in a replication process.  After upgrading, consider watching
server logs for these errors.  Objects accruing schema qualification in
the wake of the earlier commit are unlikely to need further correction.
Back-patch to v10, which introduced logical replication.

Security: CVE-2020-14349
2020-08-10 09:22:58 -07:00
Noah Misch
d4d0ec9e79 Move connect.h from fe_utils to src/include/common.
Any libpq client can use the header.  Clients include backend components
postgres_fdw, dblink, and logical replication apply worker.  Back-patch
to v10, because another fix needs this.  In released branches, just copy
the header and keep the original.
2020-08-10 09:22:58 -07:00
Tom Lane
3ba9670847 Make contrib modules' installation scripts more secure.
Hostile objects located within the installation-time search_path could
capture references in an extension's installation or upgrade script.
If the extension is being installed with superuser privileges, this
opens the door to privilege escalation.  While such hazards have existed
all along, their urgency increases with the v13 "trusted extensions"
feature, because that lets a non-superuser control the installation path
for a superuser-privileged script.  Therefore, make a number of changes
to make such situations more secure:

* Tweak the construction of the installation-time search_path to ensure
that references to objects in pg_catalog can't be subverted; and
explicitly add pg_temp to the end of the path to prevent attacks using
temporary objects.

* Disable check_function_bodies within installation/upgrade scripts,
so that any security gaps in SQL-language or PL-language function bodies
cannot create a risk of unwanted installation-time code execution.

* Adjust lookup of type input/receive functions and join estimator
functions to complain if there are multiple candidate functions.  This
prevents capture of references to functions whose signature is not the
first one checked; and it's arguably more user-friendly anyway.

* Modify various contrib upgrade scripts to ensure that catalog
modification queries are executed with secure search paths.  (These
are in-place modifications with no extension version changes, since
it is the update process itself that is at issue, not the end result.)

Extensions that depend on other extensions cannot be made fully secure
by these methods alone; therefore, revert the "trusted" marking that
commit eb67623c9 applied to earthdistance and hstore_plperl, pending
some better solution to that set of issues.

Also add documentation around these issues, to help extension authors
write secure installation scripts.

Patch by me, following an observation by Andres Freund; thanks
to Noah Misch for review.

Security: CVE-2020-14350
2020-08-10 10:44:42 -04:00
Peter Eisentraut
6346761154 Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 444a6779aafc552ac452715caa65cfca0e723073
2020-08-10 15:21:18 +02:00
Tom Lane
418414daaa Check for fseeko() failure in pg_dump's _tarAddFile().
Coverity pointed out, not unreasonably, that we checked fseeko's
result at every other call site but these.  Failure to seek in the
temp file (note this is NOT pg_dump's output file) seems quite
unlikely, and even if it did happen the file length cross-check
further down would probably detect the problem.  Still, that's a
poor excuse for not checking the result of a system call.
2020-08-09 12:39:08 -04:00
Alvaro Herrera
85cb4ec509
walsnd: Don't set waiting_for_ping_response spuriously
Ashutosh Bapat noticed that when the logical walsender needs to wait for
WAL and realizes that it must send a keepalive message to the walreceiver
to update the sent-LSN, a message which *does not* request a reply from
the walreceiver, it wrongly sets the flag saying that it is going to wait
for such a reply.  That means that any future would-be sender of feedback
messages ends up not sending one, because they all believe that a reply
is already expected.

With built-in logical replication there's not much harm in this, because
WalReceiverMain will send a ping-back every wal_receiver_timeout/2
anyway; but with other logical replication systems (e.g. pglogical) it
can cause significant pain.

This problem was introduced in commit 41d5f8ad73, where the
request-reply flag passed to WalSndKeepalive was changed from true to
false, without at the same time removing the line that sets
waiting_for_ping_response.

Just removing that line would be a sufficient fix, but it seems better
to shift the responsibility for setting the flag into WalSndKeepalive
itself instead of requiring the caller to do it; this is clearly less
error-prone.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>
Backpatch: 9.5 and up
Discussion: https://postgr.es/m/20200806225558.GA22401@alvherre.pgsql
2020-08-08 12:31:55 -04:00
Etsuro Fujita
4f26932296 Fix yet another issue with step generation in partition pruning.
Commit 13838740f fixed some issues with step generation in partition
pruning, but there was yet another one: get_steps_using_prefix() assumes
that clauses in the passed-in prefix list are sorted in ascending order
of their partition key numbers, but the caller failed to ensure this for
range partitioning, which led to an assertion failure in debug builds.
Adjust the caller function to arrange the clauses in the prefix list in
the required order for range partitioning.

Back-patch to v11, like the previous commit.

Patch by me, reviewed by Amit Langote.

Discussion: https://postgr.es/m/CAPmGK16jkXiFG0YqMbU66wte-oJTfW6D1HaNvQf%3D%2B5o9%3Dm55wQ%40mail.gmail.com
2020-08-07 14:45:02 +09:00
Robert Haas
7c78040f6c Fix typo.
Per report from Tom Lane. Previously fixed in master by
commit f057980149.
2020-08-06 14:55:00 -04:00
Robert Haas
bcbc27251d Fix minor problems with non-exclusive backup cleanup.
The previous coding imagined that it could call before_shmem_exit()
when a non-exclusive backup began and then remove the previously-added
handler by calling cancel_before_shmem_exit() when that backup
ended. However, this only works provided that nothing else in the
system has registered a before_shmem_exit() hook in the interim,
because cancel_before_shmem_exit() is documented to remove a callback
only if it is the latest callback registered. It also only works
if nothing can ERROR out between the time that sessionBackupState
is reset and the time that cancel_before_shmem_exit() is called, which
doesn't seem to be strictly true.

To fix, leave the handler installed for the lifetime of the session,
arrange to install it just once, and teach it to quietly do nothing if
there isn't a non-exclusive backup in process.

This was originally committed to master as
303640199d, but I did not back-patch
at the time because the consequences were minor. However, now
there's been a second report of this causing trouble with a slightly
different test case than the one I reported originally, so now
I'm back-patching as far as v11 where JIT was introduced.

Patch by me, reviewed by Kyotaro Horiguchi, Michael Paquier (who
preferred a different approach, but got outvoted), Fujii Masao,
and Tom Lane, and with comments by various others. New problem
report from Bharath Rupireddy.

Discussion: http://postgr.es/m/CA+TgmobMjnyBfNhGTKQEDbqXYE3_rXWpc4CM63fhyerNCes3mA@mail.gmail.com
Discussion: http://postgr.es/m/CALj2ACWk7j4F2v2fxxYfrroOF=AdFNPr1WsV+AGtHAFQOqm_pw@mail.gmail.com
2020-08-06 13:58:23 -04:00
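
For context, a non-exclusive backup uses the session-level API below
(a sketch using the pre-v15 function names; the label is a
placeholder), and it is this machinery whose before_shmem_exit cleanup
handler is reworked here.

    SELECT pg_start_backup('nightly', false, false);  -- third argument: exclusive => false
    -- ... copy the data directory with external tools while this session stays open ...
    SELECT * FROM pg_stop_backup(false);              -- finish the non-exclusive backup
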
Tom Lane
f992da210f Fix matching of sub-partitions when a partitioned plan is stale.
Since we no longer require AccessExclusiveLock to add a partition,
the executor may see that a partitioned table has more partitions
than the planner saw.  ExecCreatePartitionPruneState's code for
matching up the partition lists in such cases was faulty, and would
misbehave if the planner had successfully pruned any partitions from
the query.  (Thus, trouble would occur only if a partition addition
happens concurrently with a query that uses both static and dynamic
partition pruning.)  This led to an Assert failure in debug builds,
and probably to crashes or query misbehavior in production builds.

To repair the bug, just explicitly skip zeroes in the plan's
relid_map[] list.  I also made some cosmetic changes to make the code
more readable (IMO anyway).  Also, convert the cross-checking Assert
to a regular test-and-elog, since it's now apparent that this logic
is more fragile than one would like.

Currently, there's no way to repeatably exercise this code, except
with manual use of a debugger to stop the backend between planning
and execution.  Hence, no test case in this patch.  We oughta do
something about that testability gap, but that's for another day.

Amit Langote and Tom Lane, per report from Justin Pryzby.  Oversight
in commit 898e5e329; backpatch to v12 where that appeared.

Discussion: https://postgr.es/m/20200802181131.GA27754@telsasoft.com
2020-08-05 15:38:55 -04:00
Tom Lane
55ffd6140c Increase hard-wired timeout values in ecpg regression tests.
A couple of test cases had connect_timeout=14, a value that seems
to have been plucked from a hat.  While it's more than sufficient
for normal cases, slow/overloaded buildfarm machines can get a timeout
failure here, as per recent report from "sungazer".  Increase to 180
seconds, which is in line with our typical timeouts elsewhere in
the regression tests.

Back-patch to 9.6; the code looks different in 9.5, and this doesn't
seem to be quite worth the effort to adapt to that.

Report: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2020-08-04%2007%3A12%3A22
2020-08-04 15:20:31 -04:00