Commit Graph

Michael Paquier
194e535a07 Fix snapshot builds during promotion of hot standby node with 2PC
Some specific logic is done at the end of recovery when 2PC transactions
are involved:
1) Call RecoverPreparedTransactions(), to recover the state of 2PC
transactions into memory (re-acquire locks, etc.).
2) ShutdownRecoveryTransactionEnvironment(), to move back to normal
operations, mainly cleaning up recovery locks and KnownAssignedXids
(including any 2PC transaction tracked previously).
3) Switch XLogCtl->SharedRecoveryState to RECOVERY_STATE_DONE, which is
the tipping point for any process calling RecoveryInProgress() to check
if the cluster is still in recovery or not.

Any snapshot taken between steps 2) and 3) would be empty, causing any
transaction relying on a snapshot at this point to potentially corrupt
data as there could still be some 2PC transactions to track, with
RecentXmin moving backwards on successive calls to GetSnapshotData() in
the same transaction.

As SharedRecoveryState is what determines whether it is safe to discard
KnownAssignedXids, this commit moves step 2) after step 3), so that we can
never finish with empty snapshots.

This exists since the introduction of hot standby, so backpatch all the
way down.  The window with incorrect snapshots is extremely small, but I
have seen it when running 023_pitr_prepared_xact.pl, as did buildfarm
member fairywren.  Thomas Munro also found it independently.  Special
thanks to Andres Freund for taking the time to analyze this issue.

Reported-by: Thomas Munro, Michael Paquier
Analyzed-by: Andres Freund
Discussion: https://postgr.es/m/20210422203603.fdnh3fu2mmfp2iov@alap3.anarazel.de
Backpatch-through: 9.6
2021-10-04 14:05:52 +09:00
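
A minimal, self-contained C sketch of the reordering described above; the function and state names follow the commit message, but the bodies are stand-ins rather than the real xlog.c code:

    #include <stdio.h>

    typedef enum { RECOVERY_STATE_ARCHIVE, RECOVERY_STATE_DONE } RecoveryState;

    static RecoveryState SharedRecoveryState = RECOVERY_STATE_ARCHIVE;

    /* stand-ins for the routines named in the commit message */
    static void RecoverPreparedTransactions(void)            { puts("1) recover 2PC state into memory"); }
    static void ShutdownRecoveryTransactionEnvironment(void) { puts("   clean up recovery locks / KnownAssignedXids"); }

    int
    main(void)
    {
        RecoverPreparedTransactions();               /* step 1) unchanged */

        /*
         * New ordering: mark recovery as done *before* discarding
         * KnownAssignedXids, so a snapshot taken in between can never be empty.
         */
        SharedRecoveryState = RECOVERY_STATE_DONE;   /* old step 3) */
        ShutdownRecoveryTransactionEnvironment();    /* old step 2) */

        return SharedRecoveryState == RECOVERY_STATE_DONE ? 0 : 1;
    }
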
Tom Lane
9c76689de4 Update our mapping of Windows time zone names using CLDR info.
This corrects a bunch of entries in win32_tzmap[], and adds a few
new ones, based on the CLDR project's windowsZones.xml file.
Non-cosmetic changes fall into four main categories:

* Flat-out errors:

US/Aleutan doesn't exist
America/Salvador doesn't exist
Asia/Baku is wrong for Yerevan
Asia/Dhaka (Bangladesh) is wrong for Astana (Kazakhstan)
Europe/Bucharest is wrong for Chisinau
America/Mexico_City is wrong for Chetumal
America/Buenos_Aires is wrong for Cayenne
America/Caracas has its own zone, so poor fit for La Paz
US/Eastern is wrong for Haiti
US/Eastern is wrong for Indiana (East)
Asia/Karachi is wrong for Tashkent
Etc/UTC+12 doesn't exist
Signs of Etc/GMT zones were backwards

* Judgment calls:

(These changes follow CLDR's choices, except for the first one)

Use Europe/London for "Greenwich Standard Time", since that seems much
more likely than Africa/Casablanca to be what people will think that
zone name means.  CLDR has Atlantic/Reykjavik here, but that's no better.

Asia/Shanghai seems a better fit than Hong Kong for "China Standard
Time".

Europe/Sarajevo is now a link to Belgrade, ie "Central Europe Standard
Time"; so use Warsaw for "Central European Standard Time".

America/Sao_Paulo seems more representative than Araguaina for
"E. South America Standard Time".

Africa/Johannesburg seems more representative than Harare for
"South Africa Standard Time".

* New Windows zone names:

"Israel Standard Time"
"Kaliningrad Standard Time"
"Russia Time Zone N" for various N
"Singapore Standard Time"
"South Sudan Standard Time"
"W. Central Africa Standard Time"
"West Bank Standard Time"
"Yukon Standard Time"

Some of these replace older spellings, but I kept the older spellings
too in case our code runs on a machine with the older data.

* Replace aliases (tzdb Links) with underlying city-named zones:

(This tracks tzdb's longstanding practice, and reduces inconsistency
with the rest of the entries, as well as with CLDR.)

US/Alaska
Asia/Kuwait
Asia/Muscat
Canada/Atlantic
Australia/Canberra
Canada/Saskatchewan
US/Central
US/Eastern
US/Hawaii
US/Mountain
Canada/Newfoundland
US/Pacific

Back-patch to all supported branches, as is our usual practice for
time zone data updates.

Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
2021-10-02 16:06:23 -04:00
Tom Lane
7ba8eb81f6 Re-alphabetize the win32_tzmap[] array.
The original intent seems to have been to sort case-insensitively
by the Windows zone name, but various changes over the years did
not get that memo.  This commit just moves a few entries to
restore exact alphabetic order, to ease comparison to the outputs
of processing scripts.

Back-patch to all supported branches, as is our usual practice for
time zone data updates.

Discussion: https://postgr.es/m/3266414.1633045628@sss.pgh.pa.us
2021-10-02 16:06:23 -04:00
Alvaro Herrera
170206e458 Error out if SKIP LOCKED and WITH TIES are both specified
Both bugs #16676[1] and #17141[2] illustrate that the combination of
SKIP LOCKED and FETCH FIRST WITH TIES breaks expectations when it comes
to rows returned to other sessions accessing the same row.  Since this
situation is detectable from the syntax and hard to fix otherwise,
forbid it for now, with the potential to fix it in the future.

[1] https://postgr.es/m/16676-fd62c3c835880da6@postgresql.org
[2] https://postgr.es/m/17141-913d78b9675aac8e@postgresql.org

Backpatch-through: 13, where WITH TIES was introduced
Author: David Christensen <david.christensen@crunchydata.com>
Discussion: https://postgr.es/m/CAOxo6XLPccCKru3xPMaYDpa+AXyPeWFs+SskrrL+HKwDjJnLhg@mail.gmail.com
2021-10-01 18:29:18 -03:00
Tom Lane
7adbe186f7 Avoid believing incomplete MCV-only stats in get_variable_range().
get_variable_range() would incautiously believe that statistics
containing only an MCV list are sufficient to derive a range estimate.
That's okay for an enum-like column that contains only MCVs, but
otherwise the estimate could be pretty bad.  Make it report that the
range is indeterminate unless the MCVs plus nullfrac account for
the whole table.

I don't think this needs a dedicated test case, since a quick code
coverage check verifies that the existing regression tests traverse
all the alternatives.  There is room to doubt that a future-proof
test case could be built anyway, given that the submitted example
accidentally doesn't fail before v11.

Per bug #17207 from Simon Perepelitsa.  Back-patch to v10.
In principle this has been broken all along, but I'm hesitant to
make such changes in 9.6, since if anyone is unhappy with 9.6.24's
behavior there will be no second chance to fix it.

Discussion: https://postgr.es/m/17207-5265aefa79e333b4@postgresql.org
2021-10-01 14:59:35 -04:00
Tom Lane
04ef2021e3 Fix Portal snapshot tracking to handle subtransactions properly.
Commit 84f5c2908 forgot to consider the possibility that
EnsurePortalSnapshotExists could run inside a subtransaction with
lifespan shorter than the Portal's.  In that case, the new active
snapshot would be popped at the end of the subtransaction, leaving
a dangling pointer in the Portal, with mayhem ensuing.

To fix, make sure the ActiveSnapshot stack entry is marked with
the same subtransaction nesting level as the associated Portal.
It's certainly safe to do so since we won't be here at all unless
the stack is empty; hence we can't create an out-of-order stack.

Let's also apply this logic in the case where PortalRunUtility
sets portalSnapshot, just to be sure that path can't cause similar
problems.  It's slightly less clear that that path can't create
an out-of-order stack, so add an assertion guarding it.

Report and patch by Bertrand Drouvot (with kibitzing by me).
Back-patch to v11, like the previous commit.

Discussion: https://postgr.es/m/ff82b8c5-77f4-3fe7-6028-fcf3303e82dd@amazon.com
2021-10-01 11:10:12 -04:00
Tom Lane
649e561f65 Remove gratuitous environment dependency in 002_types.pl test.
Computing related timestamps by subtracting "N days" is sensitive
to the prevailing timezone, since we interpret that as "same local
time on the N'th prior day".  Even though the intervals in question
are only two to four days, through remarkable bad luck they managed
to cross the end of Ramadan in 2014, causing the test's output to
change if timezone is set to Africa/Casablanca.  (Maybe in other
Muslim areas as well; I didn't check.)  There's absolutely no reason
for this test to exercise interval subtraction, so just get rid of
that and use plain timestamptz constants representing the intended
values.

Per report from Andres Freund.  Back-patch to v10 where this test
script came in.

Discussion: https://postgr.es/m/20210930183641.7lh4jhvpipvromca@alap3.anarazel.de
2021-09-30 16:23:26 -04:00
Alvaro Herrera
1d97d3d086 Fix WAL replay in presence of an incomplete record
Physical replication always ships WAL segment files to replicas once
they are complete.  This is a problem if one WAL record is split across
a segment boundary and the primary server crashes before writing down
the segment with the next portion of the WAL record: WAL writing after
crash recovery would happily resume at the point where the broken record
started, overwriting that record ... but any standby or backup may have
already received a copy of that segment, and they are not rewinding.
This causes standbys to stop following the primary after the latter
crashes:
  LOG:  invalid contrecord length 7262 at A8/D9FFFBC8
because the standby is still trying to read the continuation record
(contrecord) for the original long WAL record, but it is not there and
it will never be.  A workaround is to stop the replica, delete the WAL
file, and restart it -- at which point a fresh copy is brought over from
the primary.  But that's pretty labor intensive, and I bet many users
would just give up and re-clone the standby instead.

A fix for this problem was already attempted in commit 515e3d84a0, but
it only addressed the scenario of WAL archiving, so
streaming replication would still be a problem (as well as other things
such as taking a filesystem-level backup while the server is down after
having crashed), and it had performance scalability problems too; so it
had to be reverted.

This commit fixes the problem using an approach suggested by Andres
Freund, whereby the initial portion(s) of the split-up WAL record are
kept, and a special type of WAL record is written where the contrecord
was lost, so that WAL replay in the replica knows to skip the broken
parts.  With this approach, we can continue to stream/archive segment
files as soon as they are complete, and replay of the broken records
will proceed across the crash point without a hitch.

Because a new type of WAL record is added, users should be careful to
upgrade standbys first, primaries later. Otherwise they risk the standby
being unable to start if the primary happens to write such a record.

A new TAP test that exercises this is added, but the portability of it
is yet to be seen.

This has been wrong since the introduction of physical replication, so
backpatch all the way back.  In stable branches, keep the new
XLogReaderState members at the end of the struct, to avoid an ABI
break.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Nathan Bossart <bossartn@amazon.com>
Discussion: https://postgr.es/m/202108232252.dh7uxf6oxwcy@alvherre.pgsql
2021-09-29 11:21:51 -03:00
Fujii Masao
8cf4f71185 pgbench: Fix handling of socket errors during benchmark.
Previously, socket errors such as an invalid socket or a socket wait method
failure during the benchmark caused pgbench to exit with status 0. Instead,
errors during the run should result in exit status 2.

Back-patch to v12 where pgbench started reporting exit status.

Original complaint and patch by Hayato Kuroda.

Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/TYCPR01MB5870057375ACA8A73099C649F5349@TYCPR01MB5870.jpnprd01.prod.outlook.com
2021-09-29 21:49:36 +09:00
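
A toy, self-contained sketch of the exit-status convention described above; the error flag and its propagation are illustrative, not pgbench's actual state machine:

    #include <stdbool.h>
    #include <stdio.h>

    static bool benchmark_error = false;

    static void
    run_benchmark(void)
    {
        /* an invalid socket or a failed socket wait would be logged here ... */
        benchmark_error = true;
    }

    int
    main(void)
    {
        run_benchmark();
        if (benchmark_error)
        {
            fprintf(stderr, "error during benchmark run\n");
            return 2;               /* previously such runs still exited with 0 */
        }
        return 0;
    }
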
Fujii Masao
3cc85d7d53 pgbench: Correct log level of message output when socket wait method fails.
A failure of the socket wait method, such as select(), doesn't terminate
pgbench, so the log level of the error message for such a failure should be
ERROR.  But previously FATAL was used in that case.

Back-patch to v13 where pgbench started using common logging API.

Author: Yugo Nagata, Fabien COELHO
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/20210617005934.8bd37bf72efd5f1b38e6f482@sraoss.co.jp
2021-09-29 21:47:31 +09:00
Alexander Korotkov
fd22aec631 Split macros from visibilitymap.h into a separate header
That allows including just visibilitymapdefs.h from file.c and, in turn,
removing the include of postgres.h from relcache.h.

Reported-by: Andres Freund
Discussion: https://postgr.es/m/20210913232614.czafiubr435l6egi%40alap3.anarazel.de
Author: Alexander Korotkov
Reviewed-by: Andres Freund, Tom Lane, Alvaro Herrera
Backpatch-through: 13
2021-09-23 20:12:25 +03:00
Tomas Vondra
c0386f403a Release memory allocated by dependency_degree
Calculating the degree of a functional dependency may allocate a lot of
memory - we have released most of the explicitly allocated memory, but
e.g. detoasted varlena values were left behind. That may be an issue,
because we consider a lot of dependencies (all combinations), and the
detoasting may happen again for each one.

Fixed by calling dependency_degree() in a dedicated context, and
resetting it after each call. We only need the calculated dependency
degree, so we don't need to copy anything.

Backpatch to PostgreSQL 10, where extended statistics were introduced.

Backpatch-through: 10
Discussion: https://www.postgresql.org/message-id/20210915200928.GP831%40telsasoft.com
2021-09-23 18:34:01 +02:00
Tomas Vondra
b564eb0181 Free memory after building each statistics object
Until now, all extended statistics on a given relation were built in the
same memory context, without resetting. Some of the memory was released
explicitly, but not all of it - for example memory allocated while
detoasting values is hard to free. This is how it worked since extended
statistics were introduced in PostgreSQL 10, but adding support for
extended stats on expressions made the issue somewhat worse as it
increases the number of statistics to build.

Fixed by adding a memory context which gets reset after building each
statistics object (all the statistics kinds included in it). Resetting
it after building each statistics kind would be even better, but it
would require more invasive changes and copying of results, making it
harder to backpatch.

Backpatch to PostgreSQL 10, where extended statistics were introduced.

Author: Justin Pryzby
Reported-by: Justin Pryzby
Reviewed-by: Tomas Vondra
Backpatch-through: 10
Discussion: https://www.postgresql.org/message-id/20210915200928.GP831%40telsasoft.com
2021-09-23 18:33:59 +02:00
Amit Kapila
f09a81f1c0 Invalidate all partitions for a partitioned table in publication.
Updates/Deletes on a partition were allowed even without replica identity
after the parent table was added to a publication. This would later lead
to an error on subscribers. The reason was that we were not invalidating
the partition's relcache and the publication information for partitions
was not getting rebuilt. Similarly, we were not invalidating the
partitions' relcache after dropping a partitioned table from a publication,
which would keep prohibiting Updates/Deletes on its partitions without
replica identity even though they were no longer in any publication.

Reported-by: Haiying Tang
Author: Hou Zhijie and Vignesh C
Reviewed-by: Vignesh C and Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/OS0PR01MB6113D77F583C922F1CEAA1C3FBD29@OS0PR01MB6113.jpnprd01.prod.outlook.com
2021-09-22 08:24:20 +05:30
Michael Paquier
583e15af99 Fix places in TestLib.pm in need of adaptation to the output of Msys perl
Contrary to the output of native perl, Msys perl generates output with
CRLF (\r\n) line endings.  There are already places in the TAP code where
CRLFs are automatically converted to LF (\n) on Msys, but we missed a
couple of places that run commands and use their output for comparison,
which would lead to failures.

This problem has been found thanks to the test added in 5adb067 using
TestLib::command_checks_all(), but after a closer look more code paths
were missing a filter.

This is backpatched all the way down to prevent any surprises if a new
test is introduced in stable branches.

Reviewed-by: Andrew Dunstan, Álvaro Herrera
Discussion: https://postgr.es/m/1252480.1631829409@sss.pgh.pa.us
Backpatch-through: 9.6
2021-09-22 08:43:10 +09:00
Tom Lane
5f0a073cbb Fix misevaluation of STABLE parameters in CALL within plpgsql.
Before commit 84f5c2908, a STABLE function in a plpgsql CALL
statement's argument list would see an up-to-date snapshot,
because exec_stmt_call would push a new snapshot.  I got rid of
that because the possibility of the snapshot disappearing within
COMMIT made it too hard to manage a snapshot across the CALL
statement.  That's fine so far as the procedure itself goes,
but I forgot to think about the possibility of STABLE functions
within the CALL argument list.  As things now stand, those'll
be executed with the Portal's snapshot as ActiveSnapshot,
keeping them from seeing updates more recent than Portal startup.

(VOLATILE functions don't have a problem because they take their
own snapshots; which indeed is also why the procedure itself
doesn't have a problem.  There are no STABLE procedures.)

We can fix this by pushing a new snapshot transiently within
ExecuteCallStmt itself.  Popping the snapshot before we get
into the procedure proper eliminates the management problem.
The possibly-useless extra snapshot-grab is slightly annoying,
but it's no worse than what happened before 84f5c2908.

Per bug #17199 from Alexander Nawratil.  Back-patch to v11,
like the previous patch.

Discussion: https://postgr.es/m/17199-1ab2561f0d94af92@postgresql.org
2021-09-21 19:06:33 -04:00
Peter Geoghegan
a1708ab652 Remove overzealous index deletion assertion.
A broken HOT chain is not an unexpected condition, even when the offset
number points past the end of the page's line pointer array.
heap_prune_chain() does not (and never has) treated this condition as
unexpected, so derivative code in heap_index_delete_tuples() shouldn't
do so either.

Oversight in commit 4228817449.

The assertion can probably only fail on Postgres 14 and master.  Earlier
releases don't have commit 3c3b8a4b, which taught VACUUM to truncate the
line pointer array of heap pages.  Backpatch all the same, just to be
consistent.

Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/17197-9438f31f46705182@postgresql.org
Backpatch: 12-, just like commit 4228817449.
2021-09-20 14:26:22 -07:00
Tom Lane
dede143997 Don't elide casting to typmod -1.
Casting a value that's already of a type with a specific typmod
to an unspecified typmod doesn't do anything so far as run-time
behavior is concerned.  However, it really ought to change the
exposed type of the expression to match.  Up to now,
coerce_type_typmod hasn't bothered with that, which creates gotchas
in contexts such as recursive unions.  If for example one side of
the union is numeric(18,3), but it needs to be plain numeric to
match the other side, there's no direct way to express that.

This is easy enough to fix, by inserting a RelabelType to update the
exposed type of the expression.  However, it's a bit nervous-making
to change this behavior, because it's stood for a really long time.
But no complaints have emerged about 14beta3, so go ahead and
back-patch.

Back-patch of 5c056b0c2 into previous supported branches.

Discussion: https://postgr.es/m/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ=uWWWfQ@mail.gmail.com
Discussion: https://postgr.es/m/1488389.1631984807@sss.pgh.pa.us
2021-09-20 11:48:52 -04:00
Tom Lane
e0b0d1eab4 Fix pull_varnos to cope with translated PlaceHolderVars.
Commit 55dc86eca changed pull_varnos to use (if possible) the associated
ph_eval_at for a PlaceHolderVar.  I missed a fine point though: we might
be looking at a PHV in the quals or tlist of a child appendrel, in which
case we need to compute a ph_eval_at value that's been translated in the
same way that the PHV itself has been (cf. adjust_appendrel_attrs).
Fortunately, enough info is available in the PlaceHolderInfo to make
such translation possible without additional outside data, so we don't
need another round of uglification of planner APIs.  This is a little
bit complicated, but since it's a hard-to-hit corner case, I'm not much
worried about adding cycles here.

Per report from Jaime Casanova.  Back-patch to v12, like the previous
commit.

Discussion: https://postgr.es/m/20210915230959.GB17635@ahch-to
2021-09-17 15:41:16 -04:00
Fujii Masao
8767a86fa2 Fix variable shadowing in procarray.c.
ProcArrayGroupClearXid function has a parameter named "proc",
but the same name was used for its local variables. This commit fixes
this variable shadowing, to improve code readability.

Back-patch to all supported versions, to make future back-patching
easy, even though this patch is classified as refactoring only.

Reported-by: Ranier Vilela
Author: Ranier Vilela, Aleksander Alekseev
Discussion: https://postgr.es/m/CAEudQAqyoTZC670xWi6w-Oe2_Bk1bfu2JzXz6xRfiOUzm7xbyQ@mail.gmail.com
2021-09-16 13:07:29 +09:00
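
For illustration, a tiny self-contained example of the kind of shadowing being fixed; the names are made up and this is not the actual ProcArrayGroupClearXid code:

    #include <stdio.h>

    typedef struct PGPROC { int pid; } PGPROC;

    static void
    clear_group(PGPROC *proc)       /* parameter named "proc" */
    {
        /*
         * Before: a local variable was also named "proc", shadowing the
         * parameter and hurting readability.  After: the local gets its own name.
         */
        PGPROC *nextproc = proc;
        printf("clearing pid %d\n", nextproc->pid);
    }

    int
    main(void)
    {
        PGPROC p = { .pid = 42 };
        clear_group(&p);
        return 0;
    }
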
Tom Lane
e06cc024bd Disallow LISTEN in background workers.
It's possible to execute user-defined SQL in some background processes;
for example, logical replication workers can fire triggers.  This opens
the possibility that someone would try to execute LISTEN in such a
context.  But since only regular backends ever call
ProcessNotifyInterrupt, no messages would actually be received, and
thus the registered listener would simply prevent the message queue
from being cleaned.  Eventually NOTIFY would stop working, which is bad.

Perhaps someday somebody will invent infrastructure to make listening
in a background worker actually useful.  In the meantime, forbid it.

Back-patch to v13, which is where we introduced the MyBackendType
variable.  It'd be a lot harder to implement the check without that,
and it doesn't seem worth the trouble.

Discussion: https://postgr.es/m/153243441449.1404.2274116228506175596@wrigleys.postgresql.org
2021-09-15 12:31:56 -04:00
Tom Lane
63f28776cb Send NOTIFY signals during CommitTransaction.
Formerly, we sent signals for outgoing NOTIFY messages within
ProcessCompletedNotifies, which was also responsible for sending
relevant ones of those messages to our connected client.  It therefore
had to run during the main-loop processing that occurs just before
going idle.  This arrangement had two big disadvantages:

* Now that procedures allow intra-command COMMITs, it would be
useful to send NOTIFYs to other sessions immediately at COMMIT
(though, for reasons of wire-protocol stability, we still shouldn't
forward them to our client until end of command).

* Background processes such as replication workers would not send
NOTIFYs at all, since they never execute the client communication
loop.  We've had requests to allow triggers running in replication
workers to send NOTIFYs, so that's a problem.

To fix these things, move transmission of outgoing NOTIFY signals
into AtCommit_Notify, where it will happen during CommitTransaction.
Also move the possible call of asyncQueueAdvanceTail there, to
ensure we don't bloat the async SLRU if a background worker sends
many NOTIFYs with no one listening.

We can also drop the call of asyncQueueReadAllNotifications,
allowing ProcessCompletedNotifies to go away entirely.  That's
because commit 790026972 added a call of ProcessNotifyInterrupt
adjacent to PostgresMain's call of ProcessCompletedNotifies,
and that does its own call of asyncQueueReadAllNotifications,
meaning that we were uselessly doing two such calls (inside two
separate transactions) whenever inbound notify signals coincided
with an outbound notify.  We need only set notifyInterruptPending
to ensure that ProcessNotifyInterrupt runs, and we're done.

The existing documentation suggests that custom background workers
should call ProcessCompletedNotifies if they want to send NOTIFY
messages.  To avoid an ABI break in the back branches, reduce it
to an empty routine rather than removing it entirely.  Removal
will occur in v15.

Although the problems mentioned above have existed for awhile,
I don't feel comfortable back-patching this any further than v13.
There was quite a bit of churn in adjacent code between 12 and 13.
At minimum we'd have to also backpatch 51004c717, and a good deal
of other adjustment would also be needed, so the benefit-to-risk
ratio doesn't look attractive.

Per bug #15293 from Michael Powers (and similar gripes from others).

Artur Zakirov and Tom Lane

Discussion: https://postgr.es/m/153243441449.1404.2274116228506175596@wrigleys.postgresql.org
2021-09-14 17:18:25 -04:00
Andres Freund
c49e6f9d97 jit: Do not try to shut down LLVM state in case of LLVM triggered errors.
If an allocation failed within LLVM it is not safe to call back into LLVM as
LLVM is not generally safe against exceptions / stack-unwinding. Thus errors
while in LLVM code are promoted to FATAL. However llvm_shutdown() did call
back into LLVM even in such cases, while llvm_release_context() was careful
not to do so.

We cannot generally skip shutting down LLVM, as that can break profiling. But
it's OK to do so if there was an error from within LLVM.

Reported-By: Jelte Fennema <Jelte.Fennema@microsoft.com>
Author: Andres Freund <andres@anarazel.de>
Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/AM5PR83MB0178C52CCA0A8DEA0207DC14F7FF9@AM5PR83MB0178.EURPRD83.prod.outlook.com
Backpatch: 11-, where jit was introduced
2021-09-13 18:26:18 -07:00
Tom Lane
745abdd951 Fix EXIT out of outermost block in plpgsql.
Ordinarily, using EXIT this way would draw "control reached end of
function without RETURN".  However, if the function is one where we
don't require an explicit RETURN (such as a DO block), that should
not happen.  It did anyway, because add_dummy_return() neglected to
account for the case.

Per report from Herwig Goemans.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/868ae948-e3ca-c7ec-95a6-83cfc08ef750@gmail.com
2021-09-13 12:42:03 -04:00
Amit Kapila
58cf794ca6 Fix reorder buffer memory accounting for toast changes.
While processing toast changes in logical decoding, we rejigger the
tuple change to point to in-memory toast tuples instead of to on-disk toast
tuples. To keep the memory accounting correct, we were subtracting the old
change size and then, after re-computing the new tuple, re-adding its size
at the end. If any error occurred before we added the new size, releasing
the changes would update the accounting info (subtracting the size from the
counters) and underflow it, which leads to an assertion failure in
assert-enabled builds and wrong memory accounting in the reorder buffer
otherwise.

Author: Bertrand Drouvot
Reviewed-by: Amit Kapila
Backpatch-through: 13, where memory accounting was introduced
Discussion: https://postgr.es/m/92b0ee65-b8bd-e42d-c082-4f3f4bf12d34@amazon.com
2021-09-13 10:46:58 +05:30
Michael Paquier
b589d212fd Fix error handling with threads on OOM in ECPG connection logic
An out-of-memory failure happening when allocating the structures to
store the connection parameter keywords and values would mess up the
set of connections saved, as on failure the pthread mutex would
still be held with the new connection object listed but free()'d.

Rather than just unlocking the mutex, which would leave the static list
of connections into an inconsistent state, move the allocation for the
structures of the connection parameters before beginning the test
manipulation.  This ensures that the list of connections and the
connection mutex remain consistent all the time in this code path.

This error is unlikely to happen, but it could mess up ECPG clients
badly in surprising ways, so backpatch all the way down.

Reported-by: ryancaicse
Discussion: https://postgr.es/m/17186-b4cfd8f0eb4d1dee@postgresql.org
Backpatch-through: 9.6
2021-09-13 13:24:20 +09:00
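
A minimal, self-contained sketch of the ordering principle described above: perform the fallible allocations before taking the lock and publishing the entry, so an out-of-memory failure cannot leave the shared list inconsistent. The names are illustrative, not the actual ECPG connection code:

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    static pthread_mutex_t conn_mutex = PTHREAD_MUTEX_INITIALIZER;

    struct connection { char *keywords; struct connection *next; };
    static struct connection *all_connections = NULL;

    static int
    register_connection(const char *keywords)
    {
        /* Allocate everything that can fail first ... */
        struct connection *conn = malloc(sizeof(*conn));
        if (conn == NULL)
            return -1;
        conn->keywords = strdup(keywords);
        if (conn->keywords == NULL)
        {
            free(conn);
            return -1;
        }

        /* ... and only then take the mutex and link the entry in. */
        pthread_mutex_lock(&conn_mutex);
        conn->next = all_connections;
        all_connections = conn;
        pthread_mutex_unlock(&conn_mutex);
        return 0;
    }

    int
    main(void)
    {
        return register_connection("dbname=test") == 0 ? 0 : 1;
    }
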
Tom Lane
7e420072ea Make pg_regexec() robust against out-of-range search_start.
If search_start is greater than the length of the string, we should just
return REG_NOMATCH immediately.  (Note that the equality case should
*not* be rejected, since the pattern might be able to match zero
characters.)  This guards various internal assumptions that the min of a
range of string positions is not more than the max.  Violation of those
assumptions could allow an attempt to fetch string[search_start-1],
possibly causing a crash.

Jaime Casanova pointed out that this situation is reachable with the
new regexp_xxx functions that accept a user-specified start position.
I don't believe it's reachable via any in-core call site in v14 and
below.  However, extensions could possibly call pg_regexec with an
out-of-range search_start, so let's back-patch the fix anyway.

Discussion: https://postgr.es/m/20210911180357.GA6870@ahch-to
2021-09-11 15:19:49 -04:00
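
A standalone sketch of the bounds check described above; the function mimics the shape of pg_regexec() but is a stand-in, not the real implementation:

    #include <stddef.h>
    #include <stdio.h>

    #define REG_OKAY     0
    #define REG_NOMATCH  1

    static int
    my_regexec(const char *string, size_t len, size_t search_start)
    {
        /*
         * If the start position is past the end of the string there is nothing
         * to search; bail out before any internal position arithmetic.  Equality
         * is still allowed, since the pattern may match zero characters.
         */
        if (search_start > len)
            return REG_NOMATCH;

        /* ... real matching would happen here ... */
        (void) string;
        return REG_OKAY;
    }

    int
    main(void)
    {
        printf("%d\n", my_regexec("abc", 3, 7));  /* prints 1 (REG_NOMATCH) */
        printf("%d\n", my_regexec("abc", 3, 3));  /* start == len is allowed */
        return 0;
    }
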
Tom Lane
fa5d0415f1 Fix some anomalies with NO SCROLL cursors.
We have long forbidden fetching backwards from a NO SCROLL cursor,
but the prohibition didn't extend to cases in which we rewind the
query altogether and then re-fetch forwards.  I think the reason is
that this logic was mainly meant to protect plan nodes that can't
be run in the reverse direction.  However, re-reading the query output
is problematic if the query is volatile (which includes SELECT FOR
UPDATE, not just queries with volatile functions): the re-read can
produce different results, which confuses the cursor navigation logic
completely.  Another reason for disliking this approach is that some
code paths will either fetch backwards or rewind-and-fetch-forwards
depending on the distance to the target row; so that seemingly
identical use-cases may or may not draw the "cursor can only scan
forward" error.  Hence, let's clean things up by disallowing rewind
as well as fetch-backwards in a NO SCROLL cursor.

Ordinarily we'd only make such a definitional change in HEAD, but
there is a third reason to consider this change now.  Commit ba2c6d6ce
created some new user-visible anomalies for non-scrollable cursors
WITH HOLD, in that navigation in the cursor result got confused if the
cursor had been partially read before committing.  The only good way
to resolve those anomalies is to forbid rewinding such a cursor, which
allows removal of the incorrect cursor state manipulations that
ba2c6d6ce added to PersistHoldablePortal.

To minimize the behavioral change in the back branches (including
v14), refuse to rewind a NO SCROLL cursor only when it has a holdStore,
ie has been held over from a previous transaction due to WITH HOLD.
This should avoid breaking most applications that have been sloppy
about whether to declare cursors as scrollable.  We'll enforce the
prohibition across-the-board beginning in v15.

Back-patch to v11, as ba2c6d6ce was.

Discussion: https://postgr.es/m/3712911.1631207435@sss.pgh.pa.us
2021-09-10 13:18:32 -04:00
Tom Lane
d8d93bc8ba Avoid fetching from an already-terminated plan.
Some plan node types don't react well to being called again after
they've already returned NULL.  PortalRunSelect() has long dealt
with this by calling the executor with NoMovementScanDirection
if it sees that we've already run the portal to the end.  However,
commit ba2c6d6ce overlooked this point, so that persisting an
already-fully-fetched cursor would fail if it had such a plan.

Per report from Tomas Barton.  Back-patch to v11, as the faulty
commit was.  (I've omitted a test case because the type of plan
that causes a problem isn't all that stable.)

Discussion: https://postgr.es/m/CAPV2KRjd=ErgVGbvO2Ty20tKTEZZr6cYsYLxgN_W3eAo9pf5sw@mail.gmail.com
2021-09-09 13:36:31 -04:00
Tom Lane
04118de78f Check for relation length overrun soon enough.
We don't allow relations to exceed 2^32-1 blocks, because block
numbers are 32 bits and the last possible block number is reserved
to mean InvalidBlockNumber.  There is a check for this in mdextend,
but that's really way too late, because the smgr API requires us to
create a buffer for the block-to-be-added, and we do not want to
have any buffer with blocknum InvalidBlockNumber.  (Such a case
can trigger assertions in bufmgr.c, plus I think it might confuse
ReadBuffer's logic for data-past-EOF later on.)  So put the check
into ReadBuffer.

Per report from Christoph Berg.  It's been like this forever,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/YTn1iTkUYBZfcODk@msg.credativ.de
2021-09-09 11:45:48 -04:00
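
A self-contained sketch of the range rule described above: block numbers are 32 bits, 0xFFFFFFFF is reserved as InvalidBlockNumber, so the last usable block number is 0xFFFFFFFE. The check is simplified relative to the real ReadBuffer code:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef uint32_t BlockNumber;
    #define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)
    #define MaxBlockNumber     ((BlockNumber) (InvalidBlockNumber - 1))

    /*
     * "nblocks" is the current relation length; the block being added would get
     * block number nblocks, which must never be the reserved value.
     */
    static BlockNumber
    extend_relation(BlockNumber nblocks)
    {
        if (nblocks > MaxBlockNumber)
        {
            fprintf(stderr, "cannot extend relation beyond %u blocks\n",
                    (unsigned) MaxBlockNumber);
            exit(1);
        }
        return nblocks;             /* block number of the newly added block */
    }

    int
    main(void)
    {
        printf("new block: %u\n", (unsigned) extend_relation(100));
        return 0;
    }
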
Fujii Masao
dd9b3fced8 Fix issue with WAL archiving in standby.
Previously, walreceiver always closed the currently-opened WAL segment
and created its archive notification file only after it had finished
writing out the current segment and had received some WAL data that
should be written into the next segment. If walreceiver exited just
before any WAL data in the next segment arrived at the standby, it did
not create the archive notification file of the current segment even
though that segment was known to be complete. This behavior could cause
WAL archiving of the segment to be delayed until a subsequent
restartpoint or checkpoint created its notification file.

To fix the issue, this commit changes walreceiver so that it creates
the archive notification file of the current WAL segment immediately
once that segment is known to be complete, without waiting for the next
WAL data to arrive.

Back-patch to all supported branches.

Reported-by: Kyotaro Horiguchi
Author: Fujii Masao
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20200630.165503.1465894182551545886.horikyota.ntt@gmail.com
2021-09-09 23:58:26 +09:00
Tom Lane
da7d81dd05 Avoid useless malloc/free traffic around getFormattedTypeName().
Coverity complained that one caller of getFormattedTypeName() failed
to free the returned string.  Which is true, but rather than fixing
that one, let's get rid of this tedious and error-prone requirement.
Now that getFormattedTypeName() caches its result, strdup'ing that
result and expecting the caller to free it accomplishes little except
to waste cycles.  We do create a leak in the case where getTypes didn't
make a TypeInfo for the type, but that basically shouldn't ever happen.

Back-patch, as commit 6c450a861 was.  This isn't a particularly
interesting bug fix, but the API change seems like a hazard for
future back-patching activity if we don't back-patch it.
2021-09-08 15:09:42 -04:00
Tom Lane
cbba6ba3a0 Fix rewriter to set hasModifyingCTE correctly on rewritten queries.
If we copy data-modifying CTEs from the original query to a replacement
query (from a DO INSTEAD rule), we must set hasModifyingCTE properly
in the replacement query.  Failure to do this can cause various
unpleasantness, such as unsafe usage of parallel plans.  The code also
neglected to propagate hasRecursive, though that's only cosmetic at
the moment.

A difficulty arises if the rule action is an INSERT...SELECT.  We
attach the original query's RTEs and CTEs to the sub-SELECT Query, but
data-modifying CTEs are only allowed to appear in the topmost Query.
For the moment, throw an error in such cases.  It would probably be
possible to avoid this error by attaching the CTEs to the top INSERT
Query instead; but that would require a bunch of new code to adjust
ctelevelsup references.  Given the narrowness of the use-case, and
the need to back-patch this fix, it does not seem worth the trouble
for now.  We can revisit this if we get field complaints.

Per report from Greg Nancarrow.  Back-patch to all supported branches.
(The test case added here does not fail before v10, but there are
plenty of places checking top-level hasModifyingCTE in 9.6, so I have
no doubt that this code change is necessary there too.)

Greg Nancarrow and Tom Lane

Discussion: https://postgr.es/m/CAJcOf-f68DT=26YAMz_i0+Au3TcLO5oiHY5=fL6Sfuits6r+_w@mail.gmail.com
Discussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com
2021-09-08 12:05:43 -04:00
Amit Kapila
ddfc7299d0 Invalidate relcache for publications defined for all tables.
Updates/Deletes on a relation were allowed even without replica identity
after a publication for all tables was defined. This would later lead to
an error on subscribers. The reason was that for such publications we were
not invalidating the relcache, so the publication information for
relations was not getting rebuilt. Similarly, we were not invalidating the
relcache after dropping such publications, which would keep prohibiting
Updates/Deletes without replica identity even without any publication.

Author: Vignesh C and Hou Zhijie
Reviewed-by: Hou Zhijie, Kyotaro Horiguchi, Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/CALDaNm0pF6zeWqCA8TCe2sDuwFAy8fCqba=nHampCKag-qLixg@mail.gmail.com
2021-09-08 12:14:59 +05:30
Noah Misch
aae398a87c AIX: Fix missing libpq symbols by respecting SHLIB_EXPORTS.
We make each AIX shared library export all globals found in .o files
that originate in the library.  That doesn't include symbols acquired by
-lpgcommon_shlib.  That is good on average, but it became a problem for
libpq when commit e6afa8918c moved five
official libpq API symbols into src/common.  Fix this by implementing
the SHLIB_EXPORTS mechanism for AIX, so affected libraries export the
same symbols that they export on Linux.  This reintroduces symbols
pg_encoding_to_char, pg_utf_mblen, pg_char_to_encoding,
pg_valid_server_encoding, and pg_valid_server_encoding_id.  Back-patch
to v13, where the aforementioned commit first appeared.  While a minor
release is usually the wrong time to add or remove symbol exports in
libpq or libecpg, we should expect users to want each documented symbol.

Tony Reix

Discussion: https://postgr.es/m/PR3PR02MB6396742E2FC3E77D37A920BC86C79@PR3PR02MB6396.eurprd02.prod.outlook.com
2021-09-06 11:28:02 -07:00
Tom Lane
d8a266c5e1 Fix bogus timetz_zone() results for DYNTZ abbreviations.
timetz_zone() delivered completely wrong answers if the zone was
specified by a dynamic TZ abbreviation, because it failed to account
for the difference between the POSIX conventions for field values in
struct pg_tm and the conventions used in PG-specific datetime code.

As a stopgap fix, just adjust the tm_year and tm_mon fields to match
PG conventions.  This is fixed in a different way in HEAD (388e71af8)
but I don't want to back-patch the change of reference point.

Discussion: https://postgr.es/m/CAJ7c6TOMG8zSNEZtCn5SPe+cCk3Lfxb71ZaQwT2F4T7PJ_t=KA@mail.gmail.com
2021-09-06 11:29:52 -04:00
Peter Eisentraut
9f9ae019d1 Fix pkg-config files for static linking
Since ea53100d5 (PostgreSQL 12), the shipped pkg-config files have
been broken for statically linking libpq because libpgcommon and
libpgport are missing.  This patch adds those two missing private
dependencies (in a non-hardcoded way).

Reported-by: Filip Gospodinov <f@gospodinov.ch>
Discussion: https://www.postgresql.org/message-id/flat/c7108bde-e051-11d5-a234-99beec01ce2a@gospodinov.ch
2021-09-06 09:43:05 +02:00
Tom Lane
2c0dd669c3 Further portability tweaks for float4/float8 hash functions.
Attempting to make hashfloat4() look as much as possible like
hashfloat8(), I'd figured I could replace NaNs with get_float4_nan()
before widening to float8.  However, results from protosciurus
and topminnow show that on some platforms that produces a different
bit-pattern from get_float8_nan(), breaking the intent of ce773f230.
Rearrange so that we use the result of get_float8_nan() for all NaN
cases.  As before, back-patch.
2021-09-04 16:29:08 -04:00
Alvaro Herrera
518621c40b Revert "Avoid creating archive status ".ready" files too early"
This reverts commit 515e3d84a0 and equivalent commits in back
branches.  This solution to the problem has a number of problems, so
we'll try again with a different approach.

Per note from Andres Freund

Discussion: https://postgr.es/m/20210831042949.52eqp5xwbxgrfank@alap3.anarazel.de
2021-09-04 12:14:30 -04:00
Tom Lane
742b30caee Remove arbitrary MAXPGPATH limit on command lengths in pg_ctl.
Replace fixed-length command buffers with psprintf() calls.  We didn't
have anything as convenient as psprintf() when this code was written,
but now that we do, there's little reason for the limitation to
stand.  Removing it eliminates some corner cases where (for example)
starting the postmaster with a whole lot of options fails.

Most individual file names that pg_ctl deals with are still restricted
to MAXPGPATH, but we've seldom had complaints about that limitation
so long as it only applies to one filename.

Back-patch to all supported branches.

Phil Krylov

Discussion: https://postgr.es/m/567e199c6b97ee19deee600311515b86@krylov.eu
2021-09-03 21:04:44 -04:00
Tom Lane
132be60006 Disallow creating an ICU collation if the DB encoding won't support it.
Previously this was allowed, but the collation effectively vanished
into the ether because of the way lookup_collation() works: you could
not use the collation, nor even drop it.  Seems better to give an
error up front than to leave the user wondering why it doesn't work.

(Because this test is in DefineCollation not CreateCollation, it does
not prevent pg_import_system_collations from creating ICU collations,
regardless of the initially-chosen encoding.)

Per bug #17170 from Andrew Bille.  Back-patch to v10 where ICU support
was added.

Discussion: https://postgr.es/m/17170-95845cf3f0a9c36d@postgresql.org
2021-09-03 16:38:55 -04:00
Tom Lane
9089f1543e Fix portability issue in tests from commit ce773f230.
Modern POSIX seems to require strtod() to accept "-NaN", but there's
nothing about NaN in SUSv2, and some of our oldest buildfarm members
don't like it.  Let's try writing it as -'NaN' instead; that seems
to produce the same result, at least on Intel hardware.

Per buildfarm.
2021-09-03 10:01:02 -04:00
Tom Lane
be2beadaff Fix float4/float8 hash functions to produce uniform results for NaNs.
The IEEE 754 standard allows a wide variety of bit patterns for NaNs,
of which at least two ("NaN" and "-NaN") are pretty easy to produce
from SQL on most machines.  This is problematic because our btree
comparison functions deem all NaNs to be equal, but our float hash
functions know nothing about NaNs and will happily produce varying
hash codes for them.  That causes unexpected results from queries
that hash a column containing different NaN values.  It could also
produce unexpected lookup failures when using a hash index on a
float column, i.e. "WHERE x = 'NaN'" will not find all the rows
it should.

To fix, special-case NaN in the float hash functions, not too much
unlike the existing special case that forces zero and minus zero
to hash the same.  I arranged for the most vanilla sort of NaN
(that coming from the C99 NAN constant) to still have the same
hash code as before, to reduce the risk to existing hash indexes.

I dithered about whether to back-patch this into stable branches,
but ultimately decided to do so.  It's a clear improvement for
queries that hash internally.  If there is anybody who has -NaN
in a hash index, they'd be well advised to re-index after applying
this patch ... but the misbehavior if they don't will not be much
worse than the misbehavior they had before.

Per bug #17172 from Ma Liangzhu.

Discussion: https://postgr.es/m/17172-7505bea9e04e230f@postgresql.org
2021-09-02 17:24:42 -04:00
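
A compilable sketch of the NaN special case described above: map every NaN bit pattern to one canonical NaN before hashing, so "NaN" and "-NaN" hash identically. The toy hash function is a stand-in for PostgreSQL's hash_any(), and the overall shape is illustrative rather than the actual hashfloat8 code:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t
    hash_bytes(const void *data, size_t len)
    {
        /* toy FNV-1a hash, standing in for the real hash function */
        const unsigned char *p = data;
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < len; i++)
        {
            h ^= p[i];
            h *= 16777619u;
        }
        return h;
    }

    static uint32_t
    hashfloat8(double key)
    {
        if (key == 0.0)
            key = 0.0;          /* +0 and -0 must hash the same */
        else if (isnan(key))
            key = NAN;          /* all NaN bit patterns must hash the same */
        return hash_bytes(&key, sizeof(key));
    }

    int
    main(void)
    {
        double nan1 = NAN;
        double nan2 = -NAN;     /* usually a different bit pattern */
        printf("%u %u\n", (unsigned) hashfloat8(nan1), (unsigned) hashfloat8(nan2));
        return 0;               /* the two hash codes printed are equal */
    }
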
Amit Kapila
b51985d8a0 Fix the random test failure in 001_rep_changes.
The check to test whether the subscription workers were restarting after a
change in the subscription was failing. The reason was that the test
assumed the walsender, once started, would reach the 'streaming' state,
but the walsender was exiting due to an error before that. The walsender
was erroring out before reaching the 'streaming' state because it tried to
acquire the slot before the previous walsender had exited.

In passing, improve the die messages so that any future failures are
easier to investigate.

Reported-by: Michael Paquier, as per buildfarm
Author: Ajin Cherian
Reviewed-by: Masahiko Sawada, Amit Kapila
Backpatch-through: 10, where this test was introduced
Discussion: https://postgr.es/m/YRnhFxa9bo73wfpV@paquier.xyz
2021-09-01 09:16:35 +05:30
Tom Lane
db11b4a3db In pg_dump, avoid doing per-table queries for RLS policies.
For no particularly good reason, getPolicies() queried pg_policy
separately for each table.  We can collect all the policies in
a single query instead, and attach them to the correct TableInfo
objects using findTableByOid() lookups.  On the regression
database, this reduces the number of queries substantially, and
provides a visible savings even when running against a local
server.

Per complaint from Hubert Depesz Lubaczewski.  Since this is such
a simple fix and can have a visible performance benefit, back-patch
to all supported branches.

Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
2021-08-31 15:04:05 -04:00
Tom Lane
904ce45bfa Cache the results of format_type() queries in pg_dump.
There's long been a "TODO: there might be some value in caching
the results" annotation on pg_dump's getFormattedTypeName function;
but we hadn't gotten around to checking what it was costing us to
repetitively look up type names.  It turns out that when dumping the
current regression database, about 10% of the total number of queries
issued are duplicative format_type() queries.  However, Hubert Depesz
Lubaczewski reported a not-unusual case where these account for over
half of the queries issued by pg_dump.  Individually these queries
aren't expensive, but when network lag is a factor, they add up to a
problem.  We can very easily add some caching to getFormattedTypeName
to solve it.

Since this is such a simple fix and can have a visible performance
benefit, back-patch to all supported branches.

Discussion: https://postgr.es/m/20210826084430.GA26282@depesz.com
2021-08-31 13:53:50 -04:00
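
A minimal sketch of the caching idea described above: remember the formatted name per type OID so repeated lookups issue no duplicate queries. The structures and the lookup function are illustrative stand-ins, not pg_dump's actual TypeInfo machinery:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef unsigned int Oid;

    struct type_cache_entry { Oid oid; char *ftypname; };

    #define TYPE_CACHE_SIZE 1024
    static struct type_cache_entry cache[TYPE_CACHE_SIZE];
    static int cache_used = 0;

    static char *
    query_format_type(Oid oid)
    {
        /* stand-in for issuing a format_type() query to the server */
        char buf[64];
        snprintf(buf, sizeof(buf), "type_%u", oid);
        return strdup(buf);
    }

    static const char *
    getFormattedTypeName(Oid oid)
    {
        for (int i = 0; i < cache_used; i++)
            if (cache[i].oid == oid)
                return cache[i].ftypname;    /* hit: no extra query issued */

        char *name = query_format_type(oid); /* miss: issue one query */
        if (cache_used < TYPE_CACHE_SIZE)
        {
            cache[cache_used].oid = oid;
            cache[cache_used].ftypname = name;
            cache_used++;
        }
        return name;
    }

    int
    main(void)
    {
        printf("%s\n", getFormattedTypeName(23));
        printf("%s\n", getFormattedTypeName(23));   /* served from the cache */
        return 0;
    }
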
Tomas Vondra
c8213aa949 Rename the role in stats_ext to have regress_ prefix
Commit 5be8ce82e8 added a new role to the stats_ext regression suite,
but the role name did not start with regress_ causing failures when
running with ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS. Fixed by
renaming the role to start with the expected regress_ prefix.

Backpatch-through: 10, same as the new regression test
Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
2021-08-31 19:36:03 +02:00
Tomas Vondra
1fe1a04af8 Fix lookup error in extended stats ownership check
When an ownership check on extended statistics object failed, the code
was calling aclcheck_error_type to report the failure, which is clearly
wrong, resulting in cache lookup errors. Fix by calling aclcheck_error.

This issue exists since the introduction of extended statistics, so
backpatch all the way back to PostgreSQL 10. It went unnoticed because
there were no tests triggering the error, so add one.

Reported-by: Mark Dilger
Backpatch-through: 10, where extended stats were introduced
Discussion: https://postgr.es/m/1F238937-7CC2-4703-A1B1-6DC225B8978A%40enterprisedb.com
2021-08-31 18:38:11 +02:00
Alvaro Herrera
6197d7b538 Report tuple address in data-corruption error message
Most data-corruption reports mention the location of the problem, but
this one failed to.  Add it.

Backpatch all the way back.  In 12 and older, also assign the
ERRCODE_DATA_CORRUPTED error code as was done in commit fd6ec93bf8 for
13 and later.

Discussion: https://postgr.es/m/202108191637.oqyzrdtnheir@alvherre.pgsql
2021-08-30 16:29:12 -04:00
Amit Kapila
8ba3bad4c3 Fix incorrect error code in StartupReplicationOrigin().
ERRCODE_CONFIGURATION_LIMIT_EXCEEDED was used for checksum failure, use
ERRCODE_DATA_CORRUPTED instead.

Reported-by: Tatsuhito Kasahara
Author: Tatsuhito Kasahara
Backpatch-through: 9.6, where it was introduced
Discussion: https://postgr.es/m/CAP0=ZVLHtYffs8SOWcFJWrBGoRzT9QQbk+_aP+E5AHLNXiOorA@mail.gmail.com
2021-08-30 09:26:49 +05:30
Alvaro Herrera
9a33ed8fa1 psql \dP: reference regclass with "pg_catalog." prefix
Strictly speaking this isn't a bug, but since all references to catalog
objects are schema-qualified, we might as well be consistent.  The
omission first appeared in commit 1c5d9270e3, so backpatch to 12.

Author: Justin Pryzby <pryzbyj@telsasoft.com>
Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
2021-08-28 11:45:47 -04:00
Noah Misch
b18669f5e6 Fix data loss in wal_level=minimal crash recovery of CREATE TABLESPACE.
If the system crashed between CREATE TABLESPACE and the next checkpoint,
the result could be some files in the tablespace unexpectedly containing
no rows.  Affected files would be those for which the system did not
write WAL; see the wal_skip_threshold documentation.  Before v13, a
different set of conditions governed the writing of WAL; see v12's
<sect2 id="populate-pitr">.  (The v12 conditions were broader in some
ways and narrower in others.)  Users may want to audit non-default
tablespaces for unexpected short files.  The bug could have truncated an
index without affecting the associated table, and reindexing the index
would fix that particular problem.

This fixes the bug by making create_tablespace_directories() more like
TablespaceCreateDbspace().  create_tablespace_directories() was
recursively removing tablespace contents, reasoning that WAL redo would
recreate everything removed that way.  That assumption holds for other
wal_level values.  Under wal_level=minimal, the old approach could
delete files for which no other copy existed.  Back-patch to 9.6 (all
supported versions).

Reviewed by Robert Haas and Prabhat Sahu.  Reported by Robert Haas.

Discussion: https://postgr.es/m/CA+TgmoaLO9ncuwvr2nN-J4VEP5XyAcy=zKiHxQzBbFRxxGxm0w@mail.gmail.com
2021-08-27 23:33:27 -07:00
Tom Lane
dbb239d518 Count SP-GiST index scans in pg_stat statistics.
Somehow, spgist overlooked the need to call pgstat_count_index_scan().
Hence, pg_stat_all_indexes.idx_scan and equivalent columns never
became nonzero for an SP-GiST index, although the related per-tuple
counters worked fine.

This fix works a bit differently from other index AMs, in that the
counter increment occurs in spgrescan not spggettuple/spggetbitmap.
It looks like this won't make the user-visible semantics noticeably
different, so I won't go to the trouble of introducing an is-this-
the-first-call flag just to make the counter bumps happen in the
same places.

Per bug #17163 from Christian Quest.  Back-patch to all supported
versions.

Discussion: https://postgr.es/m/17163-b8c5cc88322a5e92@postgresql.org
2021-08-27 19:42:42 -04:00
Robert Haas
bc062cb938 Fix broken snapshot handling in parallel workers.
Pengchengliu reported an assertion failure in a parallel worker while
performing a parallel scan using an overflowed snapshot. The proximate
cause is that TransactionXmin was set to an incorrect value.  The
underlying cause is incorrect snapshot handling in parallel.c.

In particular, InitializeParallelDSM() was unconditionally calling
GetTransactionSnapshot(), because I (rhaas) mistakenly thought that
was always retrieving an existing snapshot whereas, at isolation
levels less than REPEATABLE READ, it's actually taking a new one. So
instead do this only at higher isolation levels where there actually
is a single snapshot for the whole transaction.

By itself, this is not a sufficient fix, because we still need to
guarantee that TransactionXmin gets set properly in the workers. The
easiest way to do that seems to be to install the leader's active
snapshot as the transaction snapshot if the leader did not serialize a
transaction snapshot. This doesn't affect the results of future
GetTransactionSnapshot() calls since those have to take a new snapshot
anyway; what we care about is the side effect of setting TransactionXmin.

Report by Pengchengliu. Patch by Greg Nancarrow, except for some comment
text which I supplied.

Discussion: https://postgr.es/m/002f01d748ac$eaa781a0$bff684e0$@tju.edu.cn
2021-08-25 08:40:52 -04:00
Amit Kapila
794025eff0 Fix toast rewrites in logical decoding.
Commit 325f2ec555 introduced pg_class.relrewrite to skip operations on
tables created as part of a heap rewrite during DDL. It links such
transient heaps to the original relation OID via this new field in
pg_class but forgot to do anything about toast tables. So, logical
decoding was not able to skip operations on internally created toast
tables. This led to an error when we tried to decode the WAL for the
next operation, which appeared to have toast data when in fact it had
none.

To fix this, we set pg_class.relrewrite for internally created toast tables
as well, which allows skipping operations on them during logical decoding.

Author: Bertrand Drouvot
Reviewed-by: David Zhang, Amit Kapila
Backpatch-through: 11, where it was introduced
Discussion: https://postgr.es/m/b5146fb1-ad9e-7d6e-f980-98ed68744a7c@amazon.com
2021-08-25 09:23:27 +05:30
Fujii Masao
7d9026cbfd Avoid using ambiguous word "positive" in error message.
There are two identical error messages about the valid value of the modulus
for hash partitions in the PostgreSQL source code. Commit 0e1275fb07 improved
only one of them so that the ambiguous word "positive" was avoided there,
and forgot to improve the other. This commit improves the other, which
also reduces the translation burden.

Back-patch to v11 where the error message exists.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
2021-08-25 11:47:46 +09:00
Fujii Masao
81fa1bce27 Improve error message about valid value for distance in phrase operator.
The distance in the phrase operator must be an integer value between zero
and MAXENTRYPOS, inclusive. But previously the error message about its
valid value included the information about its upper limit but not its
lower limit (i.e., zero). This commit improves the error message so that
it also includes the information about its lower limit.

Back-patch to v9.6 where full-text phrase search was supported.

Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
2021-08-25 11:45:15 +09:00
Tom Lane
071146184a Fix regexp misbehavior with capturing parens inside "{0}".
Regexps like "(.){0}...\1" drew an "invalid backreference number".
That's not unreasonable on its face, since the capture group will
never be matched if it's iterated zero times.  However, other engines
such as Perl's don't complain about this, nor do we throw an error for
related cases such as "(.)|\1", even though that backref can never
succeed either.  Also, if the zero-iterations case happens at runtime
rather than compile time --- say, "(x)*...\1" when there's no "x" to
be found --- that's not an error, we just deem the backref to not
match.  Making this even less defensible, no error was thrown for
nested cases such as "((.)){0}...\2"; and to add insult to injury,
those cases could result in assertion failures instead.  (It seems
that nothing especially bad happened in non-assert builds, though.)

Let's just fix it so that no error is thrown and instead the backref
is deemed to never match, so that compile-time detection of no
iterations behaves the same as run-time detection.

Per report from Mark Dilger.  This appears to be an aboriginal error
in Spencer's library, so back-patch to all supported versions.

Pre-v14, it turns out to also be necessary to back-patch one aspect of
commits cb76fbd7e/00116dee5, namely to create capture-node subREs with
the begin/end states of their subexpressions, not the current lp/rp
of the outer parseqatom invocation.  Otherwise delsub complains that
we're trying to disconnect a state from itself.  This is a bit scary
but code examination shows that it's safe: in the pre-v14 code, if we
want to wrap iteration around the subexpression, the first thing we do
is overwrite the atom's begin/end fields with new states.  So the
bogus values didn't survive long enough to be used for anything, except
if no iteration is required, in which case it doesn't matter.

Discussion: https://postgr.es/m/A099E4A8-4377-4C64-A98C-3DEDDC075502@enterprisedb.com
2021-08-24 16:37:27 -04:00
Tom Lane
9a327179c8 Prevent regexp back-refs from sometimes matching when they shouldn't.
The recursion in cdissect() was careless about clearing match data
for capturing parentheses after rejecting a partial match.  This
could allow a later back-reference to succeed when by rights it
should fail for lack of a defined referent.

To fix, think a little more rigorously about what the contract
between different levels of cdissect's recursion needs to be.
With the right spec, we can fix this using fewer rather than more
resets of the match data; the key decision being that a failed
sub-match is now explicitly responsible for clearing any matches
it may have set.

There are enough other cross-checks and optimizations in the code
that it's not especially easy to exhibit this problem; usually, the
match will fail as-expected.  Plus, regexps that are even potentially
vulnerable are most likely user errors, since there's just not much
point in writing a back-ref that doesn't always have a referent.
These facts perhaps explain why the issue hasn't been detected,
even though it's almost certainly a couple of decades old.

Discussion: https://postgr.es/m/151435.1629733387@sss.pgh.pa.us
2021-08-23 17:41:07 -04:00
Alvaro Herrera
ad1231171f
Avoid creating archive status ".ready" files too early
WAL records may span multiple segments, but XLogWrite() does not
wait for the entire record to be written out to disk before
creating archive status files.  Instead, as soon as the last WAL page of
the segment is written, the archive status file is created, and the
archiver may process it.  If PostgreSQL crashes before it is able to
write and flush the rest of the record (in the next WAL segment), the
wrong version of the first segment file lingers in the archive, which
causes operations such as point-in-time restores to fail.

To fix this, keep track of records that span across segments and ensure
that segments are only marked ready-for-archival once such records have
been completely written to disk.

This has always been wrong, so backpatch all the way back.

Author: Nathan Bossart <bossartn@amazon.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505@amazon.com
2021-08-23 15:50:35 -04:00
Michael Paquier
29f9423251 Fix backup manifests to generate correct WAL-Ranges across timelines
In a backup manifest, WAL-Ranges stores the range of WAL that is
required for the backup to be valid.  pg_verifybackup would then
internally use pg_waldump for the checks based on this data.

When the timeline where the backup started was higher than 1, with a
history file looked at for the manifest data generation, the calculation
of the WAL range for the first timeline to check was incorrect.  The
previous logic used the start position of the first timeline as the
start LSN, but it needs to use the start LSN of the backup.  This would
cause failures with pg_verifybackup, or with any tools making use of the
backup manifests.

This commit adds a test based on a self-promoted node, making it rather
cheap.

Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210818.143031.1867083699202617521.horikyota.ntt@gmail.com
Backpatch-through: 13
2021-08-23 11:09:57 +09:00
Tom Lane
b30f7f399e Fix performance bug in regexp's citerdissect/creviterdissect.
After detecting a sub-match "dissect" failure (i.e., a backref match
failure) in the i'th sub-match of an iteration node, we should proceed
by adjusting the attempted length of the i'th submatch.  As coded,
though, these functions changed the attempted length of the *last*
sub-match, and only after exhausting all possibilities for that would
they back up to adjust the next-to-last sub-match, and then the
second-from-last, etc; all of which is wasted effort, since only
changing the start or length of the i'th sub-match can possibly make
it succeed.  This oversight creates the possibility for exponentially
bad performance.  Fortunately the problem is masked in most cases by
optimizations or constraints applied elsewhere; which explains why
we'd not noticed it before.  But it is possible to reach the problem
with fairly simple, if contrived, regexps.

Oversight in my commit 173e29aa5.  That's pretty ancient now,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/1808998.1629412269@sss.pgh.pa.us
2021-08-20 14:19:04 -04:00
Tom Lane
7fa367d96b Avoid trying to lock OLD/NEW in a rule with FOR UPDATE.
transformLockingClause neglected to exclude the pseudo-RTEs for
OLD/NEW when processing a rule's query.  This led to odd errors
or even crashes later on.  This bug is very ancient, but it's
not terribly surprising that nobody noticed, since the use-case
for SELECT FOR UPDATE in a non-view rule is somewhere between
thin and non-existent.  Still, crashing is not OK.

Per bug #17151 from Zhiyong Wu.  Thanks to Masahiko Sawada
for analysis of the problem.

Discussion: https://postgr.es/m/17151-c03a3e6e4ec9aadb@postgresql.org
2021-08-19 12:12:35 -04:00
Tom Lane
ecd4dd9f1d Fix check_agg_arguments' examination of aggregate FILTER clauses.
Recursion into the FILTER clause was mis-implemented, such that a
relevant Var or Aggref at the very top of the FILTER clause would
be ignored.  (Of course, that'd have to be a plain boolean Var or
boolean-returning aggregate.)  The consequence would be
mis-identification of the correct semantic level of the aggregate,
which could lead to not-per-spec query behavior.  If the FILTER
expression is an aggregate, this could also lead to failure to issue
an expected "aggregate function calls cannot be nested" error, which
would likely result in a core dump later on, since the planner and
executor aren't expecting such cases to appear.

The root cause is that commit b560ec1b0 blindly copied some code
that assumed it's recursing into a List, and thus didn't examine the
top-level node.  To forestall questions about why this call doesn't
look like the others, as well as possible future copy-and-paste
mistakes, let's change all three check_agg_arguments_walker calls in
check_agg_arguments, even though only the one for the filter clause
is really broken.

Per bug #17152 from Zhiyong Wu.  This has been wrong since we
implemented FILTER, so back-patch to all supported versions.
(Testing suggests that pre-v11 branches manage to avoid crashing
in the bad-Aggref case, thanks to "redundant" checks in ExecInitAgg.
But I'm not sure how thorough that protection is, and anyway the
wrong-behavior issue remains, so fix 9.6 and 10 too.)

Discussion: https://postgr.es/m/17152-c7f906cc1a88e61b@postgresql.org
2021-08-18 18:12:51 -04:00
Tom Lane
7b01246e1d Prevent ALTER TYPE/DOMAIN/OPERATOR from changing extension membership.
If recordDependencyOnCurrentExtension is invoked on a pre-existing,
free-standing object during an extension update script, that object
will become owned by the extension.  In our current code this is
possible in three cases:

* Replacing a "shell" type or operator.
* CREATE OR REPLACE overwriting an existing object.
* ALTER TYPE SET, ALTER DOMAIN SET, and ALTER OPERATOR SET.

The first of these cases is intentional behavior, as noted by the
existing comments for GenerateTypeDependencies.  It seems like
appropriate behavior for CREATE OR REPLACE too; at least, the obvious
alternatives are not better.  However, the fact that it happens during
ALTER is an artifact of trying to share code (GenerateTypeDependencies
and makeOperatorDependencies) between the CREATE and ALTER cases.
Since an extension script would be unlikely to ALTER an object that
didn't already belong to the extension, this behavior is not very
troubling for the direct target object ... but ALTER TYPE SET will
recurse to dependent domains, and it is very uncool for those to
become owned by the extension if they were not already.

Let's fix this by redefining the ALTER cases to never change extension
membership, full stop.  We could minimize the behavioral change by
only changing the behavior when ALTER TYPE SET is recursing to a
domain, but that would complicate the code and it does not seem like
a better definition.

Per bug #17144 from Alex Kozhemyakin.  Back-patch to v13 where ALTER
TYPE SET was added.  (The other cases are older, but since they only
affect the directly-named object, there's not enough of a problem to
justify changing the behavior further back.)

Discussion: https://postgr.es/m/17144-e67d7a8f049de9af@postgresql.org
2021-08-17 14:29:22 -04:00
Daniel Gustafsson
e15f32f0ed Set type identifier on BIO
In OpenSSL there are two types of BIO's (I/O abstractions):
source/sink and filters. A source/sink BIO is a source and/or
sink of data, ie one acting on a socket or a file. A filter
BIO takes a stream of input from another BIO and transforms it.
In order for BIO_find_type() to be able to traverse the chain
of BIO's and correctly find all BIO's of a certain type they
shall have the type bit set accordingly, source/sink BIO's
(what PostgreSQL implements) use BIO_TYPE_SOURCE_SINK and
filter BIO's use BIO_TYPE_FILTER. In addition to these, file
descriptor based BIO's should have the descriptor bit set,
BIO_TYPE_DESCRIPTOR.
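
For illustration, a rough sketch of registering a custom source/sink
BIO method with those type bits set (assuming OpenSSL 1.1.0+ APIs; this
is not the literal patch):

    #include <openssl/bio.h>

    /* Hypothetical registration of a descriptor-based source/sink BIO. */
    static BIO_METHOD *
    example_socket_bio_method(void)
    {
        int index = BIO_get_new_index();

        if (index == -1)
            return NULL;

        /* OR in the type bits so BIO_find_type() can identify this BIO. */
        return BIO_meth_new(index | BIO_TYPE_SOURCE_SINK | BIO_TYPE_DESCRIPTOR,
                            "example socket BIO");
    }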

The PostgreSQL implementation didn't set the type bits, which
went unnoticed for a long time as it's only really relevant
for code auditing the OpenSSL installation, or doing similar
tasks. It is required by the API though, so this fixes it.

Backpatch through 9.6 as this has been wrong for a long time.

Author: Itamar Gafni
Discussion: https://postgr.es/m/SN6PR06MB39665EC10C34BB20956AE4578AF39@SN6PR06MB3966.namprd06.prod.outlook.com
Backpatch-through: 9.6
2021-08-17 14:31:00 +02:00
Michael Paquier
7f0873f328 Refresh apply delay on reload of recovery_min_apply_delay at recovery
This commit ensures that the wait interval in the replay delay loop
waiting for an amount of time defined by recovery_min_apply_delay is
correctly handled on reload, recalculating the delay if this GUC value
is updated, based on the timestamp of the commit record being replayed.

The previous behavior was problematic, for example with replay still
waiting even if the delay got reduced or was just cancelled.  If the
apply delay was increased to a larger value, the wait would have
respected only the old value, finishing earlier than expected.

Author: Soumyadeep Chakraborty, Ashwin Agrawal
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/CAE-ML+93zfr-HLN8OuxF0BjpWJ17O5dv1eMvSE5jsj9jpnAXZA@mail.gmail.com
Backpatch-through: 9.6
2021-08-16 12:11:53 +09:00
Tom Lane
48695decc2 Add RISC-V spinlock support in s_lock.h.
Like the ARM case, just use gcc's __sync_lock_test_and_set();
that will compile into AMOSWAP.W.AQ which does what we need.
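
For illustration, a minimal sketch of such gcc-builtin-based primitives
(not the exact s_lock.h contents):

    /* Hypothetical TAS primitives built on the gcc __sync builtins. */
    typedef unsigned int slock_t;

    static inline int
    tas(volatile slock_t *lock)
    {
        /* Atomically store 1 and return the previous value; non-zero
         * means somebody else already holds the lock. */
        return (int) __sync_lock_test_and_set(lock, 1);
    }

    static inline void
    s_unlock(volatile slock_t *lock)
    {
        __sync_lock_release(lock);
    }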

At some point it might be worth doing some work on atomic ops
for RISC-V, but this should be enough for a creditable port.

Back-patch to all supported branches, just in case somebody
wants to try them on RISC-V.

Marek Szuba

Discussion: https://postgr.es/m/dea97b6d-f55f-1f6d-9109-504aa7dfa421@gentoo.org
2021-08-13 13:59:06 -04:00
David Rowley
4873da79da Fix incorrect hash table resizing code in simplehash.h
This fixes a bug in simplehash.h which caused an incorrect size mask to be
used when the hash table grew to SH_MAX_SIZE (2^32).  The code was
incorrectly setting the size mask to 0 when the hash tables reached the
maximum possible number of buckets.  This would result in always trying
to use the 0th bucket, causing an infinite loop of trying to grow the
hash table due to there being too many collisions.
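
As a simplified illustration of the fix (not the actual simplehash.h
code), the bucket mask must stay at size - 1 even at the maximum table
size, which means deriving it from a 64-bit size:

    #include <stdint.h>

    /* Hypothetical sizing helper; MAX_BUCKETS stands in for SH_MAX_SIZE. */
    #define MAX_BUCKETS (((uint64_t) 1) << 32)

    static void
    compute_params(uint64_t newsize, uint64_t *size, uint32_t *sizemask)
    {
        *size = newsize;

        /*
         * Compute the mask from the 64-bit size.  For newsize == 2^32
         * this yields 0xFFFFFFFF, instead of the broken 0 that forced
         * every entry into bucket 0.
         */
        *sizemask = (uint32_t) (newsize - 1);
    }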

Seemingly it's not that common for simplehash tables to ever grow this big
as this bug dates back to v10 and nobody seems to have noticed it before.
However, probably the most likely place that people would notice it would
be doing a large in-memory Hash Aggregate with something close to at least
2^31 groups.

After this fix, the code now works correctly with up to about 98% of 2^32
groups and will fail with the following error when trying to insert any
more items into the hash table:

ERROR:  hash table size exceeded

However, the work_mem (or hash_mem_multiplier in newer versions) settings
will generally cause Hash Aggregates to spill to disk long before reaching
that many groups.  The minimal test case I did took a work_mem setting of
over 192GB to hit the bug.

simplehash hash tables are used in a few other places, such as Bitmap
Index Scans; however, there again the size that the hash table can grow
to is also limited by work_mem, and it would take a relation of around
16TB (2^31 pages) and a very large work_mem setting to hit this.  With smaller
work_mem values the table would become lossy and never grow large enough
to hit the problem.

Author: Yura Sokolov
Reviewed-by: David Rowley, Ranier Vilela
Discussion: https://postgr.es/m/b1f7f32737c3438136f64b26f4852b96@postgrespro.ru
Backpatch-through: 10, where simplehash.h was added
2021-08-13 16:42:35 +12:00
Thomas Munro
2c62754235 Make EXEC_BACKEND more convenient on macOS.
It's hard to disable ASLR on current macOS releases, for testing with
-DEXEC_BACKEND.  You could already set the environment variable
PG_SHMEM_ADDR to something not likely to collide with mappings created
earlier in process startup.  Let's also provide a default value that
works on current releases and architectures, for developer convenience.

As noted in the pre-existing comment, this is a horrible hack, but
-DEXEC_BACKEND is only used by Unix-based PostgreSQL developers for
testing some otherwise Windows-only code paths, so it seems excusable.

Back-patch to all supported branches.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20210806032944.m4tz7j2w47mant26%40alap3.anarazel.de
2021-08-13 11:11:38 +12:00
Peter Eisentraut
dc10035ecc Translation updates
Source-Git-URL: git://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 9bb123c161ac8f773572e112ced524b99e81c1d9
2021-08-09 11:56:40 +02:00
Tom Lane
ba9f665a44 Really fix the ambiguity in REFRESH MATERIALIZED VIEW CONCURRENTLY.
Rather than trying to pick table aliases that won't conflict with
any possible user-defined matview column name, adjust the queries'
syntax so that the aliases are only used in places where they can't be
mistaken for column names.  Mostly this consists of writing "alias.*"
not just "alias", which adds clarity for humans as well as machines.
We do have the issue that "SELECT alias.*" acts differently from
"SELECT alias", but we can use the same hack ruleutils.c uses for
whole-row variables in SELECT lists: write "alias.*::compositetype".

We might as well revert to the original aliases after doing this;
they're a bit easier to read.

Like 75d66d10e, back-patch to all supported branches.

Discussion: https://postgr.es/m/2488325.1628261320@sss.pgh.pa.us
2021-08-07 13:29:32 -04:00
Dean Rasheed
da188b9934 Adjust the integer overflow tests in the numeric code.
Formerly, the numeric code tested whether an integer value of a larger
type would fit in a smaller type by casting it to the smaller type and
then testing if the reverse conversion produced the original value.
That's perfectly fine, except that it caused a test failure on
buildfarm animal castoroides, most likely due to a compiler bug.

Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That
matches existing code in other places, such as int84(), which is more
widely tested, and so is less likely to go wrong.

While at it, add regression tests covering the numeric-to-int8/4/2
conversions, and adjust the recently added tests to the style of
434ddfb79a (on the v11 branch) to make failures easier to diagnose.

Per buildfarm via Tom Lane, reviewed by Tom Lane.

Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us
2021-08-06 21:31:20 +01:00
Peter Eisentraut
d3ad6566a1 Fix wording 2021-08-06 22:05:41 +02:00
Dean Rasheed
a72ad63154 Fix division-by-zero error in to_char() with 'EEEE' format.
This fixes a long-standing bug when using to_char() to format a
numeric value in scientific notation -- if the value's exponent is
less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a
division-by-zero error.

The reason for this error was that get_str_from_var_sci() divides its
input by 10^exp, which it produced using power_var_int(). However, the
underflow test in power_var_int() causes it to return zero if the
result scale is too small. That's not a problem for power_var_int()'s
only other caller, power_var(), since that limits the rscale to 1000,
but in get_str_from_var_sci() the exponent can be much smaller,
requiring a much larger rscale. Fix by introducing a new function to
compute 10^exp directly, with no rscale limit. This also allows 10^exp
to be computed more efficiently, without any numeric multiplication,
division or rounding.

Discussion: https://postgr.es/m/CAEZATCWhojfH4whaqgUKBe8D5jNHB8ytzemL-PnRx+KCTyMXmg@mail.gmail.com
2021-08-05 09:29:13 +01:00
Bruce Momjian
47a573d911 C comment: correct heading of extension query
Reported-by: Justin Pryzby

Discussion: https://postgr.es/m/20210803161345.GZ12533@telsasoft.com

Backpatch-through: 9.6
2021-08-03 12:26:08 -04:00
Bruce Momjian
a81c71e3a8 pg_upgrade: warn about extensions that need updating
Also create a script that can be run to update them.

Reported-by: Dave Cramer

Discussion: https://postgr.es/m/CADK3HHKawwbOcGwMGnDuAf3-U8YfvTcS8jqDv3UM=niijs3MMA@mail.gmail.com

Backpatch-through: 9.6
2021-08-03 11:58:15 -04:00
Tom Lane
93f99693f9 Use elog, not Assert, to report failure to provide an outer snapshot.
As of commit 84f5c2908, executing SQL commands (via SPI or otherwise)
requires having either an active Portal, or a caller-established
active snapshot.  We were simply Assert'ing that that's the case.
But we've now had a couple different reports of people testing
extensions that didn't meet this requirement, and were confused by
the resulting crash.  Let's convert the Assert to a test-and-elog,
in hopes of making the issue clearer for extension authors.
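
A short sketch of the pattern being described (the message text here is
illustrative):

    #include "postgres.h"
    #include "utils/snapmgr.h"      /* ActiveSnapshotSet() */

    static void
    check_outer_snapshot(void)
    {
        /* Previously: Assert(ActiveSnapshotSet()); only effective in
         * assert-enabled builds.  Now report the problem everywhere. */
        if (!ActiveSnapshotSet())
            elog(ERROR, "cannot execute SQL without an outer snapshot or portal");
    }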

Per gripes from Liu Huailing and RekGRpth.  Back-patch to v11,
like the prior commit.

Discussion: https://postgr.es/m/OSZPR01MB6215671E3C5956A034A080DFBEEC9@OSZPR01MB6215.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/17035-14607d308ac8643c@postgresql.org
2021-07-31 11:50:14 -04:00
Dean Rasheed
053ec4e0c4 Fix corner-case errors and loss of precision in numeric_power().
This fixes a couple of related problems that arise when raising
numbers to very large powers.

Firstly, when raising a negative number to a very large integer power,
the result should be well-defined, but the previous code would only
cope if the exponent was small enough to go through power_var_int().
Otherwise it would throw an internal error, attempting to take the
logarithm of a negative number. Fix this by adding suitable handling
to the general case in power_var() to cope with negative bases,
checking for integer powers there.

Next, when raising a (positive or negative) number whose absolute
value is slightly less than 1 to a very large power, the result should
approach zero as the power is increased. However, in some cases, for
sufficiently large powers, this would lose all precision and return 1
instead of 0. This was due to the way that the local_rscale was being
calculated for the final full-precision calculation:

  local_rscale = rscale + (int) val - ln_dweight + 8

The first two terms on the right hand side are meant to give the
number of significant digits required in the result ("val" being the
estimated result weight). However, this failed to account for the fact
that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE
(1000), and the result weight might be less than -1000, causing their
sum to be negative, leading to a loss of precision. Fix this by
forcing the number of significant digits calculated to be nonnegative.
It's OK for it to be zero (when the result weight is less than -1000),
since the local_rscale value then includes a few extra digits to
ensure an accurate result.

Finally, add additional underflow checks to exp_var() and power_var(),
so that they consistently return zero for cases like this where the
result is indistinguishable from zero. Some paths through this code
already returned zero in such cases, but others were throwing overflow
errors.

Dean Rasheed, reviewed by Yugo Nagata.

Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6Su0bCR0w@mail.gmail.com
2021-07-31 11:25:39 +01:00
John Naylor
171bf1cea5 Fix range check in ECPG numeric to int conversion
The previous coding guarded against -INT_MAX instead of INT_MIN,
leading to -2147483648 being rejected as out of range.
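
A minimal sketch of the corrected guard (generic, not the actual ECPG
conversion code):

    #include <limits.h>

    /* Hypothetical check before narrowing a wider value to int. */
    static int
    fits_in_int(long long v)
    {
        /* The buggy form, v < -INT_MAX, would reject INT_MIN (-2147483648). */
        return v >= INT_MIN && v <= INT_MAX;
    }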

Per bug #17128 from Kevin Sweet

Discussion: https://www.postgresql.org/message-id/flat/17128-55a8a879727a3e3a%40postgresql.org
Reviewed-by: Tom Lane
Backpatch to all supported branches
2021-07-30 16:18:59 -04:00
Alvaro Herrera
41d27ee7b8
Close yet another race condition in replication slot test code
Buildfarm shows that this test has a further failure mode when a
checkpoint starts earlier than expected, so we detect a "checkpoint
completed" line that's not the one we want.  Change the config to try
and prevent this.

Per buildfarm

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20210729.162038.534808353849568395.horikyota.ntt@gmail.com
2021-07-29 17:26:25 -04:00
Michael Paquier
efe169c900 Add missing exit() in pg_verifybackup when failing to find pg_waldump
By default, pg_verifybackup needs pg_waldump to check the range of WAL
segments required for a backup, unless --no-parse-wal is specified.
The code checked for the presence of the pg_waldump binary in an
installation and reported an error if it was missing, but forgot to
properly exit().  This could lead to confusing errors being reported.

Reviewed-by: Robert Haas, Fabien Coelho
Discussion: https://postgr.es/m/YQDMdB+B68yePFeT@paquier.xyz
Backpatch-through: 13
2021-07-29 11:00:00 +09:00
Fujii Masao
a66b05b422 Update minimum recovery point on truncation during WAL replay of abort record.
If a file is truncated, we must update minRecoveryPoint. Once a file is
truncated, there's no going back; it would not be safe to stop recovery
at a point earlier than that anymore.

Commit 7bffc9b7bf changed xact_redo_commit() so that it updates
minRecoveryPoint on truncation, but forgot to change xact_redo_abort().

Back-patch to all supported versions.

Reported-by: mengjuan.cmj@alibaba-inc.com
Author: Fujii Masao
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/b029fce3-4fac-4265-968e-16f36ff4d075.mengjuan.cmj@alibaba-inc.com
2021-07-29 01:34:13 +09:00
Alvaro Herrera
b8f91d7f92
Set pg_setting.pending_restart when pertinent config lines are removed
This changes the behavior of examining the pg_file_settings view after
changing a config option that requires a restart.  The user needs to
know that any change to such options does not take effect until a
restart, and this worked correctly if the line was edited without being
removed.  However, when the line is removed altogether, the flag does
not get set, because the flag was only set in set_config_option, which
is not called for removed lines.  Repair.

(Ref.: commits 62d16c7fc5 and a486e35706)

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/202107262302.xsfdfc5sb7sh@alvherre.pgsql
2021-07-27 15:44:12 -04:00
Fujii Masao
92913fc290 Avoid using ambiguous word "non-negative" in error messages.
The error messages using the word "non-negative" are confusing
because it is ambiguous whether zero is accepted or not.
This commit improves those error messages by replacing that word with
less ambiguous wording such as "greater than zero" or
"greater than or equal to zero".

This commit also adds a note about the word "non-negative" to
the error message style guide, to help with writing new error messages.

When postgres_fdw option fetch_size was set to zero, previously
the error message "fetch_size requires a non-negative integer value"
was reported. This error message was outright buggy. Therefore
back-patch to all supported versions where such a buggy error message
could be thrown.

Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
2021-07-28 01:21:52 +09:00
Bruce Momjian
0a5e708e2b pg_resetxlog: add option to set oldest xid & use by pg_upgrade
Add pg_resetxlog -u option to set the oldest xid in pg_control.
Previously -x set this value to be -2 billion less than the -x value.
However, this causes the server to immediately scan every relation's
relfrozenxid so it can advance pg_control's oldest xid to be inside the
autovacuum_freeze_max_age range, which is inefficient and might disrupt
diagnostic recovery.  pg_upgrade will use this option to better create
the new cluster to match the old cluster.

Reported-by: Jason Harvey, Floris Van Nee

Discussion: https://postgr.es/m/20190615183759.GB239428@rfd.leadboat.com, 87da83168c644fd9aae38f546cc70295@opammb0562.comp.optiver.com

Author: Bertrand Drouvot

Backpatch-through: 9.6
2021-07-26 22:38:14 -04:00
Michael Paquier
2c7395aad7 Fix a couple of memory leaks in src/bin/pg_basebackup/
These have been introduced by 7fbe0c8, and could happen for
pg_basebackup and pg_receivewal.

Per report from Coverity for the ones in walmethods.c, I have spotted
the ones in receivelog.c after more review.

Backpatch-through: 10
2021-07-26 11:14:11 +09:00
Tom Lane
2b8f3f5a7c Get rid of artificial restriction on hash table sizes on Windows.
The point of introducing the hash_mem_multiplier GUC was to let users
reproduce the old behavior of hash aggregation, i.e. that it could use
more than work_mem at need.  However, the implementation failed to get
the job done on Win64, where work_mem is clamped to 2GB to protect
various places that calculate memory sizes using "long int".  As
written, the same clamp was applied to hash_mem.  This resulted in
severe performance regressions for queries requiring a bit more than
2GB for hash aggregation, as they now spill to disk and there's no
way to stop that.
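
For context, a tiny standalone demonstration of why the clamp exists on
Win64, where "long" remains 32 bits even in 64-bit builds:

    #include <stdio.h>

    int
    main(void)
    {
        /* Typically prints 4 and 8 on 64-bit Windows, 8 and 8 on 64-bit
         * Unix-like systems.  Sizes computed in "long" therefore cannot
         * exceed 2GB on Win64, while size_t can. */
        printf("sizeof(long) = %zu, sizeof(size_t) = %zu\n",
               sizeof(long), sizeof(size_t));
        return 0;
    }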

Getting rid of the work_mem restriction seems like a good idea, but
it's a big job and could not conceivably be back-patched.  However,
there's only a fairly small number of places that are concerned with
the hash_mem value, and it turns out to be possible to remove the
restriction there without too much code churn or any ABI breaks.
So, let's do that for now to fix the regression, and leave the
larger task for another day.

This patch does introduce a bit more infrastructure that should help
with the larger task, namely pg_bitutils.h support for working with
size_t values.

Per gripe from Laurent Hasson.  Back-patch to v13 where the
behavior change came in.

Discussion: https://postgr.es/m/997817.1627074924@sss.pgh.pa.us
Discussion: https://postgr.es/m/MN2PR15MB25601E80A9B6D1BA6F592B1985E39@MN2PR15MB2560.namprd15.prod.outlook.com
2021-07-25 14:02:27 -04:00
Fujii Masao
8d091922ff Make the standby server promptly handle interrupt signals.
This commit changes the startup process in the standby server so that
it handles the interrupt signals after waiting for wal_retrieve_retry_interval
on the latch and resetting it, before entering another wait on the latch.
This change causes the standby server to promptly handle interrupt signals.

Previously, there was a case where the standby needed to wait an extra
five seconds to shut down when the shutdown request arrived while the
startup process was waiting for wal_retrieve_retry_interval on the
latch.

Author: Fujii Masao, but implementation idea is from Soumyadeep Chakraborty
Reviewed-by: Soumyadeep Chakraborty
Discussion: https://postgr.es/m/9d7e6ab0-8a53-ddb9-63cd-289bcb25fe0e@oss.nttdata.com

Per discussion of BUG #17073, back-patch to all supported versions.
Discussion: https://postgr.es/m/17073-1a5fdaed0fa5d4d0@postgresql.org
2021-07-25 11:15:30 +09:00
Tom Lane
f47408cdc1 Fix check for conflicting session- vs transaction-level locks.
We have an implementation restriction that PREPARE TRANSACTION can't
handle cases where both session-lifespan and transaction-lifespan locks
are held on the same lockable object.  (That's because we'd otherwise
need to acquire a new PROCLOCK entry during post-prepare cleanup, which
is an operation that might fail.  The situation can only arise with odd
usages of advisory locks, so removing the restriction is probably not
worth the amount of effort it would take.)  AtPrepare_Locks attempted
to enforce this, but its logic was many bricks shy of a load, because
it only detected cases where the session and transaction locks had the
same lockmode.  Locks of different modes on the same object would lead
to the rather unhelpful message "PANIC: we seem to have dropped a bit
somewhere".

To fix, build a transient hashtable with one entry per locktag,
not one per locktag + mode, and use that to detect conflicts.

Per bug #17122 from Alexander Pyhalov.  This bug is ancient,
so back-patch to all supported branches.

Discussion: https://postgr.es/m/17122-04f3c32098a62233@postgresql.org
2021-07-24 18:35:52 -04:00
Tom Lane
c0a6f83deb Make printf("%s", NULL) print "(null)" instead of crashing.
We previously took a hard-line attitude that callers should never print
a null string pointer, and doing so is worthy of an assertion failure
or crash.  However, we've long since flushed out any easy-to-find bugs
of that nature.  What remains is a lot of code that perhaps could fail
that way in hard-to-reach corner cases.  For example, in something as
simple as
    ereport(ERROR,
            (errcode(ERRCODE_UNDEFINED_OBJECT),
             errmsg("constraint \"%s\" for table \"%s\" does not exist",
                    conname, get_rel_name(relid))));
one must wonder whether it's completely guaranteed that get_rel_name
cannot return NULL in this context.  If such a situation did occur,
the existing policy converts what might be a pretty minor bug into
a server crash condition.  This is not good for robustness.

Hence, let's follow the lead of glibc and print "(null)" instead
of failing.  We should, of course, still consider it a bug if that
behavior is reachable in ordinary use; but crashing seems less
desirable than not crashing.
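
A tiny example of the resulting behavior (assuming src/port/snprintf.c
or a glibc-style libc; standard C leaves this undefined):

    #include <stdio.h>

    int
    main(void)
    {
        const char *relname = NULL;     /* e.g. a lookup that found nothing */

        /* With the fix this prints: table "(null)" does not exist */
        printf("table \"%s\" does not exist\n", relname);
        return 0;
    }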

This fix works across-the-board in v12 and up, where we always use
src/port/snprintf.c.  Before that, on most platforms we're at the mercy
of the local libc, but it appears that Solaris 10 is the only supported
platform where we'd still get a crash.  Most other platforms such as
*BSD, macOS, and Solaris 11 have adopted glibc's behavior at some
point.  (AIX and HPUX just print "" not "(null)", but that's close
enough.)  I've not checked what Windows' native printf would do, but
it doesn't matter because we've long used snprintf.c on that platform.

In v12 and up, also const-ify related code so that we're not casting
away const on the constant string.  This is just neatnik-ism, since
next to no compilers will warn about that.

Discussion: https://postgr.es/m/17098-b960f3616c861f83@postgresql.org
2021-07-24 13:41:17 -04:00
Thomas Munro
1cfc9f2ef6 jit: Don't inline functions that access thread-locals.
Code inlined by LLVM can crash or fail with "Relocation type not
implemented yet!" if it tries to access thread local variables.  Don't
inline such code.
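
An illustrative (hypothetical) example of the kind of function that must
now be excluded from inlining:

    /* A function whose body touches a thread-local variable. */
    static __thread long executed_statements;

    long
    count_statement(void)
    {
        /*
         * If LLVM inlined this into JIT-compiled code, resolving the
         * thread-local reference could fail at runtime, so such
         * functions are left un-inlined.
         */
        return ++executed_statements;
    }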

Back-patch to 11, where LLVM arrived.  Bug #16696.

Author: Dmitry Marakasov <amdmi3@amdmi3.ru>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/16696-29d944a33801fbfe@postgresql.org
2021-07-22 15:07:52 +12:00
John Naylor
0b5dc1f171 Document "B" and "us" as accepted units in postgresql.conf.sample
In postgresql.conf, memory and file size GUCs can be specified with "B"
(bytes) as of b06d8e58b. Likewise, time GUCs can be specified with "us"
(microseconds) as of caf626b2c. Update postgresql.conf.sample to reflect
that fact.

Pavel Luzanov

Backpatch to v12, which is the earliest version that allows both of
these units. A separate commit will document the "B" case for v11.

Discussion: https://www.postgresql.org/message-id/flat/f10d16fc-8fa0-1b3c-7371-cb3a35a13b7a%40postgrespro.ru
2021-07-21 10:18:42 -04:00
Tom Lane
0fce76b99d Fix corner-case uninitialized-variable issues in plpgsql.
If an error was raised during our initial attempt to check whether
a successfully-compiled expression is "simple", subsequent calls of
exec_stmt_execsql would suppose that stmt->mod_stmt was already computed
when it had not been.  This could lead to assertion failures in debug
builds; in production builds the effect would typically be to act as
if INTO STRICT had been specified even when it had not been.  Of course
that only matters if the subsequent attempt to execute the expression
succeeds, so that the problem can only be reached by fixing a failure
in some referenced, inline-able SQL function and then retrying the
calling plpgsql function in the same session.

(There might be even-more-obscure ways to change the expression's
behavior without changing the plpgsql function, but that one seems
like the only one people would be likely to hit in practice.)

The most foolproof way to fix this would be to arrange for
exec_prepare_plan to not set expr->plan until we've finished the
subsidiary simple-expression check.  But it seems hard to do that
without creating reference-count leak issues.  So settle for documenting
the hazard in a comment and fixing exec_stmt_execsql to test separately
for whether it has computed stmt->mod_stmt.  (That adds a test-and-branch
per execution, but hopefully that's negligible in context.)  In v11 and
up, also fix exec_stmt_call which had a variant of the same issue.

Per bug #17113 from Alexander Lakhin.  Back-patch to all
supported branches.

Discussion: https://postgr.es/m/17113-077605ce00e0e7ec@postgresql.org
2021-07-20 13:01:48 -04:00
Michael Paquier
fb2b86015a Fix some issues with WAL segment opening for pg_receivewal --compress
The logic handling the opening of new WAL segments was fuzzy when using
--compress if a partial, non-compressed, segment with the same base name
existed in the repository storing those files.  In this case, using
--compress would cause the code to first check for the existence and the
size of a non-compressed segment, followed by the opening of a new
compressed, partial, segment.  The code was accidentally working
correctly on most platforms as the buildfarm has proved, except
bowerbird where gzflush() could fail in this code path.  It is wrong
anyway to take the code path used for pre-padding when creating a new
partial, non-compressed segment, so let's fix it.

Note that this issue exists when users mix successive runs of
pg_receivewal with or without compression, as discovered with the tests
introduced by ffc9dda.

While on it, this refactors the code so that code paths that need to know
about the ".gz" suffix are down from four to one in walmethods.c, easing
a bit the introduction of new compression methods.  This addresses a
second issue where log messages generated for an unexpected failure
would not show the compressed segment name involved, which was
confusing, printing instead the name of the non-compressed equivalent.

Reported-by: Georgios Kokolatos
Discussion: https://postgr.es/m/YPDLz2x3o1aX2wRh@paquier.xyz
Backpatch-through: 10
2021-07-20 12:12:51 +09:00
Alvaro Herrera
ce413eba41
Make new replication slot test code even less racy
Further fix the test code in ead9e51e82, this time by waiting until
the checkpoint has completed before moving on; this ensures that the
WAL segment removal has already happened when we create the next slot.

Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20210719.111318.2042379313472032754.horikyota.ntt@gmail.com
2021-07-19 17:21:07 -04:00
Amit Kapila
bfa2a926da Don't allow to set replication slot_name as ''.
Creating a replication slot with an empty name ('') is not allowed via the
SQL API pg_create_logical_replication_slot(), but it was still possible to
set slot_name to '' via the ALTER SUBSCRIPTION command. This led to the
apply worker repeatedly trying to stream data via the slot_name '', even
though the user is not allowed to create a slot with that name.

Author: Japin Li
Reviewed-By: Ranier Vilela, Amit Kapila
Backpatch-through: 10, where it was introduced
Discussion: https://postgr.es/m/MEYP282MB1669CBD98E721C77CA696499B61A9@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2021-07-19 11:04:21 +05:30
Alvaro Herrera
7099ba0580
Make new replication slot test code less racy
The new test code added in ead9e51e82 is racy -- it hinges on
shared-memory state, which changes before the WARNING message is logged.
Put it the other way around.

Backpatch to 13.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/202107161809.zclasccpfcg3@alvherre.pgsql
2021-07-17 13:19:17 -04:00
Alvaro Herrera
cc340af334
Fix pg_dump for disabled triggers on partitioned tables
pg_dump failed to preserve the 'enabled' flag (which can be not only
disabled, but also REPLICA or ALWAYS) for partitions which had it
changed from their respective parents.  Attempt to handle that by
including a definition for such triggers in the dump, but replace the
standard CREATE TRIGGER line with an ALTER TRIGGER line.

Backpatch to 11, where these triggers can exist.  In branches 11 and 12,
pick up a few test lines from commit b9b408c487 to verify that
pg_upgrade is okay with these arrangements.

Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com
2021-07-16 17:29:22 -04:00
Alvaro Herrera
c31516ae5b
Preserve firing-on state when cloning row triggers to partitions
When triggers are cloned from partitioned tables to their partitions,
the 'tgenabled' flag (origin/replica/always/disable) was not propagated.
Make it so that the flag on the partition's trigger is initially set to
the same value as on the partitioned table.

Add a test case to verify the behavior.

Backpatch to 11, where this appeared in commit 86f575948c.

Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200930223450.GA14848@telsasoft.com
2021-07-16 13:01:43 -04:00