Commit Graph


Author SHA1 Message Date
Tom Lane fd2abeb7c2 Delete contrib/xml2's legacy implementation of xml_is_well_formed().
This function is unreferenced in modern usage; it was superseded in 9.1
by a core function of the same name.  It has been left in place in the C
code only so that pre-9.1 SQL definitions of the contrib/xml2 functions
would continue to work.  Eleven years seems like enough time for people
to have updated to the extension-style version of the xml2 module, so
let's drop this.
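For reference, the core replacement is used the same way; a minimal sketch
(assuming a build with libxml):

SELECT xml_is_well_formed('<doc><a>x</a></doc>');   -- true
SELECT xml_is_well_formed('<doc><a>x</doc>');       -- false, mismatched tag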

We did this once before, in 20540710e, and then reverted it because
the intended change of PGDLLEXPORT markings didn't happen.  This
time the reason is to suppress link-time duplicate-symbol warnings
on AIX.  That's not worth a lot perhaps, but the value of keeping
this function has surely dropped to about zero by now.

Discussion: https://postgr.es/m/2717731.1644778752@sss.pgh.pa.us
2022-02-14 10:56:19 -05:00
Heikki Linnakangas 50e5bc582a Add test case for an archive recovery corner case.
While I was working on a patch to refactor things around xlog.c, I mixed
up EndOfLogTLI and replayTLI at the end of recovery. As a result, if you
recovered to a point with a lower-numbered timeline in a WAL segment
that has a higher TLI in the filename, the end-of-recovery WAL record
was created with invalid PrevTimeLineId. I noticed that while
self-reviewing, but no tests failed. So add a test to cover that corner
case.

Thanks to Amul Sul who also submitted a test case for the same corner
case, although this patch is different from that.

Reviewed-by: Amul Sul, Michael Paquier
Discussion: https://www.postgresql.org/message-id/52bc9ccd-8591-431b-0086-15d9acf25a3f@iki.fi
Discussion: https://www.postgresql.org/message-id/CAAJ_b94Vjt5cXGza_1MkjLQWciNdEemsmiWuQj0d%3DM7JfjAa1g%40mail.gmail.com
2022-02-14 11:33:57 +02:00
Peter Eisentraut 1383d52faa Add missing node support functions
forgotten in 37851a8b83
2022-02-14 09:11:13 +01:00
Peter Eisentraut 37851a8b83 Database-level collation version tracking
This adds to database objects the same version tracking that collation
objects have.  There is a new pg_database column datcollversion that
stores the version, a new function
pg_database_collation_actual_version() to get the version from the
operating system, and a new subcommand ALTER DATABASE ... REFRESH
COLLATION VERSION.
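A rough sketch of how the new pieces fit together (the database name "mydb"
is invented for illustration):

SELECT datname, datcollversion,
       pg_database_collation_actual_version(oid) AS os_version
FROM pg_database WHERE datname = 'mydb';

ALTER DATABASE mydb REFRESH COLLATION VERSION;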

This was not originally added together with pg_collation.collversion,
since at that time version tracking was only supported for ICU, and ICU
is not currently supported at the database level.  But we now have
version tracking for glibc (since PG13), FreeBSD (since PG14), and
Windows (since PG13), so this is useful to have now.

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/f0ff3190-29a3-5b39-a179-fa32eee57db6%40enterprisedb.com
2022-02-14 08:27:26 +01:00
Peter Eisentraut 9898c5e03c Improve correlation names in sanity tests
Some of the queries in the "sanity" tests in the regression test suite
(opr_sanity, type_sanity) are very confusing.  One main stumbling
block is that for some probably ancient reason many of the older
queries are written with correlation names p1, p2, etc. independent of
the name of the catalog. This one is a good example:

SELECT p1.oid, p1.oprname, p2.oid, p2.proname
FROM pg_operator AS p1, pg_proc AS p2          <-- HERE
WHERE p1.oprcode = p2.oid AND
    p1.oprkind = 'l' AND
    (p2.pronargs != 1
     OR NOT binary_coercible(p2.prorettype, p1.oprresult)
     OR NOT binary_coercible(p1.oprright, p2.proargtypes[0])
     OR p1.oprleft != 0);

This is better written as

SELECT o1.oid, o1.oprname, p1.oid, p1.proname
FROM pg_operator AS o1, pg_proc AS p1
WHERE o1.oprcode = p1.oid AND
    o1.oprkind = 'l' AND
    (p1.pronargs != 1
     OR NOT binary_coercible(p1.prorettype, o1.oprresult)
     OR NOT binary_coercible(o1.oprright, p1.proargtypes[0])
     OR o1.oprleft != 0);

This patch cleans up all the queries in this manner.

(As in the above case, I kept the digits like o1 and p1 even in cases
where only one of each letter is used in a query.  This is mainly to
keep the style consistent.)

Discussion: https://www.postgresql.org/message-id/flat/c538308b-319c-8784-e250-1284d12d5411%40enterprisedb.com
2022-02-14 07:11:51 +01:00
Thomas Munro cba5b994c9 Use WL_SOCKET_CLOSED for client_connection_check_interval.
Previously we used poll() directly to check for a POLLRDHUP event.
Instead, use the WaitEventSet API to poll the socket for
WL_SOCKET_CLOSED, which knows how to detect this condition on many more
operating systems.
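The behavior is still driven by the existing GUC; a minimal sketch of how
one might exercise it:

SET client_connection_check_interval = '10s';
SELECT pg_sleep(600);   -- aborted early once the socket is reported closed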

Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: Maksim Milyutin <milyutinma@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/77def86b27e41f0efcba411460e929ae%40postgrespro.ru
2022-02-14 16:52:23 +13:00
Thomas Munro 50e570a59e Add WL_SOCKET_CLOSED for socket shutdown events.
Provide a way for WaitEventSet to report that the remote peer has shut
down its socket, independently of whether there is any buffered data
remaining to be read.  This works only on systems where the kernel
exposes that information, namely:

* WAIT_USE_POLL builds using POLLRDHUP, if available
* WAIT_USE_EPOLL builds using EPOLLRDHUP
* WAIT_USE_KQUEUE builds using EV_EOF

Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: Maksim Milyutin <milyutinma@gmail.com>
Discussion: https://postgr.es/m/77def86b27e41f0efcba411460e929ae%40postgrespro.ru
2022-02-14 16:52:23 +13:00
Amit Kapila 5e01001ffb WAL log unchanged toasted replica identity key attributes.
Currently, during UPDATE, the unchanged replica identity key attributes
are not logged separately because they are getting logged as part of the
new tuple. But if they are stored externally then the untoasted values are
not getting logged as part of the new tuple and logical replication won't
be able to replicate such UPDATEs. So we need to log such attributes as
part of the old_key_tuple during UPDATE.
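A hypothetical scenario matching the description above (table and values
invented for illustration; the primary key is the default replica identity):

CREATE TABLE t (key text PRIMARY KEY, counter int);
ALTER TABLE t ALTER COLUMN key SET STORAGE EXTERNAL;  -- force out-of-line storage
INSERT INTO t VALUES (repeat('x', 1000000), 0);       -- key value is toasted
UPDATE t SET counter = counter + 1;                   -- key unchanged; its untoasted
                                                      -- value now goes into old_key_tuple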

Reported-by: Haiying Tang
Author: Dilip Kumar and Amit Kapila
Reviewed-by: Alvaro Herrera, Haiying Tang, Andres Freund
Backpatch-through: 10
Discussion: https://postgr.es/m/OS0PR01MB611342D0A92D4F4BF26C0F47FB229@OS0PR01MB6113.jpnprd01.prod.outlook.com
2022-02-14 08:55:58 +05:30
Thomas Munro 0052fb4890 Track LLVM 15 changes.
This isn't an API change, it's just a missing #include that we got away
with before.  Per buildfarm animal seawasp.
2022-02-14 15:51:43 +13:00
John Naylor b19a7e392a Correct Makefile dependencies for catalog scripts
At some point, Gen_fmgrtab.pl stopped needing the value of defined symbols
from access/transam.h, while genbki.pl started doing so. The Makefiles
didn't get the memo, so update the relevant dependencies.
2022-02-14 09:07:09 +07:00
Michael Paquier 1d084fba1b Add ./configure check for "lz4" command
Some environments may compile with --with-lz4 while the command "lz4"
goes missing, causing two failures in the TAP tests of pg_verifybackup
(008_untar.pl and 010_client_untar.pl) as the code assumed that the
command always existed with a hardcoded value in src/Makefile.global.
Rather than this method, this adds a ./configure check based on
PGAC_PATH_PROGS() to find the command automatically and get an absolute
path to it.

Both tests need to be adjusted for the case where the command does not
exist, actually, as Makefile.global would now set LZ4 to an empty value
in this case.  The TAP tests of pg_receivewal already do that.

Per report from buildfarm member copperhead, as an effect of dab2984.
The origin of the failure is actually babbbb5, which did not centralize
the check for the existence of a "lz4" command at ./configure to shave a
few cycles.  Note that one just needs to tweak an environment to move
"lz4" out of the way to reproduce the problem, which is what I did to
test this change.

Per discussion with Robert Haas, Tom Lane, Andres Freund and myself.

Discussion: https://postgr.es/m/Ygc51WVAFGocSu4h@paquier.xyz
2022-02-14 10:40:34 +09:00
Alexander Korotkov 3f74daa8df Fix memory leak in IndexScan node with reordering
Fix ExecReScanIndexScan() to free the referenced tuples while emptying the
priority queue.  Backpatch to all supported versions.

Discussion: https://postgr.es/m/CAHqSB9gECMENBQmpbv5rvmT3HTaORmMK3Ukg73DsX5H7EJV7jw%40mail.gmail.com
Author: Aliaksandr Kalenik
Reviewed-by: Tom Lane, Alexander Korotkov
Backpatch-through: 10
2022-02-14 04:17:04 +03:00
Michael Paquier c963e84fb8 Make origin data initialization consistent with other fields in 2PC header
As of 1eb6d65, the origin data is optionally stored in a 2PC file
header, with the data filled in EndPrepare() even in the default case
where there is no origin data to add.  This was inconsistent with all
the other fields of TwoPhaseFileHeader which are initialized in
StartPrepare(), so move the initialization of origin_lsn and
origin_timestamp there instead.  The effect of missing the
initialization at this early stage is only cosmetic based on the current
logic of the code, but it could have led to issues in the long term, and
doing it this way is more consistent.

Reported-by: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAooECJ+gU_RZB-yhioPOV94R4ucoHAf68PiJhLpgpVpBw@mail.gmail.com
2022-02-14 09:30:35 +09:00
Tom Lane 994d76707a Fix misuse of "const" qualifier.
"const foo *" is quite different from "foo * const".
This code was evidently trying to avoid casting away
const from the arguments, but entirely failed to do so.

Per study of some buildfarm warnings from anole
(which unfortunately are mostly ignorable, since it
seems not to understand "restrict" very well).
I'm surprised though that nothing else has complained.
2022-02-13 19:20:56 -05:00
Thomas Munro 7e6124ca7d Remove REGRESS_OUTPUTDIR environment variable.
Andres Freund points out that the tmp_check path is already available as
perl variable PostgreSQL::Test::Utils::tmp_check, so we can drop the new
environment variable introduced by commit f47ed79cc.

Discussion: https://postgr.es/m/20220213052955.dh7lheehit7bsemf%40alap3.anarazel.de
2022-02-14 12:52:57 +13:00
Tom Lane 302612a6c7 Silence minor compiler warnings.
Depending on compiler version and optimization level, we might
get a complaint that lazy_scan_heap's "freespace" is used
uninitialized.

Compilers not aware that ereport(ERROR) doesn't return complained
about bbsink_lz4_new().

Assigning "-1" to a uint64 value has unportable results; fortunately,
the value of xlogreadsegno is unimportant when xlogreadfd is -1.
(It looks to me like there is no need for xlogreadsegno to be static
in the first place, but I didn't venture to change that.)
2022-02-13 13:06:55 -05:00
Andres Freund f79c94f39d ci: fix copy-paste mistake in 16eb8231d1.
Reported-By: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220213024704.GN31460@telsasoft.com
2022-02-12 19:38:06 -08:00
Andres Freund 0a156058aa ci: macos: align sysinfo_script to other tasks.
Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220203035827.GG23027@telsasoft.com
2022-02-12 15:53:30 -08:00
Andres Freund 16eb8231d1 ci: Only use one artifact instruction for logs.
Having the different types of logs in one artifacts instruction makes it
quicker to navigate between them. Either that didn't use to be possible,
or I just did it wrong; I saw how it works in a patch by Justin Pryzby.

Also remove a few unnecessary uses of **, suggested by Justin Pryzby.

Discussion: https://postgr.es/m/20220203195718.smqo5xg4ygp5qktq@alap3.anarazel.de
2022-02-12 15:52:36 -08:00
Andres Freund 79b79dd57a ci: s/CCACHE_SIZE/CCACHE_MAXSIZE/.
The wrong environment variable was used in one place.

Author: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20220203035827.GG23027@telsasoft.com
2022-02-12 15:52:36 -08:00
Tomas Vondra b779d7d8fd Fix skip-empty-xacts with sequences in test_decoding
Regression tests need to use skip-empty-xacts = true, because there
might be accidental concurrent activity (like autovacuum), particularly
on slow machines. The tests added by 80901b3291 failed to do that in a
couple places, triggering occasional failures on buildfarm.

Fixing the tests however uncovered a bug in the code, because sequence
callbacks did not handle skip-empty-xacts properly. For transactional
increments we need to check/update the xact_wrote_changes flag, and emit
the BEGIN if it's the first change in the transaction.

Reported-by: Andres Freund
Discussion: https://postgr.es/m/20220212220413.b25amklo7t4xb7ni%40alap3.anarazel.de
2022-02-12 23:50:42 +01:00
Tom Lane faa189c932 Move libpq's write_failed mechanism down to pqsecure_raw_write().
Commit 1f39a1c06 implemented write-failure postponement in pqSendSome,
which is above SSL/GSS processing.  However, we've now seen failures
indicating that (some versions of?) OpenSSL have a tendency to report
write failures prematurely too.  Hence, move the primary responsibility
for postponing write failures down to pqsecure_raw_write(), below
SSL/GSS processing.  pqSendSome now sets write_failed only in corner
cases where we'd lost the connection already.

A side-effect of this change is that errors detected in the SSL/GSS
layer itself will be reported immediately (as if they were read
errors) rather than being postponed like write errors.  That's
reverting an effect of 1f39a1c06, and I think it's fine: if there's
not a socket-level error, it's hard to be sure whether an OpenSSL
error ought to be considered a read or write failure anyway.

Another important point is that write-failure postponement is now
effective during connection setup.  OpenSSL's misbehavior of this
sort occurs during SSL_connect(), so that's a change we want.

Per bug #17391 from Nazir Bilal Yavuz.  Possibly this should be
back-patched, but I think it prudent to let it age awhile in HEAD
first.

Discussion: https://postgr.es/m/17391-304f81bcf724b58b@postgresql.org
2022-02-12 14:00:09 -05:00
Tom Lane 335fa5a260 Fix thinko in PQisBusy().
In commit 1f39a1c06 I made PQisBusy consider conn->write_failed, but
that is now looking like complete brain fade.  In the first place, the
logic is quite wrong: it ought to be like "and not" rather than "or".
This meant that once we'd gotten into a write_failed state, PQisBusy
would always return true, probably causing the calling application to
iterate its loop until PQconsumeInput returns a hard failure thanks
to connection loss.  That's not what we want: the intended behavior
is to return an error PGresult, which the application probably has
much cleaner support for.

But in the second place, checking write_failed here seems like the
wrong thing anyway.  The idea of the write_failed mechanism is to
postpone handling of a write failure until we've read all we can from
the server; so that flag should not interfere with input-processing
behavior.  (Compare 7247e243a.)  What we *should* check for is
status = CONNECTION_BAD, ie, socket already closed.  (Most places that
close the socket don't touch asyncStatus, but they do reset status.)
This primarily ensures that if PQisBusy() returns true then there is
an open socket, which is assumed by several call sites in our own
code, and probably other applications too.

While at it, fix a nearby thinko in libpq's my_sock_write: we should
only consult errno for res < 0, not res == 0.  This is harmless since
pqsecure_raw_write would force errno to zero in such a case, but it
still could confuse readers.

Noted by Andres Freund.  Backpatch to v12 where 1f39a1c06 came in.

Discussion: https://postgr.es/m/20220211011025.ek7exh6owpzjyudn@alap3.anarazel.de
2022-02-12 13:23:20 -05:00
Michael Paquier 52377bb81a Revert "Add TAP test to automate the equivalent of check_guc"
This reverts commit b0a55f4, to remove for now the TAP test that did the
equivalent of check_guc.  The test has been using pg_config --sharedir
to find the location of postgresql.conf.sample.  While the buildfarm and
normal build environments rather liked that, this proves to be an issue
for Debian where pg_config is patched to not be relocatable, causing the
test to fail.

Rather than relying on pg_config, we'd better find the sample file based
on its location from the source directory.  However, this is also an
issue as a TAP test only offers the build directory as of TESTDIR in the
environment context, so this would fail with VPATH builds.  Perhaps the
source path could be provided additionally when running the TAP tests.
Or perhaps we may be able to get away with just switching to a SQL
approach using PG_ABS_SRCDIR, but this is going to require some extra
hoops to get the sample file from the correct path in src/backend/.  In
any case, this needs more thought, so just revert the test case until
something better is done about this relocation problem.

Reported-by: Christopher Berg
Discussion: https://postgr.es/m/YgYw25OXV5men8Fj@msg.df7cb.de
2022-02-12 12:53:59 +09:00
Peter Geoghegan efa4a9462a Consolidate VACUUM xid cutoff logic.
Push the logic for determining whether or not a VACUUM operation will be
aggressive down into vacuum_set_xid_limits().  This makes the function's
signature significantly simpler, and seems clearer overall.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzkymFbz6D_vL+jmqSn_5q1wsFvFrE+37yLgL_Rkfd6Gzg@mail.gmail.com
2022-02-11 18:26:15 -08:00
Peter Geoghegan 872770fd6c Add VACUUM instrumentation for scanned pages, relfrozenxid.
Report on scanned pages within VACUUM VERBOSE and autovacuum logging.
These are pages that were physically examined during the VACUUM
operation.  Note that this can include a small number of pages that were
marked all-visible in the visibility map by some earlier VACUUM
operation.  VACUUM won't skip all-visible pages that aren't part of a
range of all-visible pages that's at least 32 blocks in length (partly
to avoid missing out on opportunities to advance relfrozenxid during
non-aggressive VACUUMs).

Commit 44fa8488 simplified the definition of scanned pages.  It became
the complement of the pages (of those pages from rel_pages) that were
skipped using the visibility map.  And so scanned pages precisely
indicates how effective the visibility map was at saving work.  (Before
now we displayed the number of pages skipped via the visibility map when
they happened to be frozen pages, but not when they were merely all-visible,
which was less useful to users.)

Rename the user-visible OldestXmin output field to "removal cutoff", and
show some supplementary information: how far behind the cutoff is
(number of XIDs behind) by the time the VACUUM operation finished.  This
will help users to figure out what's _not_ working in extreme cases
where VACUUM is fundamentally unable to remove dead tuples or freeze
older tuples (e.g., due to a leaked replication slot).  Also report when
relfrozenxid is advanced by VACUUM in output that immediately follows
"removal cutoff".  This structure is intended to highlight the
relationship between the new relfrozenxid value for the table, and the
VACUUM operation's removal cutoff.

Finally, add instrumentation of "missed dead tuples", and the number of
pages that had at least one such tuple.  These are fully DEAD (not just
RECENTLY_DEAD) tuples with storage that could not be pruned due to
failure to acquire a cleanup lock on a heap page.  This is a replacement
for the "skipped due to pin" instrumentation removed by commit 44fa8488.
It shows more details than before for pages where failing to get a
cleanup lock actually resulted in VACUUM missing out on useful work, but
usually shows nothing at all instead (the mere fact that we couldn't get
a cleanup lock is usually of no consequence whatsoever now).
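All of the above appears in the ordinary per-table output; for example
(table name invented):

VACUUM (VERBOSE) mytable;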

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-Wznp=c=Opj8Z7RMR3G=ec3_JfGYMN_YvmCEjoPCHzWbx0g@mail.gmail.com
2022-02-11 16:48:40 -08:00
Tomas Vondra 80901b3291 Add decoding of sequences to test_decoding
Commit 0da92dc530 improved the logical decoding infrastructure to handle
sequences, and made various changes to related parts (WAL logging etc.).
But it did not include any implementation of the new callbacks added to
OutputPluginCallbacks.

This extends test_decoding with two callbacks to decode sequences. The
decoding of sequences may be disabled using 'include-sequences', a new
option of the output plugin.
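For example (a sketch assuming a slot named "regression_slot" created with
the test_decoding plugin), sequence decoding can be turned off per call:

SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL,
                                              'include-sequences', '0');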

Author: Tomas Vondra, Cary Huang
Reviewed-by: Peter Eisentraut, Hannu Krosing, Andres Freund
Discussion: https://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com
Discussion: https://postgr.es/m/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
2022-02-12 00:51:46 +01:00
Peter Geoghegan 44fa84881f Simplify lazy_scan_heap's handling of scanned pages.
Redefine a scanned page as any heap page that actually gets pinned by
VACUUM's first pass over the heap, regardless of whether or not the page
was cleanup locked.  Although it's fundamentally impossible to prune a
heap page without a cleanup lock (since we cannot safely defragment the
page), we can do just about everything else.  The only notable further
exception is freezing tuples, though even that is arguably a consequence
of not being able to prune (not a separate issue).

VACUUM now does as much of the same processing as possible for pages
that could not be cleanup locked.  Any failure to do specific required
processing is treated as a special case exception, which will be rare in
practice.  We now collect any preexisting LP_DEAD items (left behind by
earlier opportunistic pruning) in the dead_items array for these heap
pages, and count their tuples in the usual way.  Steps used to decide if
we'll attempt relation truncation are performed in the usual way for
no-cleanup-lock scanned pages, too.

Although eliminating these special cases is intrinsically useful, it's
even more useful as an enabler of further simplifications.  The only
essential difference between aggressive and non-aggressive is that only
aggressive is _guaranteed_ to be able to advance relfrozenxid up to
FreezeLimit.  Advancing relfrozenxid is always useful, but before now
non-aggressive VACUUMs threw away the opportunity to do so whenever a
cleanup lock could not be acquired on any page, no matter what the
details were.  This was very pessimistic.

It isn't actually necessary to "behave aggressively" to maintain the
ability to advance relfrozenxid when a cleanup lock isn't immediately
available (most of the time).  The non-aggressive case will now make
sure that it isn't unsafe to advance relfrozenxid (without waiting) using
only a share lock.  It will usually notice that there are no tuples that
need to be frozen anyway, just like in the aggressive case -- and so it
no longer wastes an opportunity to advance relfrozenxid over nothing.
(The non-aggressive case still won't wait for a cleanup lock when there
really are tuples on the page that need to be frozen, since that really
would amount to "behaving aggressively".)

VACUUM currently has a tendency to set heap pages to all-visible in the
visibility map before it freezes all of the tuples on the page.  Only a
subsequent aggressive VACUUM will visit these pages to freeze their
tuples, usually only when the tuple XIDs are much older than the
vacuum_freeze_min_age GUC (FreezeLimit cutoff) is supposed to allow.
And so non-aggressive VACUUMs are still far less likely to be able to
advance relfrozenxid in practice, even with the enhancements from this
commit.  This remaining issue will be addressed by future work that
overhauls the criteria for freezing tuples.  Once that's in place,
almost every VACUUM operation will be able to advance relfrozenxid in
practice.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wznp=c=Opj8Z7RMR3G=ec3_JfGYMN_YvmCEjoPCHzWbx0g@mail.gmail.com
2022-02-11 14:32:17 -08:00
Thomas Munro 4eb2176318 Fix DROP {DATABASE,TABLESPACE} on Windows.
Previously, it was possible for DROP DATABASE, DROP TABLESPACE and ALTER
DATABASE SET TABLESPACE to fail because other backends still had file
handles open for dropped tables.  Windows won't allow a directory
containing unlinked-but-still-open files to be unlinked.  Tackle this
problem by forcing all backends to close all smgr fds.  No change for
Unix systems, which don't suffer from the problem, but the new code path
can be tested by Unix-based developers by defining
USE_BARRIER_SMGRRELEASE explicitly.

It's possible that PROCSIGNAL_BARRIER_SMGRRELEASE will have more
bug-fixing applications soon (under discussion).  Note that this is the
first user of the ProcSignalBarrier mechanism from commit 16a4e4aec.  It
could in principle be back-patched as far as 14, but since field
complaints are rare and ProcSignalBarrier hasn't been battle-tested,
that seems like a bad idea.  Fix in master only, where these failures
have started to show up in automated testing due to new tests.

Suggested-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA+hUKGLdemy2gBm80kz20GTe6hNVwoErE8KwcJk6-U56oStjtg@mail.gmail.com
2022-02-12 10:21:23 +13:00
Tom Lane e5691cc917 Don't use_physical_tlist for an IOS with non-returnable columns.
createplan.c tries to save a runtime projection step by specifying
a scan plan node's output as being exactly the table's columns, or
index's columns in the case of an index-only scan, if there is not a
reason to do otherwise.  This logic did not previously pay attention
to whether an index's columns are returnable.  That worked, sort of
accidentally, until commit 9a3ddeb51 taught setrefs.c to reject plans
that try to read a non-returnable column.  I have no desire to loosen
setrefs.c's new check, so instead adjust use_physical_tlist() to not
try to optimize this way when there are non-returnable column(s).

Per report from Ryan Kelly.  Like the previous patch, back-patch
to all supported branches.

Discussion: https://postgr.es/m/CAHUie24ddN+pDNw7fkhNrjrwAX=fXXfGZZEHhRuofV_N_ftaSg@mail.gmail.com
2022-02-11 15:24:02 -05:00
Daniel Gustafsson 549ec201d6 Replace Test::More plans with done_testing
Rather than doing manual bookkeeping to plan the number of tests to run
in each TAP suite, conclude each run with done_testing() summing up the
number of tests that ran. This removes the need for maintaining and
updating the plan count, at the expense of an accurate count of remaining
tests during the test suite runtime.

This patch has been discussed a number of times, often in the context of
other patches which updates tests, so a larger number of discussions can
be found in the archives.

Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/DD399313-3D56-4666-8079-88949DAC870F@yesql.se
2022-02-11 20:54:44 +01:00
Robert Haas 751b8d23b7 pg_basebackup: Allow client-side LZ4 (de)compression.
LZ4 compression can now be performed on the client using
pg_basebackup -Ft --compress client-lz4, and LZ4 decompression of
a backup compressed on the server can be performed on the client
using pg_basebackup -Fp --compress server-lz4.

Dipesh Pandit, reviewed and tested by Jeevan Ladhe and Tushar Ahuja,
with a few corrections - and some documentation - by me.

Discussion: http://postgr.es/m/CAN1g5_FeDmiA9D8wdG2W6Lkq5CpubxOAqTmd2et9hsinTJtsMQ@mail.gmail.com
2022-02-11 09:41:42 -05:00
Robert Haas dab298471f Add support for server-side LZ4 base backup compression.
LZ4 compression can be a lot faster than gzip compression, so users
may prefer it even if the compression ratio is not as good. We will
want pg_basebackup to support LZ4 compression and decompression on the
client side as well, and there is a pending patch for that, but it's
by a different author, so I am committing this part separately for
that reason.

Jeevan Ladhe, reviewed by Tushar Ahuja and by me.

Discussion: http://postgr.es/m/CANm22Cg9cArXEaYgHVZhCnzPLfqXCZLAzjwTq7Fc0quXRPfbxA@mail.gmail.com
2022-02-11 08:29:38 -05:00
Tom Lane a745b93650 Make pg_ctl stop/restart/promote recheck postmaster aliveness.
"pg_ctl stop/restart" checked that the postmaster PID is valid just
once, as a side-effect of sending the stop signal, and then would
wait-till-timeout for the postmaster.pid file to go away.  This
neglects the case wherein the postmaster dies uncleanly after we
signal it.  Similarly, once "pg_ctl promote" has sent the signal,
it'd wait for the corresponding on-disk state change to occur
even if the postmaster dies.

I'm not sure how we've managed not to notice this problem, but it
seems to explain slow execution of the 017_shm.pl test script on AIX
since commit 4fdbf9af5, which added a speculative "pg_ctl stop" with
the idea of making real sure that the postmaster isn't there.  In the
test steps that kill-9 and then restart the postmaster, it's possible
to get past the initial signal attempt before kill() stops working
for the doomed postmaster.  If that happens, pg_ctl waited till
PGCTLTIMEOUT before giving up ... and the buildfarm's AIX members
have that set very high.

To fix, include a "kill(pid, 0)" test (similar to what
postmaster_is_alive uses) in these wait loops, so that we'll
give up immediately if the postmaster PID disappears.

While here, I chose to refactor those loops out of where they were.
do_stop() and do_restart() can perfectly well share one copy of the
wait-for-stop loop, and it seems desirable to put a similar function
beside that for wait-for-promote.

Back-patch to all supported versions, since pg_ctl's wait logic
is substantially identical in all, and we're seeing the slow test
behavior in all branches.

Discussion: https://postgr.es/m/20220210023537.GA3222837@rfd.leadboat.com
2022-02-10 16:49:39 -05:00
Andrew Dunstan ea09a75e1e Use gendef instead of pexports for building windows .def files
Modern msys systems lack pexports but have gendef instead, so use that.

Discussion: https://postgr.es/m/3ccde7a9-e4f9-e194-30e0-0936e6ad68ba@dunslane.net

Backpatch to release 9.4 to enable building with perl on older branches.
Before that pexports is not used for plperl.
2022-02-10 13:44:05 -05:00
Tomas Vondra 0da92dc530 Logical decoding of sequences
This extends the logical decoding to also decode sequence increments.
We differentiate between sequences created in the current (in-progress)
transaction, and sequences created earlier. This mixed behavior is
necessary because while sequences are not transactional (increments are
not subject to ROLLBACK), relfilenode changes are. So we do this:

* Changes for sequences created in the same top-level transaction are
  treated as transactional, i.e. just like any other change from that
  transaction, and discarded in case of a rollback.

* Changes for sequences created earlier are applied immediately, as if
  performed outside any transaction. This applies also after ALTER
  SEQUENCE, which may create a new relfilenode.

Moreover, if we ever get support for DDL replication, the sequence
won't exist until the transaction gets applied.

Sequences created in the current transaction are tracked in a simple
hash table, identified by a relfilenode. That means a sequence may
already exist, but if a transaction does ALTER SEQUENCE then the
increments for the new relfilenode will be treated as transactional.
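A hypothetical session illustrating the two cases described above:

CREATE SEQUENCE s1;
SELECT nextval('s1');    -- s1 predates the transaction: decoded non-transactionally

BEGIN;
CREATE SEQUENCE s2;
SELECT nextval('s2');    -- s2 created in this transaction: decoded transactionally,
ROLLBACK;                -- so the increment is discarded with the rollback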

For each relfilenode we track the XID of (sub)transaction that created
it, which is needed for cleanup at transaction end. We don't need to
check the XID to decide if an increment is transactional - if we find a
match in the hash table, it has to be the same transaction.

This requires two minor changes to WAL-logging. Firstly, we need to
ensure the sequence record has a valid XID - until now the increment
might have XID 0 if it was the first change in a subxact. But the
sequence might have been created in the same top-level transaction. So
we ensure the XID is assigned when WAL-logging increments.

The other change is the addition of a "created" flag, marking increments
for newly created relfilenodes. This makes it easier to maintain the hash
table of sequences that need transactional handling.
Note: This is needed because of subxacts. An increment with XID 0 might
still belong to a top-level transaction that created the sequence in a
different subxact.

This does not include any changes to test_decoding and/or the built-in
replication - those will be committed in separate patches.

A patch adding decoding of sequences was originally submitted by Cary
Huang. This commit reworks various important aspects (e.g. the WAL
logging and transactional/non-transactional handling). However, the
original patch and reviews were very useful.

Author: Tomas Vondra, Cary Huang
Reviewed-by: Peter Eisentraut, Hannu Krosing, Andres Freund
Discussion: https://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com
Discussion: https://postgr.es/m/1710ed7e13b.cd7177461430746.3372264562543607781@highgo.ca
2022-02-10 18:43:51 +01:00
Robert Haas 0d4513b613 Remove server support for the previous base backup protocol.
Commit cc333f3233 added a new COPY
sub-protocol for taking base backups, but retained support for the
previous protocol. For the same reasons articulated in the message
for commit 9cd28c2e5f, remove support
for the previous protocol from the server.

Discussion: http://postgr.es/m/CA+TgmoazKcKUWtqVa0xZqSzbKgTH+X-aw4V7GyLD68EpDLMh8A@mail.gmail.com
2022-02-10 12:12:43 -05:00
Tom Lane d37776e451 Make timeout.c more robust against missed timer interrupts.
Commit 09cf1d522 taught schedule_alarm() to not do anything if
the next requested event is after when we expect the next interrupt
to fire.  However, if somehow an interrupt gets lost, we'll continue
to not do anything indefinitely, even after the "next interrupt" time
is obviously in the past.  Thus, one missed interrupt can break
timeout scheduling for the life of the session.  Michael Harris
reported a scenario where a bug in a user-defined function caused this
to happen, so you don't even need to assume kernel bugs exist to think
this is worth fixing.  We can make things more robust at little cost
by detecting the case where signal_due_at is before "now" and forcing
a new setitimer call to occur.  This isn't a completely bulletproof
fix of course; but in our typical usage pattern where we frequently set
timeouts and clear them before they are reached, the interrupt will
get re-enabled after at most one timeout interval, which with a little
luck will be before we really need it.

While here, let's mark signal_due_at as volatile, since the signal
handler can both examine and set it.  I'm not sure there's any
actual risk given that signal_pending is already volatile, but
it's surely questionable.

Backpatch to v14 where this logic came in.

Michael Harris and Tom Lane

Discussion: https://postgr.es/m/CADofcAWbMrvgwSMqO4iG_iD3E2v8ZUrC-_crB41my=VMM02-CA@mail.gmail.com
2022-02-10 11:52:28 -05:00
Robert Haas 9cd28c2e5f Remove server support for old BASE_BACKUP command syntax.
Commit 0ba281cb4b added a new syntax
for the BASE_BACKUP command, with extensible options, but maintained
support for the legacy syntax. This isn't important for PostgreSQL,
where pg_basebackup works with older server versions but not newer
ones, but it could in theory matter for out-of-core users of the
replication protocol.

Discussion on pgsql-hackers, however, suggests that no one is aware
of any out-of-core use of the BASE_BACKUP command, and the consensus
is in favor of removing support for the old syntax to simplify the
code, so do that.

Discussion: http://postgr.es/m/CA+TgmoazKcKUWtqVa0xZqSzbKgTH+X-aw4V7GyLD68EpDLMh8A@mail.gmail.com
2022-02-10 10:48:33 -05:00
Daniel Gustafsson 6d503d2a47 Set SNI ClientHello extension to localhost in tests
The connection strings in the SSL client tests were using the host
set up from Cluster.pm which is a temporary pathname. When SNI is
enabled we pass the host to OpenSSL in order to set the server name
indication ClientHello extension via SSL_set_tlsext_host_name.

OpenSSL doesn't validate the hostname apart from checking the max
length, but LibreSSL checks for RFC 5890 conformance which results
in errors during testing as the pathname from Cluster.pm is not a
valid hostname.

Fix by setting the host explicitly to localhost, as that's closer
to the intent of the test.

Backpatch through 14 where SNI support came in.

Reported-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/17391-304f81bcf724b58b@postgresql.org
Backpatch-through: 14
2022-02-10 14:23:36 +01:00
Peter Eisentraut 4c5c41b4d9 Remove unnecessary resetPQExpBuffer call
Oversight in e2c52beecd.

Author: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/20220209025007.eogz2aivcnvw46ym%40jrouhaud
2022-02-10 12:23:40 +01:00
Peter Eisentraut b9a3139397 psql: Rename results to result when only a single one is meant
This makes the naming more consistent with the libpq API and the rest
of the code, and makes actually supporting multiple result sets in the
future less confusing.

Discussion: https://www.postgresql.org/message-id/flat/db72fb98-9b43-d776-7247-6ed38f28e7c6%40enterprisedb.com
2022-02-10 12:12:52 +01:00
Peter Eisentraut f5744f1d1e Update comment
Update a comment that assumed that libc collations don't support
versioning.  Also improve an adjacent error message a bit.
2022-02-10 09:16:17 +01:00
Fujii Masao 400fc6b648 Add min() and max() aggregates for xid8.
Bump catalog version.
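A quick check of the new aggregates (temp table invented for illustration;
run the inserts as separate statements so each gets its own xid):

CREATE TEMP TABLE xids (x xid8);
INSERT INTO xids SELECT pg_current_xact_id();
INSERT INTO xids SELECT pg_current_xact_id();
SELECT min(x), max(x) FROM xids;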

Author: Ken Kato
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/47d77b18c44f87f8222c4c7a3e2dee6b@oss.nttdata.com
2022-02-10 12:33:41 +09:00
Noah Misch adbd00f7a5 Use Test::Builder::todo_start(), replacing $::TODO.
Some pre-2017 Test::More versions need perfect $Test::Builder::Level
maintenance to find the variable.  Buildfarm member snapper reported an
overall failure that the file intended to hide via the TODO construct.
That trouble was reachable in v11 and v10.  For later branches, this
serves as defense in depth.  Back-patch to v10 (all supported versions).

Discussion: https://postgr.es/m/20220202055556.GB2745933@rfd.leadboat.com
2022-02-09 18:16:59 -08:00
Michael Paquier 0147fc7c8c Fix typo in multixact.c
Introduced in aa64f23.

Author: Nathan Bossart
Discussion: https://postgr.es/m/20220209175338.GB1627503@nathanxps13
2022-02-10 10:45:14 +09:00
Michael Paquier 4567596316 Reduce more the number of calls to GetMaxBackends()
Some of the code paths changed by aa64f23 can reduce the number of times
GetMaxBackends() is called.  The performance gain is marginal, but most
of the code changed by this commit already did that.  Hence, let's be
clean and apply the same rule everywhere, for consistency.

Some of the code paths, like in deadlock.c, involve only assertions.
These are left unchanged.

Reviewed-by: Nathan Bossart, Robert Haas
Discussion: https://postgr.es/m/YgMpGZhPOjNfS7er@paquier.xyz
2022-02-10 10:27:29 +09:00
Tom Lane f0cd9097cf Further tweaks for psql's new tab-completion logic.
The behavior I proposed, of matching case only when only keywords
are available to complete, turns out to be too cute.  It adds about
as many problems as it removes.  Simplify down to ilmari's original
proposal of just always matching case when completing a keyword.

Also, I noticed while testing this that we've pessimized the behavior
for qualified GUC names: the code is insisting that they be
double-quoted, which was not the case before.  Fix that by treating
GUC names as verbatim matches instead of possibly-schema-qualified
names.  (While it's tempting to try to split qualified GUC names
so that we *could* treat them with the schema-qualified-name code
path, that really isn't going to work in light of guc.c's willingness
to allow more than two name components.)

Dagfinn Ilmari Mannsåker and Tom Lane

Discussion: https://postgr.es/m/445692.1644018081@sss.pgh.pa.us
2022-02-09 17:06:21 -05:00
Tom Lane c5f5b4dd4b Test honestly for <sys/signalfd.h>.
Commit 6a2a70a02 supposed that any platform having <sys/epoll.h>
would also have <sys/signalfd.h>.  It turns out there are still a
few people using platforms where that's not so, so we'd better make
a separate configure probe for it.  But since it took this long to
notice, I'm content with the decision to not have a separate code
path for epoll-only machines; we'll just fall back to using poll()
for these stragglers.

Per gripe from Gabriela Serventi.  Back-patch to v14 where this
code came in.

Discussion: https://postgr.es/m/CAHOHWE-JjJDfcYuLAAEO7Jk07atFAU47z8TzHzg71gbC0aMy=g@mail.gmail.com
2022-02-09 14:24:54 -05:00
Daniel Gustafsson f48385c132 Fix typo in archive modules docs
Discussion: https://postgr.es/m/4F8E8D8F-45CA-4833-AB19-CC6105326583@yesql.se
2022-02-09 15:36:46 +01:00