Commit Graph

53575 Commits

Tatsuo Ishii 17a856d08b Change aggregated log format of pgbench.
Commit 4a39f87acd changed the aggregated log format.  The problem is
that the explanatory paragraph for the log line in the documentation has
become too long.  Also, the log format included more optional columns,
which made the log lines harder to parse.  This commit addresses those
problems.

- There are no optional log columns anymore.  If a column is not
  meaningful for the pgbench options provided, it is reported as 0.

- Reorder the log columns so that it's easier to parse them.

- Adjust explanatory paragraph for the log line in the doc.

Discussion: https://postgr.es/m/flat/202203280757.3tu4ovs3petm%40alvherre.pgsql
2022-04-06 09:55:58 +09:00
Tom Lane e37ad5fa4d Remove race condition in 022_crash_temp_files.pl test.
It's possible for the query that "waits for restart" to complete a
successful iteration before the postmaster has noticed its SIGKILL'd
child and begun the restart cycle.  (This is a bit hard to believe
perhaps, but it's been seen at least twice in the buildfarm, mainly
on ancient platforms that likely have quirky schedulers.)

To provide a more secure interlock, wait for the other session
we're using to report that it's been forcibly shut down.

Patch by me, based on a suggestion from Andres Freund.
Back-patch to v14 where this test case came in.

Discussion: https://postgr.es/m/1801850.1649047827@sss.pgh.pa.us
2022-04-05 20:44:01 -04:00
Daniel Gustafsson 75edb91961 Fix compiler warning in logging size_t
The pg_fatal log which included file sizes was using UINT64_FORMAT for
the size_t variables, which failed on 32-bit buildfarm animals.  Change
to plain int instead, which is in line with how digestControlFile
already does it.

Per buildfarm animals florican and lapwing.

Discussion: https://postgr.es/m/13C2BF64-4A6D-47E4-9181-3A658F00C9B7@yesql.se
2022-04-05 22:16:45 +02:00
Andrew Dunstan fadb48b00e PLAN clauses for JSON_TABLE
These clauses allow the user to specify how data from nested paths are
joined, allowing considerable freedom in shaping the tabular output of
JSON_TABLE.

PLAN DEFAULT allows the user to specify the global strategies when
dealing with sibling or child nested paths. This is often sufficient to
achieve the necessary goal, and is considerably simpler than the full
PLAN clause, which allows the user to specify the strategy to be used
for each named nested path.

Nikita Glukhov

Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.

Discussion: https://postgr.es/m/7e2cb85d-24cf-4abb-30a5-1a33715959bd@postgrespro.ru
2022-04-05 14:17:08 -04:00
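
For illustration, here is a minimal sketch of a query using PLAN DEFAULT,
assuming the syntax described above; the jsonb literal, column names and
path names are hypothetical, and the authoritative grammar is the one in
the committed documentation.

    SELECT jt.*
    FROM JSON_TABLE(
           '{"a": 1, "b": [{"x": 10}, {"x": 20}], "c": [{"y": 30}]}'::jsonb,
           '$' AS root
           COLUMNS (
             a int PATH '$.a',
             NESTED PATH '$.b[*]' AS pb COLUMNS (x int PATH '$.x'),
             NESTED PATH '$.c[*]' AS pc COLUMNS (y int PATH '$.y')
           )
           PLAN DEFAULT (OUTER, UNION)
         ) AS jt;

PLAN DEFAULT sets the strategies globally (OUTER/INNER for parent-child
paths, UNION/CROSS for siblings); the full PLAN clause instead spells out
a strategy per named path, e.g. PLAN (root OUTER (pb UNION pc)).
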
Peter Geoghegan e83ebfe6d7 Have VACUUM warn on relfrozenxid "in the future".
Commits 74cf7d46 and a61daa14 fixed pg_upgrade bugs involving oversights
in how relfrozenxid or relminmxid are carried forward or initialized.
Corruption caused by bugs of this nature was ameliorated by commit
78db307bb2, which taught VACUUM to always overwrite existing invalid
relfrozenxid or relminmxid values that are apparently "in the future".

Extend that work now by showing a warning in the event of overwriting
either relfrozenxid or relminmxid due to an existing value that is "in
the future".  There is probably a decent chance that the sanity checks
added by commit 699bf7d05c will raise an error before VACUUM reaches
this point, but we shouldn't rely on that.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-WzmRZEzeGvLv8yDW0AbFmSvJjTziORqjVUrf74mL4GL0Ww@mail.gmail.com
2022-04-05 09:44:52 -07:00
Daniel Gustafsson 1691512674 pg_rewind: Fetch small files according to new size.
There's a race condition if a file changes in the source system
after we have collected the file list. If the file becomes larger,
we only fetched up to its original size. That can easily result in
a truncated file.  That's not a problem for relation files, files
in pg_xact, etc. because any actions on them will be replayed from
the WAL.  However, configuration files are affected.

This commit mitigates the race condition by fetching small files in
whole, even if they have grown.  A test is added in which an extra
copied file is concurrently grown with the output of pg_rewind, thus
guaranteeing that it changes in size during the operation.  This
is not a full fix: we still believe the original file size for files
larger than 1 MB.  That should be enough for configuration files,
and doing more than that would require big changes to the chunking
logic in libpq_source.c.

This mitigates the race condition if the file is modified between
the original scan of files and copying the file, but there's still
a race condition if a file is changed while it's being copied.
That's a much smaller window, though, and pg_basebackup has the
same issue.

This race can be seen with pg_auto_failover, which frequently uses
ALTER SYSTEM, which updates postgresql.auto.conf.  Often, pg_rewind
will fail, because the postgresql.auto.conf file changed concurrently
and a partial version of it was copied to the target.  The partial
file would fail to parse, preventing the server from starting up.

Author: Heikki Linnakangas
Reviewed-by: Cary Huang
Discussion: https://postgr.es/m/f67feb24-5833-88cb-1020-19a4a2b83ac7%40iki.fi
2022-04-05 14:45:31 +02:00
Michael Paquier 98fe74218d Extend TAP tests of pg_dump to test for compression with gzip
The test logic is extended with two new concepts:
- Addition of a compression command called compress_cmd, executed
between restore_cmd and dump_cmd to control the contents of the dumps.
In the case of this commit, this is used to compress or decompress
elements of a dump to test new code paths.
- Addition of a new flag called compile_option, to check if a set of
tests can be executed depending on the ./configure options used in a
given build.

The tests introduced here are for gzip, but they are designed so that
they can easily be extended for new compression methods.

Author: Georgios Kokolatos, Rachel Heaton
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss=@protonmail.com
2022-04-05 19:10:10 +09:00
Alvaro Herrera 297daa9d43 Refactor and clean up runtime partition prune code a little
* Move the execution pruning initialization steps that are common
between both ExecInitAppend() and ExecInitMergeAppend() into a new
function ExecInitPartitionPruning() defined in execPartition.c.
Those steps include: creating a PartitionPruneState to be used for
all instances of pruning; determining the minimal set of child
subplans that need to be initialized by performing initial pruning,
if needed; and finally adjusting the subplan_map arrays in the
PartitionPruneState to reflect the new set of subplans remaining
after initial pruning, if it was indeed performed.
ExecCreatePartitionPruneState() is no longer exported out of
execPartition.c and has been renamed to CreatePartitionPruneState()
as a local sub-routine of ExecInitPartitionPruning().

* Likewise, ExecFindInitialMatchingSubPlans() that was in charge of
performing initial pruning no longer needs to be exported.  In fact,
since it would now have the same body as the more generally named
ExecFindMatchingSubPlans(), except differing in the value of
initial_prune passed to the common subroutine
find_matching_subplans_recurse(), it seems better to remove it and add
an initial_prune argument to ExecFindMatchingSubPlans().

* Add an ExprContext field to PartitionPruneContext to remove the
runtime pruning code's implicit assumption that the ExprContext needed
to compute pruning expressions can always be obtained from the
PlanState.  A future patch will allow runtime pruning (at
least the initial pruning steps) to be performed without the
corresponding PlanState yet having been created, so this will help.

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqEYCpEqh2LMDOp9mT+4-QoVe8HgFMKBjntEMCTZLpcCCA@mail.gmail.com
2022-04-05 11:46:48 +02:00
Tom Lane 7a43a1fc52 Update some tests in 013_crash_restart.pl.
The expected backend message after SIGQUIT changed in commit
7e784d1dc, but we missed updating this test case.  Also, experience
shows that we might sometimes get "could not send data to server"
instead of either of the libpq messages the test is looking for.

Per report from Mark Dilger.  Back-patch to v14 where the
backend message changed.

Discussion: https://postgr.es/m/17BD82D7-49AC-40C9-8204-E7ADD30321A0@enterprisedb.com
2022-04-04 22:10:06 -04:00
Andres Freund 909eebf27b dshash: revise sequential scan support.
The previous coding of dshash_seq_next(), on the first call, accessed
status->hash_table->size_log2 without holding a partition lock and without
guaranteeing that ensure_valid_bucket_pointers() had ever been called.

That oversight turns out to not have immediately visible effects, because
bucket 0 is always in partition 0, and ensure_valid_bucket_pointers() was
called after acquiring the partition lock.  However,
PARTITION_FOR_BUCKET_INDEX() with a size_log2 of 0 ends up triggering formally
undefined behaviour.

Simplify by accessing partition 0, without using PARTITION_FOR_BUCKET_INDEX().

While at it, remove dshash_get_current(); there is no convincing use
case for it.  Also polish a few comments.

Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA+hUKGL9hY_VY=+oUK+Gc1iSRx-Ls5qeYJ6q=dQVZnT3R63Taw@mail.gmail.com
2022-04-04 14:32:52 -07:00
Andres Freund 55e566fc4b pgstat: remove some superfluous comments from pgstat.h.
These would all need to be rephrased when moving to shared memory stats, but
since they don't provide actual information right now, remove them instead.

The comments for PgStat_Msg* are left in, because they will all be removed as
part of the shared memory stats patch.

Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
2022-04-04 14:23:02 -07:00
Andres Freund edadf8098f pgstat: consistent function comment formatting.
There was a wild mishmash of function comment formatting in pgstat, making it
hard to know what to use for any new function and hard to extend existing
comments (particularly due to randomly different forms of indentation).

Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20220329191727.mzzwbl7udhpq7pmf@alap3.anarazel.de
Discussion: https://postgr.es/m/20220308205351.2xcn6k4x5yivcxyd@alap3.anarazel.de
2022-04-04 13:53:34 -07:00
Andrew Dunstan 4e34747c88 JSON_TABLE
This feature allows jsonb data to be treated as a table and thus used in
a FROM clause like other tabular data. Data can be selected from the
jsonb using jsonpath expressions, and hoisted out of nested structures
in the jsonb to form multiple rows, more or less like an outer join.

Nikita Glukhov

Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu (whose
name I previously misspelled), Himanshu Upadhyaya, Daniel Gustafsson,
Justin Pryzby.

Discussion: https://postgr.es/m/7e2cb85d-24cf-4abb-30a5-1a33715959bd@postgrespro.ru
2022-04-04 16:03:47 -04:00
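
For illustration, a minimal sketch of the feature described above; the
jsonb document and column definitions are hypothetical.

    SELECT jt.*
    FROM JSON_TABLE(
           '{"films": [{"title": "Brazil", "year": 1985},
                       {"title": "Alien",  "year": 1979}]}'::jsonb,
           '$.films[*]'
           COLUMNS (
             title text PATH '$.title',
             year  int  PATH '$.year'
           )
         ) AS jt;

Each element matched by the row pattern '$.films[*]' becomes one output
row, with column values extracted by the per-column jsonpath expressions.
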
Peter Geoghegan c42a6fc41d vacuumlazy.c: Further consolidate resource allocation.
Move remaining VACUUM resource allocation and deallocation code from
lazy_scan_heap() to its caller, heap_vacuum_rel().  This finishes off
work started by commit 73f6ec3d.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzk3fNBa_S3Ngi+16GQiyJ=AmUu3oUY99syMDTMRxitfyQ@mail.gmail.com
2022-04-04 11:53:33 -07:00
Peter Eisentraut 7844c9918a psql: Show all query results by default
Previously, psql printed only the last result if a command string
returned multiple result sets.  Now it prints all of them.  The
previous behavior can be obtained by setting the psql variable
SHOW_ALL_RESULTS to off.

This is a significantly enhanced version of
3a51306722 (that was later reverted).
There is also much more test coverage for various psql features now.

Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: "Iwata, Aya" <iwata.aya@jp.fujitsu.com> (earlier version)
Reviewed-by: Daniel Verite <daniel@manitou-mail.org> (earlier version)
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com> (earlier version)
Reviewed-by: vignesh C <vignesh21@gmail.com> (earlier version)
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
2022-04-04 20:00:33 +02:00
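
A quick sketch of the behavior change in psql; the \; separator sends
both statements to the server as a single command string.

    -- With the new default, both result sets are printed:
    SELECT 1 AS first \; SELECT 2 AS second ;

    -- Restore the previous behavior of printing only the last result:
    \set SHOW_ALL_RESULTS off
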
Tom Lane cbf4177f2c Disable synchronize_seqscans in 027_stream_regress.pl.
This script runs the core regression tests with quite a small value of
shared_buffers, making it prone to breakage due to synchronize_seqscans
kicking in where the tests don't expect that.  Disable that feature to
stabilize the tests.

Discussion: https://postgr.es/m/1258185.1648876239@sss.pgh.pa.us
2022-04-04 12:38:51 -04:00
Andrew Dunstan 4eb9798879 Avoid freeing objects during json aggregate finalization
Commit f4fb45d15c tried to free memory during aggregate finalization.
This caused issues, particularly when the aggregate was used as a window
function, so stop doing that.

Per complaint by Jaime Casanova and diagnosis by Andres Freund

Discussion: https://postgr.es/m/YkfeMNYRCGhySKyg@ahch-to
2022-04-04 11:03:49 -04:00
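
The window-function case mentioned above looks roughly like the following
hypothetical sketch; json_agg is used here purely as an illustration of an
aggregate invoked over a window (the functions actually affected are the
aggregates added by commit f4fb45d15c).  In a window context the final
function can run once per row over a shared transition state, so freeing
that state during finalization is unsafe.

    SELECT g AS n,
           json_agg(g) OVER (ORDER BY g) AS running_array
    FROM generate_series(1, 3) AS g;
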
Robert Haas afb529e677 pg_basebackup: Fix code that thinks about LZ4 buffer size.
Before this patch, there was some code that tried to make sure that the
buffer was always big enough at the start, and then asserted that it
didn't need to be enlarged later. However, the code to make sure it was
big enough at the start doesn't actually work, and therefore it was
possible to fail an assertion and crash later.

Remove the code that tries to make sure the buffer is always big enough
at the start in favor of enlarging the buffer as we go along whenever
that is necessary.

The mistake probably happened because, on the server side, we do
actually need to guarantee that the buffer is big enough at the start
to avoid subsequent resizings. However, in that case, the calling
code makes promises about how much data it will provide at once, but
here, that's not the case.

Report by Justin Pryzby. Analysis by me. Patch by Dipesh Pandit.

Discussion: http://postgr.es/m/20220330143536.GG28503@telsasoft.com
2022-04-04 10:36:23 -04:00
David Rowley 40af10b571 Use Generation memory contexts to store tuples in sorts
The general usage pattern when we store tuples in tuplesort.c is that
we store a series of tuples one by one and then either perform a sort or
spill them to disk.  In the common case, there is no pfreeing of already
stored tuples.  Since we do not individually pfree tuples, we have very
little need for aset.c's memory allocation behavior, which maintains
freelists and always rounds allocation sizes up to the next power-of-2
size.

Here we conditionally use generation.c contexts for storing tuples in
tuplesort.c when the sort will never be bounded.  Unfortunately, the
memory context to store tuples is already created by the time any calls
would be made to tuplesort_set_bound(), so here we add a new sort option
that allows callers to specify if they're going to need a bounded sort or
not.  We'll use a standard aset.c allocator when this sort option is not
set.

Extension authors must ensure that the TUPLESORT_ALLOWBOUNDED flag is
used when calling tuplesort_begin_* for any sorts that make a call to
tuplesort_set_bound().

Author: David Rowley
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/CAApHDvoH4ASzsAOyHcxkuY01Qf++8JJ0paw+03dk+W25tQEcNQ@mail.gmail.com
2022-04-04 22:52:35 +12:00
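
For context, a "bounded" sort is the top-N case used for queries of the
shape below; a hypothetical example (table and column names assumed), for
which EXPLAIN ANALYZE typically reports "Sort Method: top-N heapsort".
Such bounded-capable sorts keep using the aset.c allocator, while sorts
that can never be bounded may now store tuples in a generation.c context.

    EXPLAIN (ANALYZE)
    SELECT *
    FROM orders
    ORDER BY created_at DESC
    LIMIT 10;
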
David Rowley 77bae396df Adjust tuplesort API to have bitwise option flags
This replaces the bool flag for randomAccess.  An upcoming patch requires
adding another option, so instead of breaking the API for that, then
breaking it again one day if we add more options, let's just break it
once.  Any boolean options we add in the future will just make use of an
unused bit in the flags.

Any extensions making use of tuplesorts will need to update their code
to pass TUPLESORT_RANDOMACCESS instead of true for randomAccess.
TUPLESORT_NONE can be used for a set of empty options.

Author: David Rowley
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAApHDvoH4ASzsAOyHcxkuY01Qf%2B%2B8JJ0paw%2B03dk%2BW25tQEcNQ%40mail.gmail.com
2022-04-04 22:24:59 +12:00
David Rowley 1b0d9aa4f7 Improve the generation memory allocator
Here we make a series of improvements to the generation memory
allocator, namely:

1. Allow generation contexts to have minimum, initial and maximum block
sizes. The standard allocator allows this already but when the generation
context was added, it only allowed fixed-sized blocks.  The problem with
fixed-sized blocks is that it's difficult to choose how large to make the
blocks.  If the chosen size is too small then we'd end up with a large
number of blocks and a large number of malloc calls. If the block size is
made too large, then memory is wasted.

2. Add support for "keeper" blocks.  This is a special block that is
allocated along with the context itself but is never freed.  Instead,
when the last chunk in the keeper block is freed, we simply mark the block
as empty to allow new allocations to make use of it.

3. Add facility to "recycle" newly empty blocks instead of freeing them
and having to later malloc an entire new block again.  We do this by
recording a single GenerationBlock which has become empty of any chunks.
When we run out of space in the current block, we check to see if there is
a "freeblock" and use that if it contains enough space for the allocation.

Author: David Rowley, Tomas Vondra
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/d987fd54-01f8-0f73-af6c-519f799a0ab8@enterprisedb.com
2022-04-04 20:53:13 +12:00
Thomas Munro cc58eecc5d Fix tuplesort optimization for CLUSTER-on-expression.
When dispatching sort operations to specialized variants, commit
69749243 failed to handle the case where CLUSTER-sort decides not to
initialize datum1 and isnull1.  Fix by hoisting that decision up a level
and advertising whether datum1 can be relied on, in the Tuplesortstate
object.

Per reports from UBsan and Valgrind build farm animals, while running
the cluster.sql test.

Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAFBsxsF1TeK5Fic0M%2BTSJXzbKsY6aBqJGNj6ptURuB09ZF6k_w%40mail.gmail.com
2022-04-04 10:52:02 +12:00
Tom Lane 591e088dd5 Fix portability issues in datetime parsing.
datetime.c's parsing logic has assumed that strtod() will accept
a string that looks like ".", which it does in glibc, but not on
some less-common platforms such as AIX.  The result of this was
that datetime fields like "123." would be accepted on some platforms
but not others; which is a sufficiently odd case that it's not that
surprising we've heard no field complaints.  But commit e39f99046
extended that assumption to new places, and happened to add a test
case that exposed the platform dependency.  Remove this dependency
by special-casing situations without any digits after the decimal
point.

(Again, this is in part a pre-existing bug but I don't feel a
compulsion to back-patch.)

Also, rearrange e39f99046's changes in formatting.c to avoid a
Coverity complaint that we were copying an uninitialized field.

Discussion: https://postgr.es/m/1592893.1648969747@sss.pgh.pa.us
2022-04-03 17:04:33 -04:00
Peter Geoghegan f3c15cbe50 Generalize how VACUUM skips all-frozen pages.
Non-aggressive VACUUMs were at a gratuitous disadvantage (relative to
aggressive VACUUMs) around advancing relfrozenxid and relminmxid before
now.  The issue only came up when concurrent activity unset some heap
page's visibility map bit right as VACUUM was considering if the page
should get counted in frozenskipped_pages.  The non-aggressive case
would recheck the all-frozen bit at this point.  The aggressive case
reasoned that the page (a skippable page) must have at least been
all-frozen in the recent past, so skipping it won't make relfrozenxid
advancement unsafe (which is never okay for aggressive VACUUMs).

The recheck created a window for some other backend to confuse matters
for VACUUM.  If the page's VM bit turned out to be unset, VACUUM would
conclude that the page was _never_ all-frozen.  frozenskipped_pages was
not incremented, and yet VACUUM couldn't back out of skipping at this
late stage (it couldn't choose to scan the page instead).  This made it
unsafe to advance relfrozenxid later on.

Consistently avoid the issue by generalizing how we skip frozen pages
during aggressive VACUUMs: take the same approach when skipping any
skippable page range during aggressive and non-aggressive VACUUMs alike.
The new approach makes ranges (not individual pages) the fundamental
unit of skipping using the visibility map.  frozenskipped_pages is
replaced with a boolean flag that represents whether some skippable
range with one or more all-visible pages was actually skipped.

It is safe for VACUUM to treat a page as all-frozen provided it at least
had its all-frozen bit set after the OldestXmin cutoff was established.
VACUUM is only required to scan pages that might have XIDs < OldestXmin
(unfrozen XIDs) to be able to safely advance relfrozenxid.  Tuples
concurrently inserted on "skipped" pages can be thought of as equivalent
to tuples concurrently inserted on a block >= rel_pages.

It's possible that the issue this commit fixes hardly ever came up in
practice.  But we only had to be unlucky once to lose out on advancing
relfrozenxid -- a single affected heap page was enough to throw VACUUM
off.  That seems like something to avoid on general principle.  This is
similar to an issue fixed by commit 44fa8488, which taught vacuumlazy.c
to not give up on non-aggressive relfrozenxid advancement just because a
cleanup lock wasn't immediately available on some heap page.

Skipping an all-visible range is now explicitly structured as a choice
made by non-aggressive VACUUMs, by weighing known costs (scanning extra
skippable pages to freeze their tuples early) against known benefits
(advancing relfrozenxid early).  This works in essentially the same way
as it always has (don't skip ranges < SKIP_PAGES_THRESHOLD).  We could
do much better here in the future by considering other relevant factors.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wzn6bGJGfOy3zSTJicKLw99PHJeSOQBOViKjSCinaxUKDQ@mail.gmail.com
Discussion: https://postgr.es/m/CA%2BTgmoZiSOY6H7aadw5ZZGm7zYmfDzL6nwmL5V7GL4HgJgLF_w%40mail.gmail.com
2022-04-03 13:35:43 -07:00
Peter Geoghegan 0b018fabaa Set relfrozenxid to oldest extant XID seen by VACUUM.
When VACUUM set relfrozenxid before now, it set it to whatever value was
used to determine which tuples to freeze -- the FreezeLimit cutoff.
This approach was very naive.  The relfrozenxid invariant only requires
that new relfrozenxid values be <= the oldest extant XID remaining in
the table (at the point that the VACUUM operation ends), which in
general might be much more recent than FreezeLimit.

VACUUM now carefully tracks the oldest remaining XID/MultiXactId as it
goes (the oldest remaining values _after_ lazy_scan_prune processing).
The final values are set as the table's new relfrozenxid and new
relminmxid in pg_class at the end of each VACUUM.  The oldest XID might
come from a tuple's xmin, xmax, or xvac fields.  It might even come from
one of the table's remaining MultiXacts.

Final relfrozenxid values must still be >= FreezeLimit in an aggressive
VACUUM (FreezeLimit still acts as a lower bound on the final value that
aggressive VACUUM can set relfrozenxid to).  Since standard VACUUMs
still make no guarantees about advancing relfrozenxid, they might as
well set relfrozenxid to a value from well before FreezeLimit when the
opportunity presents itself.  In general standard VACUUMs may now set
relfrozenxid to any value > the original relfrozenxid and <= OldestXmin.

Credit for the general idea of using the oldest extant XID to set
pg_class.relfrozenxid at the end of VACUUM goes to Andres Freund.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzkymFbz6D_vL+jmqSn_5q1wsFvFrE+37yLgL_Rkfd6Gzg@mail.gmail.com
2022-04-03 09:57:21 -07:00
Peter Geoghegan 05023a237c Doc: Add relfrozenxid Tip to XID wraparound section.
VACUUM VERBOSE and autovacuum log reports were taught to report the
details of how VACUUM advanced relfrozenxid (and relminmxid) by commit
872770fd.  Highlight this by adding a "Tip" to the documentation, next
to related discussion of age(relfrozenxid) monitoring.

Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wzk0C1O-MKkOrj4YAfsGRru2=cA2VQpqM-9R1HNuG3nFaQ@mail.gmail.com
2022-04-02 13:34:57 -07:00
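
The monitoring that the Tip refers to is typically a query along these
lines (a common sketch, not taken verbatim from the documentation):

    -- Tables whose relfrozenxid/relminmxid are oldest, i.e. closest to
    -- requiring (possibly aggressive) vacuuming to avoid wraparound.
    SELECT c.oid::regclass AS table_name,
           age(c.relfrozenxid) AS xid_age,
           mxid_age(c.relminmxid) AS mxid_age
    FROM pg_class AS c
    WHERE c.relkind IN ('r', 'm', 't')
    ORDER BY age(c.relfrozenxid) DESC
    LIMIT 10;
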
Tom Lane e39f990467 Fix overflow hazards in interval input and output conversions.
DecodeInterval (interval input) was careless about integer-overflow
hazards, allowing bogus results to be obtained for sufficiently
large input values.  Also, since it initially converted the input
to a "struct tm", it was impossible to produce the full range of
representable interval values.

Meanwhile, EncodeInterval (interval output) and a few other
functions could suffer failures if asked to process sufficiently
large interval values, because they also relied on being able to
represent an interval in "struct tm" which is not designed to
handle that.

Fix all this stuff by introducing new struct types that are more
fit for purpose.

While this is clearly a bug fix, it's also an API break for any
code that's calling these functions directly.  So back-patching
doesn't seem wise, especially in view of the lack of field
complaints.

Joe Koshakow, editorialized a bit by me

Discussion: https://postgr.es/m/CAAvxfHff0JLYHwyBrtMx_=6wr=k2Xp+D+-X3vEhHjJYMj+mQcg@mail.gmail.com
2022-04-02 16:12:29 -04:00
Peter Geoghegan f7e4d5c64f Doc: Clarify the role of aggressive VACUUMs.
Adjust the documentation's coverage of aggressive VACUUMs to make it
clearer that aggressive vacuuming isn't always strictly necessary.  It's
possible for non-aggressive VACUUMs to advance relfrozenxid/relminmxid
without fail, given the right workload conditions.

While this has always been true, it matters more with recent and pending
improvements to VACUUM.  These improvements make non-aggressive
vacuuming more likely to advance relfrozenxid/relminmxid in practice.
While this is an unrelated improvement to the docs, formally speaking,
it still doesn't seem worth backpatching.  So don't backpatch.
2022-04-02 12:29:35 -07:00
Tom Lane 1b208ebaf1 Add a couple more tests for interval input decoding.
Cover some cases that would have been broken by a proposed patch,
but we failed to notice for lack of test coverage.  I'm pushing
this separately mainly to memorialize that it *is* our historical
behavior.

Discussion: https://postgr.es/m/1344498.1648920056@sss.pgh.pa.us
2022-04-02 13:50:05 -04:00
Peter Geoghegan 14bf1e8313 vacuumlazy.c: Clean up variable declarations.
Move some of the heap_vacuum_rel() instrumentation related variables to
the scope where they're actually needed.  Also reorder some of the
variable declarations at the start of heap_vacuum_rel() so that related
variables appear together.
2022-04-02 10:33:21 -07:00
Joe Conway 9752436f04 Use has_privs_for_roles for predefined role checks: round 2
Similar to commit 6198420ad, replace is_member_of_role with
has_privs_for_role for predefined role access checks in recently
committed basebackup code. In passing fix a double-word error
in a nearby comment.

Discussion: https://postgr.es/m/flat/CAGB+Vh4Zv_TvKt2tv3QNS6tUM_F_9icmuj0zjywwcgVi4PAhFA@mail.gmail.com
2022-04-02 13:24:38 -04:00
Alvaro Herrera cfdd03f45e Allow CLUSTER on partitioned tables
This is essentially the same as applying VACUUM FULL to a partitioned
table, which has been supported since commit 3c3bb99330 (March 2017).
While there's no great use case in applying CLUSTER to partitioned
tables, we don't have any strong reason not to allow it either.

For now, partitioned indexes cannot be marked clustered, so an index
must always be specified.

While at it, rename some variables that were RangeVars during the
development that led to 8bc717cb88 but never made it into the source tree
in that form; there's no need to perpetuate names that have always been
more confusing than helpful.

Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/20201028003312.GU9241@telsasoft.com
Discussion: https://postgr.es/m/20200611153502.GT14879@telsasoft.com
2022-04-02 19:08:34 +02:00
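
A minimal sketch of the new capability; the table and index names are
hypothetical, and the USING clause is required because partitioned
indexes cannot yet be marked clustered.

    -- Rewrites each partition, ordering its rows by the corresponding
    -- index, much as VACUUM FULL on a partitioned table rewrites each
    -- partition.
    CLUSTER measurement USING measurement_logdate_idx;
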
Peter Geoghegan b7c485fb93 Doc: Remove MultiXact wraparound section link.
Remove circular "25.1.5.1. Multixacts And Wraparound" link that
references the section that the link itself appears in.  An explanation
of MultiXactId age appears only a few sentences before the link, so
there's no question that the link is superfluous at best.

Oversight in commit d5409295.

Author: Peter Geoghegan <pg@bowt.ie>
Backpatch: 14-
2022-04-02 09:49:08 -07:00
Alvaro Herrera 0af504733c Remove excess semicolon in MERGE docs
Author: Euler Taveira <euler@eulerto.com>
Discussion: https://postgr.es/m/01dea9ef-85ad-4e09-a783-a1eadeae7bbe@www.fastmail.com
2022-04-02 17:18:14 +02:00
Andrew Dunstan c6dc6a0124 Use ORDER BY in catalog results in SQL/JSON tests
The buildfarm has revealed some instability in results from catalog
queries in tests from commit 1a36bc9dba. Cure this by adding ORDER BY
to such queries.
2022-04-02 10:00:10 -04:00
John Naylor 6974924347 Specialize tuplesort routines for different kinds of abbreviated keys
Previously, the specialized tuplesort routine inlined handling for
reverse-sort and NULLs-ordering but called the datum comparator via a
pointer in the SortSupport struct parameter. Testing has shown that we
can get a useful performance gain by specializing datum comparison for
the different representations of abbreviated keys -- signed and unsigned
64-bit integers and signed 32-bit integers. Almost all abbreviatable data
types will benefit -- the only exception for now is numeric, since the
datum comparison is more complex. The performance gain depends on data
type and input distribution, but often falls in the range of 10-20% faster.

Thomas Munro

Reviewed by Peter Geoghegan, review and performance testing by me

Discussion: https://www.postgresql.org/message-id/CA%2BhUKGKKYttZZk-JMRQSVak%3DCXSJ5fiwtirFf%3Dn%3DPAbumvn1Ww%40mail.gmail.com
2022-04-02 15:22:25 +07:00
Peter Eisentraut db086de5ab Remove obsolete comment
accidentally left behind by 4cb658af70
2022-04-02 07:41:12 +02:00
Michael Paquier d2a2ce4184 Make upgradecheck a no-op in MSVC's vcregress.pl
322becb has changed upgradecheck to use the TAP tests, discarding
pg_upgrade's tests in bincheck.  However, this is proving to be a bad
idea for the Windows buildfarm clients that use MSVC when TAP tests are
disabled as this causes a hard failure at the pg_upgrade step.

This commit disables upgradecheck, moving the execution of the pg_upgrade
tests to bincheck, as per an initial suggestion from Andres Freund, so
that the buildfarm is able to live happily with those changes.

While at it, remove the routine that was used by upgradecheck to
create databases whose names are generated with a range of ASCII
characters, as it has not been used since 322becb.  upgradecheck is
removed from the CI script for Windows, as bincheck takes care of that
now.

Per report from buildfarm member hamerkop (MSVC 2017 without a TAP
setup).

Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/YkbnpriYEAagZ2wH@paquier.xyz
2022-04-02 12:06:11 +09:00
Michael Paquier d43085d12e pageinspect: Use better macros to get special page area for GIN and GiST
These five code paths are the last ones that made use of
PageGetSpecialPointer() to get the special area of such pages, while
those index AMs already have macros to do this job.

Noticed while reviewing the use of PageGetSpecialPointer() in the whole
tree, in relation to the recent commit d16773c.
2022-04-02 11:27:20 +09:00
Peter Eisentraut 465ab24296 libpq: Fix pkg-config without OpenSSL
Do not add OpenSSL dependencies to libpq pkg-config file if OpenSSL is
not enabled.  Oversight in beff361bc1.

Author: Fabrice Fontaine <fontaine.fabrice@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/20220331163759.32665-1-fontaine.fabrice%40gmail.com
2022-04-01 17:15:24 +02:00
Tom Lane 479b69a4a5 Set minimum required version of zstd as 1.4.0.
It emerges that our new zstd code depends on features that were not
present (or at least not enabled by default) in zstd before 1.4.0.
That's already four years old, so there's little reason to try to
make our code work with something older.  Instead make configure
check that zstd is at least 1.4.0, and document this requirement.

Justin Pryzby (doc changes by me)

Discussion: https://postgr.es/m/efbd94cd1102f99fd1300e44905ee4a994ee2ef6.camel@gunduz.org
2022-04-01 11:05:52 -04:00
Peter Eisentraut c1932e5428 libpq: Allow IP address SANs in server certificates
The current implementation supports exactly one IP address in a server
certificate's Common Name, which is brittle (the strings must match
exactly).  This patch adds support for IPv4 and IPv6 addresses in a
server's Subject Alternative Names.

Per discussion on-list:

- If the client's expected host is an IP address, we allow fallback to
  the Subject Common Name if an iPAddress SAN is not present, even if
  a dNSName is present.  This matches the behavior of NSS, in
  violation of the relevant RFCs.

- We also, counter-intuitively, match IP addresses embedded in dNSName
  SANs.  From inspection this appears to have been the behavior since
  the SAN matching feature was introduced in acd08d76.

- Unlike NSS, we don't map IPv4 to IPv6 addresses, or vice-versa.

Author: Jacob Champion <pchampion@vmware.com>
Co-authored-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/9f5f20974cd3a4091a788cf7f00ab663d5fcdffe.camel@vmware.com
2022-04-01 15:51:23 +02:00
Robert Haas fa25bebb82 docs: Changing column type doesn't always require an index rebuild.
James Coleman and Robert Haas, reviewed by Matthias van de Meent.

Discussion: https://postgr.es/m/CAAaqYe90Ea3RG=A7H-ONvTcx549-oQhp07BrHErwM=AyH2ximg@mail.gmail.com
2022-04-01 08:48:44 -04:00
Peter Eisentraut af9e180495 Add SSL tests for IP addresses in certificates
This tests some scenarios that already work.  A subsequent patch will
introduce more functionality.

Author: Jacob Champion <pchampion@vmware.com>
Co-authored-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/9f5f20974cd3a4091a788cf7f00ab663d5fcdffe.camel@vmware.com
2022-04-01 14:08:43 +02:00
Peter Eisentraut 5519d5affd psql: Refactor ProcessResult()
Separate HandleCopyResult() from ProcessResult() in preparation for a
subsequent patch.

Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
2022-04-01 13:03:33 +02:00
Michael Paquier d16773cdc8 Add macros in hash and btree AMs to get the special area of their pages
This makes the code more consistent with SpGiST, GiST and GIN, which
already use this style, and the idea is to make it easier to introduce
more sanity checks for each of these AM-specific macros.  BRIN uses a
different set of macros to get a page's type and flags, so it has no
need for something similar.

Author: Matthias van de Meent
Discussion: https://postgr.es/m/CAEze2WjE3+tGO9Fs9+iZMU+z6mMZKo54W1Zt98WKqbEUHbHOBg@mail.gmail.com
2022-04-01 13:24:50 +09:00
Michael Paquier 73db8f4d17 Improve handling and logging of TAP tests for pg_upgrade
This commit includes a set of improvements to help with the debugging of
failures in these new TAP tests:
- Instead of a plain diff command to compare the dumps generated, use
File::Compare::compare for the same effect.  diff is still used to
provide more context in the event of an error.
- Log the contents of regression.diffs if the pg_regress command fails.
- Unify the format of the logs generated, getting inspiration from the
style used in 027_stream_regress.pl.

wrasse is the only buildfarm member that has reported a failure so far
after the introduction of 322becb, complaining that the dumps generated
do not match, and I am lacking the information to understand what is
going on in this environment.
2022-04-01 12:45:40 +09:00
Michael Paquier 322becb608 Switch the regression tests of pg_upgrade to use TAP tests
This simplifies a lot of code in the tests of pg_upgrade without
sacrificing its coverage:
- Removal of test.sh used for builds with make, that has accumulated
over the years tweaks for problems that are solved in a duplicated way
by the centralized TAP framework (initialization of the various
environment variables PG*, port selection).
- Removal of the code in MSVC to test pg_upgrade.  This was roughly a
duplicate of test.sh adapted for Windows, with an extra footprint of
a pg_regress command and all the assumptions behind it.

Support for upgrades with older versions is changed, not removed.
test.sh was able to set up the regression database on the old instance
by launching the pg_regress command itself, with a dependency on the
source tree of the old cluster and tweaks to the command arguments to
adapt across the versions used.  This created a backward-compatibility
dependency with older pg_regress commands, and recent changes like
d1029bb have made that much more complicated.

Instead, this commit allows tests with older major versions by
specifying a path to a SQL dump (taken with pg_dumpall from the old
cluster's installation) that will be loaded into the old instance to
upgrade instead of running pg_regress, through an optional environment
variable called $olddump.  This requires a second variable called
$oldinstall to point to the base path of the installation of the old
cluster.  This method is more in line with the buildfarm client that
uses a set of static dumps to set up an old instance, so hopefully we
will be able to reuse what is introduced in this commit there.  The last
step of the tests that checks for differences between the two dumps
taken still needs to be improved as it can fail, requiring a manual
lookup at the dumps.  This is not different from the old way of testing
where things could fail at the last step.

Support for EXTRA_REGRESS_OPTS is kept.  vcregress.pl in the MSVC
scripts still handles the test of pg_upgrade with its upgradecheck, and
bincheck is changed to skip pg_upgrade.

Author: Michael Paquier
Reviewed-by: Andrew Dunstan, Andres Freund, Rachel Heaton, Tom Lane
Discussion: https://postgr.es/m/YJ8xTmLQkotVLpN5@paquier.xyz
2022-04-01 10:13:50 +09:00
Tom Lane fb691bbb4c Keep plpgsql.h C++-clean.
I forgot that "typeid" is a C++ keyword.  Per buildfarm.
2022-03-31 18:29:10 -04:00
Tom Lane 53ef6c40f1 Expose a few more PL/pgSQL functions to debugger plugins.
Add exec_assign_value, exec_eval_datum, and exec_cast_value
to the set of functions a PL/pgSQL debugger plugin can
conveniently call.  This allows more convenient manipulation
of the values of PL/pgSQL function variables.

Pavel Stehule, reviewed by Aleksander Alekseev and myself

Discussion: https://postgr.es/m/CAFj8pRD+dBPU0T-KrkP7ef6QNPDEsjYCejEsBe07NDq8TybOkA@mail.gmail.com
2022-03-31 17:05:47 -04:00