Checking the execution of archive_cleanup_command on a standby requires
a valid checkpoint coming from its primary, but the logic did not check
that the standby had replayed up to that checkpoint, causing the test
that checks for the execution of archive_cleanup_command to fail.
This race was more visible in slow environments.
Issue introduced in 46dea24, so no backpatch is needed.
Author: Tom Lane
Discussion: https://postgr.es/m/4015413.1649454951@sss.pgh.pa.us
This test, introduced in df86e52, uses a second standby to check that
it correctly removes RECOVERYHISTORY and RECOVERYXLOG at the end of
recovery. This standby uses the archives of the primary to restore its
contents, with some of the archived contents coming from the first
standby, previously promoted. In slow environments, it was possible
that the test did not check what it should, as the history file
generated by the promotion of the first standby may not have been
stored yet in the archives the second standby feeds on. The second
standby could then select an incorrect timeline, without restoring a
history file at all.
This commit adds a wait phase to make sure that the history file
required by the second standby is archived before this cluster is
created. This relies on poll_query_until() with pg_stat_file() and an
absolute path, something not supported in REL_10_STABLE.
While at it, this adds a new test to check that the history file has
been restored by looking at the logs of the second standby. This
ensures that a RECOVERYHISTORY, whose removal needs to be checked,
is created in the first place. This should make the test more robust.
This test has been introduced by df86e52, but it came to light as a
side effect of the bug fixed by acf1dd42, where the extra restore_command
calls made the test much slower.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/YlT23IvsXkGuLzFi@paquier.xyz
Backpatch-through: 11
The new option set of pg_receivewal introduced in 042a923 to control the
compression method now makes it easy to pass down various options,
including the compression level. The change needed for this is simple,
requiring one LZ4F_preferences_t to be fed to LZ4F_compressBegin().
Note that LZ4F_INIT_PREFERENCES could be used to initialize the contents
of LZ4F_preferences_t as required by LZ4, but this is only available
since v1.8.3; memset()'ing the structure to 0 is enough.
Discussion: https://postgr.es/m/YlPQGNAAa04raObK@paquier.xyz
The target failed, tested $PATH binaries, or tested a stale temporary
installation. Commit c66b438db6 missed
this. Back-patch to v10 (all supported versions).
Suppress further attempts to read ahead in the WAL if we run out of
data, until the records already decoded have been replayed. This
restores the traditional behavior for continuous archive recovery, which
is to retry the failing restore_command only every 5 seconds. With the
coding in 5dc0418f, we would start retrying every time through the
recovery loop when our WAL decoding window hit the end of the current
segment and we tried to look ahead into a not-yet-available next file.
That was very slow.
Also change the no_readahead_until mechanism to use <= rather than <,
which seems more useful. Otherwise we'd either get one extra unwanted
retry of restore_command, or we'd need to add 1 to an LSN.
No change in behavior for regular streaming. That was already limited
by the flushedUpto variable, which won't be updated until we replay what
we have already.
Reported by Andres Freund while analyzing the failure of a TAP test on
build farm animal skink (investigation ongoing but probably due to
otherwise unrelated timing bugs triggered by this slowness magnified by
valgrind).
Discussion: https://postgr.es/m/20220409005910.alw46xqmmgny2sgr%40alap3.anarazel.de
Previously we didn't, which led to an assertion failure when resetting
partially loaded statistics. This was encountered on the buildfarm, for
as-of-yet unknown reasons.
Tighten up a validity check when reading the stats file, verifying that
'E' signals the end of the file (rather than just stopping reading). That
is then exercised by a test appending to the stats file, which crashed
before the fix in pgstat_drop_all_entries().
Reported by buildfarm animals mylodon and kestrel, via Tom Lane.
Discussion: https://postgr.es/m/1656446.1650043715@sss.pgh.pa.us
This function gave the wrong answer when there's more than one
RegisteredSnapshots entry, whether or not any of them is the
CatalogSnapshot. This leads to assertion failures in some scenarios
involving fetching toasted data using a cursor. (As per discussion,
I'm dubious that this is the right contract to be enforcing at all;
but it surely doesn't help to be enforcing it incorrectly.)
Fetching toasted data using a cursor is evidently under-tested,
so add a test case too.
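For reference, the problematic pattern looks roughly like this (table
and column names invented for illustration):
BEGIN;
DECLARE c CURSOR FOR SELECT big_text_col FROM toasty;
FETCH 10 FROM c;  -- detoasting here relies on a registered snapshot
COMMIT;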
Per report from Erik Rijkers. This is new code, so no need for
back-patch.
Discussion: https://postgr.es/m/dc9dd229-ed30-6c62-4c41-d733ffff776b@xs4all.nl
Per-backend global variables like VacuumPageHit are initialized once per
VACUUM command. This was missed by commit 49c9d9fc, which unified
VACUUM VERBOSE and autovacuum logging. As a result of that oversight,
incorrect values were shown when multiple relations were processed by a
single VACUUM VERBOSE command.
Relations that happened to be processed later on would show "buffer
usage:" values that incorrectly included buffer accesses made while
processing earlier unrelated relations. The same accesses were counted
multiple times.
To fix, take initial values for the tracker variables at the start of
heap_vacuum_rel(), and report delta values later on.
With a pre-v15 server, show NULL for the "ICU Locale" column,
matching what you see in v15 when the database locale isn't ICU.
The previous coding incorrectly repeated datcollate here.
(There's an unfinished discussion about whether to consolidate
these columns in \l output, but in any case we'd want this fix
for \l+ output.)
Euler Taveira, per report from Christoph Berg
Discussion: https://postgr.es/m/YlmIFCqu+TZSW4rB@msg.df7cb.de
ComputeXidHorizons (nee GetOldestXmin) thought that it could identify
walsenders by checking for proc->databaseId == 0. Perhaps that was
safe when the code was written, but it's been wrong at least since
autovacuum was invented. Background processes that aren't connected
to any particular database, such as the autovacuum launcher and
logical replication launcher, look like that too.
This imprecision is harmful because when such a process advertises an
xmin, the result is to hold back dead-tuple cleanup in all databases,
though it'd be sufficient to hold it back in shared catalogs (which
are the only relations such a process can access). Aside from being
generally inefficient, this has recently been seen to cause regression
test failures in the buildfarm, as a consequence of the logical
replication launcher's startup transaction preventing VACUUM from
marking pages of a user table as all-visible.
We only want that global hold-back effect for the case where a
walsender is advertising a hot standby feedback xmin. Therefore,
invent a new PGPROC flag that says that a process' xmin should be
considered globally, and check that instead of using the incorrect
databaseId == 0 test. Currently only a walsender sets that flag,
and only if it is not connected to any particular database. (This is
for bug-compatibility with the undocumented behavior of the existing
code, namely that feedback sent by a client who has connected to a
particular database would not be applied globally. I'm not sure this
is a great definition; however, such a client is capable of issuing
plain SQL commands, and I don't think we want xmins advertised for
such commands to be applied globally. Perhaps this could do with
refinement later.)
While at it, I rewrote the comment in ComputeXidHorizons, and
re-ordered the commented-upon if-tests, to make them match up
for intelligibility's sake.
This is arguably a back-patchable bug fix, but given the lack of
complaints I think it prudent to let it age awhile in HEAD first.
Discussion: https://postgr.es/m/1346227.1649887693@sss.pgh.pa.us
Be consistent about the lines that VACUUM VERBOSE outputs by including
an "index scan not needed: " line for completely empty tables. This
makes the output more readable, especially with multiple distinct VACUUM
operations processed by the same VACUUM command. It's also more
consistent; even empty tables can use the failsafe, which wasn't
reported in the standard way until now.
Follow-up to commit 6e20f460, which taught VACUUM VERBOSE to be more
consistent about reporting on scanned pages with empty tables.
The age of OldestXmin (a.k.a. "removable cutoff") when VACUUM ends often
indicates the approximate number of XIDs consumed while VACUUM ran.
However, there is at least one important exception: the cutoff could be
held back by a snapshot that was acquired before our VACUUM even began.
Successive VACUUM operations may even use exactly the same old cutoff in
extreme cases involving long held snapshots.
The log messages that described how removable cutoff aged (which were
added by commit 872770fd) created the impression that we were reporting
on how VACUUM's usable cutoff advanced while VACUUM ran, which was
misleading in these extreme cases. Fix by using a more general wording.
Per gripe from Tom Lane.
In passing, relocate related instrumentation code for clarity.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/1643035.1650035653@sss.pgh.pa.us
When not loading stats at startup (i.e. pgstat_discard_stats() getting
called), reset timestamps of fixed-numbered stats would be left at
0. Oversight in 5891c7a8ed.
Instead use pgstat_reset_after_failure() and add tests verifying that
fixed-numbered reset timestamps are set appropriately.
Reported-By: "David G. Johnston" <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/CAKFQuwamFuaQHKdhcMt4Gbw5+Hca2UE741B8gOOXoA=TtAd2Yw@mail.gmail.com
If a partitioned table has partitions owned by roles other than the
owner of the partitioned table, don't include them in the to-be-
clustered list. This is similar to what VACUUM FULL does (except we do
it sooner, because there is no reason to postpone it). Add a simple
test to verify that only owned partitions are clustered.
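As a rough illustration (object and role names invented), the behavior
is:
CREATE TABLE parted (a int) PARTITION BY LIST (a);
CREATE TABLE part1 PARTITION OF parted FOR VALUES IN (1);
CREATE TABLE part2 PARTITION OF parted FOR VALUES IN (2);
CREATE INDEX parted_idx ON parted (a);
ALTER TABLE part2 OWNER TO other_role;
CLUSTER parted USING parted_idx;  -- clusters part1, silently skips part2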
While at it, change memory context switch-and-back to occur once per
partition instead of outside of the loop.
Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/20220411140609.GF26620@telsasoft.com
This is to gather some more evidence about why buildfarm member wrasse
is failing. We should revert it (or at least scale it way back) once
that's resolved.
Discussion: https://postgr.es/m/1346227.1649887693@sss.pgh.pa.us
Commit 1a36bc9dba contained some logic that was a little opaque and
could have involved a NULL dereference, as complained about by Coverity.
Make the logic more transparent and in doing so avoid the NULL
dereference.
Only 1 of 3 of these changes appears to be handled by pgindent. That change
is new to v15. The remaining two appear to be left alone by pgindent. The
exact reason for that is not 100% clear to me. It seems related to the
fact that it's a line that contains *only* a single line comment and no
actual code. It does not seem worth investigating this in too much
detail. In any case, these do not conform to our usual practices, so fix
them.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220411020336.GB26620@telsasoft.com
Input for these should be case-insensitive, but was not completely
so. Comparing to the similar queries for timezone names, I realized
that we'd missed forcing the comparison pattern to lower-case.
With that, it behaves as I expect.
While here, flatten the sub-selects in these queries; I don't
find that those add any readability.
Discussion: https://postgr.es/m/3369130.1649348542@sss.pgh.pa.us
Define "parameters with non-default settings" as being those that
not only have pg_settings.source different from 'default', but
also have a current value different from the hard-wired boot_val.
Adding the latter restriction removes a number of not-very-interesting
cases where the active setting is chosen by initdb but in practice
tends to be the same all the time.
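In other words, the set of reported parameters is roughly what this
query returns (a simplification of the actual psql query):
SELECT name, setting, source
FROM pg_settings
WHERE source <> 'default'
  AND setting IS DISTINCT FROM boot_val;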
Per discussion with Jonathan Katz.
Discussion: https://postgr.es/m/YlFQLzlPi4QD0wSi@msg.df7cb.de
heap_fetch() used to have a "keep_buf" parameter that told it to return
ownership of the buffer pin to the caller after finding that the
requested tuple TID exists but is invisible to the specified snapshot.
This was thoughtlessly removed in commit 5db6df0c0, which broke
heapam_tuple_lock() (formerly EvalPlanQualFetch) because that function
needs to do more accesses to the tuple even if it's invisible. The net
effect is that we would continue to touch the page for a microsecond or
two after releasing pin on the buffer. Usually no harm would result;
but if a different session decided to defragment the page concurrently,
we could see garbage data and mistakenly conclude that there's no newer
tuple version to chain up to. (It's hard to say whether this has
happened in the field. The bug was actually found thanks to a later
change that allowed valgrind to detect accesses to non-pinned buffers.)
The most reasonable way to fix this is to reintroduce keep_buf,
although I made it behave slightly differently: buffer ownership
is passed back only if there is a valid tuple at the requested TID.
In HEAD, we can just add the parameter back to heap_fetch().
To avoid an API break in the back branches, introduce an additional
function heap_fetch_extended() in those branches.
In HEAD there is an additional, less obvious API change: tuple->t_data
will be set to NULL in all cases where buffer ownership is not returned,
in particular when the tuple exists but fails the time qual (and
!keep_buf). This is to defend against any other callers attempting to
access non-pinned buffers. We concluded that making that change in back
branches would be more likely to introduce problems than cure any.
In passing, remove a comment about heap_fetch that was obsoleted by
9a8ee1dc6.
Per bug #17462 from Daniil Anisimov. Back-patch to v12 where the bug
was introduced.
Discussion: https://postgr.es/m/17462-9c98a0f00df9bd36@postgresql.org
There was a small buglet in commit 52e4f0cd47 whereby a tuple acquired
from cache was not released, giving rise to WARNING messages; fix that.
While at it, restructure the code a bit on stylistic grounds.
Author: Hou zj <houzj.fnst@fujitsu.com>
Reported-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CAHut+PvKTyhTBtYCQsP6Ph7=o-oWRSX+v+PXXLXp81-o2bazig@mail.gmail.com
Commit f4fb45d15c misguidedly tried to free some state during aggregate
finalization for json_objectagg. This resulted in attempts to access
freed memory, especially when the function is used as a window function.
Commit 4eb9798879 attempted to ameliorate that, but in fact it should
just be ripped out, which is done here. Also add some regression tests
for json_objectagg in various flavors as a window function.
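The window-function flavor that used to crash looks like this (data
invented for illustration):
SELECT JSON_OBJECTAGG(k : v) OVER (ORDER BY k)
FROM (VALUES ('a', 1), ('b', 2), ('c', 3)) AS t(k, v);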
Original report from Jaime Casanova, diagnosis by Andres Freund.
Discussion: https://postgr.es/m/YkfeMNYRCGhySKyg@ahch-to
The last usage of this argument in this routine can be tracked down to
7e2f9062, aka 11 years ago. Getting rid of this argument can also be an
advantage for extensions calling check_index_is_clusterable(), as it
removes any need to worry about the meaning of what a recheck would be
at this level.
Author: Justin Pryzby
Discussion: https://postgr.es/m/20220411140609.GF26620@telsasoft.com
Since babbbb5 and the introduction of LZ4 in pg_receivewal, the
compression of the archived WAL is controlled by two options:
- --compression-method with "gzip", "none" or "lz4" as possible values.
- --compress=N to specify a compression level. This includes a
backward-incompatible change where a value of 0 leads to a failure
instead of no compression being enforced.
This commit takes advantage of a4b5754 and 3603f7c to rework the
compression options of pg_receivewal, as follows:
- The removal of --compression-method.
- The extension of --compress to use the same grammar as pg_basebackup,
with a METHOD:DETAIL format, where METHOD is "gzip", "none" or "lz4"
and DETAIL is a comma-separated list of options; the only keyword
supported for now is "level", to control the compression level. If only
an integer is specified as the value of this option, "none" is implied
for 0 and "gzip" is implied otherwise. This brings --compress back to
being backward-compatible with ~14, while still supporting LZ4.
This also has the advantage of centralizing the set of checks used by
pg_receivewal to validate its compression options.
Author: Michael Paquier
Reviewed-by: Robert Haas, Georgios Kokolatos
Discussion: https://postgr.es/m/YlPQGNAAa04raObK@paquier.xyz
This reverts commits 0147fc7, 4567596, aa64f23, and 5ecd018.
There is no longer agreement that introducing this function
was the right way to address the problem. The consensus now
seems to favor trying to make a correct value for MaxBackends
available to modules executing their _PG_init() functions.
Nathan Bossart
Discussion: http://postgr.es/m/20220323045229.i23skfscdbvrsuxa@jrouhaud
Enforce __pg_log_level message filtering centrally in logging.c,
instead of relying on the calling macros to do it. This is more
reliable (e.g. it works correctly for direct calls to pg_log_generic)
and it saves a percent or so of total code size because we get rid of
so many duplicate checks of __pg_log_level.
This does mean that argument expressions in a logging macro will be
evaluated even if we end up not printing anything. That seems of
little concern for INFO and higher levels as those messages are printed
by default, and most of our frontend programs don't even offer a way to
turn them off. I left the unlikely() checks in place for DEBUG
messages, though.
Discussion: https://postgr.es/m/3993549.1649449609@sss.pgh.pa.us
The same structure, with the same set of elements (for none, lz4, gzip
and zstd), exists in compression.h, so let's make use of the centralized
version instead of duplicating things. Some of the variables used
previously for WalCompressionMethod are renamed to stick better with the
new structure and routine names.
WalCompressionMethod was leading to some confusion in walmethods.c, as
it was sometimes used to refer to some data unrelated to WAL.
Reported-by: Robert Haas
Author: Michael Paquier
Reviewed-by: Robert Haas, Georgios Kokolatos
Discussion: https://postgr.es/m/YlPQGNAAa04raObK@paquier.xyz
We were setting MERGE source subplan's targetlist by expanding the
individual attributes of the source relation completely, early in the
parse analysis phase. This failed to work when the condition of an
action included a whole-row reference, causing setrefs.c to error out
with
ERROR: variable not found in subplan target lists
because at that point there is nothing to resolve the whole-row
reference with. We can fix this by having preprocess_targetlist expand
the source targetlist for Vars required from the source rel by all
actions. Moreover, by using this expansion mechanism we can do away
with the targetlist expansion in transformMergeStmt, which is good
because then we no longer pull in columns that aren't needed for
anything.
Add a test case for the problem.
While at it, remove some redundant code in preprocess_targetlist():
MERGE was doing separately what is already being done for UPDATE/DELETE,
so we can just rely on the latter and remove the former. (The handling
of inherited rels was different for MERGE, but that was a no-longer-
necessary hack.)
Fix outdated, related comments for fix_join_expr also.
Author: Richard Guo <guofenglinux@gmail.com>
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Joe Wildish <joe@lateraljoin.com>
Discussion: https://postgr.es/m/fab3b90a-914d-46a9-beb0-df011ee39ee5@www.fastmail.com
Compression option handling (level, algorithm or even workers) can be
used across several parts of the system, not only base backups.
Structures, objects and routines are renamed in consequence, to remove
the concept of base backups from this part of the code, making this
change straightforward.
pg_receivewal, which has gained support for LZ4 since babbbb5, will make
use of this infrastructure for its set of compression options, bringing
more consistency with pg_basebackup. This cleanup needs to be done
before releasing a beta of 15. pg_dump is a potential future target, as
well, and adding more compression options to it may happen in 16~.
Author: Michael Paquier
Reviewed-by: Robert Haas, Georgios Kokolatos
Discussion: https://postgr.es/m/YlPQGNAAa04raObK@paquier.xyz
All but a few existing callers assume without checking that this
function succeeds. While it probably will, that's a poor excuse for
not checking. Let's make it return void and instead throw an error
if it doesn't find the block reference. Callers that actually need
to handle the no-such-block case must now use the underlying function
XLogRecGetBlockTagExtended.
In addition to being a bit less error-prone, this should also serve
to suppress some Coverity complaints about XLogRecGetBlockRefInfo.
While at it, clean up some inconsistency about use of the
XLogRecHasBlockRef macro: make XLogRecGetBlockTagExtended use
that instead of open-coding the same condition, and avoid calling
XLogRecHasBlockRef twice in relevant code paths. (That is,
calling XLogRecHasBlockRef followed by XLogRecGetBlockTag is now
deprecated: use XLogRecGetBlockTagExtended instead.)
Patch HEAD only; this doesn't seem to have enough value to consider
a back-branch API break.
Discussion: https://postgr.es/m/425039.1649701221@sss.pgh.pa.us
Remove comment block about how heap page vacuuming used to set tuples
with storage to LP_UNUSED in a rare edge case that can no longer happen
following commit 8523492d4e. The comments seem unnecessary now, since
it's now generally clear that heap vacuuming only applies to LP_DEAD
items from VACUUM's first heap pass following more recent work from
commits 12b5ade902 and 4f8d9d1217.
As of commit 39969e2a1, no caller of do_pg_backup_start() passes NULL
for labelfile or tblspcmapfile, nor is it plausible that any would
do so in the future. Remove the code that coped with that case,
as (a) it's dead and (b) it causes Coverity to bleat about possibly
leaked storage.
While here, do some janitorial work on the function's header comment.
\dconfig without an argument originally printed all parameters,
but it seems more useful to print only those parameters with
non-default settings. You can easily get the show-everything
behavior with "\dconfig *", but that output is unwieldy and
seems unlikely to be wanted very often.
Per suggestion from Christoph Berg.
Discussion: https://postgr.es/m/YlFQLzlPi4QD0wSi@msg.df7cb.de
With nowait passed as false, pgstat_lock_entry() must return true
so there's no need to check its result. Coverity seems unconvinced
of this, so whack it upside the head with a (void) cast.
The existing coding resulted in touching every copyright-containing
file in the tree, even if it was already up to date. That doesn't
matter much for the annual run, but it's an annoyance if you try
to use the script for mop-up at the close of a devel cycle, as
I just did.
Discussion: https://postgr.es/m/266030.1649685473@sss.pgh.pa.us
Commit 4a7e964fc6 made pgperlsyncheck fail, but apparently nobody
noticed, although the buildfarm module that does more or less the same
thing was modified. So fix the in-core test. I will look at unifying the
two sets of tests so we avoid a future mismatch.
These apply to traces from Test::More functions such as ok(), is(),
diag() and note(). Output from other sources (e.g. external programs
such as initdb) is not affected. The elapsed time is the time since the
last such trace (or the beginning of the test in the first case). Times
and timestamps are at millisecond precision.
Discussion: https://postgr.es/m/20220401172150.rsycz4lrn7ewruil@alap3.anarazel.de
wrasse, at least, moans about the lack of any "return" statement
in these functions. You'd think pretty much everything would
know that exit(1) doesn't return, but evidently not.
Get rid of the separate "FATAL" log level, as it was applied
so inconsistently as to be meaningless. This mostly involves
s/pg_log_fatal/pg_log_error/g.
Create a macro pg_fatal() to handle the common use-case of
pg_log_error() immediately followed by exit(1). Various
modules had already invented either this or equivalent macros;
standardize on pg_fatal() and apply it where possible.
Invent the ability to add "detail" and "hint" messages to a
frontend message, much as we have long had in the backend.
Except where rewording was needed to convert existing coding
to detail/hint style, I have (mostly) resisted the temptation
to change existing message wording.
Patch by me. Design and patch reviewed at various stages by
Robert Haas, Kyotaro Horiguchi, Peter Eisentraut and
Daniel Gustafsson.
Discussion: https://postgr.es/m/1363732.1636496441@sss.pgh.pa.us
Before commit 412ad7a556, delayChkpt
was a Boolean. Now it's an integer. Extensions using it need to be
appropriately updated, so let's rename the field to make sure that
a hard compilation failure occurs.
Replacing delayChkpt with delayChkptFlags made a few comments extend
past 80 characters, so I reflowed them and changed some wording very
slightly.
The back-branches will need a different change to restore compatibility
with existing minor releases; this is just for master.
Per suggestion from Tom Lane.
Discussion: http://postgr.es/m/a7880f4d-1d74-582a-ada7-dad168d046d1@enterprisedb.com
Commit 5a2832465f added
addFooterToPublicationDesc() as a wrapper around
printTableAddFooter(). The translation marker _() was moved to the
body of addFooterToPublicationDesc(), but addFooterToPublicationDesc()
was not added to GETTEXT_TRIGGERS, so those strings were lost for
translation. To fix, add the translation markers to the call sites of
addFooterToPublicationDesc() and remove the translation marker from
the body of the function. This seems easiest since there were only
two callers and it keeps the API consistent with
printTableAddFooter(). While we're here, add some const decorations
to the prototype of addFooterToPublicationDesc() for consistency with
printTableAddFooter().
Up until now, we've had a policy of only marking certain variables
in the PostgreSQL header files with PGDLLIMPORT, but now we've
decided to mark them all. This means that extensions running on
Windows should no longer operate at a disadvantage as compared to
extensions running on Linux: if the variable is present in a header
file, it should be accessible.
Discussion: http://postgr.es/m/CA+TgmoYanc1_FSfimhgiWSqVyP5KKmh5NP2BWNwDhO8Pg2vGYQ@mail.gmail.com
These are usually not useful since users will use packaged
distributions and won't be interested in rebuilding their installation
from source. Also, we have only used these kinds of hints for some
features and in some places, not consistently throughout.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/2552aed7-d0e9-280a-54aa-2dc7073f371d%40enterprisedb.com
Previously, the output of the EXPLAIN (BUFFERS) option showed only the I/O
timing spent reading and writing shared and local buffers. This commit
adds on top of that the I/O timing for temporary buffers in the output
of EXPLAIN (for spilled external sorts, hashes, materialization, etc.).
This can be helpful for users in cases where the I/O related to
temporary buffers is the bottleneck.
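For example, with track_io_timing enabled and a sort that spills to
disk, the plan output gains a temp timing line, roughly of this shape
(exact formatting approximate):
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM generate_series(1, 1000000) g ORDER BY g;
--   Buffers: temp read=... written=...
--   I/O Timings: temp read=... write=...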
Like its cousin, this information is available only when track_io_timing
is enabled. Testing the patch shows an extra overhead of up to 1% even
when using gettimeofday() as the implementation for interval timings,
which is within the usual range of measurable noise.
Author: Masahiko Sawada
Reviewed-by: Georgios Kokolatos, Melanie Plageman, Julien Rouhaud,
Ranier Vilela
Discussion: https://postgr.es/m/CAD21AoAJgotTeP83p6HiAGDhs_9Fw9pZ2J=_tYTsiO5Ob-V5GQ@mail.gmail.com
With -DCATCACHE_FORCE_RELEASE a few tests failed. Those were trying to test
behavior in the absence of invalidation processing and
-DCATCACHE_FORCE_RELEASE obviously adds a lot of invalidation processing. The
test already tried to handle debug_discard_caches > 0, by disabling it for
individual tests.
Instead hide potentially problematic function calls in a wrapper function that
catches the does-not-exist error. The error isn't actually the interesting
bit; it's whether the stats entry still exists afterwards.
I confirmed that the test still catches leaked function stats if I nuke the
protections against that in pgstat_function.c.
Per buildfarm animal prion.
Discussion: https://postgr.es/m/20220407165709.jgdkrzqlkcwue6ko@alap3.anarazel.de
- subscriber stats reset path was untested
- slot stats reset path for all slots was untested
- pg_stat_database.sessions etc. was untested
- pg_stat_reset_shared() was untested, for any kind of shared stats
- pg_stat_reset() was untested (see the example calls below)
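A sample of the calls now exercised (the exact set covered by the tests
may differ):
SELECT pg_stat_reset();                      -- stats of the current database
SELECT pg_stat_reset_shared('bgwriter');     -- one kind of shared stats
SELECT pg_stat_reset_shared('archiver');
SELECT pg_stat_reset_replication_slot(NULL); -- all slots at once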
Author: Melanie Plageman <melanieplageman@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Reclaim space from the line pointer array when heap pruning leaves
behind a contiguous group of LP_UNUSED items at the end of the array.
This happens during subsequent page defragmentation. Certain kinds of
heap line pointer bloat are ameliorated by this new optimization.
Follow-up work to commit 3c3b8a4b26, which taught VACUUM to truncate the
line pointer array in about the same way during VACUUM's second pass
over the heap. We now apply line pointer array truncation during both
the first and the second pass over the heap made by VACUUM. We can also
perform line pointer array truncation during opportunistic pruning.
Matthias van de Meent, with small tweaks by me.
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAEze2WjgaQc55Y5f5CQd3L=eS5CZcff2Obxp=O6pto8-f0hC4w@mail.gmail.com
Discussion: https://postgr.es/m/CAEze2Wg36%2B4at2eWJNcYNiW2FJmht34x3YeX54ctUSs7kKoNcA%40mail.gmail.com
Window functions such as row_number() always return a value higher than
the previously returned value for tuples in any given window partition.
Traditionally queries such as:
SELECT * FROM (
SELECT *, row_number() over (order by c) rn
FROM t
) t WHERE rn <= 10;
were executed fairly inefficiently. Neither the query planner nor the
executor knew that once rn made it to 11 that nothing further would match
the outer query's WHERE clause. It would blindly continue until all
tuples were exhausted from the subquery.
Here we implement means to make the above execute more efficiently.
This is done by way of adding a pg_proc.prosupport function to various of
the built-in window functions and adding supporting code to allow the
support function to inform the planner if the window function is
monotonically increasing, monotonically decreasing, both or neither. The
planner is then able to make use of that information and possibly allow
the executor to short-circuit execution by way of adding a "run condition"
to the WindowAgg to allow it to determine if some of its execution work
can be skipped.
This "run condition" is not like a normal filter. These run conditions
are only built using quals comparing values to monotonic window functions.
For monotonically increasing functions, quals making use of the btree
operators for <, <= and = can be used (assuming the window function column
is on the left). Once such a condition becomes false, a monotonically
increasing function can never make it true again. For monotonically
decreasing functions the >, >= and = btree operators for the given type
can be used for run conditions.
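With the example query above, the new plan shape looks roughly like
this (output approximate):
EXPLAIN (COSTS OFF)
SELECT * FROM (
  SELECT *, row_number() OVER (ORDER BY c) rn
  FROM t
) t WHERE rn <= 10;
-- WindowAgg
--   Run Condition: (row_number() OVER (?) <= 10)
--   ->  Sort
--         Sort Key: t.c
--         ->  Seq Scan on t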
The best-case situation for this is when there is a single WindowAgg node
without a PARTITION BY clause. Here when the run condition becomes false
the WindowAgg node can simply return NULL. No more tuples will ever match
the run condition. It's a little more complex when there is a PARTITION
BY clause. In this case, we cannot return NULL as we must still process
other partitions. To speed this case up we pull tuples from the outer
plan to check if they're from the same partition and simply discard them
if they are. When we find a tuple belonging to another partition we start
processing as normal again until the run condition becomes false or we run
out of tuples to process.
When there are multiple WindowAgg nodes to evaluate then this complicates
the situation. For intermediate WindowAggs we must ensure we always
return all tuples to the calling node. Any filtering done could lead to
incorrect results in WindowAgg nodes above. For all intermediate nodes,
we can still save some work when the run condition becomes false. We've
no need to evaluate the WindowFuncs anymore. Other WindowAgg nodes cannot
reference the value of these and these tuples will not appear in the final
result anyway. The savings here are small in comparison to what can be
saved in the top-level WindowAgg, but still worthwhile.
Intermediate WindowAgg nodes never filter out tuples, but here we change
WindowAgg so that the top-level WindowAgg filters out tuples that don't
match the intermediate WindowAgg node's run condition. Such filters
appear in the "Filter" clause in EXPLAIN for the top-level WindowAgg node.
Here we add prosupport functions to allow the above to work for:
row_number(), rank(), dense_rank(), count(*) and count(expr). It appears
technically possible to do the same for min() and max(), however, it seems
unlikely to be useful enough, so that's not done here.
Bump catversion
Author: David Rowley
Reviewed-by: Andy Fan, Zhihong Yu
Discussion: https://postgr.es/m/CAApHDvqvp3At8++yF8ij06sdcoo1S_b2YoaT9D4Nf+MObzsrLQ@mail.gmail.com
Previously none of our tests triggered recovery conflicts. The test is
primarily motivated by needing tests for recovery conflict stats for shared
memory based pgstats. But it's also a decent start for recovery conflict
handling in general.
The only type of recovery conflict not yet tested is recovery deadlock
conflicts.
By configuring log_recovery_conflict_waits the test adds some very minimal
testing for that path as well.
Author: Melanie Plageman <melanieplageman@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Tests that standbys:
- drop stats for objects when the corresponding drop records are replayed
- persist stats across graceful restarts
- discard stats after immediate / crash restarts
Author: Melanie Plageman <melanieplageman@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Plain \dconfig is basically equivalent to SHOW except that you can
give it a pattern with wildcards, either to match multiple GUCs or
because you don't exactly remember the name you want.
\dconfig+ adds type, context, and access-privilege information,
mainly because every other kind of object privilege has a psql command
to show it, so GUC privileges should too. (A form of this command was
in some versions of the patch series leading up to commit a0ffa885e.
We pulled it out then because of doubts that the design and code were
up to snuff, but I think subsequent work has resolved that.)
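For instance (patterns invented for illustration):
\dconfig work_mem
\dconfig shared_b*
\dconfig+ work_mem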
In passing, fix incorrect completion of GUC names in GRANT/REVOKE
ON PARAMETER: a0ffa885e neglected to use the VERBATIM form of
COMPLETE_WITH_QUERY, so it misbehaved for custom (qualified) GUC
names.
Mark Dilger and Tom Lane
Discussion: https://postgr.es/m/3118455.1649267333@sss.pgh.pa.us
Test that stats are restored during normal restarts, discarded after a crash /
immediate restart, and that a corrupted stats file leads to stats being reset.
Author: Melanie Plageman <melanieplageman@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Modify the subroutines called by RI trigger functions that want to check
if a given referenced value exists in the referenced relation to simply
scan the foreign key constraint's unique index, instead of using SPI to
execute
SELECT 1 FROM referenced_relation WHERE ref_key = $1
This saves a lot of work, especially when inserting into or updating a
referencing relation.
This rewrite makes it possible to fix a PK row visibility bug caused by
a partition descriptor hack which requires ActiveSnapshot to be set to
come up with the correct set of partitions for the RI query running under
REPEATABLE READ isolation. We now set that snapshot independently of the
snapshot to be used by the PK index scan, so the two no longer interfere. The
buggy output in src/test/isolation/expected/fk-snapshot.out of the
relevant test case added by commit 00cb86e75d has been corrected.
(The bug still exists in branch 14, but this fix is too invasive to
backpatch.)
Author: Amit Langote <amitlangote09@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Li Japin <japinli@hotmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/CA+HiwqGkfJfYdeq5vHPh6eqPKjSbfpDDY+j-kXYFePQedtSLeg@mail.gmail.com
They are to check the behavior of RI_FKey_check() and
ri_Check_Pk_Match(). A test case whereby RI_FKey_check() queries a
partitioned PK table under REPEATABLE READ isolation produces wrong
output due to a bug in the partition-descriptor logic, which is noted
as such in a comment in the test. A subsequent commit will fix the
bug and replace the buggy output with the correct one.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/1627848.1636676261@sss.pgh.pa.us
Add support for unlogged sequences. Unlike for unlogged tables, this
is not a performance feature. It allows sequences associated with
unlogged tables to be excluded from replication.
A new subcommand ALTER SEQUENCE ... SET LOGGED/UNLOGGED is added.
An identity/serial sequence now automatically gets and follows the
persistence level (logged/unlogged) of its owning table. (The
sequences owned by temporary tables were already temporary through the
separate mechanism in RangeVarAdjustRelationPersistence().) But you
can still change the persistence of an owned sequence separately.
Also, pg_dump and pg_upgrade preserve the persistence of existing
sequences.
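For example:
CREATE UNLOGGED SEQUENCE useq;
ALTER SEQUENCE useq SET LOGGED;
CREATE UNLOGGED TABLE ut (id int GENERATED ALWAYS AS IDENTITY);
-- the identity sequence behind ut.id is created unlogged as well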
Discussion: https://www.postgresql.org/message-id/flat/04e12818-2f98-257c-b926-2845d74ed04f%402ndquadrant.com
It might be worth instead splitting the test up to produce a smaller
alternative output file. But that's not trivial either, due to the number of
steps defined. And more than I want to do tonight.
Per buildfarm.
Introduce a new GUC recovery_prefetch. When enabled, look ahead in the
WAL and try to initiate asynchronous reading of referenced data blocks
that are not yet cached in our buffer pool. For now, this is done with
posix_fadvise(), which has several caveats. Since not all OSes have
that system call, "try" is provided so that it can be enabled where
available. Better mechanisms for asynchronous I/O are possible in later
work.
Set to "try" for now for test coverage. Default setting to be finalized
before release.
The GUC wal_decode_buffer_size limits the distance we can look ahead in
bytes of decoded data.
The existing GUC maintenance_io_concurrency is used to limit the number
of concurrent I/Os allowed, based on pessimistic heuristics used to
infer that I/Os have begun and completed. We'll also not look more than
maintenance_io_concurrency * 4 block references ahead.
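The relevant settings, for reference (values illustrative, not
recommendations; wal_decode_buffer_size can only be set at server start):
ALTER SYSTEM SET recovery_prefetch = 'try';
ALTER SYSTEM SET wal_decode_buffer_size = '512kB';
ALTER SYSTEM SET maintenance_io_concurrency = 10;
SELECT pg_reload_conf();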
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>
Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com> (earlier version)
Reviewed-by: Andres Freund <andres@anarazel.de> (earlier version)
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com> (earlier version)
Tested-by: Tomas Vondra <tomas.vondra@2ndquadrant.com> (earlier version)
Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com> (earlier version)
Tested-by: Dmitry Dolgov <9erthalion6@gmail.com> (earlier version)
Tested-by: Sait Talha Nisanci <Sait.Nisanci@microsoft.com> (earlier version)
Discussion: https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com
Change two macros to be static inline functions instead to keep the
data type consistent. This avoids a "comparison is always true"
warning that was occurring with -Wtype-limits. In the process, change
the names to look less like macros.
Discussion: https://postgr.es/m/20220407063505.njnnrmbn4sxqfsts@alap3.anarazel.de
In the stats collector days it was hard to write tests for the stats system,
because fundamentally delivery of stats messages over UDP was not
synchronous (nor guaranteed). Now we easily can force pending stats updates to
be flushed synchronously.
This moves stats.sql into a parallel group; there isn't a reason for it to run
in isolation anymore. And it may shake out some bugs.
Bumps catversion.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Just after committing 5891c7a8ed, a test running with debug_discard_caches=1
failed locally...
pgstat_drop_relation() neither checked pgstat_should_count_relation() nor
called pgstat_prep_relation_pending(). With debug_discard_caches=1,
rel->pgstat_info wasn't set up, leading to pg_stat_get_xact_tuples_inserted()
spuriously still returning > 0 while in the transaction dropping the table.
Zeroing out an lwlock in a normal build turns out to not trigger any alarms,
if nobody can use the lwlock at that moment (as is the case here). But with
--disable-spinlocks --disable-atomics, the sema field needs to be initialized.
We probably should make sure that this fails on more common configurations as
well...
Per buildfarm animal rorqual
Broken by 5c279a6d35. But it looks like it had been half-broken since
70e81861fa, because 'rmid' no longer referred to the current record's rmid,
but to the rmid from "Initialize resource managers" - a constant.
Allow extensions to specify a new custom resource manager (rmgr),
which allows specialized WAL records. This is meant to be used by a Table
Access Method or Index Access Method.
Prior to this commit, only Generic WAL was available, which offers
support for recovery and physical replication but not logical
replication.
Reviewed-by: Julien Rouhaud, Bharath Rupireddy, Andres Freund
Discussion: https://postgr.es/m/ed1fb2e22d15d3563ae0eb610f7b61bb15999c0a.camel%40j-davis.com
This change affects SubTransGetTopmostTransaction(), used to find the
topmost transaction ID of a given transaction ID. The cache is able to
store one value, so that we can save the backend from unnecessary lookups
in pg_subtrans/ on repeated calls of this routine. There is a similar
practice in transam.c, for example.
Author: Simon Riggs
Reviewed-by: Andrey Borodin, Julien Rouhaud
Discussion: https://postgr.es/m/CANbhV-G8Co=yq4v4BkW7MJDqVt68K_8A48nAZ_+8UQS7LrwLEQ@mail.gmail.com
With stats now being stored in shared memory, the GUC isn't needed
anymore. However, the pg_stat_tmp directory and PG_STAT_TMP_DIR define are
kept, as pg_stat_statements (and some out-of-core extensions) store data in
it.
Docs will be updated in a subsequent commit, together with the other pending
docs updates due to shared memory stats.
Author: Andres Freund <andres@anarazel.de>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220330233550.eiwsbearu6xhuqwe@alap3.anarazel.de
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Previously the statistics collector received statistics updates via UDP and
shared statistics data by writing them out to temporary files regularly. These
files can reach tens of megabytes and are written out up to twice a
second. This has repeatedly prevented us from adding additional useful
statistics.
Now statistics are stored in shared memory. Statistics for variable-numbered
objects are stored in a dshash hashtable (backed by dynamic shared
memory). Fixed-numbered stats are stored in plain shared memory.
The header for pgstat.c contains an overview of the architecture.
The stats collector is not needed anymore, remove it.
By utilizing the transactional statistics drop infrastructure introduced in a
prior commit, statistics entries cannot "leak" anymore. Previously leaked
statistics were dropped by pgstat_vacuum_stat(), called from [auto-]vacuum. On
systems with many small relations pgstat_vacuum_stat() could be quite
expensive.
Now that replicas drop statistics entries for dropped objects, it is not
necessary anymore to reset stats when starting from a cleanly shut down
replica.
Subsequent commits will perform some further code cleanup, adapt docs and add
tests.
Bumps PGSTAT_FILE_FORMAT_ID.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Author: Andres Freund <andres@anarazel.de>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Reviewed-By: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-By: "David G. Johnston" <david.g.johnston@gmail.com>
Reviewed-By: Tomas Vondra <tomas.vondra@2ndquadrant.com> (in a much earlier version)
Reviewed-By: Arthur Zakirov <a.zakirov@postgrespro.ru> (in a much earlier version)
Reviewed-By: Antonin Houska <ah@cybertec.at> (in a much earlier version)
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Discussion: https://postgr.es/m/20220308205351.2xcn6k4x5yivcxyd@alap3.anarazel.de
Discussion: https://postgr.es/m/20210319235115.y3wz7hpnnrshdyv6@alap3.anarazel.de
Most of pgstat uses pgstat_<verb>_<subject>() or just <verb>_<subject>(). But
not all (some introduced fairly recently by me). Rename ones that aren't
intentionally following a different scheme (e.g. AtEOXact_*).
The column 'subskiplsn' uses TYPALIGN_DOUBLE (which has 4-byte alignment
on AIX) for storage. But the C struct (Form_pg_subscription) has 8-byte
alignment for this field, so retrieving it from storage causes an
unaligned read.
To fix this, we rearranged the 'subskiplsn' column in the catalog so that
it naturally comes at an 8-byte boundary.
We have fixed a similar problem in commit f3b421da5f. This patch adds a
test to avoid a similar mistake in the future.
Reported-by: Noah Misch
Diagnosed-by: Noah Misch, Masahiko Sawada, Amit Kapila
Author: Masahiko Sawada
Reviewed-by: Noah Misch, Amit Kapila
Discussion: https://postgr.es/m/20220401074423.GC3682158@rfd.leadboat.com
Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
Previously the pgstat <-> replication slots API was done on the basis of
names. However, the upcoming move to storing stats in shared memory makes it
more convenient to use an integer as key.
Change the replication slot functions to take the slot rather than the slot
name, and expose ReplicationSlotIndex() to compute the index of a replication
slot. Special handling will be required for restarts, as the index is not
stable across restarts. For now pgstat internally still uses names.
Rename pgstat_report_replslot_{create,drop}() to
pgstat_{create,drop}_replslot() to match the functions for other kinds of
stats.
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220404041516.cctrvpadhuriawlq@alap3.anarazel.de
One problematic part of the current statistics collector design is that there
is no reliable way of getting rid of statistics entries. Because of that
pgstat_vacuum_stat() (called by [auto-]vacuum) matches all stats for the
current database with the catalog contents and tries to drop now-superfluous
entries. That's quite expensive. What's worse, it doesn't work on physical
replicas, despite physical replicas collecting statistics entries.
This commit introduces infrastructure to create / drop statistics entries
transactionally, together with the underlying catalog objects (functions,
relations, subscriptions). pgstat_xact.c maintains a list of stats entries
created / dropped transactionally in the current transaction. To ensure that
the removal of statistics entries is durable, dropped statistics entries are
included in commit / abort (and prepare) records, which also ensures that
stats entries are dropped on standbys.
Statistics entries created separately from creating the underlying catalog
object (e.g. when stats were previously lost due to an immediate restart)
are *not* WAL logged. However that can only happen outside of the transaction
creating the catalog object, so it does not lead to "leaked" statistics
entries.
For this to work, functions creating / dropping functions / relations /
subscriptions need to call into pgstat. For subscriptions this was already
done when dropping subscriptions, via pgstat_report_subscription_drop() (now
renamed to pgstat_drop_subscription()).
This commit does not actually drop stats yet, it just provides the
infrastructure. It is however a largely independent piece of infrastructure,
so committing it separately makes sense.
Bumps XLOG_PAGE_MAGIC.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
With the introduction of PgStat_Kind, PgStat_Single_Reset_Type and
PgStat_Shared_Reset_Target don't make sense anymore. Replace them with
PgStat_Kind.
Instead of having dedicated reset functions for different kinds of stats, use
two generic helper routines (one to reset all stats of a kind, one to reset
one stats entry).
A number of reset functions were named pgstat_reset_*_counter(), despite
affecting multiple counters. The generic helper routines get rid of
pgstat_reset_single_counter(), pgstat_reset_subscription_counter().
Rename pgstat_reset_slru_counter(), pgstat_reset_replslot_counter() to
pgstat_reset_slru(), pgstat_reset_replslot() respectively, and have them only
deal with a single SLRU/slot. Resetting all SLRUs/slots goes through the
generic pgstat_reset_of_kind().
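At the SQL level, the per-entry behavior looks like this (slot name
invented; NULL means all entries of the kind):
SELECT pg_stat_reset_slru('CommitTs');            -- one SLRU
SELECT pg_stat_reset_slru(NULL);                  -- all SLRUs
SELECT pg_stat_reset_replication_slot('my_slot'); -- one slot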
Previously pg_stat_reset_replication_slot() used SearchNamedReplicationSlot()
to check if a slot exists. API wise it seems better to move that to
pgstat_replslot.c.
This is done separately from the - quite large - shared memory statistics
patch to make review easier.
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220404041516.cctrvpadhuriawlq@alap3.anarazel.de
This option is useful for doing a rewind with the server configuration file
(aka postgresql.conf) located outside the data directory, which is
something that some Linux distributions and some HA tools like to rely
on. As a result, this can simplify the logic around a rewind by
avoiding the copy of such files before running pg_rewind.
This option affects pg_rewind when it internally starts the target
cluster with some "postgres" commands, adding -c config_file=FILE to the
command strings generated, when:
- retrieving a restore_command using a "postgres -C" command for
-c/--restore-target-wal.
- forcing crash recovery once to get the cluster into a clean shutdown
state.
Author: Gunnar "Nick" Bluth
Reviewed-by: Michael Banck, Alexander Kukushkin, Michael Paquier,
Alexander Alekseev
Discussion: https://postgr.es/m/7c59265d-ac50-b0aa-ca1e-65e8bd27642a@pro-open.de
This seems beneficial on high-core-count machines, and not harmful
on lesser hardware. However, older ARM32 gear doesn't have this
instruction, so restrict the patch to ARM64.
Geoffrey Blake
Discussion: https://postgr.es/m/78338F29-9D7F-4DC8-BD71-E9674CE71425@amazon.com
Until now index_concurrently_swap() directly modified pgstat internal
datastructures. That will break with the introduction of shared memory
statistics and seems off architecturally.
This is done separately from the - quite large - shared memory statistics
patch to make review easier.
Author: Andres Freund <andres@anarazel.de>
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Only the pgstat_send_* functions that are called from outside pgstat*.c are
renamed (the rest will go away). This is done separately from the - quite
large - shared memory statistics patch to make review easier.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220404041516.cctrvpadhuriawlq@alap3.anarazel.de
Soon the stats collector will be no more, with statistics instead getting
stored in shared memory. There are a lot of references to the stats collector
in comments. This commit replaces most of these references with "cumulative
statistics system", with the remaining ones getting replaced as part of
subsequent commits.
This is done separately from the - quite large - shared memory statistics
patch to make review easier.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
Discussion: https://postgr.es/m/20220308205351.2xcn6k4x5yivcxyd@alap3.anarazel.de
The transactional integration code is largely independent from the rest of
pgstat.c. Subsequent commits will add more related code.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20220404041516.cctrvpadhuriawlq@alap3.anarazel.de
It might seem pointless to allow use of dsm in single user mode, but otherwise
subsystems might need dedicated single user mode code paths.
Besides changing the assert, all that's needed is to make some Windows code
that assumes the presence of a postmaster conditional.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA+hUKGL9hY_VY=+oUK+Gc1iSRx-Ls5qeYJ6q=dQVZnT3R63Taw@mail.gmail.com
Exclusive-mode backups have been deprecated since 9.6 (when
non-exclusive backups were introduced) due to the issues
they can cause should the system crash while one is running and
generally because non-exclusive provides a much better interface.
Further, exclusive backup mode wasn't really being tested (nor was most
of the related code, like being able to log in just to stop an exclusive
backup and the bits of the state machine related to that) and having to
possibly deal with an exclusive backup and the backup_label file
existing during pg_basebackup, pg_rewind, etc, added other complexities
that we are better off without.
This patch removes the exclusive backup mode, the various special cases
for dealing with it, and greatly simplifies the online backup code and
documentation.
Authors: David Steele, Nathan Bossart
Reviewed-by: Chapman Flack
Discussion: https://postgr.es/m/ac7339ca-3718-3c93-929f-99e725d1172c@pgmasters.net
Discussion: https://postgr.es/m/CAHg+QDfiM+WU61tF6=nPZocMZvHDzCK47Kneyb0ZRULYzV5sKQ@mail.gmail.com
This patch allows "PGC_SUSET" parameters to be set by non-superusers
if they have been explicitly granted the privilege to do so.
The privilege to perform ALTER SYSTEM SET/RESET on a specific parameter
can also be granted.
Such privileges are cluster-wide, not per database. They are tracked
in a new shared catalog, pg_parameter_acl.
Granting and revoking these new privileges works as one would expect.
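For example (role names invented):
GRANT SET ON PARAMETER wal_compression TO app_admin;
GRANT ALTER SYSTEM ON PARAMETER max_connections TO ops_role;
REVOKE SET ON PARAMETER wal_compression FROM app_admin;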
One caveat is that PGC_USERSET GUCs are unaffected by the SET privilege
--- one could wish that those were handled by a revocable grant to
PUBLIC, but they are not, because we couldn't make it robust enough
for GUCs defined by extensions.
Mark Dilger, reviewed at various times by Andrew Dunstan, Robert Haas,
Joshua Brindle, and myself
Discussion: https://postgr.es/m/3D691E20-C1D5-4B80-8BA5-6BEB63AF3029@enterprisedb.com
The test created a 1m row table in order to test parallel operation of
JSON_VALUE. However, this was more than were needed for the test, so
save time by halving it, and also by making the table unlogged.
Experimentation shows that this size is only a little above the number
required to generate the expected output.
Per gripe from Andres Freund
Discussion: https://postgr.es/m/20220406022118.3ocqvhxr6kciw5am@alap3.anarazel.de
In commit 27e1f1456, create_append_plan() only allowed the subplan
created from a given subpath to be executed asynchronously when it was
an async-capable ForeignPath. To extend coverage, this patch also
handles cases where the given subpath includes other Path types that
can be omitted in the plan processing, such as a ProjectionPath directly
atop an async-capable ForeignPath, allowing asynchronous execution in
partitioned-scan/partitioned-join queries with non-Var tlist expressions
and more UNION queries.
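As an illustrative sketch (names invented: pt is a partitioned table
whose partitions p1 and p2 are async-capable postgres_fdw foreign
tables), queries like
    SELECT a + 1 FROM pt;                        -- non-Var tlist expression
    SELECT a FROM p1 UNION ALL SELECT a FROM p2;
can now run the underlying foreign scans asynchronously, since path
nodes such as a ProjectionPath atop an async-capable ForeignPath no
longer disable asynchronous execution.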
Andrey Lepikhov and Etsuro Fujita, reviewed by Alexander Pyhalov and
Zhihong Yu.
Discussion: https://postgr.es/m/659c37a8-3e71-0ff2-394c-f04428c76f08%40postgrespro.ru
Commit 4a39f87acd changed the aggregated log format. The problem is
that the explanatory paragraph for the log line in the documentation
is now too long. Also, the log format included more optional columns,
making the log lines harder to parse. This commit solves those
problems.
- There are no optional log columns anymore. If a column is not
meaningful with the provided pgbench options, it will be presented as 0.
- Reorder the log columns so that it's easier to parse them.
- Adjust explanatory paragraph for the log line in the doc.
Discussion: https://postgr.es/m/flat/202203280757.3tu4ovs3petm%40alvherre.pgsql
It's possible for the query that "waits for restart" to complete a
successful iteration before the postmaster has noticed its SIGKILL'd
child and begun the restart cycle. (This is a bit hard to believe
perhaps, but it's been seen at least twice in the buildfarm, mainly
on ancient platforms that likely have quirky schedulers.)
To provide a more secure interlock, wait for the other session
we're using to report that it's been forcibly shut down.
Patch by me, based on a suggestion from Andres Freund.
Back-patch to v14 where this test case came in.
Discussion: https://postgr.es/m/1801850.1649047827@sss.pgh.pa.us
The pg_fatal() log messages which included file sizes were using
UINT64_FORMAT for the size_t variables, which failed on 32-bit
buildfarm animals. Change to using plain int instead, which is in line
with how digestControlFile() is doing it already.
Per buildfarm animals florican and lapwing.
Discussion: https://postgr.es/m/13C2BF64-4A6D-47E4-9181-3A658F00C9B7@yesql.se
These clauses allow the user to specify how data from nested paths are
joined, allowing considerable freedom in shaping the tabular output of
JSON_TABLE.
PLAN DEFAULT allows the user to specify the global strategies when
dealing with sibling or child nested paths. This is often sufficient to
achieve the necessary goal, and is considerably simpler than the full
PLAN clause, which allows the user to specify the strategy to be used
for each named nested path.
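A rough example of the simpler form (data and column names invented;
this is a sketch of the syntax, not a full description):
    SELECT * FROM JSON_TABLE(
        jsonb '{"a": [1, 2], "b": [10, 20]}', '$'
        COLUMNS (
            NESTED PATH '$.a[*]' COLUMNS (a int PATH '$'),
            NESTED PATH '$.b[*]' COLUMNS (b int PATH '$'))
        PLAN DEFAULT (CROSS)) AS jt;
Here PLAN DEFAULT (CROSS) asks for the two sibling nested paths to be
joined with cross-join semantics instead of the default union.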
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/7e2cb85d-24cf-4abb-30a5-1a33715959bd@postgrespro.ru
Commits 74cf7d46 and a61daa14 fixed pg_upgrade bugs involving oversights
in how relfrozenxid or relminmxid are carried forward or initialized.
Corruption caused by bugs of this nature was ameliorated by commit
78db307bb2, which taught VACUUM to always overwrite existing invalid
relfrozenxid or relminmxid values that are apparently "in the future".
Extend that work now by showing a warning in the event of overwriting
either relfrozenxid or relminmxid due to an existing value that is "in
the future". There is probably a decent chance that the sanity checks
added by commit 699bf7d05c will raise an error before VACUUM reaches
this point, but we shouldn't rely on that.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-WzmRZEzeGvLv8yDW0AbFmSvJjTziORqjVUrf74mL4GL0Ww@mail.gmail.com
There's a race condition if a file changes in the source system
after we have collected the file list. If the file becomes larger,
we only fetched up to its original size. That can easily result in
a truncated file. That's not a problem for relation files, files
in pg_xact, etc. because any actions on them will be replayed from
the WAL. However, configuration files are affected.
This commit mitigates the race condition by fetching small files in
whole, even if they have grown. A test is added in which an extra
copied file is concurrently grown with the output of pg_rewind, thus
guaranteeing that it changes in size during the operation. This
is not a full fix: we still believe the original file size for files
larger than 1 MB. That should be enough for configuration files,
and doing more than that would require big changes to the chunking
logic in libpq_source.c.
This mitigates the race condition if the file is modified between
the original scan of files and copying the file, but there's still
a race condition if a file is changed while it's being copied.
That's a much smaller window, though, and pg_basebackup has the
same issue.
This race can be seen with pg_auto_failover, which frequently uses
ALTER SYSTEM, which updates postgresql.auto.conf. Often, pg_rewind
will fail, because the postgresql.auto.conf file changed concurrently
and a partial version of it was copied to the target. The partial
file would fail to parse, preventing the server from starting up.
Author: Heikki Linnakangas
Reviewed-by: Cary Huang
Discussion: https://postgr.es/m/f67feb24-5833-88cb-1020-19a4a2b83ac7%40iki.fi
The test logic is extended with two new concepts:
- Addition of a compression command called compress_cmd, executed
between restore_cmd and dump_cmd to control the contents of the dumps.
In the case of this commit, this is used to compress or decompress
elements of a dump to test new code paths.
- Addition of a new flag called compile_option, to check if a set of
tests can be executed depending on the ./configure options used in a
given build.
The tests introduced here are for gzip, but they are designed so that
they can easily be extended for new compression methods.
Author: Georgios Kokolatos, Rachel Heaton
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss=@protonmail.com
* Move the execution pruning initialization steps that are common
between both ExecInitAppend() and ExecInitMergeAppend() into a new
function ExecInitPartitionPruning() defined in execPartition.c.
Those steps include creation of a PartitionPruneState to be used for
all instances of pruning and determining the minimal set of child
subplans that need to be initialized by performing initial pruning if
needed, and finally adjusting the subplan_map arrays in the
PartitionPruneState to reflect the new set of subplans remaining
after initial pruning if it was indeed performed.
ExecCreatePartitionPruneState() is no longer exported out of
execPartition.c and has been renamed to CreatePartitionPruneState()
as a local sub-routine of ExecInitPartitionPruning().
* Likewise, ExecFindInitialMatchingSubPlans() that was in charge of
performing initial pruning no longer needs to be exported. In fact,
since it would now have the same body as the more generally named
ExecFindMatchingSubPlans(), except differing in the value of
initial_prune passed to the common subroutine
find_matching_subplans_recurse(), it seems better to remove it and add
an initial_prune argument to ExecFindMatchingSubPlans().
* Add an ExprContext field to PartitionPruneContext to remove the
runtime pruning code's implicit assumption that the ExprContext needed
to compute pruning expressions can always be obtained from the
PlanState. A future patch will allow runtime pruning (at
least the initial pruning steps) to be performed without the
corresponding PlanState yet having been created, so this will help.
Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqEYCpEqh2LMDOp9mT+4-QoVe8HgFMKBjntEMCTZLpcCCA@mail.gmail.com
The expected backend message after SIGQUIT changed in commit
7e784d1dc, but we missed updating this test case. Also, experience
shows that we might sometimes get "could not send data to server"
instead of either of the libpq messages the test is looking for.
Per report from Mark Dilger. Back-patch to v14 where the
backend message changed.
Discussion: https://postgr.es/m/17BD82D7-49AC-40C9-8204-E7ADD30321A0@enterprisedb.com
The previous coding of dshash_seq_next(), on the first call, accessed
status->hash_table->size_log2 without holding a partition lock and without
guaranteeing that ensure_valid_bucket_pointers() had ever been called.
That oversight turns out to not have immediately visible effects, because
bucket 0 is always in partition 0, and ensure_valid_bucket_pointers() was
called after acquiring the partition lock. However,
PARTITION_FOR_BUCKET_INDEX() with a size_log2 of 0 ends up triggering formally
undefined behaviour.
Simplify by accessing partition 0, without using PARTITION_FOR_BUCKET_INDEX().
While at it, remove dshash_get_current(), as there is no convincing
use case. Also polish a few comments.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CA+hUKGL9hY_VY=+oUK+Gc1iSRx-Ls5qeYJ6q=dQVZnT3R63Taw@mail.gmail.com
These would all need to be rephrased when moving to shared memory stats, but
since they don't provide actual information right now, remove them instead.
The comments for PgStat_Msg* are left in, because they will all be removed as
part of the shared memory stats patch.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20220303021600.hs34ghqcw6zcokdh@alap3.anarazel.de
This feature allows jsonb data to be treated as a table and thus used in
a FROM clause like other tabular data. Data can be selected from the
jsonb using jsonpath expressions, and hoisted out of nested structures
in the jsonb to form multiple rows, more or less like an outer join.
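A minimal example (data invented):
    SELECT jt.* FROM JSON_TABLE(
        jsonb '[{"id": 1, "tags": ["x", "y"]}]', '$[*]'
        COLUMNS (
            id int PATH '$.id',
            NESTED PATH '$.tags[*]' COLUMNS (tag text PATH '$'))) AS jt;
Each element of "tags" is hoisted into its own row next to the parent
object's "id", yielding the rows (1, x) and (1, y).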
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu (whose
name I previously misspelled), Himanshu Upadhyaya, Daniel Gustafsson,
Justin Pryzby.
Discussion: https://postgr.es/m/7e2cb85d-24cf-4abb-30a5-1a33715959bd@postgrespro.ru
Previously, psql printed only the last result if a command string
returned multiple result sets. Now it prints all of them. The
previous behavior can be obtained by setting the psql variable
SHOW_ALL_RESULTS to off.
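For instance, a single command string carrying two statements (psql's
\; separator keeps both in one string):
    SELECT 1 AS one \; SELECT 2 AS two;
now prints both result sets rather than only the last, and
    \set SHOW_ALL_RESULTS off
restores the previous behavior.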
This is a significantly enhanced version of
3a51306722 (that was later reverted).
There is also much more test coverage for various psql features now.
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: "Iwata, Aya" <iwata.aya@jp.fujitsu.com> (earlier version)
Reviewed-by: Daniel Verite <daniel@manitou-mail.org> (earlier version)
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com> (earlier version)
Reviewed-by: vignesh C <vignesh21@gmail.com> (earlier version)
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
This script runs the core regression tests with quite a small value of
shared_buffers, making it prone to breakage due to synchronize_seqscans
kicking in where the tests don't expect that. Disable that feature to
stabilize the tests.
Discussion: https://postgr.es/m/1258185.1648876239@sss.pgh.pa.us
Commit f4fb45d15c tried to free memory during aggregate finalization.
This caused issues, particularly when used as a window function, so
stop doing that.
Per complaint by Jaime Casanova and diagnosis by Andres Freund
Discussion: https://postgr.es/m/YkfeMNYRCGhySKyg@ahch-to
Before this patch, there was some code that tried to make sure that the
buffer was always big enough at the start, and then asserted that it
didn't need to be enlarged later. However, the code to make sure it was
big enough at the start doesn't actually work, and therefore it was
possible to fail an assertion and crash later.
Remove the code that tries to make sure the buffer is always big enough
at the start in favor of enlarging the buffer as we go along whenever
that is necessary.
The mistake probably happened because, on the server side, we do
actually need to guarantee that the buffer is big enough at the start
to avoid subsequent resizings. However, in that case, the calling
code makes promises about how much data it will provide at once, but
here, that's not the case.
Report by Justin Pryzby. Analysis by me. Patch by Dipesh Pandit.
Discussion: http://postgr.es/m/20220330143536.GG28503@telsasoft.com
The general usage pattern when we store tuples in tuplesort.c is that
we store a series of tuples one by one and then either perform a sort
or spill them to disk. In the common case there is no pfreeing of
already-stored tuples, so since we do not individually pfree tuples we
have very little need for aset.c's memory allocation behavior, which
maintains freelists and always rounds allocation sizes up to the next
power-of-2 size.
Here we conditionally use generation.c contexts for storing tuples in
tuplesort.c when the sort will never be bounded. Unfortunately, the
memory context to store tuples is already created by the time any calls
would be made to tuplesort_set_bound(), so here we add a new sort option
that allows callers to specify if they're going to need a bounded sort or
not. We'll use a standard aset.c allocator when this sort option is not
set.
Extension authors must ensure that the TUPLESORT_ALLOWBOUNDED flag is
used when calling tuplesort_begin_* for any sorts that make a call to
tuplesort_set_bound().
Author: David Rowley
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/CAApHDvoH4ASzsAOyHcxkuY01Qf++8JJ0paw+03dk+W25tQEcNQ@mail.gmail.com
This replaces the bool flag for randomAccess. An upcoming patch requires
adding another option, so instead of breaking the API for that, then
breaking it again one day if we add more options, let's just break it
once. Any boolean options we add in the future will just make use of an
unused bit in the flags.
Any extensions making use of tuplesorts will need to update their code
to pass TUPLESORT_RANDOMACCESS instead of true for randomAccess.
TUPLESORT_NONE can be used for a set of empty options.
Author: David Rowley
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAApHDvoH4ASzsAOyHcxkuY01Qf%2B%2B8JJ0paw%2B03dk%2BW25tQEcNQ%40mail.gmail.com
Here we make a series of improvements to the generation memory
allocator, namely:
1. Allow generation contexts to have minimum, initial and maximum block
sizes. The standard allocator allows this already but when the generation
context was added, it only allowed fixed-sized blocks. The problem with
fixed-sized blocks is that it's difficult to choose how large to make the
blocks. If the chosen size is too small then we'd end up with a large
number of blocks and a large number of malloc calls. If the block size is
made too large, then memory is wasted.
2. Add support for "keeper" blocks. This is a special block that is
allocated along with the context itself but is never freed. Instead,
when the last chunk in the keeper block is freed, we simply mark the block
as empty to allow new allocations to make use of it.
3. Add facility to "recycle" newly empty blocks instead of freeing them
and having to later malloc an entire new block again. We do this by
recording a single GenerationBlock which has become empty of any chunks.
When we run out of space in the current block, we check to see if there is
a "freeblock" and use that if it contains enough space for the allocation.
Author: David Rowley, Tomas Vondra
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/d987fd54-01f8-0f73-af6c-519f799a0ab8@enterprisedb.com
When dispatching sort operations to specialized variants, commit
69749243 failed to handle the case where CLUSTER-sort decides not to
initialize datum1 and isnull1. Fix by hoisting that decision up a level
and advertising whether datum1 can be relied on, in the Tuplesortstate
object.
Per reports from UBsan and Valgrind build farm animals, while running
the cluster.sql test.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAFBsxsF1TeK5Fic0M%2BTSJXzbKsY6aBqJGNj6ptURuB09ZF6k_w%40mail.gmail.com
datetime.c's parsing logic has assumed that strtod() will accept
a string that looks like ".", which it does in glibc, but not on
some less-common platforms such as AIX. The result of this was
that datetime fields like "123." would be accepted on some platforms
but not others; which is a sufficiently odd case that it's not that
surprising we've heard no field complaints. But commit e39f99046
extended that assumption to new places, and happened to add a test
case that exposed the platform dependency. Remove this dependency
by special-casing situations without any digits after the decimal
point.
(Again, this is in part a pre-existing bug but I don't feel a
compulsion to back-patch.)
Also, rearrange e39f99046's changes in formatting.c to avoid a
Coverity complaint that we were copying an uninitialized field.
Discussion: https://postgr.es/m/1592893.1648969747@sss.pgh.pa.us
Non-aggressive VACUUMs were at a gratuitous disadvantage (relative to
aggressive VACUUMs) around advancing relfrozenxid and relminmxid before
now. The issue only came up when concurrent activity unset some heap
page's visibility map bit right as VACUUM was considering if the page
should get counted in frozenskipped_pages. The non-aggressive case
would recheck the all-frozen bit at this point. The aggressive case
reasoned that the page (a skippable page) must have at least been
all-frozen in the recent past, so skipping it won't make relfrozenxid
advancement unsafe (which is never okay for aggressive VACUUMs).
The recheck created a window for some other backend to confuse matters
for VACUUM. If the page's VM bit turned out to be unset, VACUUM would
conclude that the page was _never_ all-frozen. frozenskipped_pages was
not incremented, and yet VACUUM couldn't back out of skipping at this
late stage (it couldn't choose to scan the page instead). This made it
unsafe to advance relfrozenxid later on.
Consistently avoid the issue by generalizing how we skip frozen pages
during aggressive VACUUMs: take the same approach when skipping any
skippable page range during aggressive and non-aggressive VACUUMs alike.
The new approach makes ranges (not individual pages) the fundamental
unit of skipping using the visibility map. frozenskipped_pages is
replaced with a boolean flag that represents whether some skippable
range with one or more all-visible pages was actually skipped.
It is safe for VACUUM to treat a page as all-frozen provided it at least
had its all-frozen bit set after the OldestXmin cutoff was established.
VACUUM is only required to scan pages that might have XIDs < OldestXmin
(unfrozen XIDs) to be able to safely advance relfrozenxid. Tuples
concurrently inserted on "skipped" pages can be thought of as equivalent
to tuples concurrently inserted on a block >= rel_pages.
It's possible that the issue this commit fixes hardly ever came up in
practice. But we only had to be unlucky once to lose out on advancing
relfrozenxid -- a single affected heap page was enough to throw VACUUM
off. That seems like something to avoid on general principle. This is
similar to an issue fixed by commit 44fa8488, which taught vacuumlazy.c
to not give up on non-aggressive relfrozenxid advancement just because a
cleanup lock wasn't immediately available on some heap page.
Skipping an all-visible range is now explicitly structured as a choice
made by non-aggressive VACUUMs, by weighing known costs (scanning extra
skippable pages to freeze their tuples early) against known benefits
(advancing relfrozenxid early). This works in essentially the same way
as it always has (don't skip ranges < SKIP_PAGES_THRESHOLD). We could
do much better here in the future by considering other relevant factors.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wzn6bGJGfOy3zSTJicKLw99PHJeSOQBOViKjSCinaxUKDQ@mail.gmail.com
Discussion: https://postgr.es/m/CA%2BTgmoZiSOY6H7aadw5ZZGm7zYmfDzL6nwmL5V7GL4HgJgLF_w%40mail.gmail.com
When VACUUM set relfrozenxid before now, it set it to whatever value was
used to determine which tuples to freeze -- the FreezeLimit cutoff.
This approach was very naive. The relfrozenxid invariant only requires
that new relfrozenxid values be <= the oldest extant XID remaining in
the table (at the point that the VACUUM operation ends), which in
general might be much more recent than FreezeLimit.
VACUUM now carefully tracks the oldest remaining XID/MultiXactId as it
goes (the oldest remaining values _after_ lazy_scan_prune processing).
The final values are set as the table's new relfrozenxid and new
relminmxid in pg_class at the end of each VACUUM. The oldest XID might
come from a tuple's xmin, xmax, or xvac fields. It might even come from
one of the table's remaining MultiXacts.
Final relfrozenxid values must still be >= FreezeLimit in an aggressive
VACUUM (FreezeLimit still acts as a lower bound on the final value that
aggressive VACUUM can set relfrozenxid to). Since standard VACUUMs
still make no guarantees about advancing relfrozenxid, they might as
well set relfrozenxid to a value from well before FreezeLimit when the
opportunity presents itself. In general standard VACUUMs may now set
relfrozenxid to any value > the original relfrozenxid and <= OldestXmin.
Credit for the general idea of using the oldest extant XID to set
pg_class.relfrozenxid at the end of VACUUM goes to Andres Freund.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CAH2-WzkymFbz6D_vL+jmqSn_5q1wsFvFrE+37yLgL_Rkfd6Gzg@mail.gmail.com
DecodeInterval (interval input) was careless about integer-overflow
hazards, allowing bogus results to be obtained for sufficiently
large input values. Also, since it initially converted the input
to a "struct tm", it was impossible to produce the full range of
representable interval values.
Meanwhile, EncodeInterval (interval output) and a few other
functions could suffer failures if asked to process sufficiently
large interval values, because they also relied on being able to
represent an interval in "struct tm" which is not designed to
handle that.
Fix all this stuff by introducing new struct types that are more
fit for purpose.
While this is clearly a bug fix, it's also an API break for any
code that's calling these functions directly. So back-patching
doesn't seem wise, especially in view of the lack of field
complaints.
Joe Koshakow, editorialized a bit by me
Discussion: https://postgr.es/m/CAAvxfHff0JLYHwyBrtMx_=6wr=k2Xp+D+-X3vEhHjJYMj+mQcg@mail.gmail.com
Cover some cases that would have been broken by a proposed patch,
but we failed to notice for lack of test coverage. I'm pushing
this separately mainly to memorialize that it *is* our historical
behavior.
Discussion: https://postgr.es/m/1344498.1648920056@sss.pgh.pa.us
Move some of the heap_vacuum_rel() instrumentation related variables to
the scope where they're actually needed. Also reorder some of the
variable declarations at the start of heap_vacuum_rel() so that related
variables appear together.
This is essentially the same as applying VACUUM FULL to a partitioned
table, which has been supported since commit 3c3bb99330 (March 2017).
While there's no great use case in applying CLUSTER to partitioned
tables, we don't have any strong reason not to allow it either.
For now, partitioned indexes cannot be marked clustered, so an index
must always be specified.
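For example (names invented):
    CREATE TABLE parted (a int) PARTITION BY RANGE (a);
    CREATE TABLE parted_1 PARTITION OF parted FOR VALUES FROM (0) TO (100);
    CREATE INDEX parted_a_idx ON parted (a);
    CLUSTER parted USING parted_a_idx;  -- clusters each leaf partition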
While at it, rename some variables that were RangeVars during the
development that led to 8bc717cb88 but never made it to the source
tree that way; there's no need to perpetuate names that have always
been more confusing than helpful.
Author: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/20201028003312.GU9241@telsasoft.com
Discussion: https://postgr.es/m/20200611153502.GT14879@telsasoft.com
The buildfarm has revealed some instability in results from catalog
queries in tests from commit 1a36bc9dba. Cure this by adding ORDER BY
to such queries.
Previously, the specialized tuplesort routine inlined handling for
reverse-sort and NULLs-ordering but called the datum comparator via a
pointer in the SortSupport struct parameter. Testing has shown that we
can get a useful performance gain by specializing datum comparison for
the different representations of abbreviated keys -- signed and unsigned
64-bit integers and signed 32-bit integers. Almost all abbreviatable data
types will benefit -- the only exception for now is numeric, since the
datum comparison is more complex. The performance gain depends on data
type and input distribution, but often falls in the range of 10-20% faster.
Thomas Munro
Reviewed by Peter Geoghegan, review and performance testing by me
Discussion: https://www.postgresql.org/message-id/CA%2BhUKGKKYttZZk-JMRQSVak%3DCXSJ5fiwtirFf%3Dn%3DPAbumvn1Ww%40mail.gmail.com
322becb has changed upgradecheck to use the TAP tests, discarding
pg_upgrade's tests in bincheck. However, this is proving to be a bad
idea for the Windows buildfarm clients that use MSVC when TAP tests are
disabled as this causes a hard failure at the pg_upgrade step.
This commit disables upgradecheck, moving the execution of the tests of
pg_upgrade to bincheck, as per an initial suggestion from Andres
Freund, so that the buildfarm is able to live happily with those changes.
While on it, remove the routine that was used by upgradecheck to
create databases whose names are generated with a range of ASCII
characters as it is not used since 322becb. upgradecheck is removed
from the CI script for Windows, as bincheck takes care of that now.
Per report from buildfarm member hamerkop (MSVC 2017 without a TAP
setup).
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/YkbnpriYEAagZ2wH@paquier.xyz
The current implementation supports exactly one IP address in a server
certificate's Common Name, which is brittle (the strings must match
exactly). This patch adds support for IPv4 and IPv6 addresses in a
server's Subject Alternative Names.
Per discussion on-list:
- If the client's expected host is an IP address, we allow fallback to
the Subject Common Name if an iPAddress SAN is not present, even if
a dNSName is present. This matches the behavior of NSS, in
violation of the relevant RFCs.
- We also, counter-intuitively, match IP addresses embedded in dNSName
SANs. From inspection this appears to have been the behavior since
the SAN matching feature was introduced in acd08d76.
- Unlike NSS, we don't map IPv4 to IPv6 addresses, or vice-versa.
Author: Jacob Champion <pchampion@vmware.com>
Co-authored-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/9f5f20974cd3a4091a788cf7f00ab663d5fcdffe.camel@vmware.com
This makes the code more consistent with SpGiST, GiST and GIN, which
already use this style, and the idea is to make it easier to introduce
more sanity checks for each of these AM-specific macros. BRIN uses a
different set of macros to get a page's type and flags, so it has no
need for something similar.
Author: Matthias van de Meent
Discussion: https://postgr.es/m/CAEze2WjE3+tGO9Fs9+iZMU+z6mMZKo54W1Zt98WKqbEUHbHOBg@mail.gmail.com
This commit includes a set of improvements to help with the debugging of
failures in these new TAP tests:
- Instead of a plain diff command to compare the dumps generated, use
File::Compare::compare for the same effect. diff is still used to
provide more context in the event of an error.
- Log the contents of regression.diffs if the pg_regress command fails.
- Unify the format of the logs generated, getting inspiration from the
style used in 027_stream_regress.pl.
wrasse is the only buildfarm member that has reported a failure so far
after the introduction of 322becb, complaining that the dumps
generated do not match, and I am lacking information to understand what
is going on in this environment.
This simplifies a lot of code in the tests of pg_upgrade without
sacrificing its coverage:
- Removal of test.sh used for builds with make, which has accumulated
over the years tweaks for problems that are solved in a duplicated way
by the centralized TAP framework (initialization of the various PG*
environment variables, port selection).
- Removal of the code in MSVC to test pg_upgrade. This was roughly a
duplicate of test.sh adapted for Windows, with an extra footprint of
a pg_regress command and all the assumptions behind it.
Support for upgrades with older versions is changed, not removed.
test.sh was able to set up the regression database on the old instance
by itself launching the pg_regress command, with a dependency on the
source tree of the old cluster and tweaks on the command arguments to
adapt across the versions used. This created a backward-compatibility
dependency with older pg_regress commands, and recent changes like
d1029bb have made that much more complicated.
Instead, this commit allows tests with older major versions by
specifying a path to a SQL dump (taken with pg_dumpall from the old
cluster's installation) that will be loaded into the old instance to
upgrade instead of running pg_regress, through an optional environment
variable called $olddump. This requires a second variable called
$oldinstall to point to the base path of the installation of the old
cluster. This method is more in line with the buildfarm client that
uses a set of static dumps to set up an old instance, so hopefully we
will be able to reuse what is introduced in this commit there. The last
step of the tests that checks for differences between the two dumps
taken still needs to be improved as it can fail, requiring a manual
lookup at the dumps. This is not different from the old way of testing
where things could fail at the last step.
Support for EXTRA_REGRESS_OPTS is kept. vcregress.pl in the MSVC
scripts still handles the test of pg_upgrade with its upgradecheck, and
bincheck is changed to skip pg_upgrade.
Author: Michael Paquier
Reviewed-by: Andrew Dunstan, Andres Freund, Rachel Heaton, Tom Lane
Discussion: https://postgr.es/m/YJ8xTmLQkotVLpN5@paquier.xyz
Add exec_assign_value, exec_eval_datum, and exec_cast_value
to the set of functions a PL/pgSQL debugger plugin can
conveniently call. This allows more convenient manipulation
of the values of PL/pgSQL function variables.
Pavel Stehule, reviewed by Aleksander Alekseev and myself
Discussion: https://postgr.es/m/CAFj8pRD+dBPU0T-KrkP7ef6QNPDEsjYCejEsBe07NDq8TybOkA@mail.gmail.com
This patch is extracted from a larger patch that allowed setting the
default returned value from these functions to json or jsonb. That had
problems, but this piece of it is fine. For these functions only json or
jsonb can be specified in the RETURNING clause.
Extracted from an original patch from Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
postgres_fdw would push ORDER BY clauses to the remote side without
verifying that the sort operator is safe to ship. Moreover, it failed
to print a suitable USING clause if the sort operator isn't default
for the sort expression's type. The net result of this is that the
remote sort might not have anywhere near the semantics we expect,
which'd be disastrous for locally-performed merge joins in particular.
We addressed similar issues in the context of ORDER BY within an
aggregate function call in commit 7012b132d, but failed to notice
that query-level ORDER BY was broken. Thus, much of the necessary
logic already existed, but it requires refactoring to be usable
in both cases.
Back-patch to all supported branches. In HEAD only, remove the
core code's copy of find_em_expr_for_rel, which is no longer used
and really should never have been pushed into equivclass.c in the
first place.
Ronan Dunklau, per report from David Rowley;
reviews by David Rowley, Ranier Vilela, and myself
Discussion: https://postgr.es/m/CAApHDvr4OeC2DBVY--zVP83-K=bYrTD7F8SZDhN4g+pj2f2S-A@mail.gmail.com
When testing check-world it's hard to know which test the failure
output belongs to. The TAP test output is especially problematic,
partially due to our practice of reusing test names like 001_basic.pl.
This isn't a real issue on the buildfarm, which invokes tests separately, but
locally and for CI it's quite annoying.
To fix, the test target provisos in Makefile.global.in now output
echo "+++ (regress|isolation|tap) [install-]check in $(subdir) +++"
before running the tests.
Discussion: https://postgr.es/m/20220330165039.3zseuiraxfjkksf5@alap3.anarazel.de
Commit fb16d2c658 used the old but not quite old enough parent module,
which dates to perl version 5.10.1 as a core module. We still have a
dinosaur or two running older versions of perl, so rather than require
an upgrade in those we simply do in place what parent.pm's import()
would have done for us.
Reviewed by Tom Lane
Discussion: https://postgr.es/m/474104.1648685981@sss.pgh.pa.us
This breaks out the fetch-it-all-and-print case in SendQuery() into a
separate function. This makes the code more similar to the other
cases, \gdesc and running a query with FETCH_COUNT, and makes
SendQuery() itself a bit smaller.
Extracted from a larger patch with more changes in this area to
follow.
Author: Fabien COELHO <coelho@cri.ensmp.fr>
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904132231510.8961@lancre
When we create or alter a subscription to add publications, raise a
warning for non-existent publications. We don't want to give an error
here because it is possible that users can later create the missing
publications.
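For example (names invented; pub_missing does not yet exist on the
publisher):
    CREATE SUBSCRIPTION sub1
        CONNECTION 'host=publisher dbname=src'
        PUBLICATION pub_missing;
now succeeds with a warning about the missing publication rather than
accepting it silently, and the warning goes away once pub_missing is
created on the publisher.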
Author: Vignesh C
Reviewed-by: Bharath Rupireddy, Japin Li, Dilip Kumar, Euler Taveira, Ashutosh Sharma, Amit Kapila
Discussion: https://postgr.es/m/CALDaNm0f4YujGW+q-Di0CbZpnQKFFrXntikaQQKuEmGG0=Zw=Q@mail.gmail.com
Compression with gzip has never been supported in the tar format of
pg_dump since this code was introduced in c3e18804, as the use of
buffered I/O in gzdopen() changes the file positioning that tar
requires. The original idea behind the use of compression with the tar
mode is to be able to include compressed data files (named %u.dat.gz)
and blob files (blob_%u.dat.gz) in the tarball generated by the dump,
with toc.dat, which records whether compression is used in the dump,
itself always uncompressed.
Note that this commit removes the dump part of the code as well as the
restore part, removing any dependency on zlib in pg_backup_tar.c. There
could be an argument for keeping around the restore part, but this
would require one to change the internals of a previously dumped
tarball so that data and blob files are compressed, with toc.dat itself
changed to record that compression is enabled. However, the argument
about gzdopen() still holds in the read case with pg_restore.
Removing this code simplifies future additions related to compression in
pg_dump.
Author: Georgios Kokolatos, Rachel Heaton
Discussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss=@protonmail.com
When evaluating a query with a multi-column GROUP BY clause using sort,
the cost may be heavily dependent on the order in which the keys are
compared when building the groups. Grouping does not imply any ordering,
so we're allowed to compare the keys in arbitrary order, and a Hash Agg
leverages this. But for Group Agg, we simply compared keys in the order
as specified in the query. This commit explores alternative ordering of
the keys, trying to find a cheaper one.
In principle, we might generate grouping paths for all permutations of
the keys, and leave the rest to the optimizer. But that might get very
expensive, so we try to pick only a couple interesting orderings based
on both local and global information.
When planning the grouping path, we explore statistics (number of
distinct values, cost of the comparison function) for the keys and
reorder them to minimize comparison costs. Intuitively, it may be better
to perform more expensive comparisons (for complex data types etc.)
last, because maybe the cheaper comparisons will be enough. Similarly,
the higher the cardinality of a key, the lower the probability we'll
need to compare more keys. The patch generates and costs various
orderings, picking the cheapest ones.
The ordering of group keys may interact with other parts of the query,
some of which may not be known while planning the grouping. E.g. there
may be an explicit ORDER BY clause, or some other ordering-dependent
operation, higher up in the query, and using the same ordering may allow
using either incremental sort or even eliminate the sort entirely.
The patch generates orderings and picks those minimizing the comparison
cost (for various pathkeys), and then adds orderings that might be
useful for operations higher up in the plan (ORDER BY, etc.). Finally,
it always keeps the ordering specified in the query, on the assumption
the user might have additional insights.
This introduces a new GUC enable_group_by_reordering, so that the
optimization may be disabled if needed.
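For example:
    SET enable_group_by_reordering = off;
keeps the group keys compared strictly in the order given in the query.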
The original patch was proposed by Teodor Sigaev, and later improved and
reworked by Dmitry Dolgov. Reviews by a number of people, including me,
Andrey Lepikhov, Claudio Freire, Ibrar Ahmed and Zhihong Yu.
Author: Dmitry Dolgov, Teodor Sigaev, Tomas Vondra
Reviewed-by: Tomas Vondra, Andrey Lepikhov, Claudio Freire, Ibrar Ahmed, Zhihong Yu
Discussion: https://postgr.es/m/7c79e6a5-8597-74e8-0671-1c39d124c9d6%40sigaev.ru
Discussion: https://postgr.es/m/CA%2Bq6zcW_4o2NC0zutLkOJPsFt80megSpX_dVRo6GK9PC-Jx_Ag%40mail.gmail.com
This patch introduces three SQL standard JSON functions:
JSON() (incorrectly mentioned in my commit message for f4fb45d15c)
JSON_SCALAR()
JSON_SERIALIZE()
JSON() produces json values from text, bytea, json or jsonb values, and
has facilities for handling duplicate keys.
JSON_SCALAR() produces a json value from any scalar SQL value, including
json and jsonb.
JSON_SERIALIZE() produces text or bytea from input which contains or
represents json or jsonb.
For the most part these functions don't add any significant new
capabilities, but they will be of use to users wanting
standard-compliant JSON handling.
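For instance, something like the following (data invented):
    SELECT JSON('{"a": 1, "a": 2}' WITH UNIQUE KEYS);  -- duplicate key error
    SELECT JSON_SCALAR(42);                            -- a json number
    SELECT JSON_SERIALIZE(jsonb '{"a": 1}' RETURNING bytea);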
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
We do this via a subclass for any branch older than the minimum known
to be compatible with the main package (currently release 12).
This should be useful for constructing cross-version tests.
In theory this could be extended back any number of versions, with
varying degrees of compatibility.
Reviewed by Michael Paquier and Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/a3efd19a-d5c9-fdf2-6094-4cde056a2708@dunslane.net
libzstd allows transparent parallel compression just by setting
an option when creating the compression context, so permit that
for both client and server-side backup compression. To use this,
use something like pg_basebackup --compress WHERE-zstd:workers=N
where WHERE is "client" or "server" and N is an integer.
When compression is performed on the server side, this will spawn
threads inside the PostgreSQL backend. While there is almost no
PostgreSQL server code which is thread-safe, the threads here are used
internally by libzstd and touch only data structures controlled by
libzstd.
Patch by me, based in part on earlier work by Dipesh Pandit
and Jeevan Ladhe. Reviewed by Justin Pryzby.
Discussion: http://postgr.es/m/CA+Tgmobj6u-nWF-j=FemygUhobhryLxf9h-wJN7W-2rSsseHNA@mail.gmail.com
COPY FROM supports the HEADER option to silently discard the header
line from a CSV or text file. It is possible to load by mistake a
file that matches the expected format, for example, if two text
columns have been swapped, resulting in garbage in the database.
This adds a new option value HEADER MATCH that checks the column names
in the header line against the actual column names and errors out if
they do not match.
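For example (table and file names invented):
    COPY tab FROM '/path/to/file.csv' WITH (FORMAT csv, HEADER MATCH);
errors out if the first line of the file does not list tab's column
names in order.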
Author: Rémi Lapeyre <remi.lapeyre@lenstra.fr>
Reviewed-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/CAF1-J-0PtCWMeLtswwGV2M70U26n4g33gpe1rcKQqe6wVQDrFA@mail.gmail.com
The current logical replication behavior is to send every transaction
to the subscriber even if the transaction is empty. This can happen
because the transaction doesn't contain changes from the selected
publications or because all the changes got filtered out. It is a
waste of CPU cycles and network bandwidth to build/transmit these
empty transactions.
This patch addresses the above problem by postponing the BEGIN message
until the first change is sent. While processing a COMMIT message, if
there was no other change for that transaction, do not send the COMMIT
message. This allows us to skip sending BEGIN/COMMIT messages for empty
transactions.
When skipping empty transactions in synchronous replication mode, we send
a keepalive message to avoid delaying such transactions.
Author: Ajin Cherian, Hou Zhijie, Euler Taveira
Reviewed-by: Peter Smith, Takamichi Osumi, Shi Yu, Masahiko Sawada, Greg Nancarrow, Vignesh C, Amit Kapila
Discussion: https://postgr.es/m/CAMkU=1yohp9-dv48FLoSPrMqYEyyS5ZWkaZGD41RJr10xiNo_Q@mail.gmail.com
Currently, some TAP tests that directly call the underlying function
PostgreSQL::Test::Utils::run_log() care about the return value, but
none of those that call it via PostgreSQL::Test::Cluster::run_log() care.
However, I'd like to add a test that will care, so adjust this function
to return whatever it gets back from the underlying function, just as
we do for a number of other functions in this module.
Discussion: http://postgr.es/m/CA+Tgmobj6u-nWF-j=FemygUhobhryLxf9h-wJN7W-2rSsseHNA@mail.gmail.com
This introduces the SQL/JSON functions for querying JSON data using
jsonpath expressions. The functions are:
JSON_EXISTS()
JSON_QUERY()
JSON_VALUE()
All of these functions only operate on jsonb. The workaround for now is
to cast the argument to jsonb.
JSON_EXISTS() tests if the jsonpath expression applied to the jsonb
value yields any values. JSON_VALUE() must return a single value, and an
error occurs if it tries to return multiple values. JSON_QUERY() must
return a json object or array, and there are various WRAPPER options for
handling scalar or multi-value results. Both these functions have
options for handling EMPTY and ERROR conditions.
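Some minimal examples (data invented):
    SELECT JSON_EXISTS(jsonb '{"a": [1, 2]}', '$.a[1]');             -- true
    SELECT JSON_VALUE(jsonb '{"a": [1, 2]}', '$.a[1]');              -- 2
    SELECT JSON_QUERY(jsonb '{"a": [1, 2]}', '$.a[*]' WITH WRAPPER); -- [1, 2]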
Nikita Glukhov
Reviewers have included (in no particular order) Andres Freund, Alexander
Korotkov, Pavel Stehule, Andrew Alsup, Erik Rijkers, Zhihong Yu,
Himanshu Upadhyaya, Daniel Gustafsson, Justin Pryzby.
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Linux thinks that something like "createdb foo -S bar" is perfectly
fine, but Windows wants the options to precede any bare arguments, so
we must write "createdb -S bar foo" for portability.
Per reports from CI, the buildfarm, and Andres.
Discussion: http://postgr.es/m/20220329173536.7d2ywdatsprxl4x6@alap3.anarazel.de
Because this strategy logs changes on a block-by-block basis, it
avoids the need to checkpoint before and after the operation.
However, because it logs each changed block individually, it might
generate a lot of extra write-ahead logging if the template database
is large. Therefore, the older strategy remains available via a new
STRATEGY parameter to CREATE DATABASE, and a corresponding --strategy
option to createdb.
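For example (database names invented):
    CREATE DATABASE db1 STRATEGY = WAL_LOG;    -- the new block-by-block way
    CREATE DATABASE db2 STRATEGY = FILE_COPY;  -- the older behavior
or, equivalently, createdb --strategy=wal_log / --strategy=file_copy.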
Somewhat controversially, this patch assembles the list of relations
to be copied to the new database by reading the pg_class relation of
the template database. Cross-database access like this isn't normally
possible, but it can be made to work here because there can't be any
connections to the database being copied, nor can it contain any
in-doubt transactions. Even so, we have to use lower-level interfaces
than normal, since the table scan and relcache interfaces will not
work for a database to which we're not connected. The advantage of
this approach is that we do not need to rely on the filesystem to
determine what ought to be copied, but instead on PostgreSQL's own
knowledge of the database structure. This avoids, for example,
copying stray files that happen to be located in the source database
directory.
Dilip Kumar, with a fairly large number of cosmetic changes by me.
Reviewed and tested by Ashutosh Sharma, Andres Freund, John Naylor,
Greg Nancarrow, Neha Sharma. Additional feedback from Bruce Momjian,
Heikki Linnakangas, Julien Rouhaud, Adam Brusselback, Kyotaro
Horiguchi, Tomas Vondra, Andrew Dunstan, Álvaro Herrera, and others.
Discussion: http://postgr.es/m/CA+TgmoYtcdxBjLh31DLxUXHxFVMPGzrU5_T=CYCvRyFHywSBUQ@mail.gmail.com