In the long-running saga of ANALYZE on partitioned tables, one thing
I missed while reverting 0827e8af70 is the maintenance of the analyze
count and last-analyze time for partitioned tables. This is a mostly
trivial change that enables users to assess the need for invoking
manual ANALYZE on partitioned tables.
This patch, posted by Justin and modified a bit by me (Álvaro), can be
mostly traced back to Hosoya-san, though any problems introduced with
the scissors are mine.
Backpatch to 14, in line with 6f8127b739.
Co-authored-by: Yuzuko Hosoya <yuzukohosoya@gmail.com>
Co-authored-by: Justin Pryzby <pryzby@telsasoft.com>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20210816222810.GE10479@telsasoft.com
Déjà vu of commit fc40ba1296, for another backslash command.
Strictly speaking this isn't a bug, but since all references to catalog
objects are schema-qualified, we might as well be consistent. The
omission first appeared in commit ad600bba04 and was replicated in
a4d75c86bf15; backpatch to 14.
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
Strictly speaking this isn't a bug, but since all references to catalog
objects are schema-qualified, we might as well be consistent. The
omission first appeared in commit 1c5d9270e3, so backpatch to 12.
Author: Justin Pryzby <pryzbyj@telsasoft.com>
Discussion: https://postgr.es/m/20210827193151.GN26465@telsasoft.com
If the system crashed between CREATE TABLESPACE and the next checkpoint,
the result could be some files in the tablespace unexpectedly containing
no rows. Affected files would be those for which the system did not
write WAL; see the wal_skip_threshold documentation. Before v13, a
different set of conditions governed the writing of WAL; see v12's
<sect2 id="populate-pitr">. (The v12 conditions were broader in some
ways and narrower in others.) Users may want to audit non-default
tablespaces for unexpected short files. The bug could have truncated an
index without affecting the associated table, and reindexing the index
would fix that particular problem.
This fixes the bug by making create_tablespace_directories() more like
TablespaceCreateDbspace(). create_tablespace_directories() was
recursively removing tablespace contents, reasoning that WAL redo would
recreate everything removed that way. That assumption holds for other
wal_level values. Under wal_level=minimal, the old approach could
delete files for which no other copy existed. Back-patch to 9.6 (all
supported versions).
Reviewed by Robert Haas and Prabhat Sahu. Reported by Robert Haas.
Discussion: https://postgr.es/m/CA+TgmoaLO9ncuwvr2nN-J4VEP5XyAcy=zKiHxQzBbFRxxGxm0w@mail.gmail.com
Somehow, spgist overlooked the need to call pgstat_count_index_scan().
Hence, pg_stat_all_indexes.idx_scan and equivalent columns never
became nonzero for an SP-GiST index, although the related per-tuple
counters worked fine.
This fix works a bit differently from other index AMs, in that the
counter increment occurs in spgrescan, not spggettuple/spggetbitmap.
It looks like this won't make the user-visible semantics noticeably
different, so I won't go to the trouble of introducing an is-this-
the-first-call flag just to make the counter bumps happen in the
same places.
Per bug #17163 from Christian Quest. Back-patch to all supported
versions.
Discussion: https://postgr.es/m/17163-b8c5cc88322a5e92@postgresql.org
When prefetching pages for ANALYZE, we should be using
maintenance_io_concurrency (by calling
get_tablespace_maintenance_io_concurrency(), not
get_tablespace_io_concurrency()).
ANALYZE prefetching was introduced in c6fc50c, so back-patch to 14.
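For illustration, a minimal sketch of the one-line change, assuming
the acquire_sample_rows() context (variable names illustrative):

    /* Before: honored effective_io_concurrency for the tablespace */
    prefetch_maximum =
        get_tablespace_io_concurrency(onerel->rd_rel->reltablespace);

    /* After: honors maintenance_io_concurrency, as maintenance work should */
    prefetch_maximum =
        get_tablespace_maintenance_io_concurrency(onerel->rd_rel->reltablespace);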
Backpatch-through: 14
Reported-By: Egor Rogov
Discussion: https://postgr.es/m/9beada99-34ce-8c95-fadb-451768d08c64%40postgrespro.ru
Adjust track_io_timing related logging code added by commit 94d13d474d.
Make it consistent with other nearby autovacuum and autoanalyze logging
code by removing logic that suppressed zero millisecond outputs.
log_autovacuum_min_duration log output now reliably shows "read:" and
"write:" millisecond-based values in its report (when track_io_timing is
enabled).
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Stephen Frost <sfrost@snowman.net>
Discussion: https://postgr.es/m/CAH2-WznW0FNxSVQMSRazAMYNfZ6DR_gr5WE78hc6E1CBkkJpzw@mail.gmail.com
Backpatch: 14-, where the track_io_timing logging was introduced.
This order seems more natural. It starts with details that are
particular to heap and index data structures, and ends with system-level
costs incurred during the autovacuum worker's VACUUM/ANALYZE operation.
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-WzkzxK6ahA9xxsOftRtBX_R0swuHZsvo4QUbak1Bz7hb7Q@mail.gmail.com
Backpatch: 14-, which enhanced the log output in various ways.
This was arguably a minor oversight in commit b4af70cb, which cleaned up
the function signatures of functions that modify IndexBulkDeleteResult
variables.
Second thoughts about commit 824bf7190: we apply makesearch() to
an NFA after having determined whether it is a MATCHALL pattern.
Prepending ".*" doesn't make it non-MATCHALL, but it does change the
maximum possible match length, and makesearch() failed to update that.
This has no ill effects given the stylized usage of search NFAs, but
it seems like it's better to keep the data structure consistent. In
particular, fixing this allows more honest handling of the MATCHALL
check in matchuntil(): we can now assert that maxmatchall is infinity,
instead of lamely assuming that it should act that way.
In passing, improve the code in dump[c]nfa so that infinite maxmatchall
is printed as "inf" not a magic number.
When looping over the resultset from a SQL query, extracting the field
number before the loop body to avoid repeated calls to PQfnumber is an
established pattern. On very wide tables this can have a performance
impact, but it won't be noticeable in the common case. This fixes a few
queries which were extracting the field number in the loop body.
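For illustration, a hedged sketch of the hoisting pattern (hypothetical
column name, not one of the queries touched here):

    #include <libpq-fe.h>

    static void
    process_rows(PGresult *res)
    {
        int         ntups = PQntuples(res);
        int         i_relname = PQfnumber(res, "relname"); /* once, not per row */

        for (int i = 0; i < ntups; i++)
        {
            const char *relname = PQgetvalue(res, i, i_relname);

            (void) relname;     /* ... work with the value here ... */
        }
    }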
Author: Hou Zhijie <houzj.fnst@fujitsu.com>
Reviewed-by: Nathan Bossart <bossartn@amazon.com>
Discussion: https://postgr.es/m/OS0PR01MB57164C392783F29F6D0ECA0B94139@OS0PR01MB5716.jpnprd01.prod.outlook.com
Previously, on the subscriber, we set an error context callback only
for tuple data conversion failures. This commit replaces the existing
error context callback with a comprehensive one, so that it shows not
only the details of data conversion failures but also the details of
the logical change being applied by the apply worker or table sync
worker. The additional information displayed is the command,
transaction id, and timestamp.
The error context is added to an error only when applying a change,
not while doing other work such as receiving data.
This will help users in diagnosing the problems that occur during logical
replication. It also can be used for future work that allows skipping a
particular transaction on the subscriber.
Author: Masahiko Sawada
Reviewed-by: Hou Zhijie, Greg Nancarrow, Haiying Tang, Amit Kapila
Tested-by: Haiying Tang
Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
The hardcoded "wide character" set in ucs_wcwidth() was last updated
around the Unicode 5.0 era. This led to misalignment when printing
emojis and other codepoints that have since been designated
wide or full-width.
To fix and keep up to date, extend update-unicode to download the list
of wide and full-width codepoints from the official sources.
In passing, remove some comments about non-spacing characters that
haven't been accurate since we removed the former hardcoded logic.
Jacob Champion
Reported and reviewed by Pavel Stehule
Discussion: https://www.postgresql.org/message-id/flat/CAFj8pRCeX21O69YHxmykYySYyprZAqrKWWg0KoGKdjgqcGyygg@mail.gmail.com
Since a partitioned index doesn't have storage, getting the number of
blocks from it will not give sensible results. Existing callers
already check that they don't call it that way, so there doesn't
appear to be a live problem. But for correctness, handle
RELKIND_PARTITIONED_INDEX together with the other non-storage
relkinds.
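A hedged sketch of the shape of the change (the actual function and
switch layout in the tree may differ):

    switch (rel->rd_rel->relkind)
    {
        case RELKIND_VIEW:
        case RELKIND_COMPOSITE_TYPE:
        case RELKIND_FOREIGN_TABLE:
        case RELKIND_PARTITIONED_TABLE:
        case RELKIND_PARTITIONED_INDEX:     /* now grouped here too */
            return 0;       /* no storage, so no blocks */
        default:
            return RelationGetNumberOfBlocks(rel);
    }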
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/1d3a5fbe-f48b-8bea-80da-9a5c4244aef9@enterprisedb.com
Pengchengliu reported an assertion failure in a parallel worker while
performing a parallel scan using an overflowed snapshot. The proximate
cause is that TransactionXmin was set to an incorrect value. The
underlying cause is incorrect snapshot handling in parallel.c.
In particular, InitializeParallelDSM() was unconditionally calling
GetTransactionSnapshot(), because I (rhaas) mistakenly thought that
was always retrieving an existing snapshot whereas, at isolation
levels less than REPEATABLE READ, it's actually taking a new one. So
instead do this only at higher isolation levels where there actually
is a single snapshot for the whole transaction.
By itself, this is not a sufficient fix, because we still need to
guarantee that TransactionXmin gets set properly in the workers. The
easiest way to do that seems to be to install the leader's active
snapshot as the transaction snapshot if the leader did not serialize a
transaction snapshot. This doesn't affect the results of future
GetTransactionSnapshot() calls since those have to take a new snapshot
anyway; what we care about is the side effect of setting TransactionXmin.
Report by Pengchengliu. Patch by Greg Nancarrow, except for some comment
text which I supplied.
Discussion: https://postgr.es/m/002f01d748ac$eaa781a0$bff684e0$@tju.edu.cn
The previous coding relied on the PID file appearing and the query
starting "fast enough", which can fail on slow machines. Also, there
might have been an undocumented interference between alarm and
IPC::Run. This new coding doesn't rely on any of these concurrency
mechanisms. Instead, we wait until the PID file is complete before
proceeding, and then also wait until the sleep query is registered by
the server.
Discussion: https://www.postgresql.org/message-id/flat/E1mH14Q-0002gh-HS%40gemulon.postgresql.org
Commit 325f2ec555 introduced pg_class.relrewrite to skip operations on
tables created as part of a heap rewrite during DDL. It links such
transient heaps to the original relation OID via this new field in
pg_class but forgot to do anything about toast tables. So, logical
decoding was not able to skip operations on internally created toast
tables. This led to an error when decoding the WAL for the next
operation, which appeared to contain toast data where actually there
wasn't any.
To fix this, we set pg_class.relrewrite for internally created toast
tables as well, which allows skipping operations on them during
logical decoding.
Author: Bertrand Drouvot
Reviewed-by: David Zhang, Amit Kapila
Backpatch-through: 11, where it was introduced
Discussion: https://postgr.es/m/b5146fb1-ad9e-7d6e-f980-98ed68744a7c@amazon.com
There were two identical error messages about the valid modulus value
for hash partitions in the PostgreSQL source code. Commit 0e1275fb07
improved only one of them, so that the ambiguous word "positive" was
avoided there, and forgot to improve the other. This commit improves
the other one the same way, which also reduces the translation burden.
Back-patch to v11 where the error message was introduced.
Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
The distance in a phrase operator must be an integer value between
zero and MAXENTRYPOS inclusive. But previously the error message about
its valid range included the information about its upper limit but not
the lower limit (i.e., zero). This commit improves the error message
so that it also includes the information about the lower limit.
Back-patch to v9.6 where full-text phrase search was introduced.
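A sketch of the improved message in the usual ereport() style (exact
wording and errcode in the committed patch may differ):

    ereport(ERROR,
            (errcode(ERRCODE_SYNTAX_ERROR),
             errmsg("distance in phrase operator must be an integer value between zero and %d inclusive",
                    MAXENTRYPOS)));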
Author: Kyotaro Horiguchi
Reviewed-by: Fujii Masao
Discussion: https://postgr.es/m/20210819.170315.1413060634876301811.horikyota.ntt@gmail.com
Regexps like "(.){0}...\1" drew an "invalid backreference number".
That's not unreasonable on its face, since the capture group will
never be matched if it's iterated zero times. However, other engines
such as Perl's don't complain about this, nor do we throw an error for
related cases such as "(.)|\1", even though that backref can never
succeed either. Also, if the zero-iterations case happens at runtime
rather than compile time --- say, "(x)*...\1" when there's no "x" to
be found --- that's not an error, we just deem the backref to not
match. Making this even less defensible, no error was thrown for
nested cases such as "((.)){0}...\2"; and to add insult to injury,
those cases could result in assertion failures instead. (It seems
that nothing especially bad happened in non-assert builds, though.)
Let's just fix it so that no error is thrown and instead the backref
is deemed to never match, so that compile-time detection of no
iterations behaves the same as run-time detection.
Per report from Mark Dilger. This appears to be an aboriginal error
in Spencer's library, so back-patch to all supported versions.
Pre-v14, it turns out to also be necessary to back-patch one aspect of
commits cb76fbd7e/00116dee5, namely to create capture-node subREs with
the begin/end states of their subexpressions, not the current lp/rp
of the outer parseqatom invocation. Otherwise delsub complains that
we're trying to disconnect a state from itself. This is a bit scary
but code examination shows that it's safe: in the pre-v14 code, if we
want to wrap iteration around the subexpression, the first thing we do
is overwrite the atom's begin/end fields with new states. So the
bogus values didn't survive long enough to be used for anything, except
if no iteration is required, in which case it doesn't matter.
Discussion: https://postgr.es/m/A099E4A8-4377-4C64-A98C-3DEDDC075502@enterprisedb.com
The current refresh behavior tries to just refresh the added/dropped
publications, but that leads to removing the wrong tables from the
subscription. We can't refresh just the dropped publication because it
is quite possible that some of its tables were removed from the
publication by that time, and those would now remain part of the
subscription. Also, there is a chance that tables that were part of
the publication being dropped are also part of another publication, so
we can't remove those.
So, we decided that by default, add/drop commands will also act like
REFRESH PUBLICATION, which means they will refresh all the
publications. We could keep the old behavior for "add publication",
but it is better to be consistent with "drop publication".
Author: Hou Zhijie
Reviewed-by: Masahiko Sawada, Amit Kapila
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/OS0PR01MB5716935D4C2CC85A6143073F94EF9@OS0PR01MB5716.jpnprd01.prod.outlook.com
The recursion in cdissect() was careless about clearing match data
for capturing parentheses after rejecting a partial match. This
could allow a later back-reference to succeed when by rights it
should fail for lack of a defined referent.
To fix, think a little more rigorously about what the contract
between different levels of cdissect's recursion needs to be.
With the right spec, we can fix this using fewer rather than more
resets of the match data; the key decision being that a failed
sub-match is now explicitly responsible for clearing any matches
it may have set.
There are enough other cross-checks and optimizations in the code
that it's not especially easy to exhibit this problem; usually, the
match will fail as-expected. Plus, regexps that are even potentially
vulnerable are most likely user errors, since there's just not much
point in writing a back-ref that doesn't always have a referent.
These facts perhaps explain why the issue hasn't been detected,
even though it's almost certainly a couple of decades old.
Discussion: https://postgr.es/m/151435.1629733387@sss.pgh.pa.us
WAL records may span multiple segments, but XLogWrite() does not
wait for the entire record to be written out to disk before
creating archive status files. Instead, as soon as the last WAL page of
the segment is written, the archive status file is created, and the
archiver may process it. If PostgreSQL crashes before it is able to
write and flush the rest of the record (in the next WAL segment), the
wrong version of the first segment file lingers in the archive, which
causes operations such as point-in-time restores to fail.
To fix this, keep track of records that span across segments and ensure
that segments are only marked ready-for-archival once such records have
been completely written to disk.
This has always been wrong, so backpatch all the way back.
Author: Nathan Bossart <bossartn@amazon.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Ryo Matsumura <matsumura.ryo@fujitsu.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/CBDDFA01-6E40-46BB-9F98-9340F4379505@amazon.com
In a backup manifest, WAL-Ranges stores the range of WAL that is
required for the backup to be valid. pg_verifybackup would then
internally use pg_waldump for the checks based on this data.
When the backup started on a timeline newer than 1, such that a
history file was consulted for the manifest data generation, the
calculation of the WAL range for the first timeline to check was
incorrect: the logic used the start position of that timeline as the
range's start LSN, but it needs to use the start LSN of the backup.
This would cause failures with pg_verifybackup, or with any tool
making use of the backup manifests.
This commit adds a test based on a self-promoted node, which keeps it
rather cheap.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210818.143031.1867083699202617521.horikyota.ntt@gmail.com
Backpatch-through: 13
We've supported parallel aggregation since e06a38965. At the time, we
didn't quite get around to also adding parallel DISTINCT. So, let's do
that now.
This is implemented by introducing a two-phase DISTINCT. Phase 1 is
performed by the parallel workers, where rows are made distinct either
by hashing or by sort/unique. The results from the parallel workers
are combined, and the final distinct phase is performed serially to
get rid of any duplicate rows that appear due to combining rows from
each of the parallel workers.
Author: David Rowley
Reviewed-by: Zhihong Yu
Discussion: https://postgr.es/m/CAApHDvrjRxVKwQN0he79xS+9wyotFXL=RmoWqGGO2N45Farpgw@mail.gmail.com
Improve two places in plpgsql, and one in spi.c, where an error
message would confusingly tell you that you couldn't use a SELECT
query, when what you had written *was* a SELECT query. The actual
problem is that you can't use SELECT ... INTO in these contexts,
but the messages failed to make that apparent. Special-case
SELECT INTO to make these errors more helpful.
Also, fix the same spots in plpgsql, as well as several messages
in exec_eval_expr(), to not quote the entire complained-of query or
expression in the primary error message. That behavior very easily
led to violating our message style guideline about keeping the primary
error message short and single-line. Also, since the important part
of the message was after the inserted text, it could make the real
problem very hard to see. We can report the query or expression as
the first line of errcontext instead.
Per complaint from Roger Mason. Back-patch to v14, since (a) some
of these messages are new in v14 and (b) v14's translatable strings
are still somewhat in flux. The problem's older than that of
course, but I'm hesitant to change the behavior further back.
Discussion: https://postgr.es/m/1914708.1629474624@sss.pgh.pa.us
After detecting a sub-match "dissect" failure (i.e., a backref match
failure) in the i'th sub-match of an iteration node, we should proceed
by adjusting the attempted length of the i'th submatch. As coded,
though, these functions changed the attempted length of the *last*
sub-match, and only after exhausting all possibilities for that would
they back up to adjust the next-to-last sub-match, and then the
second-from-last, etc; all of which is wasted effort, since only
changing the start or length of the i'th sub-match can possibly make
it succeed. This oversight creates the possibility for exponentially
bad performance. Fortunately the problem is masked in most cases by
optimizations or constraints applied elsewhere; which explains why
we'd not noticed it before. But it is possible to reach the problem
with fairly simple, if contrived, regexps.
Oversight in my commit 173e29aa5. That's pretty ancient now,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/1808998.1629412269@sss.pgh.pa.us
Using --quiet in combination with --no-strict-names didn't work as
documented: a warning message was still emitted. Since the --quiet
flag was working in a way unconventional compared to other utilities,
fix by removing the functionality instead.
Backpatch through 14 where pg_amcheck was introduced.
Bug: 17148
Reported-by: Chen Jiaoqian <chenjq.jy@fujitsu.com>
Reviewed-by: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/17148-b5087318e2b04fc6@postgresql.org
Backpatch-through: 14
transformLockingClause neglected to exclude the pseudo-RTEs for
OLD/NEW when processing a rule's query. This led to odd errors
or even crashes later on. This bug is very ancient, but it's
not terribly surprising that nobody noticed, since the use-case
for SELECT FOR UPDATE in a non-view rule is somewhere between
thin and non-existent. Still, crashing is not OK.
Per bug #17151 from Zhiyong Wu. Thanks to Masahiko Sawada
for analysis of the problem.
Discussion: https://postgr.es/m/17151-c03a3e6e4ec9aadb@postgresql.org
Previously, log messages late during shutdown could end up either
using another backend's PgBackendStatus (multi-user) or segfaulting
(single-user), because pgstat_get_my_query_id()'s check for !MyBEEntry
didn't filter out use after pgstat_beshutdown_hook().
This became a bug in 4f0b0966c8, but was a bit fishy before. Given
there are no known problematic cases before 14, it doesn't seem worth
backpatching further.
Also fixes a wrong filename in a comment, introduced in e1025044.
Reported-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/Julien Rouhaud <rjuju123@gmail.com>
Backpatch: 14-
In the code, most places used the term "Stream Stop" for the logical
stream message. This commit improves consistency by renaming LogicalRepMsgType
"LOGICAL_REP_MSG_STREAM_END" to "LOGICAL_REP_MSG_STREAM_STOP".
Author: Masahiko Sawada
Reviewed-by: Hou Zhijie, Amit Kapila
Discussion: https://postgr.es/m/CAD21AoDeScrsHhLyEPYqN3sydg6PxAPVBboK=30xJfUVihNZDA@mail.gmail.com
This is a combined revert of the following commits:
- c3826f8, a refactoring piece that moved the hex decoding code to
src/common/. This code was cleaned up by aef8948, as it originally
included no overflow checks in the same way as the base64 routines in
src/common/ used by SCRAM, making it unsafe for its purpose.
- aef8948, a more advanced refactoring of the hex encoding/decoding code
to src/common/ that added sanity checks on the result buffer for hex
decoding and encoding. As reported by Hans Buschmann, those overflow
checks are expensive, and it is possible to see a performance drop in
the decoding/encoding of bytea or LOs the longer they are. Simple SQLs
working on large bytea values show a clear difference in perf profile.
- ccf4e27, a cleanup made possible by aef8948.
The reverts of all those commits bring the performance of hex decoding
and encoding back to what it was in ~13. For now and post-beta3, this
is the simplest option.
Reported-by: Hans Buschmann
Discussion: https://postgr.es/m/1629039545467.80333@nidsa.net
Backpatch-through: 14
Recursion into the FILTER clause was mis-implemented, such that a
relevant Var or Aggref at the very top of the FILTER clause would
be ignored. (Of course, that'd have to be a plain boolean Var or
boolean-returning aggregate.) The consequence would be
mis-identification of the correct semantic level of the aggregate,
which could lead to not-per-spec query behavior. If the FILTER
expression is an aggregate, this could also lead to failure to issue
an expected "aggregate function calls cannot be nested" error, which
would likely result in a core dump later on, since the planner and
executor aren't expecting such cases to appear.
The root cause is that commit b560ec1b0 blindly copied some code
that assumed it's recursing into a List, and thus didn't examine the
top-level node. To forestall questions about why this call doesn't
look like the others, as well as possible future copy-and-paste
mistakes, let's change all three check_agg_arguments_walker calls in
check_agg_arguments, even though only the one for the filter clause
is really broken.
Per bug #17152 from Zhiyong Wu. This has been wrong since we
implemented FILTER, so back-patch to all supported versions.
(Testing suggests that pre-v11 branches manage to avoid crashing
in the bad-Aggref case, thanks to "redundant" checks in ExecInitAgg.
But I'm not sure how thorough that protection is, and anyway the
wrong-behavior issue remains, so fix 9.6 and 10 too.)
Discussion: https://postgr.es/m/17152-c7f906cc1a88e61b@postgresql.org
The skip options set for all-visible and all-frozen were incorrect, as
they used a space rather than a hyphen, causing a syntax error when
invoked. Also, the option for not skipping any pages at all, "none",
was documented but not implemented.
Backpatch through 14 where pg_amcheck was introduced.
Bug: #17149
Reported-by: Chen Jiaoqian <chenjq.jy@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/17149-5918ea748da36b15@postgresql.org
Backpatch-through: 14
If recordDependencyOnCurrentExtension is invoked on a pre-existing,
free-standing object during an extension update script, that object
will become owned by the extension. In our current code this is
possible in three cases:
* Replacing a "shell" type or operator.
* CREATE OR REPLACE overwriting an existing object.
* ALTER TYPE SET, ALTER DOMAIN SET, and ALTER OPERATOR SET.
The first of these cases is intentional behavior, as noted by the
existing comments for GenerateTypeDependencies. It seems like
appropriate behavior for CREATE OR REPLACE too; at least, the obvious
alternatives are not better. However, the fact that it happens during
ALTER is an artifact of trying to share code (GenerateTypeDependencies
and makeOperatorDependencies) between the CREATE and ALTER cases.
Since an extension script would be unlikely to ALTER an object that
didn't already belong to the extension, this behavior is not very
troubling for the direct target object ... but ALTER TYPE SET will
recurse to dependent domains, and it is very uncool for those to
become owned by the extension if they were not already.
Let's fix this by redefining the ALTER cases to never change extension
membership, full stop. We could minimize the behavioral change by
only changing the behavior when ALTER TYPE SET is recursing to a
domain, but that would complicate the code and it does not seem like
a better definition.
Per bug #17144 from Alex Kozhemyakin. Back-patch to v13 where ALTER
TYPE SET was added. (The other cases are older, but since they only
affect the directly-named object, there's not enough of a problem to
justify changing the behavior further back.)
Discussion: https://postgr.es/m/17144-e67d7a8f049de9af@postgresql.org
I was overoptimistic to assume that UTF8-based locales would all
consider U+1500 to be a member of the [[:alpha:]] char class.
Tweak the test cases added by commit 78a843f11 to avoid that
assumption. We might need to lobotomize them further, but this
should be enough to fix the early buildfarm reports.
The functions moveins(), copyins(), moveouts(), copyouts() are
required to preserve the invariant that there are no duplicate arcs in
the regex's NFA. Spencer's original implementation of them was O(N^2)
since it checked separately for a match to each source arc. In commit
579840ca0 I improved that by adding sort/merge logic to be used if
more than a few arcs are to be moved/copied. However, I now realize
that that missed a bet. At many call sites, the target state is newly
made and cannot have any existing in-arcs (respectively out-arcs)
that could be duplicates. So spending any cycles at all on checking
for duplicates is wasted effort; in these cases we can just blindly
move/copy all the source arcs. Add code paths to do that.
It turns out that for copyins()/copyouts(), *all* the call sites have
this property, making all the "improved" logic in them flat out
unreachable. Perhaps we'll need the full capability again someday,
so I just #ifdef'd those paths out rather than removing them entirely.
In passing, add a few test cases to improve code coverage in this
area as well as in regc_locale.c/regc_pg_locale.c.
Discussion: https://postgr.es/m/810272.1629064063@sss.pgh.pa.us
In OpenSSL there are two types of BIO's (I/O abstractions):
source/sink and filters. A source/sink BIO is a source and/or
sink of data, ie one acting on a socket or a file. A filter
BIO takes a stream of input from another BIO and transforms it.
In order for BIO_find_type() to be able to traverse the chain of
BIO's and correctly find all BIO's of a certain type, they shall have
the type bit set accordingly: source/sink BIO's (what PostgreSQL
implements) use BIO_TYPE_SOURCE_SINK and filter BIO's use
BIO_TYPE_FILTER. In addition to these, file descriptor based BIO's
should have the descriptor bit set, BIO_TYPE_DESCRIPTOR.
The PostgreSQL implementation didn't set the type bits, which
went unnoticed for a long time as it's only really relevant
for code auditing the OpenSSL installation, or doing similar
tasks. It is required by the API though, so this fixes it.
Backpatch through 9.6 as this has been wrong for a long time.
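A minimal sketch of the fix using the OpenSSL 1.1.0+ BIO_meth APIs
(method name string illustrative):

    #include <openssl/bio.h>

    static BIO_METHOD *
    my_bio_method(void)
    {
        int     my_bio_index = BIO_get_new_index();

        if (my_bio_index == -1)
            return NULL;
        /* mark this custom BIO as a descriptor-based source/sink */
        my_bio_index |= (BIO_TYPE_DESCRIPTOR | BIO_TYPE_SOURCE_SINK);
        return BIO_meth_new(my_bio_index, "PostgreSQL socket");
    }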
Author: Itamar Gafni
Discussion: https://postgr.es/m/SN6PR06MB39665EC10C34BB20956AE4578AF39@SN6PR06MB3966.namprd06.prod.outlook.com
Backpatch-through: 9.6
This reverts the following commits:
1b5617eb84 Describe (auto-)analyze behavior for partitioned tables
0e69f705cc Set pg_class.reltuples for partitioned tables
41badeaba8 Document ANALYZE storage parameters for partitioned tables
0827e8af70 autovacuum: handle analyze for partitioned tables
There are efficiency issues in this code when handling databases with
large numbers of partitions, and it doesn't look like there is any
trivial way to handle those. There are some other issues as well. It's
now too late in the cycle for nontrivial fixes, so we'll have to let
Postgres 14 users continue to deal with ANALYZE on their partitioned
tables manually, and hopefully we can fix the issues for Postgres 15.
I kept [most of] be280cdad2 ("Don't reset relhasindex for partitioned
tables on ANALYZE") because while we added it due to 0827e8af70, it is
a good bugfix in its own right, since it affects manual analyze as well
as autovacuum-induced analyze, and there's no reason to revert it.
I retained the addition of relkind 'p' to tables included by
pg_stat_user_tables, because reverting that would require a catversion
bump.
Also, in pg14 only, I keep a struct member that was added to
PgStat_TabStatEntry to avoid breaking compatibility with existing stat
files.
Backpatch to 14.
Discussion: https://postgr.es/m/20210722205458.f2bug3z6qzxzpx2s@alap3.anarazel.de
The existing data structures in inval.c are fairly inefficient for
the common case of a command or subtransaction that registers a small
number of cache invalidation events. While this doesn't matter if we
commit right away, it can build up to a lot of bloat in a transaction
that contains many DDL operations. By making a few more assumptions
about the expected use-case, we can switch to a representation using
densely-packed arrays. Although this eliminates some data-copying,
it doesn't seem to make much difference time-wise. But the space
consumption decreases substantially.
Patch by me; thanks to Nathan Bossart for review.
Discussion: https://postgr.es/m/2380555.1622395376@sss.pgh.pa.us
During a VACUUM or CLUSTER command, the initial output emits a fully
qualified relation path with namespace. The post-action errmsg,
however, only emitted the relation name, which may lead to
hard-to-parse output when using multiple jobs with vacuumdb, as the
output from different jobs may be interleaved. Include the full path
in the post-action errmsg to be consistent with the initial errmsg.
Author: Mike Fiedler <miketheman@gmail.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/CAMerE0oz+8G-aORZL_BJcPxnBqewZAvND4bSUysjz+r-oT1BxQ@mail.gmail.com
The initdb help message for --sync-only was a bit terse, and not
really self-explanatory. Make it clearer that initdb --sync-only
will exit after syncing, and expand the docs with a note on when
the option can be useful. Also align the help output with others
that exit immediately.
Author: Nathan Bossart, Gurjeet Singh
Discussion: https://postgr.es/m/CABwTF4U6hbNNE1bv=LxQdJybmUdZ5NJQ9rKY9tN82NXM8QH+iQ@mail.gmail.com
This commit ensures that the wait interval in the replay delay loop
waiting for an amount of time defined by recovery_min_apply_delay is
correctly handled on reload, recalculating the delay if this GUC value
is updated, based on the timestamp of the commit record being replayed.
The previous behavior would be problematic: for example, replay could
keep waiting even if the delay got reduced or just cancelled. And if
the apply delay was increased to a larger value, the wait would have
just respected the old value set, finishing earlier than it should.
Author: Soumyadeep Chakraborty, Ashwin Agrawal
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/CAE-ML+93zfr-HLN8OuxF0BjpWJ17O5dv1eMvSE5jsj9jpnAXZA@mail.gmail.com
Backpatch-through: 9.6
Commit 80abbeba2 evidently didn't bother checking this code.
Also, list the generated executable in .gitignore (so it's
been a REALLY long time since anyone tried this).
Noted while trying out RISC-V spinlock patch. Given that
this has been broken for 5 years and nobody noticed, it's
likely not worth back-patching.
Like the ARM case, just use gcc's __sync_lock_test_and_set();
that will compile into AMOSWAP.W.AQ which does what we need.
At some point it might be worth doing some work on atomic ops
for RISC-V, but this should be enough for a creditable port.
Back-patch to all supported branches, just in case somebody
wants to try them on RISC-V.
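A hedged sketch of the s_lock.h addition, modeled on the existing ARM
block (the exact ifdef guards in the committed patch may differ):

    #ifdef __riscv          /* RISC-V */
    #define HAS_TEST_AND_SET

    typedef int slock_t;

    #define TAS(lock) tas(lock)

    static __inline__ int
    tas(volatile slock_t *lock)
    {
        /* compiles into AMOSWAP.W.AQ, which does what we need */
        return (int) __sync_lock_test_and_set(lock, 1);
    }

    #define S_UNLOCK(lock) __sync_lock_release(lock)
    #endif  /* __riscv */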
Marek Szuba
Discussion: https://postgr.es/m/dea97b6d-f55f-1f6d-9109-504aa7dfa421@gentoo.org
Background workers without shared memory access have been broken on
EXEC_BACKEND / Windows builds since shortly after background workers
were introduced, without that being reported. Clearly they are not
commonly used.
The problem is that bgworker startup requires being attached to shared
memory in EXEC_BACKEND child processes. StartBackgroundWorker()
detaches from shared memory for unconnected workers, but at that point
we have already initialized subsystems referencing shared memory.
Fixing this problem is not entirely trivial, so removing the option to not be
connected to shared memory seems the best way forward. In most use cases the
advantages of being connected to shared memory far outweigh the disadvantages.
As there have been no reports about this issue so far, we have decided that it
is not worth trying to address the problem in the back branches.
Per discussion with Alvaro Herrera, Robert Haas and Tom Lane.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210802065116.j763tz3vz4egqy3w@alap3.anarazel.de
The check for sslsni only tested for the existence of the parameter
but not its actual value. This meant that the SNI extension was always
turned on. Fix by inspecting the value of sslsni and activating the
SNI extension only if it has been enabled. Also update the docs to be
more in line with how other boolean params are documented.
Backpatch to 14 where sslsni was first implemented.
Reviewed-by: Tom Lane
Backpatch-through: 14, where sslsni was added
This fixes a bug in simplehash.h which caused an incorrect size mask
to be used when the hash table grew to SH_MAX_SIZE (2^32). The code
was incorrectly setting the size mask to 0 when the hash table reached
the maximum possible number of buckets. This resulted in always trying
to use the 0th bucket, causing an infinite loop of trying to grow the
hash table due to there being too many collisions.
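A hedged sketch of the defect in SH_COMPUTE_PARAMETERS (simplehash.h),
where size is the uint64 bucket count and sizemask the uint32 mask:

    /* Before: explicitly (and wrongly) zeroed the mask at SH_MAX_SIZE */
    if (tb->size == SH_MAX_SIZE)
        tb->sizemask = 0;
    else
        tb->sizemask = tb->size - 1;

    /* After: uint32 truncation yields the right mask, 0xFFFFFFFF, at 2^32 */
    tb->sizemask = (uint32) (size - 1);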
Seemingly it's not that common for simplehash tables to ever grow
this big, as this bug dates back to v10 and nobody seems to have
noticed it before. However, the most likely place that people would
notice it would be doing a large in-memory Hash Aggregate with at
least around 2^31 groups.
After this fix, the code now works correctly with up to within 98% of 2^32
groups and will fail with the following error when trying to insert any
more items into the hash table:
ERROR: hash table size exceeded
However, the work_mem (or hash_mem_multiplier in newer versions) settings
will generally cause Hash Aggregates to spill to disk long before reaching
that many groups. The minimal test case I did took a work_mem setting of
over 192GB to hit the bug.
simplehash hash tables are used in a few other places, such as Bitmap
Index Scans; however, again, the size that the hash table can become there is
also limited to work_mem and it would take a relation of around 16TB
(2^31) pages and a very large work_mem setting to hit this. With smaller
work_mem values the table would become lossy and never grow large enough
to hit the problem.
Author: Yura Sokolov
Reviewed-by: David Rowley, Ranier Vilela
Discussion: https://postgr.es/m/b1f7f32737c3438136f64b26f4852b96@postgrespro.ru
Backpatch-through: 10, where simplehash.h was added
It's hard to disable ASLR on current macOS releases, for testing with
-DEXEC_BACKEND. You could already set the environment variable
PG_SHMEM_ADDR to something not likely to collide with mappings created
earlier in process startup. Let's also provide a default value that
works on current releases and architectures, for developer convenience.
As noted in the pre-existing comment, this is a horrible hack, but
-DEXEC_BACKEND is only used by Unix-based PostgreSQL developers for
testing some otherwise Windows-only code paths, so it seems excusable.
Back-patch to all supported branches.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20210806032944.m4tz7j2w47mant26%40alap3.anarazel.de
The FDW batching code was using the same tuple descriptor for all
slots (regular and plan slots), but that's incorrect: the subplan may
use a different descriptor. Currently this is benign, because batching
is used only for INSERTs, and in that case the descriptors always match.
But that would change if we allow batching of UPDATEs.
Fix by copying the appropriate tuple descriptor. Backpatch to 14, where
the FDW batching was implemented.
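A hedged sketch of the fix (field and variable names as assumed from
the v14 batching code):

    /* build the batched plan slot from the subplan's own descriptor,
     * not from the target table's */
    resultRelInfo->ri_PlanSlots[resultRelInfo->ri_NumSlots] =
        MakeSingleTupleTableSlot(CreateTupleDescCopy(planSlot->tts_tupleDescriptor),
                                 planSlot->tts_ops);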
Author: Amit Langote
Backpatch-through: 14, where FDW batching was added
Discussion: https://postgr.es/m/CA%2BHiwqEWd5B0-e-RvixGGUrNvGkjH2s4m95%3DJcwUnyV%3Df0rAKQ%40mail.gmail.com
Sets of Unicode keys are picky about the primes used when generating
a perfect hash function for them. Callers can spend many seconds
iterating through all the possible combinations of candidate
multipliers and seeds to find one that works.
Unicode updates typically happen only once a year, but it still makes
development and testing of Unicode scripts unnecessarily slow. To fix,
iterate over the primes in the innermost loop. This does not change
any existing functions checked into the tree.
It's not sensible to re-evaluate a direct-modify Foreign Update or Delete
during EvalPlanQual. However, ExecInitForeignScan() can still get called
if a table mixes local and foreign partitions. EvalPlanQualStart() left
the es_result_relations array uninitialized in the child EPQ EState, but
ExecInitForeignScan() still expected to find it. That caused a segfault.
Fix by skipping the es_result_relations lookup during EvalPlanQual
processing. To make things a bit more robust, also skip the
BeginDirectModify calls, and add a runtime check that ExecForeignScan()
is not called on direct-modify foreign scans during EvalPlanQual
processing.
This is new in v14, commit 1375422c78. Before that, EvalPlanQualStart()
copied the whole ResultRelInfo array to the EPQ EState. Backpatch to v14.
Report and diagnosis by Andrey Lepikhov.
Discussion: https://www.postgresql.org/message-id/cb2b808d-cbaa-4772-76ee-c8809bafcf3d%40postgrespro.ru
ALTER TABLE .. SET {LOGGED,UNLOGGED,ACCESS METHOD} would never invoke
a table-level object access hook, which was inconsistent with SET
TABLESPACE. Note that contrary to SET TABLESPACE, the no-op case is
left out for those commands, as this would require tracking whether
the commands have been called even though they may not execute a
physical rewrite.
worth noting is that the physical file swap at the end of a rewrite
does a couple of access calls for internal objects created for the swap
operation (internal objects are for example skipped by the tests of
sepgsql), but this does not trigger the hook for the table on which the
operation is done.
f41872d, which added support for SET LOGGED/UNLOGGED in ALTER TABLE,
visibly forgot to consider that.
Based on what I checked, two regression tests of sepgsql in ddl.sql are
going to log more information with this change, something that buildfarm
member rhinoceros will tell soon enough. I am not completely sure of
their format though, so these are not refreshed yet.
This is arguably a bug, but no backpatch is done as this could cause a
behavior change for anybody using object access hooks.
Reported-by: Jeff Davis
Discussion: https://postgr.es/m/YQJKV29/1a60uG68@paquier.xyz
If the replacement string doesn't contain \1...\9, then we don't
need sub-match locations, so we can use the REG_NOSUB optimization
here too. There's already a pre-scan of the replacement string
to look for backslashes, so extend that to check for digits, and
refactor to allow that to happen before we compile the regexp.
While at it, try to speed up the pre-scan by using memchr() instead
of a handwritten loop. It's likely that this is lost in the noise
compared to the regexp processing proper, but maybe not. In any
case, this coding is shorter.
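A self-contained sketch of such a pre-scan (helper name hypothetical):

    #include <stdbool.h>
    #include <string.h>

    /* Does the replacement string contain any of \1 ... \9 ? */
    static bool
    replacement_has_backrefs(const char *start, const char *end)
    {
        const char *p = start;

        while ((p = memchr(p, '\\', end - p)) != NULL)
        {
            if (++p >= end)
                break;          /* trailing backslash */
            if (*p >= '1' && *p <= '9')
                return true;    /* sub-match locations will be needed */
            p++;                /* skip the escaped character */
        }
        return false;
    }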
Also, add some test cases to improve the poor coverage of
appendStringInfoRegexpSubstr().
Discussion: https://postgr.es/m/3534632.1628536485@sss.pgh.pa.us
Identifying the precise match locations for parenthesized subexpressions
is a fairly expensive task given the way our regexp engine works, both
at regexp compile time (where we must create an optimized NFA for each
parenthesized subexpression) and at runtime (where determining exact
match locations requires laborious search).
Up to now we've made little attempt to optimize this situation. This
patch identifies cases where we know at compile time that we won't
need to know subexpression match locations, and teaches the regexp
compiler to not bother creating per-subexpression regexps for
parenthesis pairs that are not referenced by backrefs elsewhere in
the regexp. (To preserve semantics, we obviously still have to
pin down the match locations of backref references.) Users could
have obtained the same results before this by being careful to
write "non capturing" parentheses wherever possible, but few people
bother with that.
Discussion: https://postgr.es/m/2219936.1628115334@sss.pgh.pa.us
Here we add additional parsing of Makefiles to determine when to add
references to libpgport and libpgcommon. We also remove the need for
adding the current contrib_extrasource by adding some very basic logic to
implement the Makefile rules which add .l and .y files when they exist for
a given .o file in the Makefile.
This is just some very basic additional parsing of Makefiles to try to
keep things more consistent between builds using make and MSVC builds.
This happens to work with how our current Makefiles are laid out, but it
could easily be broken in the future if someone chooses do something in
the Makefile that we don't have parsing support for. We will cross that
bridge when we come to it.
Author: David Rowley
Discussion: https://postgr.es/m/CAApHDvoPULi5JW3933NxgwxOmu9Ncvpcyt87UhEHAUX16QqmpA@mail.gmail.com
This saves a few lines of code. Also add a comment to mention why we use
ExplainPropertyInteger instead of ExplainPropertyUInteger given that
queryid is a uint64 type.
Author: David Rowley
Reviewed-by: Julien Rouhaud
Discussion: https://postgr.es/m/CAApHDvqhSLYpSU_EqUdN39w9Uvb8ogmHV7_3YhJ0S3aScGBjsg@mail.gmail.com
Backpatch-through: 14, where this code was originally added
02a6a54ec added code to make use of the POPCNT instruction when available
for many of our common platforms. Here we do the same for MSVC for x86_64
machines.
MSVC's intrinsic functions for popcnt seem to differ from GCC's in
that they always appear to emit the popcnt instruction. In GCC the
behavior depends on whether the source file was compiled with
-mpopcnt. For this reason, the MSVC intrinsic function has been lumped
into the pg_popcount*_asm function; however, doing that sort of
invalidates the name of that function, so let's rename it to
pg_popcount*_fast().
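A hedged sketch of the renamed function (CPUID-based dispatch and
other guards omitted):

    #include <stdint.h>
    #ifdef _MSC_VER
    #include <intrin.h>
    #endif

    static inline int
    pg_popcount64_fast(uint64_t word)
    {
    #ifdef _MSC_VER
        return (int) __popcnt64(word);      /* MSVC always emits POPCNT */
    #else
        /* GCC/Clang emit POPCNT here only when built with -mpopcnt */
        return __builtin_popcountll(word);
    #endif
    }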
Author: David Rowley
Reviewed-by: John Naylor
Discussion: https://postgr.es/m/CAApHDvqL3cbbK%3DGzNcwzsNR9Gi%2BaUvTudKkC4XgnQfXirJ_oRQ%40mail.gmail.com
I had committer's remorse almost immediately after pushing cb76fbd7e,
upon finding that removing capturing subexpressions' subREs from the
data structure broke my proposed patch for REG_NOSUB optimization.
Revert that data structure change. Instead, address the concern
about not changing capturing subREs' endpoints by not changing the
endpoints. We don't need to, because the point of that bit was just
to ensure that the atom has endpoints distinct from the outer state
pair that we're stringing the branch between. We already made
suitable states in the parenthesized-subexpression case, so the
additional ones were just useless overhead. This seems more
understandable than Spencer's original coding, and it ought to be
a shade faster too by saving a few state creations and arc changes.
(I actually see a couple percent improvement on Jacobson's web
corpus, though that's barely above the noise floor so I wouldn't
put much stock in that result.)
Also, fix the logic added by ea1268f63 to ensure that the subRE
recorded in v->subs[subno] is exactly the one with capno == subno.
Spencer's original coding recorded the child subRE of the capture
node, which is okay so far as having the right endpoint states is
concerned, but as of cb76fbd7e the capturing subRE itself always
has those endpoints too. I think the inconsistency is confusing
for the REG_NOSUB optimization.
As before, backpatch to v14.
Discussion: https://postgr.es/m/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com
It appears that check_track_commit_timestamp was declared but has never
been defined in our code base. Likely this is just leftover cruft from
a development version of the original patch to add commit timestamps.
Let's just remove the useless declaration. The inclusion of guc.h also
seems surplus to requirements.
Author: Andrey Lepikhov
Discussion: https://postgr.es/m/f49aefb5-edbb-633a-af07-3e777023a94d@postgrespro.ru
Up to now, we remembered the definition of a capturing parenthesis
subexpression by storing a pointer to the associated subRE node.
That was okay before, because that subRE didn't get modified anymore
while parsing the rest of the regexp. However, in the wake of
commit ea1268f63, that's no longer true: the outer invocation of
parseqatom() feels free to scribble on that subRE. This seems to
work anyway, because the states we jam into the child atom in the
"prepare a general-purpose state skeleton" stanza aren't really
semantically different from the original endpoints of the child atom.
But that would be mighty easy to break, and it's definitely not how
things worked before.
Between this and the issue fixed in the prior commit, it seems best
to get rid of this dependence on subRE nodes entirely. We don't need
the whole child subRE for future backrefs, only its starting and ending
NFA states; so let's just store pointers to those.
Also, in the corner case where we make an extra subRE to handle
immediately-nested capturing parentheses, it seems like it'd be smart
to have the extra subRE have the same begin/end states as the original
child subRE does (s/s2 not lp/rp). I think that linking it from lp to
rp might actually be semantically wrong, though since Spencer's original
code did it that way, I'm not totally certain. Using s/s2 is certainly
not wrong, in any case.
Per report from Mark Dilger. Back-patch to v14 where the problematic
patches came in.
Discussion: https://postgr.es/m/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com
Commit cebc1d34e taught parseqatom() to optimize cases where a branch
contains only one, "messy", atom by getting rid of excess subRE nodes.
The way we really should do that is to keep the subRE built for the
"messy" child atom; but to avoid changing parseqatom's nominal API,
I made it delete that node after copying its fields to the outer subRE
made by parsebranch(). It seems that that actually worked at the time;
but it became dangerous after ea1268f63, because that later commit
allowed the lower invocation of parse() to return a subRE that was also
pointed to by some v->subs[] entry. This meant we could wind up with a
dangling pointer in v->subs[], allowing a later backref to misbehave,
but only if that subRE struct had been reused in between. So the damage
seems confined to cases like '((...))...(...\2'.
To fix, do what I should have done before and modify parseqatom's API
to make it possible for it to remove the caller's subRE instead of the
callee's. That's safer because we know that subRE isn't complete yet,
so noplace else will have a pointer to it.
Per report from Mark Dilger. Back-patch to v14 where the problematic
patches came in.
Discussion: https://postgr.es/m/0203588E-E609-43AF-9F4F-902854231EE7@enterprisedb.com
As reported by a few OSX buildfarm animals, there exists at least one path where
temporary files exist during AtProcExit_Files() processing. As temporary file
cleanup causes pgstat reporting, the assertions added in ee3f8d3d3a caused
failures.
This is not an OSX specific issue, we were just lucky that timing on OSX
reliably triggered the problem. The known way to cause this is a FATAL error
during perform_base_backup() with a MANIFEST used - adding an elog(FATAL)
after InitializeBackupManifest() reliably reproduces the problem in isolation.
The problem is that the temporary file created in InitializeBackupManifest()
is not cleaned up via resource owner cleanup as WalSndResourceCleanup()
currently is only used for non-FATAL errors. That then allows reaching
AtProcExit_Files() with existing temporary files, causing the assertion
failure.
To fix this problem, move temporary file cleanup to a before_shmem_exit() hook
and add assertions ensuring that no temporary files are created before / after
temporary file management has been initialized / shut down. The cleanest way
to do so seems to be to split fd.c initialization into two, one for plain file
access and one for temporary file access.
Right now there's no need to perform further fd.c cleanup during process exit,
so I just renamed AtProcExit_Files() to BeforeShmemExit_Files(). Alternatively
we could perform another pass through the files to check that no temporary
files exist, but the added assertions seem to provide enough protection
against that.
It might turn out that the assertions added in ee3f8d3d3a will cause too much
noise - in that case we'll have to downgrade them to a WARNING, at least
temporarily.
This commit is not necessarily the best approach to address this issue, but it
should resolve the buildfarm failures. We can revise later.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210807190131.2bm24acbebl4wl6i@alap3.anarazel.de
Rather than trying to pick table aliases that won't conflict with
any possible user-defined matview column name, adjust the queries'
syntax so that the aliases are only used in places where they can't be
mistaken for column names. Mostly this consists of writing "alias.*"
not just "alias", which adds clarity for humans as well as machines.
We do have the issue that "SELECT alias.*" acts differently from
"SELECT alias", but we can use the same hack ruleutils.c uses for
whole-row variables in SELECT lists: write "alias.*::compositetype".
We might as well revert to the original aliases after doing this;
they're a bit easier to read.
Like 75d66d10e, back-patch to all supported branches.
Discussion: https://postgr.es/m/2488325.1628261320@sss.pgh.pa.us
This is a step toward storing stats in dynamic shared memory. As dynamic
shared memory segments are detached from just after before_shmem_exit()
callbacks are processed, but before on_shmem_exit() callbacks are, no stats
can be collected after before_shmem_exit() callbacks have been processed.
Parallel worker shutdown can cause stats to be emitted during DSM detach
callbacks, e.g. for SharedFileSet (which closes its files, which can cause
fd.c to emit stats about temporary files). Therefore parallel worker shutdown
needs to complete during the processing of before_shmem_exit callbacks.
One might think this problem could instead be solved by carefully ordering the
attaching to DSM segments, so that the pgstats segments get detached from
later than the parallel query ones. That turns out to not work because the
stats hash might need to grow which can cause new segments to be
allocated, which then will be detached from earlier.
There are two code changes:
First, call ParallelWorkerShutdown() via before_shmem_exit. That's a good idea
on its own, because other shutdown callbacks like ShutdownPostgres and
ShutdownAuxiliaryProcess are called via before_*.
Second, explicitly detach from the parallel query DSM segment, thereby
ensuring all stats are emitted during ParallelWorkerShutdown().
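Sketched in code, with the registration site and argument passing
assumed from the description above:

    /* in ParallelWorkerMain(): run worker shutdown via before_shmem_exit */
    before_shmem_exit(ParallelWorkerShutdown, PointerGetDatum(seg));

    /* in ParallelWorkerShutdown(): detach explicitly, so DSM detach
     * callbacks (and any stats they emit) run while reporting works */
    dsm_detach((dsm_segment *) DatumGetPointer(arg));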
There are nicer solutions to these problems, but it's not obvious which of
those solutions is the correct one. As the shared memory stats work already is
a huge amount of work...
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210405092914.mmxqe7j56lsjfsej@alap3.anarazel.de
Discussion: https://postgr.es/m/20210803023612.iziacxk5syn2r4ut@alap3.anarazel.de
Previously pgstat_initialize() was called in InitPostgres() and
AuxiliaryProcessMain(). As it turns out there was at least one case where we
reported stats before pgstat_initialize() was called; see
AutoVacWorkerMain()'s intentionally early call to pgstat_report_autovac().
This turns out to not be a problem with the current pgstat implementation as
pgstat_initialize() only registers a shutdown callback. But in the shared
memory based stats implementation we are working towards pgstat_initialize()
has to do more work.
After b406478b87 BaseInit() is a central place where initialization shared by
normal backends and auxiliary backends can be put. Obviously BaseInit() is
called before InitPostgres() registers ShutdownPostgres. Previously
ShutdownPostgres was the first before_shmem_exit callback; now it's commonly
pgstats. That should be fine.
Previously pgstat_initialize() was not called in bootstrap mode, but there
does not appear to be a need for that. It's now done unconditionally.
To detect future issues like this, assertions are added to a few places
verifying that the pgstat subsystem is initialized and not yet shut down.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210405092914.mmxqe7j56lsjfsej@alap3.anarazel.de
Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
Casting a value that's already of a type with a specific typmod
to an unspecified typmod doesn't do anything so far as run-time
behavior is concerned. However, it really ought to change the
exposed type of the expression to match. Up to now,
coerce_type_typmod hasn't bothered with that, which creates gotchas
in contexts such as recursive unions. If for example one side of
the union is numeric(18,3), but it needs to be plain numeric to
match the other side, there's no direct way to express that.
This is easy enough to fix, by inserting a RelabelType to update the
exposed type of the expression. However, it's a bit nervous-making
to change this behavior, because it's stood for a really long time.
(I strongly suspect that it's like this in part because the logic
pre-dates the introduction of RelabelType in 7.0. The commit log
message for 57b30e8e2 is interesting reading here.) As a compromise,
we'll sneak the change into 14beta3, and consider back-patching to
stable branches if no complaints emerge in the next three months.
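The fix itself is small; conceptually it is along these lines (a sketch of
the added logic in coerce_type_typmod, with surrounding details omitted and
variable names illustrative):

    /* Casting to an unspecified typmod is a no-op at runtime, but
     * relabel the expression so its exposed typmod matches. */
    node = (Node *) makeRelabelType((Expr *) node,
                                    targetTypeId,
                                    -1,     /* unspecified typmod */
                                    exprCollation(node),
                                    COERCE_IMPLICIT_CAST);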
Discussion: https://postgr.es/m/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ=uWWWfQ@mail.gmail.com
Formerly, the numeric code tested whether an integer value of a larger
type would fit in a smaller type by casting it to the smaller type and
then testing if the reverse conversion produced the original value.
That's perfectly fine, except that it caused a test failure on
buildfarm animal castoroides, most likely due to a compiler bug.
Instead, do these tests by comparing against PG_INT16/32_MIN/MAX. That
matches existing code in other places, such as int84(), which is more
widely tested, and so is less likely to go wrong.
While at it, add regression tests covering the numeric-to-int8/4/2
conversions, and adjust the recently added tests to the style of
434ddfb79a (on the v11 branch) to make failures easier to diagnose.
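For instance, the int2 case now looks roughly like this (a sketch in the
style of the existing integer narrowing functions; names abbreviated):

    /* Range-check directly against the int16 limits instead of relying
     * on a round-trip cast that a buggy compiler might fold away. */
    if (unlikely(val < PG_INT16_MIN || val > PG_INT16_MAX))
        ereport(ERROR,
                (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                 errmsg("smallint out of range")));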
Per buildfarm via Tom Lane, reviewed by Tom Lane.
Discussion: https://postgr.es/m/2394813.1628179479%40sss.pgh.pa.us
For EXEC_BACKEND, InitProcess()/InitAuxiliaryProcess() needs to have been
called well before we call BaseInit(), as SubPostmasterMain() needs LWLocks to
work. Having the order of initialization differ between platforms makes it
unnecessarily hard to understand the system and to add initialization points
for new subsystems without a lot of duplication.
To be able to change the order, BaseInit() cannot trigger
CreateSharedMemoryAndSemaphores() anymore - obviously that needs to have
happened before we can call InitProcess(). It seems cleaner to create shared
memory explicitly in single user/bootstrap mode anyway.
After this change the separation of bufmgr initialization into
InitBufferPoolAccess() / InitBufferPoolBackend() is not meaningful anymore so
the latter is removed.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
This was an oversight in 07bf378509. While it does reduce the benefit of the
simplification due to that commit, it still seems like a win to me.
It seems like it might be a good idea to have a function mirroring
InitPostmasterChild() / InitStandaloneProcess() for postmaster in miscinit.c
to make it easier to keep initialization across the three possible
environments in sync.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210805214109.lzfk3r3ae37bahmv@alap3.anarazel.de
For one, the existing location led to somewhat awkward code in main(). For
another, the new location is easier to understand anyway.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
Neither is actually initialized as an auxiliary process, so it does not really
make sense to reserve a PGPROC etc for them.
This keeps checker mode implemented by exiting partway through bootstrap
mode. That might be worth changing at some point, perhaps if we ever extend
checker mode to be a more general tool.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
After the preceding commits the auxprocess code is independent from
bootstrap.c - so a dedicated file seems less confusing.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
Kept separate for ease of review, particularly because pgindent insists on
reflowing a few comments.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
There practically was no shared code between the two, once all the ifs are
removed. And it was quite confusing that aux processes weren't actually
started by the call to AuxiliaryProcessMain() in main().
There's more to do, AuxiliaryProcessMain() should move out of bootstrap.c, and
BootstrapModeMain() shouldn't use/be part of AuxProcType.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
It is confusing that aux processes are started with --forkboot, given that
bootstrap mode cannot run below postmaster.
Author: Andres Freund <andres@anarazel.de>
Reviewed-By: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20210802164124.ufo5buo4apl6yuvs@alap3.anarazel.de
This fixes a long-standing bug when using to_char() to format a
numeric value in scientific notation -- if the value's exponent is
less than -NUMERIC_MAX_DISPLAY_SCALE-1 (-1001), it produced a
division-by-zero error.
The reason for this error was that get_str_from_var_sci() divides its
input by 10^exp, which it produced using power_var_int(). However, the
underflow test in power_var_int() causes it to return zero if the
result scale is too small. That's not a problem for power_var_int()'s
only other caller, power_var(), since that limits the rscale to 1000,
but in get_str_from_var_sci() the exponent can be much smaller,
requiring a much larger rscale. Fix by introducing a new function to
compute 10^exp directly, with no rscale limit. This also allows 10^exp
to be computed more efficiently, without any numeric multiplication,
division or rounding.
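Since numeric digits are stored in base NBASE = 10000, a power of ten is just
a single digit with the right weight; the new function is roughly the
following (a simplified sketch, which may differ from the committed code in
details):

    static void
    power_ten_int(int exp, NumericVar *result)
    {
        /* Start from 10^0 = 1 and keep the result exact */
        set_var_from_var(&const_one, result);
        result->dscale = exp < 0 ? -exp : 0;

        /* Base-NBASE weight, then the leftover decimal shift (0..3) */
        result->weight = (exp >= 0) ? exp / DEC_DIGITS
                                    : (exp + 1) / DEC_DIGITS - 1;
        exp -= DEC_DIGITS * result->weight;

        /* Adjust the single digit: 1 -> 10 -> 100 -> 1000 as needed */
        while (exp-- > 0)
            result->digits[0] *= 10;
    }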
Discussion: https://postgr.es/m/CAEZATCWhojfH4whaqgUKBe8D5jNHB8ytzemL-PnRx+KCTyMXmg@mail.gmail.com
Up to now we did a PQconsumeInput() for each pipelined query, asking the OS
for more input - which it often won't have, as all results might already have
been sent. That turns out to have a noticeable performance impact.
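The fix is to ask the OS for more data only when libpq cannot already
assemble the next result from what it has buffered, roughly like this (a
sketch; connection variable and error handling illustrative):

    /* Skip the recv() when the current result is already buffered */
    if (PQisBusy(conn) && !PQconsumeInput(conn))
    {
        /* handle connection trouble, e.g. report PQerrorMessage(conn) */
    }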
Alvaro Herrera reviewed the idea to add the PQisBusy() check, but not this
concrete patch.
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210720180039.23rivhdft3l4mayn@alap3.anarazel.de
Backpatch: 14, where libpq/pgbench pipelining was introduced.
These have been unrelated since bgwriter and checkpointer were split into two
processes in 806a2aee37. As there are several pending patches (shared memory
stats, extending the set of tracked IO / buffer statistics) that are made a
bit more awkward by the grouping, split them. Done separately to make
reviewing easier.
This does *not* change the contents of pg_stat_bgwriter or move fields out of
bgwriter/checkpointer stats that arguably do not belong in either. However
pgstat_fetch_global() was renamed and split into
pgstat_fetch_stat_checkpointer() and pgstat_fetch_stat_bgwriter().
Author: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20210405092914.mmxqe7j56lsjfsej@alap3.anarazel.de
Oversight in commit 3499df0d, which generalized the reloption to give
users a way to consistently avoid VACUUM's index bypass
optimization.
Per off-list report from Nikolay Shaplov.
Backpatch: 14-, where index cleanup reloption was extended.
Commit a8fd13cab0 added support for prepared transactions to built-in
logical replication via a new option "two_phase" for a subscription. The
"two_phase" option was not allowed with the existing streaming option.
This commit permits the combination of "streaming" and "two_phase"
subscription options. It extends the pgoutput plugin and the subscriber
side code to add the prepare API for streaming transactions which will
apply the changes accumulated in the spool-file at prepare time.
Author: Peter Smith and Ajin Cherian
Reviewed-by: Vignesh C, Amit Kapila, Greg Nancarrow
Tested-By: Haiying Tang
Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru
Discussion: https://postgr.es/m/CAMGcDxeqEpWj3fTXwqhSwBdXd2RS9jzwWscO-XbeCfso6ts3+Q@mail.gmail.com
This patch adds new functions regexp_count(), regexp_instr(),
regexp_like(), and regexp_substr(), and extends regexp_replace()
with some new optional arguments. All these functions follow
the definitions used in Oracle, although there are small differences
in the regexp language due to using our own regexp engine -- most
notably, that the default newline-matching behavior is different.
Similar functions appear in DB2 and elsewhere, too. Aside from
easing portability, these functions are easier to use for certain
tasks than our existing regexp_match[es] functions.
Gilles Darold, heavily revised by me
Discussion: https://postgr.es/m/fc160ee0-c843-b024-29bb-97b5da61971f@darold.net
959d00e9d added the ability to make use of an Append node instead of a
MergeAppend when we wanted to perform a scan of a partitioned table and
the required sort order was the same as the partitioned keys and the
partitioned table was defined in such a way that earlier partitions were
guaranteed to only contain lower-order values than later partitions.
However, previously we didn't allow these ordered partition scans for
LIST partitioned tables when there were any partitions that allowed
multiple Datums. This was a very cheap check to make and we could likely
have done a little better by checking if there were interleaved
partitions, but at the time we didn't have visibility about which
partitions were pruned, so we still may have disallowed cases where all
interleaved partitions were pruned.
Since 475dbd0b7 we have knowledge of pruned partitions, so we can do a
much better job inside partitions_are_ordered().
Here we pass which partitions survived partition pruning into
partitions_are_ordered() and, for LIST partitioning, have it check to see
if any live partitions exist that are also in the new "interleaved_parts"
field defined in PartitionBoundInfo.
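For LIST partitioning the new check is conceptually as simple as this (a
sketch of the relevant test inside partitions_are_ordered(); names follow
the commit message):

    /* An ordered scan is defeated only if some *live* partition is
     * interleaved; pruned interleaved partitions no longer matter. */
    if (bms_overlap(live_parts, boundinfo->interleaved_parts))
        return false;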
For RANGE partitioning we can relax the code which caused the partitions
to be unordered if a DEFAULT partition existed. Since we now know which
partitions were pruned, partitions_are_ordered() now returns true when the
DEFAULT partition was pruned.
Reviewed-by: Amit Langote, Zhihong Yu
Discussion: https://postgr.es/m/CAApHDvrdoN_sXU52i=QDXe2k3WAo=EVry29r2+Tq2WYcn2xhEA@mail.gmail.com
For partitioned tables with large numbers of partitions where queries are
able to prune all but a very small number of partitions, the time spent in
the planner looping over RelOptInfo.part_rels checking for non-NULL
RelOptInfos could become a large portion of the overall planning time.
Here we add a Bitmapset that records the non-pruned partitions. This
allows us to more efficiently skip the pruned partitions by looping over
the Bitmapset.
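Skipping the pruned partitions then becomes a cheap bitmapset walk instead of
a full array scan over part_rels, along these lines (a sketch; "live_parts"
per the commit message, other names illustrative):

    int     i = -1;

    while ((i = bms_next_member(rel->live_parts, i)) >= 0)
    {
        RelOptInfo *partrel = rel->part_rels[i];

        /* ... generate paths only for this surviving partition ... */
    }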
This will cause a very slight slowdown in cases where no or only a few
partitions could be pruned; however, those cases are already slow to plan
anyway and the overhead of looping over the Bitmapset would be
unmeasurable when compared with the other tasks such as path creation for
a large number of partitions.
Reviewed-by: Amit Langote, Zhihong Yu
Discussion: https://postgr.es/m/CAApHDvqnPx6JnUuPwaf5ao38zczrAb9mxt9gj4U1EKFfd4AqLA@mail.gmail.com
Start up the checkpointer and bgwriter during crash recovery (except in
--single mode), as we do for replication. This wasn't done back in
commit cdd46c76 out of caution. Now it seems like a better idea to make
the environment as similar as possible in both cases. There may also be
some performance advantages.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Tested-by: Jakub Wartak <Jakub.Wartak@tomtom.com>
Discussion: https://postgr.es/m/CA%2BhUKGJ8NRsqgkZEnsnRc2MFROBV-jCnacbYvtpptK2A9YYp9Q%40mail.gmail.com
The comment didn't make sense anymore since at least 626eb02198. As it didn't
actually explain anything anyway, just remove it.
Author: Andres Freund <andres@anarazel.de>
I failed to account for the possibility that when
ExecAppendAsyncEventWait() notifies multiple async-capable nodes using
postgres_fdw, a preceding node might invoke process_pending_request() to
process a pending asynchronous request made by a succeeding node. In
that case the succeeding node should produce a tuple to return to the
parent Append node from tuples fetched by process_pending_request() when
notified. Repair.
Per buildfarm via Michael Paquier. Back-patch to v14, like the previous
commit.
Thanks to Tom Lane for testing.
Discussion: https://postgr.es/m/YQP0UPT8KmPiHTMs%40paquier.xyz
The test is expecting two prepared transactions corresponding to two
subscriptions, but it waits for just one subscription to catch up. Fix it
by waiting for both subscriptions.
Reported-by: Michael Paquier, as per buildfarm
Author: Ajin Cherian
Reviewed-By: Amit Kapila, Vignesh C, Peter Smith
Discussion: https://postgr.es/m/CAA4eK1+_0iNQ8Z=KVTjmmAqNX-hyv+1+fnZ-Yx8CVP=uAcekqw@mail.gmail.com
As of commit 84f5c2908, executing SQL commands (via SPI or otherwise)
requires having either an active Portal, or a caller-established
active snapshot. We were simply Assert'ing that that's the case.
But we've now had a couple different reports of people testing
extensions that didn't meet this requirement, and were confused by
the resulting crash. Let's convert the Assert to a test-and-elog,
in hopes of making the issue clearer for extension authors.
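The converted check is essentially this (a sketch; the exact message wording
is illustrative):

    /* Previously just an Assert; now an explicit runtime check, so
     * extension authors get a clear error instead of a crash. */
    if (!ActiveSnapshotSet())
        elog(ERROR, "cannot execute SQL without an outer snapshot or portal");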
Per gripes from Liu Huailing and RekGRpth. Back-patch to v11,
like the prior commit.
Discussion: https://postgr.es/m/OSZPR01MB6215671E3C5956A034A080DFBEEC9@OSZPR01MB6215.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/17035-14607d308ac8643c@postgresql.org
This fixes a couple of related problems that arise when raising
numbers to very large powers.
Firstly, when raising a negative number to a very large integer power,
the result should be well-defined, but the previous code would only
cope if the exponent was small enough to go through power_var_int().
Otherwise it would throw an internal error, attempting to take the
logarithm of a negative number. Fix this by adding suitable handling
to the general case in power_var() to cope with negative bases,
checking for integer powers there.
Next, when raising a (positive or negative) number whose absolute
value is slightly less than 1 to a very large power, the result should
approach zero as the power is increased. However, in some cases, for
sufficiently large powers, this would lose all precision and return 1
instead of 0. This was due to the way that the local_rscale was being
calculated for the final full-precision calculation:
local_rscale = rscale + (int) val - ln_dweight + 8
The first two terms on the right hand side are meant to give the
number of significant digits required in the result ("val" being the
estimated result weight). However, this failed to account for the fact
that rscale is clipped to a maximum of NUMERIC_MAX_DISPLAY_SCALE
(1000), and the result weight might be less than -1000, causing their
sum to be negative, leading to a loss of precision. Fix this by
forcing the number of significant digits calculated to be nonnegative.
It's OK for it to be zero (when the result weight is less than -1000),
since the local_rscale value then includes a few extra digits to
ensure an accurate result.
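In other words, the calculation becomes something like this (a sketch
following the commit's description; variable names as in the formula above):

    /* The significant-digit count must not go negative when the result
     * weight falls far below the clipped rscale. */
    sig_digits = Max(rscale + (int) val - ln_dweight, 0);
    local_rscale = sig_digits + 8;      /* extra digits ensure accuracy */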
Finally, add additional underflow checks to exp_var() and power_var(),
so that they consistently return zero for cases like this where the
result is indistinguishable from zero. Some paths through this code
already returned zero in such cases, but others were throwing overflow
errors.
Dean Rasheed, reviewed by Yugo Nagata.
Discussion: http://postgr.es/m/CAEZATCW6Dvq7+3wN3tt5jLj-FyOcUgT5xNoOqce5=6Su0bCR0w@mail.gmail.com
They are used in code that runs both during normal operation and during
WAL replay, and needs to behave differently during replay. Move them to
xlogutils.c, because that's where we have other helper functions used by
redo routines.
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/b3b71061-4919-e882-4857-27e370ab134a%40iki.fi
This reverts commit 6a2c532. fairywren and bowerbird failed those tests
because they linked to incorrect versions of ZLIB, causing errors such as
SIGBREAKs that stopped buildfarm runs or EACCES failures when writing
compressed WAL segments.
Andrew Dunstan has done all the investigation here, so he deserves all
the credit for being able to enable those tests on Windows.
Discussion: https://postgr.es/m/9040d5ed-6462-66a4-07ac-2923785ae563@dunslane.net
It should always be the case that the last checkpoint record is still
readable, because otherwise, a crash would leave us in a situation
from which we can't recover. Therefore the test removed by this patch
should always succeed. For it to fail, either there has to be a serious
bug in the code someplace, or the user has to be manually modifying
pg_wal while crash recovery is running. If it's the first one, we
should fix the bug. If it's the second one, they should stop, or
anyway they're doing so at their own risk. In neither case does
a full checkpoint instead of an end-of-recovery record seem like a
clear winner. Furthermore, rarely-taken code paths are particularly
vulnerable to bugs, so let's simplify by getting rid of this one.
Discussion: http://postgr.es/m/CA+TgmoYmw==TOJ6EzYb_vcjyS09NkzrVKSyBKUUyo1zBEaJASA@mail.gmail.com
Those tests are not designed to fail, but if they do, as in some cases on
Windows because of ZLIB (?), they could remain stuck. Using
--no-loop makes the tests fail immediately. The oldest test with
--endpos already did that.
Those tests have been added in ffc9dda.
Reviewed-by: Andrew Dunstan
Discussion: https://postgr.es/m/ec093ff1-a53c-0091-46a2-4537354b0dd4@dunslane.net
CheckpointLock was removed in commit d18e75664a, and commit ce197e91d0
updated a leftover comment in CreateCheckPoint, but there was another
copy of it in CreateRestartPoint still.
A pending asynchronous request is handled by process_pending_request(),
which previously not only processed an in-progress remote query but also
performed ExecForeignScan() to produce a tuple to return to the local
server asynchronously from the result of the remote query. But, as
reported by Andrey Lepikhov, that led to a server crash when executing a
query, or to an "InstrStartNode called twice in a row" or "InstrEndLoop
called on running node" failure when doing EXPLAIN ANALYZE of it, in
cases where the plan tree contained multiple async-capable nodes
accessing the same initplan/subplan, which itself contained multiple
async-capable nodes scanning the same foreign tables as the parent
async-capable nodes. The reason is that the second step in
process_pending_request(), invoked when executing the initplan/subplan
for one of the parent async-capable nodes, caused recursive execution of
the initplan/subplan for another of the parent async-capable nodes.
To fix, split process_pending_request() into the two steps and postpone
the second step until ForeignAsyncConfigureWait() is called for each of
the pending asynchronous requests. Also, ExecAppendAsyncEventWait()
assumed that FDWs called from ForeignAsyncConfigureWait() would register
at least one wait event in the WaitEventSet created there; now allow FDWs
to register zero wait events, and modify ExecAppendAsyncEventWait() to
just return in that case.
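That part of the fix boils down to the following (a sketch; in the real code
the wait set always contains the postmaster-death event, hence the
comparison with one):

    /* If no FDW added an event beyond the always-present
     * postmaster-death event, there is nothing to wait for. */
    if (GetNumRegisteredWaitEvents(set) == 1)
    {
        FreeWaitEventSet(set);
        node->as_eventset = NULL;
        return;
    }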
Oversight in commit 27e1f1456. Back-patch to v14 where that commit went
in.
Andrey Lepikhov and Etsuro Fujita
Discussion: https://postgr.es/m/fe5eaa19-1704-e4a4-76ee-3b9d37ade399@postgrespro.ru
Buildfarm shows that this test has a further failure mode when a
checkpoint starts earlier than expected, so we detect a "checkpoint
completed" line that's not the one we want. Change the config to try
and prevent this.
Per buildfarm
While at it, update one comment that was forgotten in commit
d18e75664a.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20210729.162038.534808353849568395.horikyota.ntt@gmail.com
Commit ffa2e4670 changed libpq so that multiple error reports
occurring during one operation (a connection attempt or query)
are accumulated in conn->errorMessage, where before new ones
usually replaced any prior error. At least in theory, that makes
us more vulnerable to running out of memory for the errorMessage
buffer. If it did happen, the user would be left with just an
empty-string error report, which is pretty unhelpful.
We can improve this by relying on pqexpbuffer.c's existing "broken
buffer" convention to track whether we've hit OOM for the current
operation's error string, and then substituting a constant "out of
memory" string in the small number of places where the errorMessage
is read out.
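Readout then only needs a small guard, along these lines (a sketch):

    /* pqexpbuffer.c marks an OOM'd buffer "broken"; substitute a
     * constant message rather than handing back an empty string. */
    if (PQExpBufferBroken(&conn->errorMessage))
        return libpq_gettext("out of memory\n");
    return conn->errorMessage.data;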
While at it, apply the same method to similar OOM cases in
pqInternalNotice and pqGetErrorNotice3.
Back-patch to v14 where ffa2e4670 came in. In principle this could
go back further; but in view of the lack of field reports, the
hazard seems negligible in older branches.
Discussion: https://postgr.es/m/530153.1627425648@sss.pgh.pa.us
Certain versions of msys2/Windows have been observed to resolve symlinks
in perl2host rather than just follow them. This defeats using a
symlinked shorter path to a longer path, and makes certain tests fail.
We therefore call perl2host on the parent directory of the symlink and
thereafter just use that result.
Apply to release 14 where the problem has been observed.
Sometimes cygpath has been observed to return a path with a trailing
slash. That can cause problems. Also, make "cygpath" usage
consistent with "pwd -W" with respect to the use of forward slashes.
Backpatch to release 14 where the current code was introduced.
This is a non-functional change only to refactor code to extract some
replication logic into static functions.
This is done as preparation for the 2PC streaming patch which also shares
this common logic.
Author: Peter Smith
Reviewed-By: Amit Kapila
Discussion: https://postgr.es/m/CAHut+PuiSA8AiLcE2N5StzSKs46SQEP_vDOUD5fX2XCVtfZ7mQ@mail.gmail.com
The clientside log saved from the testrun was removed in 1caef31d9
but the entry in the .gitignore file remained. While this exists
in older branches as well, it's mostly a cosmetic fix so no back-
patching is done.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/F8E73040-BB6F-43BF-95B4-3CEC037BE856@yesql.se
There is only one constructor now for PostgresNode, with the idiomatic
name 'new'. The method is not exported by the class, and must be called
as "PostgresNode->new('name',[args])". All the TAP tests that use
PostgresNode are modified accordingly. Third party scripts will need
adjusting, which is a fairly mechanical process (I just used a sed
script).
This method will modify or delete an existing line in the config file
rather than simply appending to the file. This makes adjustment of files
for older versions much simpler and more compact.
This is now the default for pg_ctl, but having the flag here explicitly
does no harm and helps with backwards compatibility of the PostgresNode
module.
The following changes are done:
- In pg_archivecleanup, the cleanup of older WAL segments would never
fail immediately.
- In pgbench, the initialization of a thread barrier would not fail
hard.
- In pg_recvlogical, a stat() failure never led to the exit() call.
- In pg_basebackup, two chmod() calls reported a failure without exit()'ing
when unpacking freshly received tar data. It may be possible to
continue writing some data even after this failure, but that could be
confusing to the user at the end.
These are arguably bugs, but they would happen in code paths where a
failure is unlikely to happen, so no backpatch is done.
Reviewed-by: Robert Haas, Fabien Coelho
Discussion: https://postgr.es/m/YQDMdB+B68yePFeT@paquier.xyz
By default, pg_verifybackup needs pg_waldump to check the range of
WAL segments required for a backup, except if --no-parse-wal is
specified. The code checked for the presence of the pg_waldump binary
in an installation and reported an error, but it forgot to properly
exit(). This could lead to confusing errors being reported.
Reviewed-by: Robert Haas, Fabien Coelho
Discussion: https://postgr.es/m/YQDMdB+B68yePFeT@paquier.xyz
Backpatch-through: 13
This adjusts the MSVC build scripts to look at the compile flags mentioned
in the Makefile to look for -D arguments in order to determine which
constants should be defined in Visual Studio builds.
One small anomaly that appeared as a result of this change is that the
Makefile for the ltree contrib module defined LOWER_NODE, but this was
not properly defined in the MSVC build scripts. This meant that MSVC
builds would differ in case sensitivity in the ltree module when
compared to builds using a make build environment. To maintain the same
behavior here we remove the -DLOWER_NODE from the Makefile and just always
define it in ltree.h for non-MSVC builds. We need to maintain the old
behavior here as this affects the on-disk compatibility of GiST indexes
when using the ltree type.
The only other resulting change here is that REFINT_VERBOSE is now defined
for the autoinc, insert_username and moddatetime contrib modules.
Previously on MSVC, this was only defined for the refint module. This
aligns the behavior to build environments using make as all 4 of these
modules share the same Makefile.
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
In order not to duplicate references and libraries in the Visual Studio
project files produced by the MSVC build scripts, have them check if a
particular reference or library already exists before adding the same one
again.
Reviewed-by: Álvaro Herrera, Andrew Dunstan, Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
Previously the 'includes' field was a string. It's slightly nicer to
manage this when it's defined as an array instead. This allows us to
more easily detect and eliminate duplicates.
Reviewed-by: Álvaro Herrera, Andrew Dunstan, Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
If a file is truncated, we must update minRecoveryPoint. Once a file is
truncated, there's no going back; it would not be safe to stop recovery
at a point earlier than that anymore.
Commit 7bffc9b7bf changed xact_redo_commit() so that it updates
minRecoveryPoint on truncation, but forgot to change xact_redo_abort().
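The fix mirrors what xact_redo_commit() already does, roughly (a sketch;
during recovery, XLogFlush() is what advances minRecoveryPoint):

    if (parsed->nrels > 0)
    {
        /* Advance minRecoveryPoint before any file is truncated;
         * stopping recovery earlier than this record is unsafe. */
        XLogFlush(lsn);

        /* ... then drop/truncate the relation files ... */
    }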
Back-patch to all supported versions.
Reported-by: mengjuan.cmj@alibaba-inc.com
Author: Fujii Masao
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/b029fce3-4fac-4265-968e-16f36ff4d075.mengjuan.cmj@alibaba-inc.com
The logic used to support a change of access method for a table is
similar to changes for tablespace or relation persistence, requiring a
table rewrite with an exclusive lock of the relation changed. Table
rewrites done in ALTER TABLE already go through the table AM layer when
scanning tuples from the old relation and inserting them into the new
one, making this implementation straightforward.
Note that partitioned tables are not supported as these have no access
methods defined.
Author: Justin Pryzby, Jeff Davis
Reviewed-by: Michael Paquier, Vignesh C
Discussion: https://postgr.es/m/20210228222530.GD20769@telsasoft.com
This appears to have been added way back in ee3b4188a but it's a little
unclear why the change made in that commit is even needed given that
320c7eb8c, dated 18 months earlier, added code to copy fmgroids.h to
src/include/utils.
amcheck seems to get away without adding the additional include directory,
so perhaps dblink can get away with it too.
This builds ok in my VS2017 environment, but the buildfarm may serve as a
reminder about why ee3b4188a was required. There's only one way to find
out for sure.
Discussion: https://postgr.es/m/CAApHDvpo6g5csCTjc_0C7DMvgFPomVb0Rh-AcW5afd=Ya=LRuw@mail.gmail.com
This changes the behavior of examining the pg_file_settings view after
changing a config option that requires restart. The user needs to know
that any change of such options does not take effect until a restart,
and this worked correctly if the line is edited without removing it.
However, for the case where the line is removed altogether, the flag
didn't get set, because the flag was only set in set_config_option, which
is not called for removed lines. Repair.
(Ref.: commits 62d16c7fc5 and a486e35706)
Author: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/202107262302.xsfdfc5sb7sh@alvherre.pgsql
We failed to deal with an UNKNOWN-type input for
anycompatiblemultirange; that should throw an error indicating
that we don't know how to resolve the multirange type.
We also failed to infer the type of an anycompatiblerange output
from an anycompatiblemultirange input or vice versa.
Per bug #17066 from Alexander Lakhin. Back-patch to v14
where multiranges were added.
Discussion: https://postgr.es/m/17066-16a37f6223a8470b@postgresql.org
Commit 48c5c9068 failed to allow for buildfarm animals that
force jit = on. I'm surprised that this hasn't come up
elsewhere in explain.sql, so turn it off for that whole
test script, not just the one new test case.
Per buildfarm.
The error messages using the word "non-negative" are confusing
because it's ambiguous whether zero is accepted. This commit
improves those error messages by replacing that word with less
ambiguous wording such as "greater than zero" or
"greater than or equal to zero".
This commit also adds a note about the word "non-negative" to the
error message style guide, to help with writing new error messages.
When postgres_fdw option fetch_size was set to zero, previously
the error message "fetch_size requires a non-negative integer value"
was reported. This error message was outright buggy. Therefore
back-patch to all supported versions where such buggy error message
could be thrown.
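A sketch of the reworded check for that case (error code, message text, and
surrounding names illustrative):

    /* Reject zero as well as negatives, and say so unambiguously */
    if (fetch_size <= 0)
        ereport(ERROR,
                (errcode(ERRCODE_SYNTAX_ERROR),
                 errmsg("%s requires an integer value greater than zero",
                        def->defname)));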
Reported-by: Hou Zhijie
Author: Bharath Rupireddy
Reviewed-by: Kyotaro Horiguchi, Fujii Masao
Discussion: https://postgr.es/m/OS0PR01MB5716415335A06B489F1B3A8194569@OS0PR01MB5716.jpnprd01.prod.outlook.com
pg_event_trigger_ddl_commands used "pg_temp" to refer to any
temp schema, not only that of the current backend. This seems
like overreach. It's somewhat unlikely that DDL commands would
refer to temp objects of other sessions to begin with, but if they
do, "pg_temp" would be a most misleading way to display the action.
While this seems like a bug, it's not quite out of the realm of
possibility that somebody out there is expecting the current
behavior. Hence, fix in HEAD, but don't back-patch.
Discussion: https://postgr.es/m/CAAJ_b97W=QaGmag9AhWNbmx3uEYsNkXWL+OVW1_E1D3BtgWvtw@mail.gmail.com
This patch causes EXPLAIN output to refer to objects that are in
the current session's temp schema with the "pg_temp" schema alias
rather than that schema's actual name. This is useful for our own
testing purposes since it will stabilize EXPLAIN VERBOSE output
for such cases, allowing us to use that in regression tests.
It should be less confusing for end users too.
Since ruleutils.c needs to change behavior for this, the change
also leaks into a few other users of ruleutils.c, for example
pg_get_viewdef(). AFAICS that won't cause any problems.
We did find that aggressively trying to change this behavior
across-the-board would cause issues, but as long as "pg_temp"
only appears within generated SQL text, I think it'll be fine.
Along the way, make get_namespace_name_or_temp conform to the
same API as get_namespace_name, ie that it returns a palloc'd
string or NULL. The current behavior hasn't caused any bugs
since no callers attempt to pfree the result, but if it gets
more widespread usage that could become a problem.
Amul Sul, reviewed and extended by me
Discussion: https://postgr.es/m/CAAJ_b97W=QaGmag9AhWNbmx3uEYsNkXWL+OVW1_E1D3BtgWvtw@mail.gmail.com
Add pg_resetxlog -u option to set the oldest xid in pg_control.
Previously -x set this value to be two billion less than the -x value.
However, this causes the server to immediately scan all relations'
relfrozenxid values so it can advance pg_control's oldest xid to be inside
the autovacuum_freeze_max_age range, which is inefficient and might disrupt
diagnostic recovery. pg_upgrade will use this option to better create
the new cluster to match the old cluster.
Reported-by: Jason Harvey, Floris Van Nee
Discussion: https://postgr.es/m/20190615183759.GB239428@rfd.leadboat.com, 87da83168c644fd9aae38f546cc70295@opammb0562.comp.optiver.com
Author: Bertrand Drouvot
Backpatch-through: 9.6
A check in the ZLIB portion of the test to match the name of a
non-compressed partial segment with a completed compressed segment was
using m//, while a simple equality check is enough. This makes the test
a bit stricter without impacting its coverage.
Author: Kyotaro Horiguchi
Discussion: https://postgr.es/m/20210726.174622.826565852378770261.horikyota.ntt@gmail.com
strtoint(), via strtol(), would skip leading whitespace, but the same
rule was not applied for trailing whitespace, leading to inconsistent
behavior. Some tests are changed to cover this area more.
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/YP5Pv0d13Ct+03ve@paquier.xyz
Coverity complained that my commit 80ba4bb383 added a dubious coding
for a consistency check that there isn't more than one row for a certain
tgrelid/tgparentid combination. But we don't check for that explicitly
anywhere else, and if we were to do it, it should be a full
shouldn't-happen elog not just an assert. It doesn't seem that this is
very important anyway, so remove it.
Discussion: https://postgr.es/m/1337562.1627224583@sss.pgh.pa.us
Commit ad600bba04 added psql command \dX listing extended statistics
objects, but it failed to consider search_path when selecting the
elements so some of the returned elements might be invisible.
The visibility was already considered for tab completion (added by
commit d99d58cdc8), so adding it to the query is fairly simple.
Reported and fixed by Justin Pryzby, regression tests by me. Backpatch
to PostgreSQL 14, where \dX was introduced.
Backpatch-through: 14
Author: Justin Pryzby
Reviewed-by: Tatsuro Yamada
Discussion: https://postgr.es/m/c027a541-5856-75a5-0868-341301e1624b%40nttcom.co.jp_1
Formerly, when specifying NUMERIC(precision, scale), the scale had to
be in the range [0, precision], which was per SQL spec. This commit
extends the range of allowed scales to [-1000, 1000], independent of
the precision (whose valid range remains [1, 1000]).
A negative scale implies rounding before the decimal point. For
example, a column might be declared with a scale of -3 to round values
to the nearest thousand. Note that the display scale remains
non-negative, so in this case the display scale will be zero, and all
digits before the decimal point will be displayed.
A scale greater than the precision supports fractional values with
zeros immediately after the decimal point.
Take the opportunity to tidy up the code that packs, unpacks and
validates the contents of a typmod integer, encapsulating it in a
small set of new inline functions.
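For reference, the packing scheme looks roughly like this (a sketch of two of
the new inline helpers; the scale occupies the low 11 bits and is
sign-extended on extraction):

    static inline int32
    make_numeric_typmod(int precision, int scale)
    {
        return ((precision << 16) | (scale & 0x7ff)) + VARHDRSZ;
    }

    static inline int
    numeric_typmod_scale(int32 typmod)
    {
        /* The XOR/subtract trick sign-extends the 11-bit scale field */
        return (((typmod - VARHDRSZ) & 0x7ff) ^ 1024) - 1024;
    }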
Bump the catversion because the allowed contents of atttypmod have
changed for numeric columns. This isn't a change that requires a
re-initdb, but negative scale values in the typmod would confuse old
backends.
Dean Rasheed, with additional improvements by Tom Lane. Reviewed by
Tom Lane.
Discussion: https://postgr.es/m/CAEZATCWdNLgpKihmURF8nfofP0RFtAKJ7ktY6GcZOPnMfUoRqA@mail.gmail.com
Adjust the header comment in get_agg_clause_costs so that it matches what
the function currently does. No recursive searching has been done ever
since 0a2bc5d61. It also does not determine the aggtranstype like the
comment claimed. That's all done in preprocess_aggref().
preprocess_aggref also now determines the numOrderedAggs, so remove the
mention that get_agg_clause_costs also calculates "counts".
Normally, since this is just an adjustment of a comment, it might not be
worth back-patching, but since this code is new to PG14 and that version
is still in beta, it seems worth having the comments match.
Discussion: https://postgr.es/m/CAApHDvrrGrTJFPELrjx0CnDtz9B7Jy2XYW3Z2BKifAWLSaJYwQ@mail.gmail.com
Backpatch-through: 14
These have been introduced by 7fbe0c8, and could happen for
pg_basebackup and pg_receivewal.
Per report from Coverity for the ones in walmethods.c, I have spotted
the ones in receivelog.c after more review.
Backpatch-through: 10
The point of introducing the hash_mem_multiplier GUC was to let users
reproduce the old behavior of hash aggregation, i.e. that it could use
more than work_mem at need. However, the implementation failed to get
the job done on Win64, where work_mem is clamped to 2GB to protect
various places that calculate memory sizes using "long int". As
written, the same clamp was applied to hash_mem. This resulted in
severe performance regressions for queries requiring a bit more than
2GB for hash aggregation, as they now spill to disk and there's no
way to stop that.
Getting rid of the work_mem restriction seems like a good idea, but
it's a big job and could not conceivably be back-patched. However,
there's only a fairly small number of places that are concerned with
the hash_mem value, and it turns out to be possible to remove the
restriction there without too much code churn or any ABI breaks.
So, let's do that for now to fix the regression, and leave the
larger task for another day.
This patch does introduce a bit more infrastructure that should help
with the larger task, namely pg_bitutils.h support for working with
size_t values.
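The handful of consumers now go through a size_t-valued limit, computed along
these lines (a sketch; the double intermediate sidesteps the Win64 "long"
clamp):

    size_t
    get_hash_memory_limit(void)
    {
        double      mem_limit;

        /* work_mem is in kilobytes; convert to bytes in double arithmetic
         * to avoid integer overflow, then clamp to what size_t can hold */
        mem_limit = (double) work_mem * hash_mem_multiplier * 1024.0;
        mem_limit = Min(mem_limit, (double) SIZE_MAX);

        return (size_t) mem_limit;
    }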
Per gripe from Laurent Hasson. Back-patch to v13 where the
behavior change came in.
Discussion: https://postgr.es/m/997817.1627074924@sss.pgh.pa.us
Discussion: https://postgr.es/m/MN2PR15MB25601E80A9B6D1BA6F592B1985E39@MN2PR15MB2560.namprd15.prod.outlook.com
5a1e1d8302 was a minimal bug fix for dc7420c2c9. To avoid future bugs of
that kind, deduplicate the choice of a relation's horizon into a new helper,
GlobalVisHorizonKindForRel().
As the code in question was only introduced in dc7420c2c9 it seems worth
backpatching this change as well, otherwise 14 will look different from all
other branches.
A different approach to this was suggested by Matthias van de Meent.
Author: Andres Freund
Discussion: https://postgr.es/m/20210621122919.2qhu3pfugxxp3cji@alap3.anarazel.de
Backpatch: 14, like 5a1e1d8302
We have an implementation restriction that PREPARE TRANSACTION can't
handle cases where both session-lifespan and transaction-lifespan locks
are held on the same lockable object. (That's because we'd otherwise
need to acquire a new PROCLOCK entry during post-prepare cleanup, which
is an operation that might fail. The situation can only arise with odd
usages of advisory locks, so removing the restriction is probably not
worth the amount of effort it would take.) AtPrepare_Locks attempted
to enforce this, but its logic was many bricks shy of a load, because
it only detected cases where the session and transaction locks had the
same lockmode. Locks of different modes on the same object would lead
to the rather unhelpful message "PANIC: we seem to have dropped a bit
somewhere".
To fix, build a transient hashtable with one entry per locktag,
not one per locktag + mode, and use that to detect conflicts.
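Because the table is keyed on the locktag alone, session- and
transaction-lifespan locks of different modes on the same object now collide;
in outline (a sketch, entry type name hypothetical):

    HASHCTL     hash_ctl;

    memset(&hash_ctl, 0, sizeof(hash_ctl));
    hash_ctl.keysize = sizeof(LOCKTAG);     /* no lockmode in the key */
    hash_ctl.entrysize = sizeof(PerLockTagEntry);
    hash_ctl.hcxt = CurrentMemoryContext;
    ht = hash_create("PREPARE lock check", 256, &hash_ctl,
                     HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);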
Per bug #17122 from Alexander Pyhalov. This bug is ancient,
so back-patch to all supported branches.
Discussion: https://postgr.es/m/17122-04f3c32098a62233@postgresql.org
We previously took a hard-line attitude that callers should never print
a null string pointer, and doing so is worthy of an assertion failure
or crash. However, we've long since flushed out any easy-to-find bugs
of that nature. What remains is a lot of code that perhaps could fail
that way in hard-to-reach corner cases. For example, in something as
simple as
ereport(ERROR,
(errcode(ERRCODE_UNDEFINED_OBJECT),
errmsg("constraint \"%s\" for table \"%s\" does not exist",
conname, get_rel_name(relid))));
one must wonder whether it's completely guaranteed that get_rel_name
cannot return NULL in this context. If such a situation did occur,
the existing policy converts what might be a pretty minor bug into
a server crash condition. This is not good for robustness.
Hence, let's follow the lead of glibc and print "(null)" instead
of failing. We should, of course, still consider it a bug if that
behavior is reachable in ordinary use; but crashing seems less
desirable than not crashing.
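In snprintf.c's string conversion this is a two-line guard (a sketch;
variable name illustrative):

    /* Follow glibc: render a NULL argument as "(null)" */
    if (strvalue == NULL)
        strvalue = "(null)";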
This fix works across-the-board in v12 and up, where we always use
src/port/snprintf.c. Before that, on most platforms we're at the mercy
of the local libc, but it appears that Solaris 10 is the only supported
platform where we'd still get a crash. Most other platforms such as
*BSD, macOS, and Solaris 11 have adopted glibc's behavior at some
point. (AIX and HPUX just print "" not "(null)", but that's close
enough.) I've not checked what Windows' native printf would do, but
it doesn't matter because we've long used snprintf.c on that platform.
In v12 and up, also const-ify related code so that we're not casting
away const on the constant string. This is just neatnik-ism, since
next to no compilers will warn about that.
Discussion: https://postgr.es/m/17098-b960f3616c861f83@postgresql.org
Recently-added references to ParseState weren't covered by #include
references, creating unwanted ordering dependencies for users of
these headers.
Oversight in commit 2bfb50b3d. Per headerscheck/cpluspluscheck.
This fixes two compilation failures caused by 6f164e6. Interesting to
see that a missing <limits.h> does not fail on Linux or even Windows. On
macOS, it fails, though.
Per various buildfarm members.
Most of the integer options for command-line binaries now make use of a
single routine able to do the job, fixing issues with the detection of
sloppy values caused for example by the use of atoi(), that fails on
strings beginning with numerical characters with junk trailing
characters.
This commit cuts down the number of strings requiring translation by 26
per my count, switching the code to have two error types for invalid and
out-of-range values instead.
Much more could be done here, with float or even int64 options, but
int32 was the most appealing case as it is possible to rely on strtol()
to do the job reliably. Note that there are some exceptions for now,
like pg_ctl or pg_upgrade that use their own logging logic. A couple of
negative TAP tests required some adjustments for the new errors
generated.
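In outline, the shared routine is a conventional strtol() wrapper, something
like the following (a hedged sketch; the helper name is illustrative, not
necessarily what this commit introduced):

    bool
    parse_int32_option(const char *str, int32 *result)
    {
        char   *endptr;
        long    val;

        errno = 0;
        val = strtol(str, &endptr, 10);

        if (*str == '\0' || *endptr != '\0')
            return false;       /* invalid (empty or junk-trailing) value */
        if (errno == ERANGE || val < PG_INT32_MIN || val > PG_INT32_MAX)
            return false;       /* out-of-range value */

        *result = (int32) val;
        return true;
    }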
pg_dump and pg_restore tracked the maximum number of parallel jobs
within the option parsing. The code is refactored a bit to track that
in the code dedicated to parallelism instead.
Author: Kyotaro Horiguchi, Michael Paquier
Reviewed-by: David Rowley, Álvaro Herrera
Discussion: https://postgr.es/m/CALj2ACXqdG9WhqVoJ9zYf-iZt7sgK7Szv5USs=he6NnWQ2ofTA@mail.gmail.com
Animals running in Czech locale failed. I could try to find table names
that don't have this problem, but it seems simpler to just use the C
locale.
Per buildfarm
Renaming triggers on partitioned tables had two problems: first,
it did not recurse to renaming the triggers on the partitions; and
second, it failed to prohibit renaming clone triggers. Having triggers
with different names in partitions is pointless, and furthermore pg_dump
would not preserve names for partitions anyway.
Not backpatched -- making the ALTER TRIGGER throw an error in stable
versions might cause problems for existing scripts.
Co-authored-by: Arne Roland <A.Roland@index.de>
Co-authored-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Discussion: https://postgr.es/m/d0fd7040c2fb4de1a111b9d9ccc456b8@index.de
This reverts commit 91d395f, to avoid running those tests on Windows.
The tests are globally stable across all buildfarm members, except
fairywren (crash of pg_receivewal) and bowerbird (SIGBREAK preventing
the buildfarm run from completing). Those errors are rather strange, as
other hosts with very similar characteristics are able to run those
tests without breaking a sweat.
For now, disable those tests on Windows to turn back the buildfarm to
green.
Per discussion with Andrew Dunstan.
Discussion: https://postgr.es/m/9040d5ed-6462-66a4-07ac-2923785ae563@dunslane.net
Code inlined by LLVM can crash or fail with "Relocation type not
implemented yet!" if it tries to access thread local variables. Don't
inline such code.
Back-patch to 11, where LLVM arrived. Bug #16696.
Author: Dmitry Marakasov <amdmi3@amdmi3.ru>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/16696-29d944a33801fbfe@postgresql.org