Also add to the existing rather half-baked description of PROFILE,
which does exactly the same thing, but I think people use it differently.
Discussion: https://postgr.es/m/16461.1487361849@sss.pgh.pa.us
Make the typedefs for output plugins consistent with project style;
they were previously not even consistent with each other as to layout
or inclusion of parameter names. Make the documentation look the same,
and fix errors therein (missing and misdescribed parameters).
Back-patch because of the documentation bugs.
The loops in ExecHashJoinNewBatch(), ExecHashIncreaseNumBatches(), and
ExecHashRemoveNextSkewBucket() are all capable of iterating over many
tuples without ever doing a CHECK_FOR_INTERRUPTS, so that the backend
might fail to respond to SIGINT or SIGTERM for an unreasonably long time.
Fix that. In the case of ExecHashJoinNewBatch(), it seems useful to put
the added CHECK_FOR_INTERRUPTS into ExecHashJoinGetSavedTuple() rather
than directly in the loop, because that will also ensure that both
principal code paths through ExecHashJoinOuterGetTuple() will do a
CHECK_FOR_INTERRUPTS, which seems like a good idea to avoid surprises.
Back-patch to all supported branches.
Tom Lane and Thomas Munro
Discussion: https://postgr.es/m/6044.1487121720@sss.pgh.pa.us
It wouldn't complete "TO" after the variable name, which is certainly
minor enough. But since we do complete "TO" after "SET variable ...",
and since this case used to work pre-9.6, I think this is a bug.
Also, fix the query used to collect the variable names; whoever last
touched it evidently didn't understand how the pieces are supposed
to fit together. It accidentally worked anyway, because readline
ignores irrelevant completions, but it was randomly unlike the ones
around it, and could be a source of actual bugs if someone copied
it as a prototype for another query.
Jeff Janes noted that the error cursor position shown for some errors
would vary when operator_precedence_warning is turned on. We'd prefer
that option to have no undocumented effects, so this isn't desirable.
To fix, make sure that an AEXPR_PAREN node has the same exprLocation
as its child node.
(Note: it would be a little cheaper to use @2 here instead of an
exprLocation call, but there are cases where that wouldn't produce
the identical answer, so don't do it like that.)
Back-patch to 9.5 where this feature was introduced.
Discussion: https://postgr.es/m/CAMkU=1ykK+VhhcQ4Ky8KBo9FoaUJH3f3rDQB8TkTXi-ZsBRUkQ@mail.gmail.com
In combination with 569174f1be, which
taught the btree AM how to perform parallel index scans, this allows
parallel index scan plans on btree indexes. This infrastructure
should be general enough to support parallel index scans for other
index AMs as well, if someone updates them to support parallel
scans.
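For example (table, index, and settings here are purely illustrative), a query
that previously could use only a parallel sequential scan might now be planned
with a parallel index scan:
    SET max_parallel_workers_per_gather = 2;
    EXPLAIN SELECT count(*) FROM orders WHERE order_id BETWEEN 1 AND 100000;
    -- the plan may now include a "Parallel Index Scan using orders_pkey"
    -- node underneath a Gather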
Amit Kapila, reviewed and tested by Anastasia Lubennikova, Tushar
Ahuja, and Haribabu Kommi, and me.
When min_parallel_relation_size was added, the only supported type
of parallel scan was a parallel sequential scan, but there are
pending patches for parallel index scan, parallel index-only scan,
and parallel bitmap heap scan. Those patches introduce two new
types of complications: first, what's relevant is not really the
total size of the relation but the portion of it that we will scan;
and second, index pages and heap pages shouldn't necessarily be
treated in exactly the same way. Typically, the number of index
pages will be quite small, but that doesn't necessarily mean that
a parallel index scan can't pay off.
Therefore, we introduce min_parallel_table_scan_size, which works
out a degree of parallelism for scans based on the number of table
pages that will be scanned (and which is therefore equivalent to
min_parallel_relation_size for parallel sequential scans) and also
min_parallel_index_scan_size which can be used to work out a degree
of parallelism based on the number of index pages that will be
scanned.
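For example, the two thresholds can now be tuned independently (the values
below are purely illustrative, not recommendations):
    SET min_parallel_table_scan_size = '8MB';    -- based on table pages to be scanned
    SET min_parallel_index_scan_size = '512kB';  -- based on index pages to be scanned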
Amit Kapila and Robert Haas
Discussion: http://postgr.es/m/CAA4eK1KowGSYYVpd2qPpaPPA5R90r++QwDFbrRECTE9H_HvpOg@mail.gmail.com
Discussion: http://postgr.es/m/CAA4eK1+TnM4pXQbvn7OXqam+k_HZqb0ROZUMxOiL6DWJYCyYow@mail.gmail.com
Commit 85c11324ca renamed pg_resetxlog
to pg_resetwal, but didn't make pg_upgrade smart enough to cope with
the situation.
Michael Paquier, per a complaint from Jeff Janes
The core of the functionality was already implemented when
pg_import_system_collations was added. This just exposes it as an
option in the SQL command.
This isn't exposed to the optimizer or the executor yet; we'll add
support for those things in a separate patch. But this puts the
basic mechanism in place: several processes can attach to a parallel
btree index scan, and each one will get a subset of the tuples that
would have been produced by a non-parallel scan. Each index page
becomes the responsibility of a single worker, which then returns
all of the TIDs on that page.
Rahila Syed, Amit Kapila, Robert Haas, reviewed and tested by
Anastasia Lubennikova, Tushar Ahuja, and Haribabu Kommi.
This doesn't do anything to make Param nodes anything other than
parallel-restricted, so this only helps with uncorrelated subplans,
and it's not necessarily very cheap because each worker will run the
subplan separately (just as a Hash Join will build a separate copy of
the hash table in each participating process), but it's a first step
toward supporting cases that are more likely to help in practice, and
is occasionally useful on its own.
Amit Kapila, reviewed and tested by Rafia Sabih, Dilip Kumar, and
me.
Discussion: http://postgr.es/m/CAA4eK1+e8Z45D2n+rnDMDYsVEb5iW7jqaCH_tvPMYau=1Rru9w@mail.gmail.com
The xlog-specific headers need to be included in both frontend code -
specifically, pg_waldump - and the backend, but the remainder of the
private headers for each index are only needed by the backend. By
splitting the xlog stuff out into separate headers, pg_waldump pulls
in fewer backend headers, which is a good thing.
Patch by me, reviewed by Michael Paquier and Andres Freund, per a
complaint from Dilip Kumar.
Discussion: http://postgr.es/m/CA+TgmoZ=F=GkxV0YEv-A8tb+AEGy_Qa7GSiJ8deBKFATnzfEug@mail.gmail.com
I noticed while hacking on join UNION transforms that planner.c's
function get_base_rel_indexes() just duplicates the functionality of
get_relids_in_jointree(). It doesn't even have the excuse of being
older code :-(. Drop it and use the latter function instead.
This module was intended to ease migrations of applications that used
the pre-8.3 version of text search to the in-core version introduced
in that release. However, since all pre-8.3 releases of the database
have been out of support for more than 5 years at this point, we
expect that few people are depending on it anymore. If some
people still need it, nothing prevents it from being maintained as a
separate extension, outside of core.
Discussion: http://postgr.es/m/CA+Tgmob5R8aDHiFRTQsSJbT1oreKg2FOSBrC=2f4tqEH3dOMAg@mail.gmail.com
The ALTER TABLE ALTER TYPE implementation can issue DROP INDEX and
CREATE INDEX to refit existing indexes for the new column type. Since
this CREATE INDEX is an implementation detail of an index alteration,
the ensuing DefineIndex() should skip ACL checks specific to index
creation. It already skips the namespace ACL check. Make it skip the
tablespace ACL check, too. Back-patch to 9.2 (all supported versions).
Reviewed by Tom Lane.
This stores a data type, required to be an integer type, with the
sequence. The sequence's min and max values default to the range
supported by the type, and they cannot be set to values exceeding that
range. The internal implementation of the sequence is not affected.
Change the serial types to create sequences of the appropriate type.
This makes sure that the min and max values of the sequence for a serial
column match the range of values supported by the table column. So the
sequence can no longer overflow the table column.
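For example (names here are illustrative):
    CREATE SEQUENCE seq2 AS smallint;
creates a sequence whose MAXVALUE defaults to 32767, and a column declared as
smallserial now gets a smallint-typed sequence behind it, so the sequence
cannot hand out values the column cannot store.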
This also makes monitoring for sequence exhaustion/wraparound easier,
which currently requires various contortions to cross-reference the
sequences with the table columns they are used with.
This commit also effectively reverts the pg_sequence column reordering
in f3b421da5f, because the new seqtypid
column allows us to fill the hole in the struct and create a more
natural overall column ordering.
Reviewed-by: Steve Singer <steve@ssinger.info>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
The S/390 members of the buildfarm are showing failures indicating
that they're having trouble with the rint() calls I added yesterday.
There's no good reason for that, and I wonder if it is a compiler bug
similar to the one we worked around in d9476b838. Try to fix it using
the same method as before, namely to store the result of rint() back
into a "double" variable rather than immediately converting to int64.
(This isn't entirely waving a dead chicken, since on machines with
wider-than-double float registers, the extra store forces a width
conversion. I don't know if S/390 is like that, but it seems worth
trying.)
In passing, merge duplicate ereport() calls in float8_timestamptz().
Per buildfarm.
Commit f82ec32ac3 renamed the pg_xlog
directory to pg_wal. To make things consistent, and because "xlog" is
terrible terminology for either "transaction log" or "write-ahead log",
rename all SQL-callable functions that contain "xlog" in the name to
instead contain "wal". (Note that this may pose an upgrade hazard for
some users.)
Similarly, rename the xlog_position argument of the functions that
create slots to be called wal_position.
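For example (the LSN argument below is arbitrary):
    SELECT pg_walfile_name('0/1000000');   -- was pg_xlogfile_name()
    SELECT pg_switch_wal();                -- was pg_switch_xlog()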
Discussion: https://www.postgresql.org/message-id/CA+Tgmob=YmA=H3DbW1YuOXnFVgBheRmyDkWcD9M8f=5bGWYEoQ@mail.gmail.com
Even if we don't emit definitions for SH_ALLOCATE and SH_FREE, we
still need prototypes. The user can't define them before including
simplehash.h because SH_TYPE isn't available yet.
For the allocator to be able to access private_data, it needs to
become an argument to SH_CREATE. Previously we relied on callers
to set that after returning from SH_CREATE, but SH_CREATE calls
SH_ALLOCATE before returning.
Dilip Kumar, reviewed by me.
It did that to verify that the page was an overflow page rather than
anything else, but that means that checking the status of all the
overflow bits requires reading the entire index. So don't do that.
The new code validates that the page is not a primary bucket page
or bitmap page by looking at the metapage, so that using this on
large numbers of pages can be reasonably efficient.
Ashutosh Sharma, per a complaint from me, and with further
modifications by me.
It's always been possible for index AMs to cache data across successive
amgettuple calls within a single SQL command: the IndexScanDesc.opaque
field is meant for precisely that. However, no comparable facility
exists for amortizing setup work across successive aminsert calls.
This patch adds such a feature and teaches GIN, GIST, and BRIN to use it
to amortize catalog lookups they'd previously been doing on every call.
(The other standard index AMs keep everything they need in the relcache,
so there's little to improve there.)
For GIN, the overall improvement in a statement that inserts many rows
can be as much as 10%, though it seems a bit less for the other two.
In addition, this makes a really significant difference in runtime
for CLOBBER_CACHE_ALWAYS tests, since in those builds the repeated
catalog lookups are vastly more expensive.
The reason this has been hard up to now is that the aminsert function is
not passed any useful place to cache per-statement data. What I chose to
do is to add suitable fields to struct IndexInfo and pass that to aminsert.
That's not widening the index AM API very much because IndexInfo is already
within the ken of ambuild; in fact, by passing the same info to aminsert
as to ambuild, this is really removing an inconsistency in the AM API.
Discussion: https://postgr.es/m/27568.1486508680@sss.pgh.pa.us
A proposed patch, also by Thomas and in the same thread, would change
the output order of these. Independent of the follow-up patches
getting committed, nailing down the order in these specific tests at
worst seems harmless.
Author: Thomas Munro
Discussion: https://postgr.es/m/CAEepm=1D4-tP7j7UAgT_j4ZX2j4Ehe1qgZQWFKBMb8F76UW5Rg@mail.gmail.com
When converting a float value to integer microseconds, we should be careful
to round the value to the nearest integer, typically with rint(); simply
assigning to an int64 variable will truncate, causing apparently off-by-one
values in cases that should work. Most places in the datetime code got
this right, but not these two.
float8_timestamptz() is new as of commit e511d878f (9.6). Previous
versions effectively depended on interval_mul() to do roundoff correctly,
which it does, so this fixes an accuracy regression in 9.6.
The problem in make_interval() dates to its introduction in 9.4. Aside
from being careful to round not truncate, let's incorporate the hours and
minutes inputs into the result with exact integer arithmetic, rather than
risk introducing roundoff error where there need not have been any.
float8_timestamptz() problem reported by Erik Nordström, though this is
not his proposed patch. make_interval() problem found by me.
Discussion: https://postgr.es/m/CAHuQZDS76jTYk3LydPbKpNfw9KbACmD=49dC4BrzHcfPv6yA1A@mail.gmail.com
When the new GUC wal_consistency_checking is set to a non-empty value,
it triggers recording of additional full-page images, which are
compared on the standby against the results of applying the WAL record
(without regard to those full-page images). Allowable differences
such as hints are masked out, and the resulting pages are compared;
any difference results in a FATAL error on the standby.
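For example, checking might be enabled for a couple of resource managers like
this (the list shown is only illustrative; 'all' enables every resource manager):
    ALTER SYSTEM SET wal_consistency_checking = 'heap,btree';
    SELECT pg_reload_conf();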
Kuntal Ghosh, based on earlier patches by Michael Paquier and Heikki
Linnakangas. Extensively reviewed and revised by Michael Paquier and
by me, with additional reviews and comments from Amit Kapila, Álvaro
Herrera, Simon Riggs, and Peter Eisentraut.
In the large DO block, collect row TIDs into array variables instead of
creating and dropping a pile of temporary tables. In a normal build,
this reduces the brin test script's runtime from about 1.1 sec to 0.4 sec
on my workstation. That's not all that exciting perhaps, but in a
CLOBBER_CACHE_ALWAYS test build, the runtime drops from 20 min to 17 min,
which is a little more useful. In combination with some other changes
I plan to propose, this will help provide a noticeable reduction in
cycle time for CLOBBER_CACHE_ALWAYS buildfarm critters.
This is infrastructure for a pending patch to allow parallel bitmap
heap scans.
Dilip Kumar, reviewed (in earlier versions) by Andres Freund and
(more recently) by me. Some further renaming by me, also.
This avoids a very significant amount of buffer manager traffic and
contention when scanning hash indexes, because it's no longer
necessary to lock and pin the metapage for every scan. We do need
some way of figuring out when the cache is too stale to use any more,
so that when we lock the primary bucket page to which the cached
metapage points us, we can tell whether a split has occurred since we
cached the metapage data. To do that, we use the hash_prevblkno field
in the primary bucket page, which would otherwise always be set to
InvalidBlockNumber.
This patch contains code so that it will continue working (although
less efficiently) with hash indexes built before this change, but
perhaps we should consider bumping the hash version and ripping out
the compatibility code. That decision can be made later, though.
Mithun Cy, reviewed by Jesper Pedersen, Amit Kapila, and by me.
Before committing, I made a number of cosmetic changes to the last
posted version of the patch, adjusted _hash_getcachedmetap to be more
careful about order of operation, and made some necessary updates to
the pageinspect documentation and regression tests.
Before, reading pg_sequences.last_value would fail unless the user had
appropriate sequence permissions, which would make the pg_sequences view
cumbersome to use. Instead, return null rather than the real value when
the user lacks the appropriate permissions.
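For example, an unprivileged user can now run something like
    SELECT schemaname, sequencename, last_value FROM pg_sequences;
and simply sees NULL in last_value for sequences it lacks privileges on,
instead of getting an error.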
From: Michael Paquier <michael.paquier@gmail.com>
Reported-by: Shinoda, Noriyoshi <noriyoshi.shinoda@hpe.com>
The problem with the original coding here is that we might receive (and
clear) a relcache invalidation signal for the target relation down inside
one of the index_open calls we're doing. Since the target is open, we
would not drop the relcache entry, just reset its rd_indexvalid and
rd_indexlist fields. But RelationGetIndexAttrBitmap() kept going, and
would eventually cache and return potentially-obsolete attribute bitmaps.
The case where this matters is where the inval signal was from a CREATE
INDEX CONCURRENTLY telling us about a new index on a formerly-unindexed
column. (In all other cases, the lock we hold on the target rel should
prevent any concurrent change in index state.) Even just returning the
stale attribute bitmap is not such a problem, because it shouldn't matter
during the transaction in which we receive the signal. What hurts is
caching the stale data, because it can survive into later transactions,
breaking CREATE INDEX CONCURRENTLY's expectation that later transactions
will not create new broken HOT chains. The upshot is that there's a window
for building corrupted indexes during CREATE INDEX CONCURRENTLY.
This patch fixes the problem by rechecking that the set of index OIDs
is still the same at the end of RelationGetIndexAttrBitmap() as it was
at the start. If not, we loop back and try again. That's a little
more than is strictly necessary to fix the bug --- in principle, we
could return the stale data but not cache it --- but it seems like a
bad idea on general principles for relcache to return data it knows
is stale.
There might be more hazards of the same ilk, or there might be a better
way to fix this one, but this patch definitely improves matters and seems
unlikely to make anything worse. So let's push it into today's releases
even as we continue to study the problem.
Pavan Deolasee and myself
Discussion: https://postgr.es/m/CABOikdM2MUq9cyZJi1KyLmmkCereyGp5JQ4fuwKoyKEde_mzkQ@mail.gmail.com
Commit 665d1fad9 introduced rd_pkindex, and made RelationGetIndexList
responsible for updating it, but didn't bother to fix
RelationGetIndexList's header comment to say so.
Modify FETCH_COUNT to always have a defined value, like other control
variables, mainly so it will always appear in "\set" output.
Add hooks to force HISTSIZE to be defined and require it to have an
integer value. (I don't see any point in allowing it to be set to
non-integral values.)
Add hooks to force IGNOREEOF to be defined and require it to have an
integer value. Unlike the other cases, here we're trying to be
bug-compatible with a rather bogus externally-defined behavior, so I think
we need to continue to allow "\set IGNOREEOF whatever". Fix it so that
the substitution hook silently replaces non-numeric values with "10",
so that the stored value always reflects what we're really doing.
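For example (hypothetical session), after the change:
    \set IGNOREEOF whatever
    \echo :IGNOREEOF
    10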
Add a dummy assign hook for HISTFILE, just so it's always in
variables.c's list. We can't require it to be defined always, because
that would break the interaction with the PSQL_HISTORY environment
variable, so there isn't any change in visible behavior here.
Remove tab-complete.c's private list of known variable names, since that's
really a maintenance nuisance. Given the preceding changes, there are no
control variables it won't show anyway. This does mean that if for some
reason you've unset one of the status variables (DBNAME, HOST, etc), that
variable would not appear in tab completion for \set. But I think that's
fine, for at least two reasons: we shouldn't be encouraging people to use
those variables as regular variables, and if someone does do so anyway,
why shouldn't it act just like a regular variable?
Remove ugly and no-longer-used-anywhere GetVariableNum(). In general,
future additions of integer-valued control variables should follow the
paradigm of adding an assign hook using ParseVariableNum(), so there's
no reason to expect we'd need this again later.
Discussion: https://postgr.es/m/17516.1485973973@sss.pgh.pa.us
Coverity complained that we might pass a null pointer to strcmp()
if PQresultErrorField were to return NULL. That shouldn't be possible,
since the server is supposed to always provide some SQLSTATE or other
in an error message. But we usually defend against such hazards, and
it only takes a little more code to do so here.
There's no good reason to think this is a live bug, so no back-patch.
If we forcibly place a Material node atop a finished subplan, we need
to move any initPlans attached to the subplan up to the Material node,
in order to keep SS_finalize_plan() happy. I'd figured this out in
commit 7b67a0a49 for the case of materializing a cursor plan, but out of
an abundance of caution, I put the initPlan movement hack at the call
site for that case, rather than inside materialize_finished_plan().
That was the wrong thing, because it turns out to also be necessary for
the only other caller of materialize_finished_plan(), ie subselect.c.
We lacked any test cases that exposed the mistake, but bug#14524 from
Wei Congrui shows that it's possible to get an initPlan reference into
the top tlist in that case too, and then SS_finalize_plan() complains.
Hence, move the hack into materialize_finished_plan().
In HEAD, also relocate some recently-added tests in subselect.sql, which
I'd unthinkingly dropped into the middle of a sequence of related tests.
Report: https://postgr.es/m/20170202060020.1400.89021@wrigleys.postgresql.org
Given a targetlist like "srf(x), f(srf(x))", split_pathtarget_at_srfs()
decided that it needed two levels of ProjectSet nodes, failing to notice
that the two SRF calls are textually equal(). Because of that, setrefs.c
would convert the upper ProjectSet's tlist to "Var1, f(Var1)" (where Var1
represents a reference to the srf(x) output of the lower ProjectSet).
This triggered an assertion in nodeProjectSet.c complaining that it found
no SRFs to evaluate, as reported by Erik Rijkers.
What we want in such a case is to evaluate srf(x) only once and use a plain
Result node to compute "Var1, f(Var1)"; that gives results similar to what
previous versions produced, whereas allowing srf(x) to be evaluated again
in an upper ProjectSet would square the number of rows emitted.
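For instance (using generate_series() purely as an illustration), a target list
such as
    SELECT generate_series(1, 3), abs(generate_series(1, 3));
should produce three rows, with both SRF calls advancing in lockstep, not nine.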
Furthermore, even if the SRF calls aren't textually identical, we want them
to be evaluated in lockstep, because that's what happened in the old
implementation. But split_pathtarget_at_srfs() got this completely wrong,
using two levels of ProjectSet for a case like "srf(x), f(srf(y))".
Hence, rewrite split_pathtarget_at_srfs() from the ground up so that it
groups SRFs according to the depth of nesting of SRFs in their arguments.
This is pretty much how we envisioned that working originally, but I blew
it when it came to implementation.
In passing, optimize the case of target == input_target, which I noticed
is not only possible but quite common.
Discussion: https://postgr.es/m/dcbd2853c05d22088766553d60dc78c6@xs4all.nl
There is no particularly good reason to limit this value to 1000,
so increase the limit to INT_MAX / 2, the same limit we use for
shared_buffers. It's not clear how much practical effect larger
settings will have, but there seems no harm in letting people try it.
Jim Nasby, less a comment change I stripped out.
Discussion: http://postgr.es/m/f6e58a22-030b-eb8a-5457-f62fb08d701c@BlueTreble.com
Patch by Jesper Pedersen and Ashutosh Sharma, with some error handling
improvements by me. Tests from Peter Eisentraut. Reviewed by Álvaro
Herrera, Michael Paquier, Jesper Pedersen, Jeff Janes, Peter
Eisentraut, Amit Kapila, Mithun Cy, and me.
Discussion: http://postgr.es/m/e2ac6c58-b93f-9dd9-f4e6-d6d30add7fdf@redhat.com
Doing so doesn't seem to be within the purpose of the per user
connection limits, and has particularly unfortunate effects in
conjunction with parallel queries.
Backpatch to 9.6 where parallel queries were introduced.
David Rowley, reviewed by Robert Haas and Albe Laurenz.
Add CatalogTupleInsertWithInfo and CatalogTupleUpdateWithInfo to let
callers use the CatalogTupleXXX abstraction layer even in cases where
we want to share the results of CatalogOpenIndexes across multiple
inserts/updates for efficiency. This finishes the job begun in commit
2f5c9d9c9, by allowing some remaining simple_heap_insert/update
calls to be replaced. The abstraction layer is now complete enough
that we don't have to export CatalogIndexInsert at all anymore.
Also, this fixes several places in which 2f5c9d9c9 introduced performance
regressions by using retail CatalogTupleInsert or CatalogTupleUpdate even
though the previous coding had been able to amortize CatalogOpenIndexes
work across multiple tuples.
A possible future improvement is to arrange for the indexing.c functions
to cache the CatalogIndexState somewhere, maybe in the relcache, in which
case we could get rid of CatalogTupleInsertWithInfo and
CatalogTupleUpdateWithInfo again. But that's a task for another day.
Discussion: https://postgr.es/m/27502.1485981379@sss.pgh.pa.us
This extends the work done in commit 2f5c9d9c9 to provide a more nearly
complete abstraction layer hiding the details of index updating for catalog
changes. That commit only invented abstractions for catalog inserts and
updates, leaving nearby code for catalog deletes still calling the
heap-level routines directly. That seems rather ugly from here, and it
does little to help if we ever want to shift to a storage system in which
indexing work is needed at delete time.
Hence, create a wrapper function CatalogTupleDelete(), and replace calls
of simple_heap_delete() on catalog tuples with it. There are now very
few direct calls of [simple_]heap_delete remaining in the tree.
Discussion: https://postgr.es/m/462.1485902736@sss.pgh.pa.us
"\set" with no arguments displays all defined variables, but it does so
in the order that they appear in variables.c's list, which previously
was mostly creation order. That makes the list ugly and hard to find
things in, and it exposes some psql implementation details to users.
(For instance, ordinary variables will move to the bottom of the list
if unset and set again, but variables that have hooks won't.)
Fix that by keeping the list in alphabetical order at all times, which
isn't much more complicated than breaking out of the insertion search
loops once we reach an entry that should be after the one to be inserted.
Discussion: https://postgr.es/m/31785.1485900786@sss.pgh.pa.us
This commit improves on the results of commit 511ae628f in two ways:
1. It restores the historical behavior that "\set FOO" is interpreted
as setting FOO to "on", if FOO is a boolean control variable. We
already found one test script that was expecting that behavior, and
the psql documentation certainly does nothing to discourage people
from assuming that would work, since it often says just "if FOO is set"
when describing the effects of a boolean variable. However, now this
case will result in actually setting FOO to "on", not an empty string.
2. It arranges for an "\unset" of a control variable to set the value
back to its default value, rather than becoming apparently undefined.
The control variables are also initialized that way at psql startup.
In combination, these things guarantee that a control variable always
has a displayable value that reflects what psql is actually doing.
That is a pretty substantial usability improvement.
The implementation involves adding a second type of variable hook function
that is able to replace a proposed new value (including NULL) with another
one. We could alternatively have complicated the API of the assign hook,
but this way seems better since many variables can share the same
substitution hook function.
Also document the actual behavior of these variables more fully,
including covering assorted behaviors that were there before but
never documented.
This patch also includes some minor cleanup that should have been in
511ae628f but was missed.
Patch by me, but it owes a lot to discussions with Daniel Vérité.
Discussion: https://postgr.es/m/9572.1485821620@sss.pgh.pa.us
The rule is that if pg_authid.rolpassword begins with "md5" and has the
right length, it's an MD5 hash, otherwise it's a plaintext password. The
idiom has been to use isMD5() to check for that, but that gets awkward,
when we add new kinds of verifiers, like the verifiers for SCRAM
authentication in the pending SCRAM patch set. Replace isMD5() with a new
get_password_type() function, so that when new verifier types are added, we
don't need to remember to modify every place that currently calls isMD5(),
to also recognize the new kinds of verifiers.
Also, use the new plain_crypt_verify function in passwordcheck, so that it
doesn't need to know about MD5, or in the future, about other kinds of
hashes or password verifiers.
Reviewed by Michael Paquier and Peter Eisentraut.
Discussion: https://www.postgresql.org/message-id/2d07165c-1793-e243-a2a9-e45b624c7580@iki.fi
The "Simplify tape block format" commit ignored the rule that blocks
returned by ltsGetFreeBlock() must be written out in the same order, at
least in the first write pass. To fix, relax that requirement, by making
ltsWriteBlock() detect if it's about to create a "hole" in the
underlying BufFile, and fill it with zeros instead.
Reported, analysed, and reviewed by Peter Geoghegan.
Discussion: https://www.postgresql.org/message-id/CAM3SWZRWdNtkhiG0GyiX_1mUAypiK3dV6-6542pYe2iEL-foTA@mail.gmail.com
The addition of a TestForOldSnapshot() call here has made the
referent of this comment slightly less clear, so move the comment
to compensate.
Amit Kapila (as part of the parallel index scan patch)
Split the existing CatalogUpdateIndexes into two different routines,
CatalogTupleInsert and CatalogTupleUpdate, which do both the heap
insert/update and the index update. This removes over 300 lines of
boilerplate code all over src/backend/catalog/ and src/backend/commands.
The resulting code is much more pleasing to the eye.
Also, by encapsulating what happens in detail during an UPDATE, this
facilitates the upcoming WARM patch, which is going to add a few more
lines to the update case making the boilerplate even more boring.
The original CatalogUpdateIndexes is removed; there was only one use
left, and since it's just three lines, we can as well expand it in place
there. We could keep it, but WARM is going to break all the UPDATE
out-of-core callsites anyway, so there seems to be no benefit in doing
so.
Author: Pavan Deolasee
Discussion: https://www.postgr.es/m/CABOikdOcFYSZ4vA2gYfs=M2cdXzXX4qGHeEiW3fu9PCfkHLa2A@mail.gmail.com
In commit 23f34fa, we changed how ACLs were handled to use the new
pg_init_privs catalog and to dump out the ACL commands as REVOKE+GRANT
combinations instead of trying to REVOKE all rights always and then
GRANT back just the ones which were in place.
Unfortunately, the DEFAULT PRIVILEGES system didn't quite get the
correct treatment with this change and ended up (incorrectly) only
including positive GRANTs instead of both the REVOKEs and GRANTs
necessary to preserve the correct privileges.
There are only a couple cases where such REVOKEs are possible because,
generally speaking, there are few rights which exist on objects by
default to be revoked.
Examples of REVOKEs which weren't being correctly preserved are when
privileges are REVOKE'd from the creator/owner, like so:
ALTER DEFAULT PRIVILEGES
FOR ROLE myrole
REVOKE SELECT ON TABLES FROM myrole;
or when other default privileges are being revoked, such as EXECUTE
rights granted to public for functions:
ALTER DEFAULT PRIVILEGES
FOR ROLE myrole
REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
Fix this by correctly working out what the correct REVOKE statements are
(if any) and dump them out, just as we do for everything else.
Noticed while developing additional regression tests for pg_dump, which
will be landing shortly.
Back-patch to 9.6 where the bug was introduced.
The pg_dump TAP tests have gotten pretty far from what perltidy thinks
they should be, so fix that, and in passing use long-form argument names
with arguments passed via "=" in a similar vein to 58da833.
No functional changes here, just whitespace and changing runs from
"-f" to "--file=", and similar.
As pointed out by Alvaro, we actually use perltidy on the perl scripts
in the source tree, so go back to the results of a perltidy run for the
test_pg_dump TAP script.
To make it look slightly less tragic, I changed most of the independent
arguments into long-form single arguments (eg: -f file.sql changed to be
--file=file.sql) to avoid having them confusingly split across lines due
to perltidy.
Back-patch to 9.6, as the last patch was.
next_token() oddly set its buffer space consumption limit to one before
the last char position in the buffer, not the last as you'd expect.
The reason is there was once an ugly kluge to mark keywords by appending
a newline to them, potentially requiring one more byte. Commit e5e2fc842
removed that kluge, but failed to notice that the length limit could be
increased.
Also, remove some vestigial handling of newline characters in the buffer.
That was left over from when this function read the file directly using
getc(). Commit 7f49a67f9 changed it to read from a buffer, from which
tokenize_file had already removed the only possible occurrence of newline,
but did not simplify this function in consequence.
Also, ensure that we don't return with *lineptr set to someplace past the
terminating '\0'; that would be catastrophic if a caller were to ask for
another token from the same line. This is just latent since no callers
actually do call again after a "false" return; but considering that it was
actually costing us extra code to do it wrong, we might as well make it
bulletproof.
Noted while reviewing pg_hba_file_rules patch.
This view is designed along the same lines as pg_file_settings, to wit
it shows what is currently in the file, not what the postmaster has
loaded as the active settings. That allows it to be used to pre-vet
edits before issuing SIGHUP. As with the earlier view, go out of our
way to allow errors in the file to be reflected in the view, to assist
that use-case.
(We might at some point invent a view to show the current active settings,
but this is not that patch; and it's not trivial to do.)
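For example, a pending edit can be sanity-checked with a query along these
lines (column list abbreviated):
    SELECT line_number, type, database, user_name, auth_method, error
    FROM pg_hba_file_rules;
where a non-null error column flags lines the server would reject on reload.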
Haribabu Kommi, reviewed by Ashutosh Bapat, Michael Paquier, Simon Riggs,
and myself
Discussion: https://postgr.es/m/CAJrrPGerH4jiwpcXT1-46QXUDmNp2QDrG9+-Tek_xC8APHShYw@mail.gmail.com
Quite a few of our built-in system views were not exercised anywhere
in the regression tests. This is perhaps not so exciting for the ones
that are simple projections/joins of system catalogs, but for the ones
that are wrappers for set-returning C functions, the omission translates
directly to lack of test coverage for those functions.
In many cases, the reason for the omission is that the view doesn't have
much to do with any specific SQL feature, so there's no natural place to
test it. To remedy that, invent a new script sysviews.sql that's dedicated
to testing SRF-based views. Move a couple of tests that did fit this
charter into the new script, and add simple "count(*)" based tests of
other views within the charter. That's enough to ensure we at least
exercise the main code path through the SRF, although it does little to
prove that the output is sane.
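The added tests are intentionally minimal; for example, something along the
lines of
    select count(*) >= 0 as ok from pg_prepared_xacts;
merely proves that the underlying SRF runs without crashing, without depending
on its output.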
More could be done here, no doubt, and I hope someone will think about
how we can test these views more thoroughly. But this is a starting
point.
Discussion: https://postgr.es/m/19359.1485723741@sss.pgh.pa.us
Previously, if the user set a special variable such as ECHO to an
unrecognized value, psql would bleat but store the new value anyway, and
then fall back to a default setting for the behavior controlled by the
variable. This was agreed to be a not particularly good idea. With
this patch, invalid values result in an error message and no change in
state.
(But this applies only to variables that affect psql's behavior; purely
informational variables such as ENCODING can still be set to random
values.)
To do this, modify the API for psql's assign-hook functions so that they
can return an OK/not OK result, and give them the responsibility for
printing error messages when they reject a value. Adjust the APIs for
ParseVariableBool and ParseVariableNum to support the new behavior
conveniently.
In passing, document the variable VERSION, which had somehow escaped being documented.
And improve the quite-inadequate commenting in psql/variables.c.
Daniel Vérité, reviewed by Rahila Syed, some further tweaking by me
Discussion: https://postgr.es/m/7356e741-fa59-4146-a8eb-cf95fd6b21fb@mm
DST law changes in northern Cyprus (new zone Asia/Famagusta), Russia (new
zone Europe/Saratov), Tonga, Antarctica/Casey. Historical corrections for
Asia/Aqtau, Asia/Atyrau, Asia/Gaza, Asia/Hebron, Italy, Malta. Replace
invented zone abbreviation "TOT" for Tonga with numeric UTC offset; but
as in the past, we'll keep accepting "TOT" for input.
In commit 6c268df, pg_init_privs was added to track the initial
privileges of catalog objects and extensions. Unfortunately, that
commit didn't include understanding of ALTER EXTENSION ADD/DROP, which
allows the objects associated with an extension to be changed after the
initial CREATE EXTENSION script has been run.
As a result, ACLs for objects added through ALTER EXTENSION ADD were
not recorded into pg_init_privs, and we would end up including those
ACLs in pg_dump when we shouldn't have.
This commit corrects that by making sure to have pg_init_privs updated
when ALTER EXTENSION ADD/DROP is run, recording the permissions as they
are at ALTER EXTENSION ADD time, and removing any if/when ALTER
EXTENSION DROP is called.
This issue was pointed out by Moshe Jacobson as commentary on bug #14456
(which was actually a bug about versions prior to 9.6 not handling
custom ACLs on extensions correctly, an issue now addressed with
pg_init_privs in 9.6).
Back-patch to 9.6 where pg_init_privs was introduced.
The formatting of the perl hashes used in the TAP tests for test_pg_dump
was rather horribly inconsistent and made it more difficult than it
really should have been to add new tests or adjust what tests are for
what runs, etc.
Reformat to clean that all up.
Whitespace-only changes.
Currently, we only need this logic in order to cost a Bitmap Heap
Scan. But a pending patch for Parallel Bitmap Heap Scan also uses
it to help figure out how many workers to use for the scan, which
has to be determined prior to costing. So, move the logic to
a separate function to make that easier.
Dilip Kumar. The patch series of which this is a part has been
reviewed by Andres Freund, Amit Khendekar, Tushar Ahuja, Rafia
Sabih, Haribabu Kommi, and me; it is not clear from the email
discussion which of those people have looked specifically at this
part.
Discussion: http://postgr.es/m/CAFiTN-v3QYNJEZnnmKCeATuLbN-h9tMVfeEF0+BrouYDqjXgwg@mail.gmail.com
tokenize_file() now returns a single list of TokenizedLine structs,
carrying the same information as before. We were otherwise going to grow a
fourth list to deal with error messages, and that was getting a bit silly.
Haribabu Kommi, revised a bit by me
Discussion: https://postgr.es/m/CAJrrPGfbgbKsjYp=bgZXhMcgxoaGSoBb9fyjrDoOW_YymXv1Kw@mail.gmail.com
When I wrote commit ab1f0c822, I really missed the castNode() macro that
Peter E. had proposed shortly before. This back-fills the uses I would
have put it to. It's probably not all that significant, but there are
more assertions here than there were before, and conceivably they will
help catch any bugs associated with those representation changes.
I left behind a number of usages like "(Query *) copyObject(query_var)".
Those could have been converted as well, but Peter has proposed another
notational improvement that would handle copyObject cases automatically,
so I let that be for now.
The new macro allows casting from one NodeTag-based type to
another, while asserting that the conversion is valid. This replaces
the common pattern of doing a cast and an Assert(IsA(ptr, type))
nearby.
As this seems likely to be used pervasively, we decided to backpatch
the addition of this macro; otherwise, backpatched fixes would be
more likely not to work on back-branches.
On branches before 9.6, where we do not yet rely on inline functions
being available, the type assertion is only performed if PG_USE_INLINE
support is detected. The cast obviously is performed regardless.
For the benefit of verifying the macro compiles in the back-branches,
this commit contains a single use of the new macro. On master, a
somewhat larger conversion will be committed separately.
Author: Peter Eisentraut and Andres Freund
Reviewed-By: Tom Lane
Discussion: https://postgr.es/m/c5d387d9-3440-f5e0-f9d4-71d53b9fbe52@2ndquadrant.com
Backpatch: 9.2-
Our current DDL only allows a database name to be specified in COMMENT
ON DATABASE, which Andrew Dunstan reports to make this test fail on the
buildfarm. Remove the line until we gain a DDL command that allows the
current database to be operated on without having to specify it by
name.
Backpatch to 9.5, where these tests appeared.
Discussion: https://postgr.es/m/e6084b89-07a7-7e57-51ee-d7b8fc9ec864@2ndQuadrant.com
We maintained two separate expected files because log_cnt could be one
of two values. Rewrite the test so that we only need one file.
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Hot_standby_feedback could be reset by reload and worked correctly, but if
the server was restarted rather than reloaded, the xmin was not reset.
Force reset always if hot_standby_feedback is enabled at startup.
Ants Aasma, Craig Ringer
Reported-by: Ants Aasma
!foo means "the tsvector does not contain foo", and therefore it should
match an empty tsvector. ts_match_vq() overenthusiastically supposed
that an empty tsvector could never match any query, so it forcibly
returned FALSE, the wrong answer. Remove the premature optimization.
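A quick illustration:
    SELECT ''::tsvector @@ '!foo'::tsquery;   -- now returns true, previously false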
Our behavior on this point was inconsistent, because while seqscans and
GIST index searches both failed to match empty tsvectors, GIN index
searches would find them, since GIN scans don't rely on ts_match_vq().
That makes this certainly a bug, not a debatable definition disagreement,
so back-patch to all supported branches.
Report and diagnosis by Tom Dunstan (bug #14515); added test cases by me.
Discussion: https://postgr.es/m/20170126025524.1434.97828@wrigleys.postgresql.org
This improves readability a bit and may make future improvements easier.
In passing, make sure that the JB_ROOT_IS_XXX macros deliver boolean (0/1)
results; the previous coding was a bug hazard, though no actual bugs are
known.
Nikita Glukhov, extended a bit by me
Discussion: https://postgr.es/m/9e21a39c-c1d7-b9b5-44a0-c5345a5029f6@postgrespro.ru
There's really no situation where we don't want these unknown-to-text
conversions to happen. The alternative is failure anyway, and the one
caller that was passing "false" did so only because it expected the
case could not arise. Might as well simplify the code.
Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
Previously, type "unknown" was labeled as a base type in pg_type, which
perhaps had some sense to it because you were allowed to create tables with
unknown-type columns. But now that we don't allow that, it makes more
sense to label it a pseudo-type. This has the additional effects of
forbidding use of "unknown" as a domain base type, cast source or target
type, PL function argument or result type, or plpgsql local variable type;
all of which seem like good holes to plug.
Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
Previously, we left such literals alone if the query or subquery had
no properties forcing a type decision to be made (such as an ORDER BY or
DISTINCT clause using that output column). This meant that "unknown" could
be an exposed output column type, which has never been a great idea because
it could result in strange failures later on. For example, an outer query
that tried to do any operations on an unknown-type subquery output would
generally fail with some weird error like "failed to find conversion
function from unknown to text" or "could not determine which collation to
use for string comparison". Also, if the case occurred in a CREATE VIEW's
query then the view would have an unknown-type column, causing similar
failures in queries trying to use the view.
To fix, at the tail end of parse analysis of a query, forcibly convert any
remaining "unknown" literals in its SELECT or RETURNING list to type text.
However, provide a switch to suppress that, and use it in the cases of
SELECT inside a set operation or INSERT command. In those cases we already
had type resolution rules that make use of context information from outside
the subquery proper, and we don't want to change that behavior.
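For example (hypothetical):
    CREATE VIEW v AS SELECT 'foo' AS f1;
now produces a view whose f1 column is of type text, where previously it would
have been of type unknown.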
Also, change creation of an unknown-type column in a relation from a
warning to a hard error. The error should be unreachable now in CREATE
VIEW or CREATE MATVIEW, but it's still possible to explicitly say "unknown"
in CREATE TABLE or CREATE (composite) TYPE. We want to forbid that because
it's nothing but a foot-gun.
This change creates a pg_upgrade failure case: a matview that contains an
unknown-type column can't be pg_upgraded, because reparsing the matview's
defining query will now decide that the column is of type text, which
doesn't match the cstring-like storage that the old materialized column
would actually have. Add a checking pass to detect that. While at it,
we can detect tables or composite types that would fail, essentially
for free. Those would fail safely anyway later on, but we might as
well fail earlier.
This patch is by me, but it owes something to previous investigations
by Rahila Syed. Also thanks to Ashutosh Bapat and Michael Paquier for
review.
Discussion: https://postgr.es/m/CAH2L28uwwbL9HUM-WR=hromW1Cvamkn7O-g8fPY2m=_7muJ0oA@mail.gmail.com
According to the comments in tupconvert.c, it's necessary to perform
tuple conversion when either table has OIDs, and this was previously
checked by ensuring that the tdtypeid value matched between the tables
in question. However, that's overly stringent: we have access to
tdhasoid and can test directly whether OIDs are present, which lets us
avoid conversion in cases where the type OIDs are different but the
tuple descriptors are entirely the same (and neither has OIDs). This
is useful to the partitioning code, which can thereby avoid converting
tuples when inserting into a partition whose columns appear in the
same order as the parent columns, the normal case. It's possible
for the tuple routing code to avoid some additional overhead in this
case as well, so do that, too.
It's not clear whether it would be OK to skip this when both tables
have OIDs: do callers count on this to build a new tuple (losing the
previous OID) in such instances? Until we figure it out, leave the
behavior in that case alone.
Amit Langote, reviewed by me.
Commit 587cda35c added a test to updatable_views.sql that created
tables named the same as tables used by the concurrent inherit.sql
script. Unsurprisingly, this results in random failures.
Pick different names.
Per buildfarm.
In the new code for selecting sequence data from pg_sequence, set the
schema to pg_catalog instead of the sequence's own schema, and refer to
the sequence by OID instead of name, which was missing a schema
qualification.
Reported-by: Stephen Frost <sfrost@snowman.net>
Formerly an alternate password file could only be selected via the
environment variable PGPASSFILE; now it can also be selected via a
new connection parameter "passfile", corresponding to the conventions
for most other connection parameters. There was some concern about
this creating a security weakness, but it was agreed that that argument
was pretty thin, and there are clear use-cases for handling password
files this way.
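For example, a connection string might now look like this (the path is
illustrative):
    host=example.com dbname=mydb user=alice passfile=/home/alice/pgpass-alt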
Julian Markwort, reviewed by Fabien Coelho, some adjustments by me
Discussion: https://postgr.es/m/a4b4f4f1-7b58-a0e8-5268-5f7db8e8ccaa@uni-muenster.de
This is useful infrastructure for an upcoming proposed patch to
allow the WAL segment size to be changed at initdb time; tools like
pg_basebackup need the ability to interrogate the server setting.
But it also doesn't seem like a bad thing to have independently of
that; it may find other uses in the future.
Robert Haas and Beena Emerson. (The original patch here was by
Beena, but I rewrote it to such a degree that most of the code
being committed here is mine.)
Discussion: http://postgr.es/m/CA+TgmobNo4qz06wHEmy9DszAre3dYx-WNhHSCbU9SAwf+9Ft6g@mail.gmail.com
If you create a DestReceiver of type DestRemote and try to use it from
a replication connection that is not bound to a specific database, or
any other hypothetical type of backend that is not bound to a specific
database, it will fail because it doesn't have a pg_proc catalog to
look up properties of the types being printed. In general, that's
an unavoidable problem, but we can hardwire the properties of a few
builtin types in order to support utility commands. This new
DestReceiver of type DestRemoteSimple does just that.
Patch by me, reviewed by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmobNo4qz06wHEmy9DszAre3dYx-WNhHSCbU9SAwf+9Ft6g@mail.gmail.com
This patch doesn't actually make any index AM parallel-aware, but it
provides the necessary functions at the AM layer to do so.
Rahila Syed, Amit Kapila, Robert Haas
Previously, ExecInitModifyTable was missing handling for WITH CHECK
OPTION, and view_query_is_auto_updatable was missing handling for
RELKIND_PARTITIONED_TABLE.
Amit Langote, reviewed by me.
In 2ac3ef7a01, we changed things so that
it's possible for a different TupleTableSlot to be used for partitioned
tables at successively lower levels. If we do end up changing the slot
from the original, we must update ecxt_scantuple to point to the new one
for the partition key of the tuple to be computed correctly.
Reported by Rajkumar Raghuwanshi. Patch by Amit Langote.
Discussion: http://postgr.es/m/CAKcux6%3Dm1qyqB2k6cjniuMMrYXb75O-MB4qGQMu8zg-iGGLjDw%40mail.gmail.com
We've accumulated quite a bit of stuff with which pgindent is not
quite happy in this code; clean it up to provide a less-annoying base
for future pgindent runs.
The code here previously tried to call the partitioning operator, but
really the right thing to do (and the safe thing to do) is use
datumIsEqual().
Amit Langote, but I expanded the comment and fixed a compiler warning.
For some reason that is lost in history, a descending sequence would
default its minimum value to -2^63+1 (-PG_INT64_MAX) instead of
-2^63 (PG_INT64_MIN), even though explicitly specifying a minimum value
of -2^63 would work. Fix this inconsistency by using the full range by
default.
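For example (illustrative):
    CREATE SEQUENCE countdown INCREMENT BY -1;
now gets a default MINVALUE of -9223372036854775808 (PG_INT64_MIN) rather than
-9223372036854775807.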
Reported-by: Daniel Verite <daniel@manitou-mail.org>
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
initdb used to warn about that, but it was changed to an error in
pg_import_system_locales, and some buildfarm members failed because of
that. Change it back to a warning.
Vacuum truncation scan can be sped up on rotating media by prefetching
blocks in forward direction. That makes the blocks already present in
memory by the time they are needed, while also letting OS read-ahead
kick in.
The truncate scan has been measured to be five times faster than without
this patch (that was on a slow disk, but it shouldn't hurt on fast
disks).
Author: Álvaro Herrera, loosely based on a submission by Claudio Freire
Discussion: https://postgr.es/m/CAGTBQpa6NFGO_6g_y_7zQx8L9GcHDSQKYdo1tGuh791z6PYgEg@mail.gmail.com
Joining three tables only takes two join nodes. I think when I (tgl)
wrote this, I was envisioning possible additional joins; but since the
example doesn't show any fourth table, it's just confusing to write
a third join node.
Etsuro Fujita
Discussion: https://postgr.es/m/e6cfbaa3-af02-1abc-c25e-8fa5c6bc4e21@lab.ntt.co.jp
This appears to be necessary to fix a failure seen on buildfarm member
sittella. It shouldn't be necessary according to the letter of the C
standard, because we don't change the values of these variables within
the PG_TRY blocks; but somehow gcc 4.7.2 is dropping the ball.
Discussion: https://postgr.es/m/17555.1485179975@sss.pgh.pa.us
Missing a destroyPQExpBuffer() in the early exit branch. The early
exits aren't really necessary. Most similar functions just proceed
running the rest of the code zero times and clean up at the end.
Project style is to put things in this order, for the good and sufficient
reason that you often need the typedefs in the function declarations.
There already was one function declaration that needed a typedef, which
was randomly placed away from all the other static function declarations
in consequence. And the submitted patch for better json_populate_record
functionality jumped through even more hoops in order to preserve this
bad idea.
This patch only moves lines from point A to point B, no other changes.
Coverity complained quite properly that commit ea15e1867 had introduced
unreachable code into ExecGather(); to wit, it was no longer possible to
iterate the final for-loop more or less than once. So remove the for().
In passing, clean up a couple of comments, and make better use of a local
variable.
Turns out this has been broken for years and we'd not noticed. The one
case that was getting exercised in the buildfarm, or probably anywhere
else, was postgres_fdw.sl's reference to libpq.sl; and it turns out that
that was always going to libpq.sl in the actual installation directory
not the temporary install. We'd not noticed because the buildfarm script
does "make install" before it tests contrib. However, the recent addition
of a logical-replication test to the core regression scripts resulted in
trying to use libpqwalreceiver.sl before "make install" happens, and that
failed for lack of finding libpq.sl, as shown by failures on buildfarm
members gaur and pademelon.
There are two changes needed to fix it: the magic environment variable to
specify shlib search path at runtime is SHLIB_PATH not LD_LIBRARY_PATH,
and the shlib link command needs to specify the +s switch else the library
will not honor SHLIB_PATH.
I'm not quite sure why buildfarm members anole and gharial (HPUX 11) didn't
show the same failure. Consulting man pages on the web says that HPUX 11
honors both LD_LIBRARY_PATH and SHLIB_PATH, which would explain half of it,
and the rather confusing wording I've been able to find suggests that +s
might effectively be the default in HPUX 11. But it seems at least as
likely that there's just a libpq.so installed in /usr/lib on that machine;
as long as it's not too ancient, that would satisfy the test. In any case
I do not think this patch will break HPUX 11.
At the moment I don't see a need to back-patch this, since it only matters
for testing purposes, not to mention that HPUX 10 is probably dead in the
real world anyway.
When (1) autovacuum = off and (2) there's at least one database with
an XID age greater than autovacuum_freeze_max_age and (3) all tables
in that database that need vacuuming are already being processed by a
worker and (4) the autovacuum launcher is started, a kind of infinite
loop occurs. The launcher starts a worker and immediately exits. The
worker, finding no work to do, immediately starts the launcher,
supposedly so that the next database can be processed. But because
datfrozenxid for that database hasn't been advanced yet, the new
worker gets put right back into the same database as the old one,
where it once again starts the launcher and exits. High-speed ping
pong ensues.
There are several possible ways to break the cycle; this seems like
the safest one.
Amit Khandekar (code) and Robert Haas (comments), reviewed by
Álvaro Herrera.
Discussion: http://postgr.es/m/CAJ3gD9eWejf72HKquKSzax0r+epS=nAbQKNnykkMA0E8c+rMDg@mail.gmail.com
If either bound is infinite, then we shouldn't even try to perform a
comparison of the values themselves. Rearrange the logic so that
we don't.
Per buildfarm member skink and Tom Lane.
This was forgotten in 665d1fad99 and
caused the whole buildfarm to become red for a little while.
Author: Petr Jelínek
Also fix a typo in a nearby error message.
pgoutput evidently needs to be built without -DBUILDING_DLL. (It seems
like a pretty bad idea that these makefiles need to know exactly where
all the shlibs are in the tree, or maybe what's bad is putting them under
src/backend/. But right now is not the time to redesign that.)
Also, remove "override CPPFLAGS" in pgoutput's Makefile. I don't think
that that actually has any bad consequences, but it's certainly useless
in a directory that has no .h files, and it might be contributing to the
failure somehow.
Per buildfarm.
A pgbench meta command can now be continued onto additional line(s) of a
script file by writing backslash-return. The continuation marker is
equivalent to white space in that it separates tokens.
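For example, a meta command line such as (the command shown is only an
illustration)
    \set delta random(-5000, 5000) \
        + :scale
can now be split this way, with the trailing backslash acting as
token-separating whitespace.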
Eventually it'd be nice to have the same thing in psql, but that will
be a much larger project.
Fabien Coelho, reviewed by Rafia Sabih
Discussion: https://postgr.es/m/alpine.DEB.2.20.1610031049310.19411@lancre
The publication test didn't drop all the publications it was creating,
though it probably intended to do so. There is still a bug with
dependency tracking in there, but this should at least quiet down the
buildfarm.