In the REFRESH MATERIALIZED VIEW command, the CONCURRENTLY option is only
allowed if there is at least one unique index with no WHERE clause on
one or more columns of the matview. Previously, concurrent refresh
checked for the existence of a unique index on the matview only after
filling the new snapshot with data, i.e., after calling
refresh_matview_datafill(). So, when there was no unique index, we could
wait a long time before detecting that and raising the error, which was
a waste of time. To eliminate that wasted time, this commit changes
concurrent refresh so that it checks for the existence of a unique index
at the beginning of the refresh operation, i.e., before starting any
time-consuming work. If the CONCURRENTLY option is not allowed due to
the lack of a unique index, concurrent refresh now detects that
immediately and emits an error.
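As a minimal illustration of the new behavior (the object names here
are hypothetical):

    CREATE MATERIALIZED VIEW mv AS SELECT id, val FROM base_tbl;
    REFRESH MATERIALIZED VIEW CONCURRENTLY mv;  -- now fails immediately
    CREATE UNIQUE INDEX mv_id_idx ON mv (id);   -- no WHERE clause, so it qualifies
    REFRESH MATERIALIZED VIEW CONCURRENTLY mv;  -- OK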
Author: Masahiko Sawada
Reviewed-by: Michael Paquier, Fujii Masao
The API spec for this function was changed completely (and for the better)
by commit 3cba8999b3, but it didn't bother
with anything as mundane as updating the comments.
NextXID has been rendered in the form of a pg_lsn even though it
really is not one. This can cause confusion, so change the format from
%u/%u to %u:%u, per discussion on hackers.
Complaint by me, patch by me and Bruce, reviewed by Michael Paquier
and Alvaro. Applied to HEAD only.
Author: Joe Conway, Bruce Momjian
Reviewed-by: Michael Paquier, Alvaro Herrera
Backpatch-through: master
The previous value of 5s is inadequate for the buildfarm's
CLOBBER_CACHE_ALWAYS animals: they take long enough to do the is-it-waiting
queries that the timeout expires, allowing the database state to change,
before isolationtester is done looking. Perhaps 10s will be enough.
(If it isn't, I'm inclined to reduce the number of sessions involved.)
This mostly reverts commit 9c9782f066.
I left in the parts that rearranged removal of completed waiting steps;
but the idea of not rechecking a step's blocked-ness isn't working.
There is no reason to have the per-thread log file pointer as a
separate parameter to various functions: it's much simpler to put it in
the per-thread state struct instead, which is already being passed to
all functions that need the log file anyway. Also change the callsites
where it was used as a boolean to test whether logging is active, so
that they use the use_log global variable instead.
No backpatch, even though this exists since commit a887c486d5 of March
2010, because this is just for cleanliness' sake and the surrounding
code has been modified a lot recently anyway.
It turns out that there is a second race condition in the new deadlock-hard
test: once the deadlock detector fires, it's uncertain whether step s7a8 or
step s8a1 will report first, because killing s8's transaction unblocks s7.
So far, s7 has only been seen to report first in CLOBBER_CACHE_ALWAYS
builds, but it's pretty reproducible there, and in theory it should
sometimes occur in normal builds too. If s7 were a bit slower than usual,
that could also break the test, since the existing expected-file assumes
that we'll see s7a8 report the first time we check it after s8a1 completes.
To fix, add a post-lock delay to s7a8.
If we're retrying a step, then we already decided it was blocked on a lock,
and there's no need to recheck that. The original coding of commit
38f8bdcac4 resulted in a large number of
is-it-waiting queries when dealing with multiple concurrently-blocked
sessions, which is fairly pointless and also results in test failures in
CLOBBER_CACHE_ALWAYS builds, where the is-it-waiting query is quite slow.
This definition also permits appending pg_sleep() calls to steps where it's
needed to control the order of finish of concurrent steps. Before, that
did not work nicely because we'd decide that a step performing a sleep was
not blocked and hang up waiting for it to finish, rather than noticing the
completion of the concurrent step we're supposed to notice first.
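For instance, a step's SQL can now simply end with a sleep to ensure
that a concurrent step is observed to finish first (a sketch using a
hypothetical table):

    UPDATE accounts SET balance = balance + 100 WHERE id = 1;
    SELECT pg_sleep(2);  -- postpone this step's completion past the
                         -- concurrent step we must notice first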
In passing, revise handling of removal of completed waiting steps
to make it a bit less messy.
An extensible node is always tagged T_Extensible, but the extnodename
field identifies it more specifically; it may also include arbitrary
private data. Extensible nodes can be copied, tested for equality,
serialized, and deserialized, but the core system doesn't know
anything about them otherwise. Some extensions may find it useful to
include these nodes in fdw_private or custom_private lists in lieu of
arm-wrestling their data into a format that the core code can
understand.
Along the way, so as not to burden the authors of such extensible
node types too much, expose the functions for writing serialized
tokens, and for serializing and deserializing bitmapsets.
KaiGai Kohei, per a design suggested by me. Reviewed by Andres Freund
and by me, and further edited by me.
The new deadlock-soft-2 test has a timing dependency too: it supposes
that isolationtester will detect step s1b as waiting before the deadlock
detector runs and grants it the lock. Adjust deadlock_timeout to ensure
that that's true even in CLOBBER_CACHE_ALWAYS builds, where the wait
detection query is quite slow. Per buildfarm member jaguarundi.
If we ever get around to allowing functional dependency to be proven
from other things besides simple primary keys, this code will need to
be rethought, but that was true anyway. In the meantime, we might as
well not have two very-similar routines for scanning pg_constraint.
David Rowley, reviewed by Julien Rouhaud
If a GROUP BY clause includes all columns of a non-deferred primary key,
as well as other columns of the same relation, those other columns are
redundant and can be dropped from the grouping; the pkey is enough to
ensure that each row of the table corresponds to a separate group.
Getting rid of the excess columns will reduce the cost of the sorting or
hashing needed to implement GROUP BY, and can indeed remove the need for
a sort step altogether.
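For example (hypothetical tables):

    CREATE TABLE customers (id int PRIMARY KEY, name text);
    CREATE TABLE orders (customer_id int, amount numeric);
    -- "name" is redundant below: the pkey column "id" already determines
    -- it, so the planner can now group by c.id alone:
    SELECT c.id, c.name, sum(o.amount)
      FROM customers c JOIN orders o ON o.customer_id = c.id
     GROUP BY c.id, c.name;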
This seems worth testing for since many query authors are not aware of
the GROUP-BY-primary-key exception to the rule about queries not being
allowed to reference non-grouped-by columns in their targetlists or
HAVING clauses. Thus, redundant GROUP BY items are not uncommon. Also,
we can make the test pretty cheap in most queries where it won't help
by not looking up a rel's primary key until we've found that at least
two of its columns are in GROUP BY.
David Rowley, reviewed by Julien Rouhaud
A pending patch requires exporting a function returning Bitmapset from
catalog/pg_constraint.c. As things stand, that would mean including
nodes/bitmapset.h in pg_constraint.h, which might be hazardous for the
client-side includability of that header. It's not entirely clear whether
any client-side code needs to include pg_constraint.h, but it seems prudent
to assume that there is some such code somewhere. Therefore, split off the
function definitions into a new file pg_constraint_fn.h, similarly to what
we've done for some other catalog header files.
Historically this message has been emitted at the end of ShutdownXLOG().
That's not an insane place for it in a standalone backend, but in the
postmaster environment we've grown a fair amount of stuff that happens
later, including archiver/walsender shutdown, stats collector shutdown,
etc. Recent buildfarm experimentation showed that on slower machines
there could be many seconds' delay between finishing ShutdownXLOG() and
actual postmaster exit. That's fairly confusing, both for testing
purposes and for DBAs. Hence, move the code that prints this message
into UnlinkLockFiles(), so that it comes out just after we remove the
postmaster's pidfile. That is a more appropriate definition of "is shut
down" from the point of view of "pg_ctl stop", for example. In general,
removing the pidfile should be the last externally-visible action of
either a postmaster or a standalone backend; compare commit
d73d14c271 for instance. So this seems
like a reasonably future-proof approach.
This finishes the work - spread across many commits over the last
several months - of putting each type of lock other than the named
individual locks into a separate tranche.
Amit Kapila
The original formulation of 4c9864b9b4
was extremely timing-sensitive, because it arranged for the deadlock
detector to be running (and possibly unblocking the current query)
at almost exactly the same time as isolationtester would be probing
to see if the query is blocked. The committed expected-file assumed
that the deadlock detection would finish first, but we see the opposite
on both fast and slow buildfarm animals. Adjust the deadlock timeout
settings to make it predictable that isolationtester *will* see the
query as waiting before deadlock detection unblocks it.
I used a 5s timeout for the same reasons mentioned in
a7921f71a3.
Fix a few oversights in 38f8bdcac4:
don't leak memory in run_permutation(), remember when we've issued
a cancel rather than issuing another one every 10ms, and fix some
typos in comments.
Commit 0e141c0fbb introduced a new
facility to reduce ProcArrayLock contention by clearing several XIDs
from the ProcArray under a single lock acquisition. The names
initially chosen were deemed not to be very good choices, so commit
4aec49899e renamed them. But now it
seems like we still didn't get it right. A pending patch wants to
add similar infrastructure for batching CLOG updates, so the names
need to be clear enough to allow a new set of structure members with
a related purpose.
Amit Kapila
This allows testing of deadlock scenarios. Scenarios that would
previously have been considered invalid are now simply taken as
scenarios in which more than one backend will wait.
This is a necessary prerequisite for forthcoming changes to allow deadlock
scenarios to be tested by the isolation tester. It is also a good idea on
general principle, since these scenarios add no useful test coverage not
provided by other scenarios, but do take time to execute.
Thirty seconds was not consistently enough for promotion to complete on
buildfarm members sungazer and tern. Experiments suggest 43s would have
been enough. Back-patch to 9.5, where pg_rewind was introduced.
Many automated test suites call pg_ctl. Buildfarm members axolotl,
hornet, mandrill, shearwater, sungazer and tern have failed when server
shutdown took longer than the pg_ctl default 60s timeout. This addition
permits slow hosts to easily raise the timeout without us editing a
--timeout argument into every test suite pg_ctl call. Back-patch to 9.1
(all supported versions) for the sake of automated testing.
Reviewed by Tom Lane.
It turns out that on FreeBSD-derived platforms (including OS X), the
*scanf() family of functions is pretty much brain-dead about multibyte
characters. In particular it will apply isspace() to individual bytes
of input even when those bytes are part of a multibyte character, thus
allowing false recognition of a field-terminating space.
We appear to have little alternative other than instituting a coding
rule that *scanf() is not to be used if the input string might contain
multibyte characters. (There was some discussion of relying on "%ls",
but that probably just moves the portability problem somewhere else,
and besides it doesn't fully prevent BSD *scanf() from using isspace().)
This patch is a down payment on that: it gets rid of use of sscanf()
to parse ispell dictionary files, which are certainly at great risk
of having a problem. The code is cleaner this way anyway, though
a bit longer.
In passing, improve a few comments.
Report and patch by Artur Zakirov, reviewed and somewhat tweaked by me.
Back-patch to all supported branches.
As of commit c1772ad922, there's no
longer any way of requesting additional LWLocks in the main tranche,
so we don't need NumLWLocks() or LWLockAssign() any more. Also,
some of the allocation counters that we had previously aren't needed
any more either.
Amit Kapila
Further investigation says that there may be some slow operations after
we've finished ShutdownXLOG(), so add some more log messages to try to
isolate that. This is all temporary code too.
If we've got a relatively straightforward join between two tables,
this pushes that join down to the remote server instead of fetching
the rows for each table and performing the join locally. Some cases
are not handled yet, such as SEMI and ANTI joins. Also, we don't
yet attempt to create presorted join paths or parameterized join
paths even though these options do get tried for a base relation
scan. Nevertheless, this seems likely to be a very significant win
in many practical cases.
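For example, with two postgres_fdw foreign tables defined on the same
remote server (names here are hypothetical), EXPLAIN should now show a
single ForeignScan whose Remote SQL contains the join, instead of a
local join over two separate remote scans:

    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT f1.c1, f2.c2
      FROM ft1 f1 JOIN ft2 f2 ON f1.id = f2.id;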
Shigeru Hanada and Ashutosh Bapat, reviewed by Robert Haas, with
additional review at various points by Tom Lane, Etsuro Fujita,
KaiGai Kohei, and Jeevan Chalke.
Early returns from the buildfarm show that there's a bit of a gap in the
logging I added in 3971f64843: the portion of CreateCheckPoint()
after CheckPointGuts() can take a fair amount of time. Add a few more
log messages in that section of code. This too shall be reverted later.
This is a quick hack, due to be reverted when its purpose has been served,
to try to gather information about why some of the buildfarm critters
regularly fail with "postmaster does not shut down" complaints. Maybe they
are just really overloaded, but maybe something else is going on. Hence,
instrument pg_ctl to print the current time when it starts waiting for
postmaster shutdown and when it gives up, and add a lot of logging of the
current time in the server's checkpoint and shutdown code paths.
No attempt has been made to make this pretty. I'm not even totally sure
if it will build on Windows, but we'll soon find out.
Since pgindent treats typedef names as global, the original coding of
b47b4dbf68 would have had rather nasty effects on the formatting
of other files in which "string" is used as a variable or field name.
Use a less generic name for this typedef, and rename some other
identifiers to match.
Peter Geoghegan, per gripe from me
Previously, our regex code defined CHR_MAX as 0xfffffffe, which is a
bad choice because it is outside the range of type "celt" (int32).
Characters approaching that limit could lead to infinite loops in logic
such as "for (c = a; c <= b; c++)" where c is of type celt but the
range bounds are chr. Such loops will work safely only if CHR_MAX+1
is representable in celt, since c must advance to beyond b before the
loop will exit.
Fortunately, there seems no reason not to restrict CHR_MAX to 0x7ffffffe.
It's highly unlikely that Unicode will ever assign codes that high, and
none of our other backend encodings need characters beyond that either.
In addition to modifying the macro, we have to explicitly enforce character
range restrictions on the values of \u, \U, and \x escape sequences, else
the limit is trivially bypassed.
Also, the code for expanding case-independent character ranges in bracket
expressions had a potential integer overflow in its calculation of the
number of characters it could generate, which could lead to allocating too
small a character vector and then overwriting memory. An attacker with the
ability to supply arbitrary regex patterns could easily cause transient DOS
via server crashes, and the possibility for privilege escalation has not
been ruled out.
Quite aside from the integer-overflow problem, the range expansion code was
unnecessarily inefficient in that it always produced a result consisting of
individual characters, abandoning the knowledge that we had a range to
start with. If the input range is large, this requires excessive memory.
Change it so that the original range is reported as-is, and then we add on
any case-equivalent characters that are outside that range. With this
approach, we can bound the number of individual characters allowed without
sacrificing much. This patch allows at most 100000 individual characters,
which I believe to be more than the number of case pairs existing in
Unicode, so that the restriction will never be hit in practice.
It's still possible for range() to take a while given a large character code
range, so also add statement-cancel detection to its loop. The downstream
function dovec() also lacked cancel detection, and could take a long time
given a large output from range().
Per fuzz testing by Greg Stark. Back-patch to all supported branches.
Security: CVE-2016-0773
Commit 7f46eaf added the regression test which checks that
gin_clean_pending_list() cleans up the GIN pending list and returns > 0.
This usually works fine, but if autovacuum comes along and cleans
the list before gin_clean_pending_list() starts, the function may
return 0, and then the regression test may fail.
To fix the problem, this commit disables autovacuum on the target
index of gin_clean_pending_list() by setting the autovacuum_enabled
reloption to off when creating the table.
This commit also sets the gin_pending_list_limit reloption to 4MB on
the target index. Otherwise, when running "make installcheck" with a
small gin_pending_list_limit GUC, insertions of data might trigger
cleanup of the pending list before gin_clean_pending_list() starts,
making the function return 0 and the regression test fail.
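The test setup therefore looks roughly like this (note that
gin_pending_list_limit is specified in kilobytes):

    CREATE TABLE gin_test_tbl (i int4[]) WITH (autovacuum_enabled = off);
    CREATE INDEX gin_test_idx ON gin_test_tbl USING gin (i)
        WITH (gin_pending_list_limit = 4096);  -- i.e., 4MB
    -- after enough insertions to populate the pending list:
    SELECT gin_clean_pending_list('gin_test_idx') > 0;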
Per buildfarm member spoonbill.
Reported-By: Tom Lane
In 61444bfb we started to allow HAVING clauses to be fully pushed down
into WHERE, even when grouping sets are in use. That turns out not to
work correctly, because grouping sets can "produce" NULLs, meaning that
filtering in WHERE and HAVING can have different results, even when no
aggregates or volatile functions are involved.
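For example (hypothetical table t):

    -- The () grouping set produces a grand-total row in which "a" is NULL.
    -- HAVING a IS NOT NULL filters that row out, whereas the same qual
    -- pushed down into WHERE would merely filter the input rows and keep
    -- the grand-total row, giving a different result:
    SELECT a, sum(b)
      FROM t
     GROUP BY GROUPING SETS ((a), ())
    HAVING a IS NOT NULL;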
Instead only allow pushdown of empty grouping sets.
It'd be nice to do better, but the exact mechanics of deciding which
cases are safe are still being debated. It's important to give correct
results until we find a good solution, and such a solution might not be
appropriate for backpatching anyway.
Bug: #13863
Reported-By: 'wrb'
Diagnosed-By: Dean Rasheed
Author: Andrew Gierth
Reviewed-By: Dean Rasheed and Andres Freund
Discussion: 20160113183558.12989.56904@wrigleys.postgresql.org
Backpatch: 9.5, where grouping sets were introduced